Splunk Add-on for OPC (Legacy)

Use the Splunk Add-on for OPC

Set up the Splunk Add-on for OPC

Before you use the Splunk Add-on for OPC to ingest data from your OPC servers, you need to perform several setup steps.

These directions refer to steps for your indexers and your data ingestion management node. If you are using a single-instance deployment of Splunk Enterprise, perform all of these steps on that single instance.

  1. Create one or more indexes to store your data.
  2. Configure an HTTP event collector token.
  3. Set environment variables.
  4. Set up encrypted communication between your OPC servers and the add-on.
  5. Connect to your OPC servers.
  6. Set up HTTP event collector tokens to route incoming data.

Create one or more indexes to store your data

Work with your Splunk Enterprise administrator to create the indexes you need to store your OPC data, if they do not already exist. Dividing your data streams into separate indexes helps ensure good performance when you search, allows you to set appropriate data retention policies, and provides data security by controlling who has access to the data.

If you are using this add-on to collect OPC metrics data for use in Splunk Industrial Asset Intelligence (IAI), you must create at least one metrics index to store your metrics data. You can also create one or more event indexes to store alarm or other event data from OPC servers. If you want to apply role-based access control to data in Splunk IAI, create separate indexes to hold the data that each separate role needs to access in Splunk IAI. See Manage role-based access to Splunk IAI in the Splunk Industrial Asset Intelligence documentation.

In a distributed environment, create your indexes on your indexers or indexer cluster and on your heavy forwarder.
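For example, if you manage indexes through configuration files, a metrics index for OPC values and an event index for alarms might look like the following indexes.conf stanzas on the indexer. The index names here are hypothetical; adjust paths and retention settings to match your policies.

```ini
# Hypothetical index names for illustration only.
[opc_metrics]
homePath   = $SPLUNK_DB/opc_metrics/db
coldPath   = $SPLUNK_DB/opc_metrics/colddb
thawedPath = $SPLUNK_DB/opc_metrics/thaweddb
datatype   = metric

[opc_events]
homePath   = $SPLUNK_DB/opc_events/db
coldPath   = $SPLUNK_DB/opc_events/colddb
thawedPath = $SPLUNK_DB/opc_events/thaweddb
```

You can also create indexes in Splunk Web under Settings > Indexes.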

Configure an HTTP event collector token

The Splunk Add-on for OPC uses HTTP event collector tokens to route data in your Splunk deployment.

Configure at least one HTTP event collector token. There are several reasons that you might want to configure multiple tokens:

  • If you want to test data ingestion, routing, and role-based access using one set of indexes and then later switch to another set of indexes for production use, configure separate tokens. Configure each token to route data to a different set of allowed indexes.
  • If you have multiple unclustered indexers that you want to manage separately, you can control which data goes to which indexer by defining separate tokens.

Configure the token on the node of your deployment where you want to handle data ingestion. In a distributed deployment, that is usually the heavy forwarder on which you installed the Splunk Add-on for OPC. However, if you want to handle data ingestion separately from data ingestion management, perform these steps on each of the unclustered indexers that you want to use to ingest data and ensure that the Splunk Add-on for OPC is installed on those indexers. If you have an indexer cluster, use your heavy forwarder for data ingestion.

To configure a token, follow these steps on your data ingestion node:

  1. Log in as a Splunk Enterprise administrator.
  2. Click Settings > Data Inputs.
  3. Click HTTP Event Collector.
  4. Click Global Settings.
  5. Set the All Tokens toggle to Enabled.
  6. Review the remaining settings and make a note of the Port and SSL settings. Make changes if you wish.
  7. Click Save.
  8. Click New Token.
  9. Give your token a Name, and then click Next.
  10. Set the App Context to Splunk Add-on for OPC (Splunk_TA_opc).
  11. (Optional) Click on index names in the Select Allowed Indexes section to add them to the list of indexes to which this token permits data to be sent. Leave blank if you want to allow the token to send data to all indexes.
  12. Click Review, and then click Submit.
  13. Copy the token value that Splunk Web displays and paste it into another document for reference later.
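After you save the token, you can confirm that events sent with it reach your allowed indexes. The following sketch builds (but does not send) a request for the standard HEC endpoint /services/collector/event using only the Python standard library; the host, port, token, and index values are placeholders, not values from your deployment.

```python
import json
import urllib.request

# Placeholder values: substitute your own host, port, and token.
HEC_URI = "https://192.0.2.1:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_hec_request(event, index=None):
    """Build an HTTP event collector request object without sending it."""
    body = {"event": event}
    if index:
        body["index"] = index  # must be on the token's allowed-indexes list
    return urllib.request.Request(
        HEC_URI,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": "Splunk " + HEC_TOKEN,
            "Content-Type": "application/json",
        },
    )

req = build_hec_request({"message": "HEC smoke test"}, index="opc_events")
print(req.get_full_url())
```

To actually send the request, pass it to urllib.request.urlopen with an SSL context that trusts your HEC certificate, and check for an HTTP 200 response.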

Set environment variables

On your data ingestion management node, usually a heavy forwarder, set the required environment variables using the scripts provided by the add-on.

On a *nix machine:

  1. Go to $SPLUNK_HOME/etc/apps/Splunk_TA_opc/bin/.
  2. Run the following command: sh EnvSetupLinux.sh.

On a Windows machine:

  1. Go to %SPLUNK_HOME%\etc\apps\Splunk_TA_opc\bin\.
  2. Right click on the EnvSetupWindows.bat file and select Run as administrator.

Set up encrypted communication between your OPC servers and the add-on

Perform the following one-time configuration to set up encrypted communication via a certificate between the add-on on your data ingestion management node, usually a heavy forwarder, and your OPC servers. To perform these steps, you must have the admin_all_objects capability.

You perform this configuration in Splunk Web or in the configuration files. If you use the configuration files, you must complete the final step in Splunk Web to create the certificate.

If either of the following rare cases is true, you must manage at least some configurations in the configuration files:

  • If your RabbitMQ is installed on a different server than the one you are using to handle data ingestion management with the Splunk Add-on for OPC, you must configure the broker_url setting in your local splunk_ta_opc_settings.conf file.
  • If you want to override the maximum number of worker processes dedicated to data ingestion management, you must configure the max_worker_process setting in your local splunk_ta_opc_settings.conf file. See Set up encrypted communication using the configuration files for instructions.

If neither of these cases applies to you, it is more efficient to use Splunk Web to set up encrypted communication. Follow the set of instructions that works best for your use case:

Set up encrypted communication using Splunk Web

Follow these steps on your data ingestion management node, usually a heavy forwarder.

  1. Log in to Splunk Web.
  2. In the app bar on the left, click Splunk Add-on for OPC.
  3. Click Configuration.
  4. In the Add-on Configuration tab, fill out the fields to configure a client certificate that allows encrypted communication between the Splunk Add-on for OPC and your OPC servers.
    Application Name: A name that identifies this application. For example, SplunkOPCClient.
    Host Name: The server IP address or host name where you are configuring this add-on. For example, 192.0.2.1 or win-spl-example.com.
    Key Length: The private key length of the SSL certificate. Valid values are 1024, 2048, 3072, and 4096.
    Common Name: A common name to use for generating the license. For example, hostname1@SplunkOPCClient.
    Organization Name: The name of your organization.
    Organization Unit: The name of your unit or department in your organization.
    Locality: Your locality.
    Country Code: The two-letter country code for your country.
    Email Address: An email address for a person or group responsible for certificate validation.
    Validity (Sec): The number of seconds for which this certificate must remain valid. The certificate must remain valid for as long as you want to collect data from your OPC servers with this add-on. You must enter a value greater than 0.
  5. Click Save.
  6. Navigate to the certs folder and copy the generated certificate.
    • $SPLUNK_HOME/etc/apps/Splunk_TA_opc/bin/pyuaf_pki/client/certs/ on *nix machines.
    • %SPLUNK_HOME%\etc\apps\Splunk_TA_opc\bin\pyuaf_pki\client\certs\ on Windows.
  7. Ask your OPC server administrator to add this certificate to the trust list for each OPC server from which you want to collect data.
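The Validity (Sec) field takes seconds, so a multi-year certificate lifetime is a large number. As a quick sanity check, the value 157680000 used in the configuration-file example later in this topic works out to five 365-day years:

```python
# Convert a certificate lifetime in years to the seconds value that the
# Validity (Sec) field expects (leap days ignored for simplicity).
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000
validity_sec = 5 * SECONDS_PER_YEAR
print(validity_sec)  # 157680000
```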

Next: Connect to your OPC servers.

Set up encrypted communication using the configuration files

Follow these steps on your data ingestion management node, usually a heavy forwarder.

  1. Create a splunk_ta_opc_settings.conf file in the local folder of the add-on:
    • $SPLUNK_HOME/etc/apps/Splunk_TA_opc/local/ on *nix.
    • %SPLUNK_HOME%\etc\apps\Splunk_TA_opc\local\ on Windows.
  2. Add a configuration stanza following the guidelines in splunk_ta_opc_settings.conf.spec.
    The following is an example of a splunk_ta_opc_settings.conf configuration stanza:
    [configuration]
    app_name = SplunkOPCClient
    host_name = win-spl-example.com
    key_length = 1024
    common_name = Splunk OPC TA
    organization = ExampleOrganization
    organization_unit = Example Division
    locality = San Francisco
    country_name = US
    email_address = example@example.com
    validity = 157680000
    
  3. (Optional) If your RabbitMQ is installed on a different server, add a celery stanza following the guidelines in the splunk_ta_opc_settings.conf.spec file. If you do not add this stanza, the broker_url is set to the local server.
  4. (Optional) If you need to specify the maximum number of worker processes to use for data ingestion management, add a max_worker_process setting to your celery stanza. If you do not include this setting, the maximum number of worker processes is automatically set to the number of cores on your server.
    The following is an example splunk_ta_opc_settings.conf celery stanza:
    [celery]
    broker_url = amqp://myuser:mypass@10.0.3.24:5672
    max_worker_process = 5
    
  5. (Optional) Add a logging stanza to change the log level of the TA. The permitted values are DEBUG, INFO, WARNING, ERROR, and CRITICAL. The default is INFO. Example splunk_ta_opc_settings.conf logging stanza:
    [logging]
    loglevel = INFO
    
  6. Save the file.
  7. Restart the Splunk Enterprise instance.
  8. Log in to Splunk Web.
  9. In the app bar on the left, click Splunk Add-on for OPC.
  10. Click Configuration.
  11. In the Add-on Configuration tab, the values you configured in splunk_ta_opc_settings.conf are listed. Click Save.
  12. Navigate to the certs folder and copy the generated certificate:
    • $SPLUNK_HOME/etc/apps/Splunk_TA_opc/bin/pyuaf_pki/client/certs/ on *nix machines
    • %SPLUNK_HOME%\etc\apps\Splunk_TA_opc\bin\pyuaf_pki\client\certs\ on Windows
  13. Ask your OPC server administrator to add this certificate to the trust list for each OPC server from which you want to collect data.
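Because a typo in splunk_ta_opc_settings.conf only surfaces after a restart, it can help to sanity-check the stanza first. This sketch inlines an example stanza for illustration; in practice you would read the file from the local folder of the add-on.

```python
import configparser

# Example stanza inlined for illustration; normally read it from
# $SPLUNK_HOME/etc/apps/Splunk_TA_opc/local/splunk_ta_opc_settings.conf.
SAMPLE = """
[configuration]
app_name = SplunkOPCClient
host_name = win-spl-example.com
key_length = 1024
validity = 157680000
"""

parser = configparser.ConfigParser()
parser.read_string(SAMPLE)
section = parser["configuration"]

# Mirror the constraints described in this topic: an allowed key length
# and a validity greater than 0.
assert int(section["key_length"]) in (1024, 2048, 3072, 4096)
assert int(section["validity"]) > 0
print("configuration stanza looks valid")
```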

Connect to your OPC servers

To perform these steps, you must have the admin_all_objects capability in Splunk Enterprise. You must also be, or have access to, your OPC server administrator.

You can set up the connection to your servers using Splunk Web or using the configuration files:

Connect to your OPC servers using Splunk Web

Follow these steps on your data ingestion management node, usually a heavy forwarder.

  1. In the Splunk Add-on for OPC, click Configuration, and then click Servers to configure connections to each of your OPC servers.
  2. Click Add.
  3. Fill out the fields to establish a secure connection to an OPC server.
    Server Name: Enter a unique name to identify this server.
    Discovery URL: The full URL where the server is hosted. If you are connecting to a DA or AE server, provide the discovery URL for the UaGateway. For example, opc.tcp://abc.local:48010. GDS and LDS functionality are not supported.
    Security Policy: Select a security policy from the list that matches the SecurityPolicy value in the tag <SecuritySetting><SecurityPolicy></SecurityPolicy></SecuritySetting> in the file UnifiedAutomation\UaSdkCppBundleEval\bin\ServerConfig.xml. For example, if the value is http://opcfoundation.org/UA/SecurityPolicy#Basic192Rsa15, choose either Basic192Rsa15 - Sign - opc.ua or Basic192Rsa15 - Sign & Encrypt - opc.ua.
    Authentication Type: The method to authenticate the add-on as a client device.
    • If you select Credentials, enter a Username and Password of a user with read access to the server data.
    • If you select Certificate, enter the full Certificate Path and Key Path, including the filenames, to indicate where the certificate and key files reside on the local machine.
  4. Click Test Connection to confirm connectivity between the OPC server and the add-on. If you get a Test Failed message, troubleshoot the Server Details fields with your OPC server administrator.
  5. In the Server Settings section, enter the timeout and interval settings for the connection. All of these settings are optional.
    Session Timeout (Sec): The number of seconds a session remains valid on the server after a connection error. The default is 1200.0.
    Watchdog Time (Sec): The number of seconds between watchdog calls. The default is 5.0.
    Watchdog Timeout (Sec): The number of seconds to wait for a response to a watchdog call before treating the server as unreachable. The default is 2.0.
    Sampling Interval (Sec): The rate in seconds at which the server samples each monitored item. The default is 0.0, which samples as fast as possible. You can override this value on an item-by-item basis when you configure data ingestion.
    Publishing Interval (Sec): The rate in seconds at which the server publishes data to the add-on. The default is 1.0.
    Keep Alive Count: The maximum number of publishing intervals that can elapse without new data before the server sends a keep-alive notification. The default is 5.
  6. Click Add.
  7. Add additional servers by repeating the steps above for each server.
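As a rough sizing sketch, the sampling and publishing intervals together bound how many value updates a connection can deliver. The numbers below are illustrative assumptions (a worst case where every monitored item changes at every sample), not defaults from the add-on:

```python
# Worst-case update rate for one server connection.
# All numbers are illustrative assumptions.
monitored_items = 200
sampling_interval = 0.5    # seconds between samples, per item
publishing_interval = 1.0  # seconds between publish cycles

samples_per_publish = publishing_interval / sampling_interval  # per item
updates_per_second = monitored_items * samples_per_publish / publishing_interval
print(updates_per_second)  # 400.0
```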

Next: Set up HTTP event collector tokens to route incoming data.

Connect to your OPC servers using the configuration files

Follow these steps on your data ingestion management node, usually a heavy forwarder.

  1. Create a splunk_ta_opc_server.conf file in the local folder of the add-on:
    • $SPLUNK_HOME/etc/apps/Splunk_TA_opc/local/ on *nix.
    • %SPLUNK_HOME%\etc\apps\Splunk_TA_opc\local\ on Windows.
  2. Add server stanzas following the guidelines in splunk_ta_opc_server.conf.spec. The following is an example splunk_ta_opc_server.conf file:
    [server_public_1] 
    discovery_url = opc.tcp://10.0.3.32:58810
    security_policy = Basic256SHA256 - Sign - opc.tcp
    authentication_type = Certificate
    user_cert = /root/opc/certificates/certificate.pem
    user_key = /root/certs/privateKey
    session_timeout = 1200.0
    watchdog_time = 5.0
    watchdog_timeout = 2.0
    default_sampling_interval = 0.0
    publishing_interval = 1.0
    keep_alive_count = 5
    local_namespace_array = uri1, uri2, uri3
    
  3. Save the file.
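As with the settings file, you can sanity-check the server stanzas before relying on them. This sketch inlines an example stanza and verifies two points from the descriptions above: the discovery URL uses the opc.tcp form shown in this topic's examples, and certificate authentication provides both a certificate path and a key path.

```python
import configparser

# Example stanza inlined for illustration; normally read it from
# $SPLUNK_HOME/etc/apps/Splunk_TA_opc/local/splunk_ta_opc_server.conf.
SAMPLE = """
[server_public_1]
discovery_url = opc.tcp://10.0.3.32:58810
authentication_type = Certificate
user_cert = /root/opc/certificates/certificate.pem
user_key = /root/certs/privateKey
"""

parser = configparser.ConfigParser()
parser.read_string(SAMPLE)
for name in parser.sections():
    stanza = parser[name]
    assert stanza["discovery_url"].startswith("opc.tcp://")
    if stanza.get("authentication_type") == "Certificate":
        assert "user_cert" in stanza and "user_key" in stanza
print("server stanzas look valid")
```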

Set up HTTP event collector tokens to route incoming data

Configure one or more HTTP event collector tokens to route data into your Splunk Enterprise deployment.

Prerequisites
To perform these steps, you or your Splunk Enterprise administrator must have already configured at least one HTTP event collector token on your data ingestion node. See Configure an HTTP event collector token.

You can configure HTTP event collector tokens in Splunk Web or in the configuration files.

Set up tokens using Splunk Web

Follow these steps on your data ingestion management node, usually a heavy forwarder.

  1. In the add-on Configuration tab, click HTTP Collector.
  2. Click Add.
  3. Fill out the fields to configure a token.
    Name: A unique name for your HTTP event collector configuration.
    URI: The URI of the endpoint where this HTTP event collector receives data in your Splunk platform deployment. Include the port on which the HTTP event collector is listening. Use https rather than http if your HTTP event collector has Enable SSL checked. For example, https://192.0.2.1:8088.
    Token: The HTTP event collector token that you or your Splunk Enterprise administrator configured.
    Refresh Interval: The number of seconds to wait before refreshing the HTTP event collector configuration to check for changes.
  4. Click Add.
  5. Add additional tokens by repeating these steps for each token.

Next: Configure data ingestion with the Splunk Add-on for OPC.

Set up tokens using the configuration files

Follow these steps on your data ingestion management node, usually a heavy forwarder.

  1. Create a splunk_ta_opc_hec.conf file in the local folder of the add-on:
    • $SPLUNK_HOME/etc/apps/Splunk_TA_opc/local/ on *nix.
    • %SPLUNK_HOME%\etc\apps\Splunk_TA_opc\local\ on Windows.
  2. Add stanzas following the guidelines in splunk_ta_opc_hec.conf.spec. The following is an example splunk_ta_opc_hec.conf file:
    [MyHECConfiguration]
    hec_uri = https://192.0.2.1:8088
    hec_token = 240bc08a-7b67-46be-9f36-a6c5f455c1d9
    refresh_interval = 60
    
  3. Save the file.

Next: Configure data ingestion with the Splunk Add-on for OPC.

Last modified on 03 January, 2020

This documentation applies to the following versions of Splunk Add-on for OPC (Legacy): 1.0.0, 1.0.1

