
Troubleshoot the Splunk Add-on for OPC

Stuck? Use these resources and tips to troubleshoot the Splunk Add-on for OPC.

Support and other resources

Splunk provides support for the latest version of this add-on if you have a Splunk for Industrial IoT license. Upgrade to the latest version of the add-on available on Splunkbase before logging a case using the Splunk Support Portal.

For assistance installing, upgrading, or scaling a Splunk Enterprise deployment, contact the Splunk Professional Services team.

Additional resources for Splunk software are available on the Splunk documentation site and Splunk Answers.

Use the Troubleshooting dashboard

The add-on contains a Troubleshooting dashboard that you can use to view connection issues, current performance, and recent errors from logs. To access the dashboard, open the add-on on your data ingestion management node and click Troubleshooting.

Adjust the log level in the add-on

If you want to increase the verbosity of the add-on logs, adjust the log level.

  1. Log in to Splunk Web on your data ingestion management node.
  2. Click Configuration, and then click Logging.
  3. Adjust the log level using the drop-down menu.

Search the add-on logs

Search the internal index for logs specific to this add-on.

To search all internal logs for this add-on, run this search:

index=_internal source=*splunk_ta_opc*

To search a specific log, specify the log file name in the source field of your search.

Log file Description
splunk_ta_opc_producer.log This log includes errors related to communication with the OPC Server and how the add-on creates subscriptions to data source groups.
splunk_ta_opc_transformer.log This log includes errors if there are any issues in converting the server notifications into JSON events.
splunk_ta_opc_consumer.log This log includes errors if the HEC configuration is invalid, for example if the HEC endpoint or token is incorrect, or if the HEC endpoint requires SSL but the connection is configured to use http instead of https.

These logs are stored in the $SPLUNK_HOME/var/log/splunk directory on *nix or the %SPLUNK_HOME%\var\log\splunk directory on Windows.
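For example, to review only error events from the producer log, you can run a search similar to the following. The ERROR keyword is an illustrative filter, and the leading wildcard accounts for the full file path stored in the source field.

index=_internal source=*splunk_ta_opc_producer.log ERROR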

Data source groups are configured but no data is ingested

If you complete all the steps in Set up the Splunk Add-on for OPC and Configure data ingestion with the Splunk Add-on for OPC, but no data is reaching your indexes, perform the following checks:

  1. Check that the index names you configured in the data source group configurations match indexes that exist on your forwarder and indexers (see the example search after this list).
  2. Check that the inputs are enabled on the Data Source Group listing screen.
  3. Check for errors in Splunk Web and the add-on logs.
  4. Check your RabbitMQ queue size. See New data is not arriving on my receiver node.
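To compare the index names in your data source group configurations against the indexes that actually exist, you can run a search like the following from your search head. This is a general Splunk search rather than an add-on-specific command, and it assumes your role has permission to use the rest command.

| rest /services/data/indexes | table title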

New data is not arriving on my receiver node

If new data stops arriving in your Splunk Enterprise deployment from your OPC servers:

  1. Check how many messages are queued in RabbitMQ by opening an SSH connection to your RabbitMQ host and running rabbitmqctl list_queues (see the sample command after this list).
  2. Examine the output and check the number that appears after "celery".
    • If you see the number 0, then your data is not stuck in the queue. Search the log files to look for other issues.
    • If you see a number higher than zero, your data is queued and may take some time to arrive in your Splunk Enterprise deployment. If this happens frequently, consider scaling your data collection with additional data collection management nodes, each configured to collect from different data source groups.
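For reference, you can limit the output to queue names and pending message counts with a command like the following; name and messages are standard rabbitmqctl queue info items.

rabbitmqctl list_queues name messages

A line such as "celery 0" in the output means the celery queue currently holds no pending messages.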

If your RabbitMQ queue size consistently increases and never goes to zero, check that the nodes you selected in your data source group configurations are available in your OPC servers.

Error message "Unable to initialize modular input"

If you receive a message in Splunk Web that says "Unable to initialize modular input "opcua_collect" defined inside the app "Splunk_TA_opcua": Introspecting scheme=opcua_collect: script running failed (exited with code 1)." the message means that your environment variables are not set or are set with an incorrect path. See Set environment variables.

Error message when filling out Configuration tab

If you see error messages such as "Invalid input value" when you are filling out the Splunk Add-on for OPC Configuration tab, the add-on was unable to generate a certificate with the information you provided in the fields. To locate the cause of the error, search the splunk_ta_opc_certificate.log log file.

Server connection issues when configuring a data source group using a DA or AE server

If you connected successfully to your OPC server but see errors when using that server in a data source group configuration, check that you placed your oem.ini license file in the same directory as the UaGateway when you installed it on that server. If it was not correctly installed, delete the UaGateway and reinstall it with the license. See UaGateway requirements for OPC DA or AE servers.

Error message "Received event for unconfigured/disabled/deleted index"

If you receive a message in Splunk Web that says "Received event for unconfigured/disabled/deleted index=<your index name> with source="<your source>" host="<your host>" sourcetype="sourcetype::opc:metrics". So far received events from 1 missing index(es)." the message means that your index is not configured. See Create one or more indexes to store your data.

Celery processes are still running even when Splunk Enterprise is not running

If Splunk Enterprise triggers the celery process to initiate a connection to your RabbitMQ server and then you stop Splunk Enterprise before that connection succeeds, the celery process keeps running. This is likely to occur if your RabbitMQ server is running on a different machine than your data ingestion management node and the value for broker_url is incorrect in your local splunk_ta_opc_settings.conf file.

To resolve the issue, manually kill the orphan celery process, and then revise the broker_url setting to match this format: amqp://<username>:<password>@<IP address of RabbitMQ service>:<port>.
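On a *nix data ingestion management node, the cleanup might look like the following sketch. The process ID is whatever ps reports for the orphaned worker, and the credentials, IP address, and port in the broker_url value are placeholders (5672 is the default AMQP port for RabbitMQ).

ps aux | grep celery
kill <PID>

Then set broker_url in your local splunk_ta_opc_settings.conf, for example:

broker_url = amqp://opcuser:changeme@192.0.2.10:5672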

Last modified on 12 November, 2018

This documentation applies to the following versions of Splunk Add-on for OPC (Legacy): 1.0.0, 1.0.1

