Configure data ingestion with the Splunk Add-on for OPC
Configure data ingestion by defining data source groups from which you want to collect data. You can create multiple data source groups for each server. For each data source group, you can configure different HTTP event collectors and indexes to which to send the data.
You can configure data source groups using Splunk Web or using the configuration files:
- Configure a data source group in Splunk Web.
- Configure a data source group in the configuration files.
Configure a data source group in Splunk Web
Perform the following configurations on your data ingestion management node, usually a heavy forwarder.
Prerequisites
- Complete all the set up steps described in Set up the Splunk Add-on for OPC.
- Be logged in as a user with the admin_all_objects capability.
Steps
- Log in to Splunk Web.
- In the app bar on the left, click Splunk Add-on for OPC.
- Click Data Source Groups.
- Click Create New Input.
- Give your data source group a Name.
- Select either Metric or Event to choose the type of data that you want to collect in this data source group.
- Select a Server. Based on your data type and server choices, the Splunk Add-on for OPC performs a search for relevant assets in your OPC server.
- While the search runs, enter the name of an Index where you want to store the data from this data source group. If you are using this add-on together with Splunk Industrial Asset Intelligence (IAI), you must specify a metrics index for metrics data and an event index for event data. Work with your Splunk Enterprise administrator to specify an appropriate index for the data you are collecting in this group.
- Select an HTTP Event Collector from the list.
- In the Data to Collect section, browse or search to find the metric data points or events that you want to collect in this data source group.
- Preview the property names and values for the data points by clicking on them.
- Select the check boxes for the metric data points or events from which you want to collect data.
- (Optional) In the Properties section, override the values for the items that you selected. You can edit individual fields directly in the table. For metrics, you can also make bulk edits to all fields other than asset and metric name. For example, if you are ingesting data for use in Splunk IAI, you might want to update your asset names to match the user-friendly asset names in the asset hierarchy that you or a Splunk IAI administrator defined to model your industrial asset structure. See Model your asset hierarchy in Splunk IAI in Administer Splunk Industrial Asset Intelligence for information about modeling asset structures.
- When you are satisfied with your data ingestion configuration for this group, click Create.
Your data ingestion group is enabled and data ingestion begins immediately. You can disable it at any time using the slider in the Data Collection column.
Next: Because you configured data source groups in Splunk Web, you do not need to edit the configuration files. Continue to Validate data ingestion.
Configure a data source group in the configuration files
If you cannot connect to your OPC servers and set up inputs in Splunk Web, you can do so in the configuration files. Perform the following steps on your data ingestion management node, usually a heavy forwarder.
Prerequisites
- Before you begin, identify the nodes that you want to monitor using a tool such as the Prosys OPC UA client. Identify the NameSpaceIndex, IdentifierType, Identifier, and other parameters so that you can prepare a JSON object describing the nodes from which you want to collect data. For the IdentifierType field, numeric = 0, string = 1, GUID = 2, and ByteString = 3.
- Complete all the set up steps described in Set up the Splunk Add-on for OPC.
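The node-identification step above can be sketched in code. The following Python snippet is a hypothetical helper (not part of the add-on) that builds a single-line datasources JSON value using the IdentifierType codes listed above; the key names match the example stanzas later in this topic.

```python
import json

# IdentifierType codes from this topic:
# numeric = 0, string = 1, GUID = 2, ByteString = 3
IDENTIFIER_TYPE = {"numeric": 0, "string": 1, "guid": 2, "bytestring": 3}

def make_node(identifier, namespace_index, identifier_type):
    """Build one node object for the datasources JSON value.

    Hypothetical helper: the key names (IdentifierType, NamespaceIndex,
    Identifier) follow the example stanzas in this topic.
    """
    return {
        "IdentifierType": IDENTIFIER_TYPE[identifier_type],
        "NamespaceIndex": namespace_index,
        "Identifier": identifier,
    }

# One metric data source: the node to read plus its source node.
datasources = [{
    "NodeID": make_node("AirConditioner_1.Temperature", 0, "numeric"),
    "SourceNodeId": make_node("AirConditioner_2.Temperature", 1, "string"),
}]

# inputs.conf values must stay on a single line, so dump compactly.
print(json.dumps(datasources, separators=(",", ":")))
```

Paste the printed value into the datasources setting of your input stanza.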
Steps
- Create an inputs.conf file in the local folder of the add-on: $SPLUNK_HOME/etc/apps/Splunk_TA_opc/local/ on *nix, or %SPLUNK_HOME%\etc\apps\Splunk_TA_opc\local\ on Windows.
- Add input stanzas following the guidance in the inputs.conf.spec file in the add-on README folder. The following example is an inputs.conf stanza for metrics:

[opc_collect://MyMetricsDataSourceGroupName]
datasources = [{"NodeID":{"IdentifierType":0,"NamespaceIndex":0,"Identifier":"AirConditioner_1.Temperature"},"SourceNodeId":{"IdentifierType":1,"NamespaceIndex":1,"Identifier":"AirConditioner_2.Temperature"}}]
enable_data_collection = 0
hec_name = MyHECName
index = MyMetricsIndexName
node_type = metric
server = MyServerName
The following example is an inputs.conf stanza for events:

[opc_collect://MyEventDataSourceGroupName]
datasources = [{"SourceNodeID":{"Identifier":"Furnace_1","NamespaceIndex":0,"IdentifierType":1},"Event":{"EventFields":["EventId","EventType","SourceNode","SourceName","Time","ReceiveTime","Message","Severity","Client Handle"],"EventNotifier":"True"}}]
enable_data_collection = 0
hec_name = MyHECName
index = MyEventIndexName
node_type = event
server = MyServerName
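Because the datasources setting must be valid single-line JSON, a stray quote or bracket is an easy way to break data collection. As a minimal sketch (a hypothetical pre-deployment check, not part of the add-on), the following Python snippet parses a datasources value and verifies the node keys used in the example stanzas above:

```python
import json

def check_datasources(value, node_type):
    """Parse a datasources value from inputs.conf and do basic checks.

    Hypothetical helper: verifies that the value is valid JSON and that
    each entry carries the node object the example stanzas use ("NodeID"
    for metrics, "SourceNodeID" for events). Returns a list of problems.
    """
    problems = []
    try:
        entries = json.loads(value)
    except json.JSONDecodeError as err:
        return [f"invalid JSON: {err}"]
    required = "NodeID" if node_type == "metric" else "SourceNodeID"
    for i, entry in enumerate(entries):
        node = entry.get(required)
        if node is None:
            problems.append(f"entry {i}: missing {required}")
            continue
        for key in ("IdentifierType", "NamespaceIndex", "Identifier"):
            if key not in node:
                problems.append(f"entry {i}: {required} missing {key}")
    return problems

# A value with a missing opening quote, the kind of error this catches:
bad = '[{"SourceNodeID":{"Identifier":"Furnace_1"},"Event":{EventNotifier": "True"}}]'
print(check_datasources(bad, "event"))  # reports invalid JSON
```

Run a check like this against each stanza before you restart the forwarder.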
Validate data ingestion
To validate that data ingestion is working as expected, run the following searches on your search head.
For metrics data:
| mstats count(_value) where index=<yourindexname> AND sourcetype=opc:metrics AND metric_name=* by metric_id,metric_name,asset,quality,metric_type,opc_connection_id,unit
or
index=<yourindexname> AND sourcetype=opc:metrics
For event data:
index=<yourindexname> sourcetype=opc:alarms_events
If you do not see any search results, see the troubleshooting section: Data source groups are configured but no data is ingested.
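If you prefer to validate from the command line rather than Splunk Web, the same searches can be submitted through Splunk's REST search API. The following Python sketch assumes the standard /services/search/jobs/export endpoint, the default management port 8089, and placeholder credentials; substitute your own host, index name, and authentication:

```python
import urllib.parse
import urllib.request

def build_search_request(base_url, query):
    """Build a POST request for Splunk's search export REST endpoint.

    Sketch only: base_url is a placeholder management URL, and the
    caller must still attach an Authorization header before sending.
    """
    if not query.lstrip().startswith(("search", "|")):
        query = "search " + query  # REST searches need an explicit command
    body = urllib.parse.urlencode({"search": query, "output_mode": "json"})
    return urllib.request.Request(
        base_url + "/services/search/jobs/export",
        data=body.encode(),
        method="POST",
    )

req = build_search_request(
    "https://localhost:8089",                      # placeholder host
    "index=main sourcetype=opc:metrics | head 5",  # use your index name
)
# To actually run it, add credentials and open the request, for example:
# req.add_header("Authorization", "Bearer <token>")
# with urllib.request.urlopen(req) as resp: print(resp.read())
```

The mstats validation search above starts with a pipe, so the helper passes it through unchanged.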
Edit or clone data source groups
To change the metric data points or events or adjust the property value overrides, click Edit in the Actions column.
To clone a data collection group configuration, click Clone in the Actions column. Cloning a data source group is useful if you want to send data to a staging environment and a production environment, or if you want to test a small change to a configuration in a test index temporarily.
This documentation applies to the following versions of Splunk Add-on for OPC (Legacy): 1.0.0, 1.0.1