Index routing configurations for Splunk Connect for Kafka
Index routing is an optional Splunk Connect for Kafka configuration that can be set up in either your Splunk software or your Kafka deployment. To route data to indexes based on your Kafka topics, configure index routing in your Kafka deployment. To route data to indexes based on the data stream, configure index routing in your Splunk platform deployment.
Configure index routing in your Kafka deployment
Use the following formatting to configure index routing when creating Splunk Connect for Kafka tasks:
topics splunk.indexes
topics
are the Kafka topics where data is stored. You can specify multiple topics in the same SinkTask. For each topic, you can also specify a Splunk platform index, source, and source type.
splunk.indexes
are the Splunk platform indexes where the data is delivered. When an index is specified for each topic, the data stream of that topic is delivered to the corresponding index. If only one index is specified, the data streams from all topics are delivered to that one index.
In this example, three topics (test-1, test-2, test-3) deliver all data to a single Splunk platform index named kafka.
"topics": "test-1,test-2,test-3", "splunk.indexes": "kafka"
To deliver the data to kafka-1, kafka-2, kafka-3 indexes, respectively, add each index to the task configuration.
"topics": "test-1,test-2,test-3", "splunk.indexes": "kafka-1,kafka-2,kafka-3"
Depending on your deployment, there could be another layer of index configuration on the Splunk platform through HTTP Event Collector (HEC) settings. If your deployment's HEC configuration is not set to overwrite indexes, the routing rules configured by Splunk Connect for Kafka are followed. Otherwise, HEC overwrites the indexes specified by the task.
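For reference, a HEC token's index behavior is controlled in inputs.conf on the Splunk platform side. The following is an illustrative sketch only; the token name, GUID, and index names are placeholders:

```ini
# inputs.conf on the Splunk platform instance receiving HEC traffic.
# All values below are illustrative placeholders.
[http://kafka-hec-token]
token = 00000000-0000-0000-0000-000000000000
# Default index used when an event does not specify one.
index = main
# Indexes this token is allowed to write to; include every index
# referenced by splunk.indexes in the connector task.
indexes = kafka-1,kafka-2,kafka-3
```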
Configure index routing in your Splunk platform deployment
On the Splunk platform side of your Kafka connector deployment, you can route logs using any indexed field. Configure the props.conf and transforms.conf Splunk configuration files on your Splunk platform indexers or Splunk platform heavy forwarders.
The following examples use fields from an AWS CloudWatch event.
{ "owner": "123456789012", "logGroup": "CloudTrail", "logStream": "123456789012_CloudTrail_us-east-1", "subscriptionFilters": ["Destination"], "messageType": "DATA_MESSAGE", "logEvents": { "id": "31953106606966983378809025079804211143289615424298221570", "timestamp": 1432826855000, "message": { "eventVersion": "1.03", "userIdentity": { "type": "Root" } } } }
Example 1: Route owner with ID 123456789012 to a Splunk production index
- Navigate to $SPLUNK_HOME/etc/system/local/ and create a props.conf file and a transforms.conf file.
- Update the props.conf file with this configuration:
[kafka:events] TRANSFORMS-index_routing = route_data_to_index_by_field_owner_id
- Update the $SPLUNK_HOME/etc/system/local/transforms.conf file with this configuration:
[route_data_to_index_by_field_owner_id]
REGEX = "(\w+)":"123456789012"
DEST_KEY = _MetaData:Index
FORMAT = prod
- Save your changes.
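As a sanity check outside of Splunk software, you can verify that the transform's regular expression captures the owner key from a raw event. This sketch uses Python's re module; the sample event string is abbreviated from the CloudWatch example above:

```python
import re

# The transform's regular expression: captures the JSON key whose
# value is the owner ID 123456789012.
pattern = re.compile(r'"(\w+)":"123456789012"')

# Abbreviated raw event from the CloudWatch example.
raw_event = '{"owner":"123456789012","logGroup":"CloudTrail"}'

match = pattern.search(raw_event)
print(match.group(1))  # prints "owner"
```

A match here means the event would have its _MetaData:Index key rewritten to prod by the transform.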
Example 2: Route AWS CloudWatch logs from a certain region to an index dedicated to that region
- Navigate to $SPLUNK_HOME/etc/system/local/ and create a props.conf file.
- Update the props.conf file with this configuration:
[kafka:events] TRANSFORMS-index_routing = route_data_to_index_by_aws_region
- Update the $SPLUNK_HOME/etc/system/local/transforms.conf file with this configuration:
[route_data_to_index_by_aws_region]
REGEX = "logStream":"(.*us-east-1)"
DEST_KEY = _MetaData:Index
FORMAT = aws-cloudwatch-us-east-1
- Save your changes.
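Similarly, you can check that this transform's regular expression matches only log streams ending in the target region. This sketch uses Python's re module with an abbreviated event string:

```python
import re

# The transform's regular expression: captures a logStream value
# that ends with the region suffix us-east-1.
pattern = re.compile(r'"logStream":"(.*us-east-1)"')

raw_event = '{"owner":"123456789012","logStream":"123456789012_CloudTrail_us-east-1"}'

match = pattern.search(raw_event)
print(match.group(1))  # prints "123456789012_CloudTrail_us-east-1"
```

An event from another region, such as a logStream ending in us-west-2, would not match, so it would keep the index assigned upstream.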
If your Splunk platform deployment has index clustering set up, make sure your props.conf and transforms.conf files are in sync on each indexer.
See the Managing Indexers and Clusters of Indexers manual to learn more about managing and updating indexer cluster configurations.
This documentation applies to the following versions of Splunk® Connect for Kafka: 2.0.1, 2.0.2