Splunk® Data Stream Processor

Use the Data Stream Processor


Processing data in motion using the Splunk Data Stream Processor

The Splunk Data Stream Processor is a data stream processing solution that collects data in real time, processes it, and provides at-least-once delivery of that data to one or more destinations of your choice.

As a user, you can enrich, transform, and analyze your data during the processing stage, gaining increased control over and visibility into your data before it reaches its destination. If your data contains noisy or sensitive information, you can use the Data Stream Processor to remove or sanitize that data before it is indexed, reducing security risks and letting you focus on only the data that you care about. You can enrich your data with data from CSV files or even with machine learning. When your data looks the way that you want it to look, you can use the Data Stream Processor to route that data to the destination where it has the most value.
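For example, a pipeline that masks Social Security numbers before the data reaches an index might look like the following SPL2-style sketch. This is an illustration only: the `splunk_firehose` source function, the `index` sink arguments, and the regular expression are assumptions, so check the DSP function reference for the exact syntax in your version.

```
// Read records from the DSP firehose, mask SSN-like patterns in the
// body field, then send the sanitized records to a Splunk index.
| from splunk_firehose()
| eval body = replace(cast(body, "string"), /\d{3}-\d{2}-\d{4}/, "xxx-xx-xxxx")
| into index("", "main");
```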

You can also use the Data Stream Processor to design custom data pipelines that transform data and route it between a wide variety of data sources and data destinations, even if they support different data formats. For example, you can route data from a Splunk forwarder to an Amazon S3 bucket. The following diagram summarizes the data sources that you can collect data from and the data destinations that you can send your data to when using the Data Stream Processor:

The Splunk Data Stream Processor can collect data from sources such as Splunk forwarders, the Ingest service, the HTTP Event Collector (HEC), and Syslog data sources. The Splunk Data Stream Processor can send data to destinations such as Splunk Enterprise, Amazon Kinesis Data Streams, and Amazon S3.
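The forwarder-to-S3 route mentioned above could be sketched as a pipeline along these lines. Again, this is a hedged example: the `forwarders` source function, the `write_to_s3` sink, and the connection ID are placeholders rather than exact DSP syntax, so consult the connector documentation for the real function names and arguments.

```
// Receive events from Splunk forwarders, keep only syslog events,
// and write them to an Amazon S3 bucket through a configured connection.
| from forwarders("forwarders:all")
| where source_type = "syslog"
| into write_to_s3("my-s3-connection-id", "my-example-bucket");
```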

Get started with the Data Stream Processor

Before you can get started with the Data Stream Processor, you might need to perform a few setup steps. See the Install and administer the Data Stream Processor manual for instructions.

To get familiar with basic features and workflows, check out the Tutorial.

To start routing and processing your data, you need to create connections to your data sources and destinations of your choice, and then build data pipelines that define how to move the data and how to transform it along the way. The following list shows the documentation that you can refer to for more information about common workflows and use cases:

- Connect to specific data sources and destinations: see Connect to Data Sources and Destinations with the Data Stream Processor.
- Summarize data based on specific conditions: see Summarize records with the stats function.
- Format and organize data.
- Filter and remove unwanted data.
- Mask or obfuscate sensitive data: see Masking sensitive data in the Data Stream Processor.
- Enrich your streaming data with data from a CSV file or a Splunk Enterprise KV Store: see About lookups.
- Use machine learning to enrich your streaming data: see About the Streaming ML Plugin.
- Send data to multiple destinations.
Last modified on 02 September, 2021

This documentation applies to the following versions of Splunk® Data Stream Processor: 1.2.1

