Splunk® Data Stream Processor

Use the Data Stream Processor


Performance expectations for sending data from a data pipeline to Splunk Enterprise

This page provides reference information from performance testing that Splunk, Inc. performed when sending data from the Splunk Data Stream Processor (DSP) to a Splunk Enterprise indexer with the Write to Splunk Enterprise sink function. Use this information to optimize the performance of pipelines that end with the Write to Splunk Enterprise function.

Many factors affect performance results, including file compression, event size, number of concurrent pipelines, deployment architecture, and hardware. Treat these results as reference information only; they do not predict performance in every environment.

To go beyond these general recommendations, contact Splunk Services to work on optimizing performance in your specific environment.

Improve performance

To maximize your performance, consider taking the following actions:

  • Enable batching of events.
  • Do not use an SSL-enabled Splunk Enterprise server.
  • Set async = true in the Write to Splunk Enterprise function.
  • Disable HEC acknowledgments in the Write to Splunk Enterprise function.
  • Run DSP on a 5 GigE full duplex network.
  • Parallelize the Data Stream Processor with your data source. The parallelism of Data Stream Processor jobs is determined by how many partitions or shards are in the upstream source. See the sketch after this list.
    • When using Kafka as a data source, use multiple partitions (for example, 16) in the Kafka topic that your DSP pipeline reads from.
    • When using Kinesis as a data source, use multiple shards (for example, 16) in the Kinesis stream that your DSP pipeline reads from.
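Partition and shard counts are set on the source system, not in DSP. As a minimal sketch of what that provisioning might look like, the following Python snippet creates a Kafka topic with 16 partitions using the kafka-python admin client and a Kinesis stream with 16 shards using boto3. The broker address, topic and stream names, region, and replication factor are placeholder assumptions; 16 matches the example count given above.

# Hypothetical provisioning sketch: create an upstream Kafka topic and a
# Kinesis stream with 16 partitions/shards so a DSP pipeline reading from
# them can run with up to 16 parallel tasks. Names, addresses, and the
# region are placeholders.
import boto3
from kafka.admin import KafkaAdminClient, NewTopic

# Kafka: 16 partitions in the topic that the DSP pipeline reads from.
admin = KafkaAdminClient(bootstrap_servers="kafka-broker.example.com:9092")
admin.create_topics([
    NewTopic(name="dsp-input-topic", num_partitions=16, replication_factor=3)
])

# Kinesis: 16 shards in the stream that the DSP pipeline reads from.
kinesis = boto3.client("kinesis", region_name="us-east-1")
kinesis.create_stream(StreamName="dsp-input-stream", ShardCount=16)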

Configure your Write to Splunk Enterprise sink function with the following additional parameters for performance optimization.

Screenshot: the Write to Splunk Enterprise sink function with the recommended parameters filled in.
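The Write to Splunk Enterprise function delivers events to Splunk Enterprise over the HTTP Event Collector (HEC), so the batching and acknowledgment settings above govern how those HEC requests behave. To illustrate why they help, here is a minimal Python sketch that sends one batched HEC request carrying many events, with indexer acknowledgment left disabled so no ack-polling round trip is needed. The host, port, token, and event payloads are placeholder assumptions, not values from this topic.

# Illustration only: a single batched HEC request carrying several events.
# Batching amortizes HTTP and TLS overhead across many events, and leaving
# indexer acknowledgment disabled avoids a follow-up ack-polling round trip.
# Host, port, token, and event contents are placeholders.
import json
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

events = [
    {"event": {"message": f"sample event {i}"}, "sourcetype": "dsp:example"}
    for i in range(100)
]

# HEC accepts multiple JSON event objects concatenated in one request body.
payload = "".join(json.dumps(e) for e in events)

response = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    data=payload,
)
response.raise_for_status()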

Last modified on 14 January, 2020

This documentation applies to the following versions of Splunk® Data Stream Processor: 1.0.0

