Splunk® Data Stream Processor

Use the Data Stream Processor



On October 30, 2022, all 1.2.x versions of the Splunk Data Stream Processor reached their end of support date. See the Splunk Software Support Policy for details. For information about upgrading to a supported version, see the Upgrade the Splunk Data Stream Processor topic.

Troubleshoot lookups to the Splunk Enterprise KV Store

Use this page to troubleshoot common issues with lookup connections to the Splunk Enterprise KV Store.

You are experiencing latency or performance issues with a KV Store lookup

If you are experiencing performance or latency issues in an active pipeline with a connection to a Splunk Enterprise KV Store, make sure you are sizing your Splunk Enterprise environment appropriately.

Cause: You do not have an appropriately sized distributed Splunk Enterprise environment

If you want to connect to a Splunk Enterprise KV Store in a distributed Splunk Enterprise environment, you must make sure that your Splunk Enterprise environment is sized appropriately. The Splunk Enterprise KV Store can support approximately 45,000 requests per second per search head cluster node. For example, a search head cluster with three nodes can handle approximately 135,000 requests per second. To perform lookups to a Splunk Enterprise KV Store, the lookup function makes repeated requests to the KV Store. If your distributed Splunk Enterprise environment is not sized appropriately, your DSP pipeline might receive data at a higher rate than it can process, resulting in backpressure. Therefore, the best practice is to scale your Splunk Enterprise search head cluster to handle your peak pipeline throughput.
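As a rough sketch of the capacity arithmetic above, assuming the approximate figure of 45,000 requests per second per search head cluster node (the `cluster_capacity` helper is illustrative only, not part of any Splunk API):

```python
# Approximate per-node KV Store capacity, per this topic.
KVSTORE_RPS_PER_NODE = 45_000

def cluster_capacity(search_head_nodes):
    """Approximate peak KV Store request throughput for a search head cluster."""
    return search_head_nodes * KVSTORE_RPS_PER_NODE

# A three-node search head cluster handles roughly 135,000 requests per second.
print(cluster_capacity(3))  # 135000
```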

Use the following steps to calculate how many search head cluster nodes you need. These steps assume that you have already created a connection to a Splunk Enterprise KV Store and are using that connection in an active pipeline.

  1. From the UI, open the active pipeline containing your KV Store lookup.
  2. Find the lookup function in your pipeline and copy the Events Per Second (EPS) number to a preferred location.
  3. Estimate the cache miss rate of your lookup connection. For assistance with this, contact Splunk Support.
  4. Get the batch_size of your KV Store lookup.
    1. Log in to the Splunk Cloud Services CLI.
      ./scloud login
    2. Get details about your connections. Locate the KV Store connection from the returned list and copy the batch_size value to a preferred location. If you do not see the batch_size value, then your connection uses the default batch_size of 1000.
      ./scloud streams list-connections
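If you save the connection list to a file, you can extract the batch size programmatically. The JSON shape below is purely illustrative; the actual `scloud streams list-connections` response format may differ, and `get_batch_size` is a hypothetical helper. The sketch falls back to the default batch_size of 1000 when the value is absent:

```python
import json

# Illustrative excerpt of a connection listing; the real response
# returned by `scloud streams list-connections` may be shaped differently.
response = json.loads("""
{
  "items": [
    {"id": "abc123", "name": "my-kvstore-lookup", "data": {"batch_size": 100}},
    {"id": "def456", "name": "other-connection", "data": {}}
  ]
}
""")

def get_batch_size(connections, name, default=1000):
    """Return batch_size for the named connection.

    Connections that do not specify batch_size use the default of 1000.
    """
    for conn in connections["items"]:
        if conn["name"] == name:
            return conn.get("data", {}).get("batch_size", default)
    raise KeyError(name)

print(get_batch_size(response, "my-kvstore-lookup"))  # 100
print(get_batch_size(response, "other-connection"))   # 1000
```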

Now that you have the Events Per Second, the Batch Size, and the Cache Miss Rate, you can calculate approximately how many search head cluster nodes you need using the following formula.

(Events Per Second / Batch Size) * Cache Miss Rate = Requests per second to the KV Store

As an example, assume that you have the following:

  • A lookup function processing 8,000,000 Events Per Second (EPS).
  • A lookup batch size of 100 records.
  • A cache miss rate of 70%.

Using the formula above, your pipeline is sending (8,000,000 / 100) * .7 = 56,000 requests per second to the Splunk Enterprise KV Store. Since each search head cluster node can handle approximately 45,000 requests per second, a request load of 56,000 requests per second would require a search head cluster that contains at least 2 nodes. You can also reduce the request load by increasing the batch_size or the cache_size of the KV Store lookup. See the Connect to the Splunk Enterprise KV Store using the Streams API section for more information on these two settings, and see About lookup cache quotas for more information about lookup cache sizes.
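The formula and worked example above can be sketched in Python. The `required_nodes` helper is hypothetical, and the 45,000 requests-per-second per-node figure is the approximation given in this topic:

```python
import math

# Approximate per-node KV Store capacity, per this topic.
KVSTORE_RPS_PER_NODE = 45_000

def required_nodes(events_per_second, batch_size, cache_miss_rate):
    """Apply the sizing formula from this topic:

    (Events Per Second / Batch Size) * Cache Miss Rate
        = requests per second to the KV Store,

    then divide by per-node capacity and round up to get a node count.
    """
    rps = (events_per_second / batch_size) * cache_miss_rate
    return rps, math.ceil(rps / KVSTORE_RPS_PER_NODE)

# Worked example: 8,000,000 EPS, batch size 100, 70% cache miss rate.
rps, nodes = required_nodes(8_000_000, 100, 0.7)
print(round(rps), nodes)  # 56000 2
```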

Last modified on 14 June, 2021

This documentation applies to the following versions of Splunk® Data Stream Processor: 1.2.1, 1.2.2-patch02, 1.2.4, 1.2.5, 1.3.0, 1.3.1, 1.4.0, 1.4.1, 1.4.2, 1.4.3

