Splunk® Data Stream Processor

Use the Data Stream Processor

DSP 1.2.0 is impacted by the CVE-2021-44228 and CVE-2021-45046 security vulnerabilities from Apache Log4j. To fix these vulnerabilities, you must upgrade to DSP 1.2.4. See Upgrade the Splunk Data Stream Processor to 1.2.4 for upgrade instructions.

On October 30, 2022, all 1.2.x versions of the Splunk Data Stream Processor will reach their end of support date. See the Splunk Software Support Policy for details.
This documentation does not apply to the most recent version of Splunk® Data Stream Processor. For documentation on the most recent version, go to the latest release.

Troubleshoot the Splunk Data Stream Processor

Review this topic if you are having difficulties with the Splunk Data Stream Processor (DSP).

Support

To report bugs or receive additional support, contact Splunk Customer Support.

When contacting Splunk Customer Support, provide the following information:

Information to provide Notes
Pipeline ID To view the ID of a pipeline, open the pipeline in DSP, click the options (ellipses) button, and then click Update Pipeline Metadata.
Pipeline name N/A
DSP version To view your DSP version, in the product UI, click Help & Feedback > About.
DSP diagnostic report A DSP diagnostic report contains all DSP application logs as well as system and monitoring logs. To generate this report, navigate to the working directory of your DSP master node and then run the following command: sudo ./report. This command creates a diagnostic report named dsp-report-<timestamp>.tar.gz in the working directory.
Summary of the problem and any additional relevant information N/A

Share your pipeline for troubleshooting

You can use the SPL2 Pipeline Builder UI to send the full SPL2 of your pipeline to someone who isn't a member of your tenant. Splunk Support may ask you to do this in order to assist you in troubleshooting your pipeline.

  1. From the Data Management page, click on the pipeline that you want to get the SPL2 for.
  2. (Optional) If this pipeline is currently active, click Edit to enter the Canvas view.
  3. From the Canvas view, click on the SPL button to toggle to the SPL2 Pipeline Builder.
  4. Copy the SPL2 and send it to Splunk Support.

You can now share your pipeline with people who aren't in your tenant.

The UI shows deprecated components

You see functions labeled as deprecated in the DSP UI.

Cause

The Splunk Data Stream Processor contains deprecated functions for beta users. These functions are labeled as deprecated in the UI.

Solution

Use the supported functions instead.

Output of aggregate function is delayed

When you are previewing or sending data for an aggregate function, you might notice slow or no data output past your aggregate function.

Causes and solutions

The following table lists possible causes and solutions for the delay in aggregate output.

Cause Solution
The volume of data you are sending is too low. Send more data to your pipeline.
If you are using the Amazon Kinesis Data Stream source function, you might be using too many shards in your Kinesis stream. Decrease the number of shards in your Kinesis stream in your AWS console.
If you are using the Kafka source function, you might be using too many Kafka partitions. Lower the parallelism of the Flink job by setting the dsp.flink.parallelism consumer property in the Kafka source function to a lower value. The dsp.flink.parallelism setting defaults to the number of partitions in the Kafka topic that you are reading from.

Pipeline fails validation with "compile script error" and "mismatched input" messages

When you try to validate or activate your pipeline, the validation step fails, and you see an error message that starts with Pipeline is invalid. and ends with mismatched input [<field name>] expecting {'as', EXISTS, 'true', 'false', NULL, 'type', '[', '"', ''', '`', '{', '(', NOT, IN, LIKE, '+', '-', LOG_SPAN, TIME_SPAN, RELATIVE_TIME, IDENTIFIER, AT_IDENTIFIER, PARAMETER, RAW_STRING, REGEX, INTEGER, LONG, DOUBLE, FLOAT}.

Cause

Some words are reserved for the SPL2 syntax and have predefined meanings in the language. This error message is shown when a reserved keyword is used as a field name.

Solution

Enclose the field name in single quotation marks. For example, use 'dedup' as the field name instead of dedup. Alternatively, change the field name to something else. For a list of all reserved keywords, see Reserved words in the SPL2 Search Reference.
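
For example, the following SPL2 fragment quotes the reserved word dedup so that it can be used as a field name. This is a minimal sketch: it assumes your pipeline reads from the Splunk DSP Firehose (the splunk_firehose source function), and the sink function is omitted.

    | from splunk_firehose()
    | eval 'dedup' = lower(source_type)

Without the single quotation marks around dedup, validation fails with the mismatched input error shown above.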

Pipeline fails validation with "Error type checking arguments" and "Argument cannot be assigned" messages

When you try to validate or activate your pipeline, the validation step fails, and you see an error message that starts with Error type checking arguments to function [<function name>] with ID: <function ID value> and has the following as the second-last statement: Argument [<input>] cannot be assigned to argument [<argument name>] for function [<function signature>].

Cause

In the function mentioned in the error message, one of the arguments is set to a value that is the wrong data type. For example, an argument that only accepts integer values might be set to a string value.

One common cause of this error is when an argument refers to the body field, which is a union of multiple data types by default, but the body field hasn't been cast to a specific data type. See Casting a union data type in the Use the Data Stream Processor manual.

Solution

Confirm the accepted data type for each function argument, and make sure to specify values that are the correct data type.

See the Function Reference manual for detailed information about each function, including the accepted data types for each argument. For more information about data types, see Data types in the Use the Data Stream Processor manual.
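
For example, if an argument expects a string value but refers to the union-typed body field, cast body before using it. The following fragment is a minimal sketch that assumes your pipeline reads from the Splunk DSP Firehose and that you want body as a string; check the ucast entry in the Function Reference for the exact arguments in your version.

    | from splunk_firehose()
    | eval body = ucast(body, "string", null)

After the cast, functions that expect a string argument can safely reference body.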

Pipeline fails validation with "Error type checking arguments" and "Function is not defined" messages

When you try to validate or activate your pipeline, the validation step fails, and you see an error message similar to the following: Error type checking arguments to function [<streaming function name>] with ID: [<function ID>]: Function [<scalar function name>] is not defined.

Cause

In the streaming function mentioned in the error message, one of the arguments is configured to use an invalid scalar function.

Solution

In the configuration settings of the streaming function, correct the invalid scalar function name. See the Function Reference manual for information about the supported scalar functions.
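
For example, a typo in a scalar function name inside an eval argument triggers this error. The following before-and-after fragment is a minimal sketch that assumes lower is a supported scalar function, and lowercase is not defined, in your DSP version:

    Invalid:    | eval st = lowercase(source_type)
    Corrected:  | eval st = lower(source_type)

The first line fails validation because lowercase is not a defined scalar function; the second uses the supported lower function instead.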

Pipeline is stuck with the RESTARTING or FAILED status

When you navigate to the Pipelines tab of the Data Management page and check the Status column, you see a pipeline that has the RESTARTING or FAILED status.

For more information about pipeline statuses, see Interpreting pipeline statuses.

Cause

Reasons why a pipeline might get stuck in the RESTARTING or FAILED status include, but are not limited to, the following:

  • One of the functions in the pipeline is misconfigured. For example, an Amazon Kinesis Data Streams source function specifies the wrong Kinesis stream name.
  • The connection to the data source or destination is misconfigured, or the credentials specified in the connection don't have the right permissions for connecting to the source or destination.

Solution

If your pipeline has the RESTARTING status, wait and monitor the pipeline status. In many situations, a pipeline will automatically recover from a RESTARTING state. However, if a pipeline is stuck in the RESTARTING status, cycling between RESTARTING and other statuses, or has the FAILED status, then try the following remediation steps:

  1. Check to see if your functions are configured correctly.
  2. Check to see if your connections are configured correctly, and verify that you have the right permissions to connect the third-party service to DSP. See Getting started with DSP data connections in the Connect to Data Sources and Destinations with DSP manual for more information.
  3. Clone the pipeline, and then activate the cloned pipeline. Once the cloned pipeline is activated, deactivate the original pipeline. If the original pipeline cannot be deactivated, deactivate the pipeline with Skip Savepoint enabled. See Using activation checkpoints to activate your pipeline for more information.

If the pipeline is still stuck with the RESTARTING or FAILED status, contact Splunk Customer Support for assistance.

Data is not streaming through your pipeline as expected

You successfully activate your pipeline, but data is not streaming through as expected. For example, data might not be entering the pipeline, or data might be getting dropped from the pipeline.

Cause

One or more functions in the pipeline might be configured incorrectly, preventing data from streaming through the pipeline.

Solution

  1. To identify the function that the data is failing to stream through, check the metrics displayed on the pipeline functions.
  2. Review and modify the configuration of the function as needed. See the Function Reference manual for details about the function.

The Splunk DSP Firehose, Forwarders Service, or Ingest Service source function is not receiving data

You successfully activate a pipeline that uses the Splunk DSP Firehose, Forwarders Service, or Ingest Service source function, but your data does not stream into the pipeline as expected.

Cause

Your data source might not be correctly configured to send data to DSP. For example, the outputs.conf file for your universal forwarder might contain incorrect settings.

Solution

Review the configuration settings in your data source to make sure that everything is correct. For additional guidance, see the Connect to Data Sources and Destinations with DSP manual.

The attributes field in your data contains unexpected fields

When previewing or sending data with the DSP event or metrics schema, you might see unexpected fields nested in the top-level attributes field.

Cause

Your data has the DSP event or metrics schema, but the top-level fields in your data do not have the expected data types. For example, if you converted the timestamp field from long to string, then your original value is moved into the attributes field as an attribute named timestamp, and the current time is used for the top-level timestamp field.

Solution

Make sure that the following reserved field names match the expected type:

Field name Expected data type
timestamp long
nanos integer
id string
host string
source string
source_type string
attributes map
kind string
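
For example, if an upstream function converted timestamp from long to string, convert it back to a long before the end of your pipeline. The following fragment is a minimal sketch; it assumes the parse_long conversion function is available in your DSP version (see the conversion functions in the Function Reference).

    | eval timestamp = parse_long(timestamp)

With timestamp restored to a long value, it stays a top-level field instead of being moved into attributes.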

Duplicate events are arriving in your data destination

After sending data to the Splunk platform or to a supported third-party platform, you notice that your data contains duplicate events.

Cause

The Splunk Data Stream Processor guarantees at-least-once delivery of your data, so duplicates can occur. If a failure causes DSP to stop while processing data, your data is reprocessed upon restart to ensure that no data is lost. This can result in some data being duplicated in your data destinations.

Solution

This is expected behavior. For performance reasons and to minimize duplicate events, it is a best practice to have as few pipelines as possible delivering the same events.

DSP shows that my data is making it through my pipeline, but I can't find my data in my Splunk index

The Monitoring Console provides prebuilt dashboards with detailed topology and performance information about your Splunk Enterprise deployment. See About the Monitoring Console.

HTTP Event Collector dashboards

The Monitoring Console comes with prebuilt dashboards for monitoring the HTTP Event Collector. To interpret the HTTP Event Collector dashboard panels correctly, be aware that the Data Received and Indexed panel shows data as "indexed" even when the data is sent to a deleted or disabled index. The HEC dashboards show the data that is acknowledged by the indexer acknowledgment feature, even if that data isn't successfully indexed.

For more information about the specific HTTP event collector dashboards, see HTTP Event Collector dashboards.

The HTTP event collector dashboards show all indexes, even if they are disabled or have been deleted.

Pipeline fails to deactivate with error "Failed to queue pipeline for deactivation"

When you try to deactivate a pipeline, the deactivation fails and DSP returns an error message similar to the following: Failed to stop pipeline <pipeline ID>: <error code>: Could not complete the operation. Number of retries has been exhausted.

Cause

Reasons why a pipeline might fail to deactivate include, but are not limited to, the following:

  • The deactivation request was sent before the pipeline was completely activated. The pipeline must be completely activated before it can be deactivated.
  • DSP could not create a savepoint upon deactivation. Savepoints ensure that your pipeline picks up right where it left off when you reactivate it by saving the progress state of each function in the pipeline. However, if the pipeline cannot create a savepoint, then the pipeline fails to deactivate. To learn more about savepoints, see Using activation checkpoints to activate your pipeline.

Solution

Wait until the pipeline status says "Activated" and you see pipeline metrics for the activated pipeline indicating that data is flowing through the pipeline before deactivating the pipeline.

If your pipeline still cannot be deactivated, deactivate the pipeline with Skip Savepoint enabled. This skips the creation of a new savepoint. Because DSP also adds checkpoints frequently and automatically as an additional recovery mechanism, skipping the savepoint on deactivation should not result in major data loss. If you deactivate the pipeline with Skip Savepoint enabled, you might need to reactivate the pipeline with the Skip Restore State setting enabled.

  1. Log in to the Splunk Cloud Services CLI.
    ./scloud login
  2. Deactivate the pipeline with Skip Savepoint enabled.
    ./scloud streams deactivate-pipeline --id <pipeline_id> --skip-savepoint true
  3. (Optional) Modify your pipeline as needed.
  4. Save and activate the pipeline. If the pipeline cannot be activated, activate it with Skip Restore State enabled. This reactivates the pipeline without attempting to restore any state from a savepoint or a checkpoint. Data loss may occur when you use Skip Restore State. See Data pipeline activation options for more details.
    • Using the Splunk Cloud Services CLI:
    ./scloud streams activate-pipeline --id <pipeline_id> --skip-restore-state true
    • Using the UI:
      • From the Canvas View, click Activate and select Skip Restore State.

You stop sending data, but your data destination is still receiving data from DSP

After you stop sending data from DSP to a destination, you might still see some events being ingested into your data destination.

Cause

This behavior might occur if DSP has a backlog of data that has not been processed.

Solution

This is expected behavior. DSP will continue to send data until the backlog has been processed, even if the pipeline is inactive or in the RESTARTING state.

Cannot log in to SCloud

When you log in to SCloud, you might see the following error:

error: failed to get session token: failed to get valid response from csrfToken endpoint: failed to get valid response from csrfToken endpoint: parse <ipaddr>:31000/csrfToken: first path segment in URL cannot contain colon

Cause

You are using standard HTTP authentication instead of HTTPS.

Solution

Confirm that your auth-url and host-url settings use the https scheme. For example, a value of <ipaddr>:31000 must be changed to https://<ipaddr>:31000. See Get Started with SCloud.
