All DSP releases prior to DSP 1.4.0 use Gravity, a Kubernetes orchestrator that has been announced as end-of-life. We have replaced Gravity with an alternative component in DSP 1.4.0. Therefore, we will no longer provide support for versions of DSP prior to DSP 1.4.0 after July 1, 2023. We advise all of our customers to upgrade to DSP 1.4.0 in order to continue to receive full product support from Splunk.
Troubleshoot the Data Stream Processor
Review this topic if you are having difficulties with the Splunk Data Stream Processor (DSP).
Support
To report bugs or receive additional support, do the following:
- Ask questions and get answers through community support at Splunk Answers.
- If you have a support contract, file a case using the Splunk Support Portal. See Support and Services.
- If you have a support contract, contact Splunk Customer Support.
- To get professional help with optimizing your Splunk software investment, see Splunk Services.
When contacting Splunk Customer Support, provide the following information:
- Pipeline ID. To view the ID of a pipeline, open the pipeline in the Data Stream Processor UI, then click the pipeline options button and click Update pipeline metadata.
- Pipeline name.
- Summary of the problem and any additional relevant information.
The UI shows deprecated components
When you view a pipeline in the UI, some functions are labeled as deprecated.
Cause
DSP contains deprecated functions for beta users. These functions are labeled as deprecated in the UI.
Solution
Use the supported functions instead.
Output of aggregate function is delayed
When you are previewing or sending data for an aggregate function, you might notice slow or no data output past your aggregate function.
Causes and solutions
The following table lists possible causes and solutions for the delay in aggregate output.
Cause | Solution |
---|---|
The volume of data you are sending is too low. | Send more data to your pipeline. |
If you are using the Amazon Kinesis Data Stream source function, you might be using too many shards in your Kinesis stream. | Decrease the number of shards in your Kinesis stream in your AWS console. |
If you are using the Kafka source function, you might be using too many Kafka partitions. | Lower the parallelism of the Flink job by setting the consumer property dsp.flink.parallelism in the Kafka function to a lower value. The dsp.flink.parallelism setting defaults to the number of Kafka partitions available in the Kafka topic that you are reading from. |
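As a sketch of the Kafka solution above, the consumer property can be set to a lower value in the consumer properties of the Kafka source function. The property name comes from this topic; the value 4 is purely illustrative, so choose a value appropriate for your stream:

```
dsp.flink.parallelism = 4
```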
Pipeline fails validation with "compile script error" and "mismatched input" messages
When you try to validate or activate your pipeline, the validation step fails, and you see an error message that starts with Pipeline is invalid. and ends with mismatched input [<field name>] expecting {'as', EXISTS, 'true', 'false', NULL, 'type', '[', '"', ''', '`', '{', '(', NOT, IN, LIKE, '+', '-', LOG_SPAN, TIME_SPAN, RELATIVE_TIME, IDENTIFIER, AT_IDENTIFIER, PARAMETER, RAW_STRING, REGEX, INTEGER, LONG, DOUBLE, FLOAT}.
Cause
Some words are reserved for the SPL2 syntax and have predefined meanings in the language. This error message is shown when a reserved keyword is used as a field name.
Solution
Enclose the field name in single quotation marks. For example, use 'dedup' as the field name instead of dedup. Alternatively, change the field name to something else. For a list of all reserved keywords, see Reserved words in the SPL2 Search Reference.
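As a sketch, quoting lets a reserved word such as dedup be used as a field name in an SPL2 expression. This example assumes a pipeline that reads from the Splunk DSP Firehose; the field value is hypothetical:

```
| from splunk_firehose()
| eval 'dedup' = "example value"
```

Without the single quotation marks, the parser would interpret dedup as the reserved keyword and fail validation with the mismatched input error.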
Pipeline fails validation with "Error type checking arguments" and "Argument cannot be assigned" messages
When you try to validate or activate your pipeline, the validation step fails, and you see an error message that starts with Error type checking arguments to function [<function name>] with ID: <function ID value>
and has the following as the second-last statement: Argument [<input>] cannot be assigned to argument [<argument name>] for function [<function signature>]
.
Cause
In the function mentioned in the error message, one of the arguments is set to a value that is the wrong data type. For example, an argument that only accepts integer values might be set to a string value.
One common cause of this error is an argument that refers to the body field, which is a union of multiple data types by default, when the body field hasn't been cast to a specific data type. See Casting a union data type in the Use the Data Stream Processor manual.
Solution
Confirm the accepted data type for each function argument, and make sure to specify values that are the correct data type.
See the Function Reference manual for detailed information about each function, including the accepted data types for each argument. For more information about data types, see Data types in the Use the Data Stream Processor manual.
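For example, the union-typed body field can be cast to a specific type before being passed to functions that require one. This sketch assumes the ucast scalar function described in the Function Reference and a pipeline reading from the Splunk DSP Firehose; confirm the exact signature in that manual:

```
| from splunk_firehose()
| eval body = ucast(body, "string", null)
```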
Pipeline fails validation with "Error type checking arguments" and "Function is not defined" messages
When you try to validate or activate your pipeline, the validation step fails, and you see an error message similar to the following: Error type checking arguments to function [<streaming function name>] with ID: [<function ID>]: Function [<scalar function name>] is not defined.
Cause
In the streaming function mentioned in the error message, one of the arguments is configured to use an invalid scalar function.
Solution
In the configuration settings of the streaming function, correct the invalid scalar function name. See the Function Reference manual for information about the supported scalar functions.
Pipeline is stuck with the RESTARTING or FAILED status
When you navigate to the Pipelines tab of the Data Management page and check the Status column, you see a pipeline that has the RESTARTING or FAILED status.
For more information about pipeline statuses, see Interpreting pipeline statuses.
Cause
Reasons why a pipeline might get stuck in the RESTARTING or FAILED status include, but are not limited to, the following:
- One of the functions in the pipeline is misconfigured. For example, an Amazon Kinesis Data Streams source function specifies the wrong Kinesis stream name.
- The connection to the data source or destination is misconfigured, or the credentials specified in the connection don't have the right permissions for connecting to the source or destination.
Solution
If your pipeline has the RESTARTING status, wait and monitor the pipeline status. In many situations, a pipeline will automatically recover from a RESTARTING state. However, if a pipeline is stuck in the RESTARTING status, cycling between RESTARTING and other statuses, or has the FAILED status, then try the following remediation steps:
- Check to see if your functions are configured correctly.
- Check to see if your connections are configured correctly, and verify that you have the right permissions to connect the third-party service to DSP. See Getting started with DSP data connections in the Connect to Data Sources and Destinations with DSP manual for more information.
- Clone the pipeline, and then activate the cloned pipeline. Once the cloned pipeline is activated, deactivate the original pipeline. If the original pipeline cannot be deactivated, deactivate the pipeline with Skip Savepoint enabled. See Using activation checkpoints to activate your pipeline for more information.
If the pipeline is still stuck with the RESTARTING or FAILED status, contact Splunk Customer Support for assistance.
Data is not streaming through your pipeline as expected
You successfully activate your pipeline, but data is not streaming through as expected. For example, data might not be entering the pipeline, or data might be getting dropped from the pipeline.
Cause
One or more functions in the pipeline might be configured incorrectly, preventing data from streaming through the pipeline.
Solution
- To identify the function that the data is failing to stream through, check the metrics displayed on the pipeline functions.
- Review and modify the configuration of the function as needed. See the Function Reference manual for details about the function.
The Splunk DSP Firehose, Forwarders Service, or Ingest Service source function is not receiving data
You successfully activate a pipeline that uses the Splunk DSP Firehose, Forwarders Service, or Ingest Service source function, but your data does not stream into the pipeline as expected.
Cause
Your data source might not be correctly configured to send data to DSP. For example, the outputs.conf file for your universal forwarder might contain incorrect settings.
Solution
Review the configuration settings in your data source to make sure that everything is correct. For additional guidance, see the Connect to Data Sources and Destinations with DSP manual.
The attributes field in your data contains unexpected fields
When previewing or sending data with the DSP event or metrics schema, you might see unexpected fields nested in the top-level attributes field.
Cause
Your data has the DSP event or metrics schema, but the top-level fields in your data do not have the expected data types. For example, if you converted the timestamp field from long to string, then the timestamp field is inserted as an attribute named timestamp, and the current time is used for the timestamp field.
Solution
Make sure that the following reserved field names match the expected type:
Field name | Expected data type |
---|---|
timestamp | long |
nanos | integer |
id | string |
host | string |
source | string |
source_type | string |
attributes | map |
kind | string |
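The type rules in the table above can be checked locally before you send data. The following is a sketch only, not a DSP API: the function name, the event layout, and the sample values are hypothetical, and the expected types are mapped onto their closest Python equivalents:

```python
# Expected types for reserved DSP event fields, per the table above.
# Python equivalents: long -> int, integer -> int, map -> dict.
EXPECTED_TYPES = {
    "timestamp": int,
    "nanos": int,
    "id": str,
    "host": str,
    "source": str,
    "source_type": str,
    "attributes": dict,
    "kind": str,
}

def check_reserved_fields(event: dict) -> list:
    """Return the names of reserved fields whose values have the wrong type."""
    return [
        name for name, expected in EXPECTED_TYPES.items()
        if name in event and not isinstance(event[name], expected)
    ]

# A timestamp sent as a string would be demoted into "attributes" by DSP,
# and the current time would be used instead.
bad = check_reserved_fields({"timestamp": "1667246400000", "host": "web-01"})
```

Running a check like this on a sample of your events can reveal which reserved field is being demoted into attributes.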
Duplicate events are arriving in your data destination
After sending data to the Splunk platform or to a supported third-party platform, you notice that your data contains duplicate events.
Cause
DSP guarantees at-least-once delivery of your data, so duplicates can occur. If a failure causes DSP to stop while processing data, then upon restart your data is reprocessed to ensure that no data is lost. This can result in some data being duplicated in your data destinations.
Solution
This is expected behavior. For performance reasons and to minimize duplicate events, the best practice is to have as few pipelines as possible delivering the same events.
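Because at-least-once delivery can redeliver events, a downstream consumer can also drop duplicates itself. The following is a local sketch, not a DSP feature, and it assumes your events carry a stable unique id field:

```python
def dedupe(events, seen=None):
    """Drop events whose "id" has already been processed.

    `seen` holds IDs across batches so redelivered events are skipped.
    """
    seen = set() if seen is None else seen
    unique = []
    for event in events:
        if event["id"] not in seen:
            seen.add(event["id"])
            unique.append(event)
    return unique

# Event "a" was redelivered after a restart; only the first copy is kept.
batch = [{"id": "a"}, {"id": "b"}, {"id": "a"}]
```

In practice the set of seen IDs would need a bound (for example, a time window), since it grows with every new event.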
DSP shows that my data is making it through my pipeline, but I can't find my data in my Splunk index
The Monitoring Console provides prebuilt dashboards with detailed topology and performance information about your Splunk Enterprise deployment. See About the Monitoring Console.
HTTP Event Collector dashboards
The Monitoring Console comes with prebuilt dashboards for monitoring the HTTP Event Collector (HEC). To interpret the HEC dashboard panels correctly, be aware that the Data Received and Indexed panel shows data as "indexed" even when the data is sent to a deleted or disabled index. The HEC dashboards show the data that is acknowledged by the indexer acknowledgment feature, even if that data isn't successfully indexed.
For more information about the specific HTTP event collector dashboards, see HTTP Event Collector dashboards.
The HTTP event collector dashboards show all indexes, even if they are disabled or have been deleted.
Pipeline fails to deactivate with error "Failed to queue pipeline for deactivation"
When you try to deactivate a pipeline, the deactivation fails and DSP returns an error message similar to the following: Failed to stop pipeline <pipeline ID>: <error code>: Could not complete the operation. Number of retries has been exhausted.
Cause
Reasons why a pipeline might fail to deactivate include, but are not limited to, the following:
- The deactivation request was sent before the pipeline was completely activated. The pipeline must be completely activated before it can be deactivated.
- DSP could not create a savepoint upon deactivation. Savepoints save the progress state of each function in the pipeline so that your pipeline picks up right where it left off when you reactivate it. However, if the pipeline cannot create a savepoint, then the pipeline fails to deactivate. To learn more about savepoints, see Using activation checkpoints to activate your pipeline.
Solution
Wait until the pipeline status says "Activated" and you see pipeline metrics for the activated pipeline indicating that data is flowing through the pipeline before deactivating the pipeline.
If your pipeline still cannot be deactivated, then deactivate the pipeline with Skip Savepoint enabled. This skips the creation of a new savepoint. Because DSP also adds checkpoints frequently and automatically as an additional recovery mechanism, skipping the savepoint on deactivation should not result in major data loss. If you deactivate the pipeline with Skip Savepoint enabled, then you might need to reactivate the pipeline with the Skip Restore State setting enabled.
- Log in to the Splunk Cloud Services CLI.
./scloud login
- Deactivate the pipeline with Skip Savepoint enabled.
./scloud streams deactivate-pipeline --id <pipeline_id> --skip-savepoint true
- (Optional) Modify your pipeline as needed.
- Save and activate the pipeline. If the pipeline cannot be activated, activate it with Skip Restore State enabled. This reactivates the pipeline without attempting to restore any state from a savepoint or a checkpoint. Data loss may occur when you use Skip Restore State. See Data pipeline activation options for more details.
- Using the Splunk Cloud Services CLI:
./scloud streams activate-pipeline --id <pipeline_id> --skip-restore-state true
- Using the UI:
- From the Canvas View, click Activate and select Skip Restore State.
You stop sending data, but your data destination is still receiving data from DSP
After you stop sending data from DSP to a destination, you might still see some events being ingested into your data destination.
Cause
This behavior might occur if DSP has a backlog of data that has not been processed.
Solution
This is expected behavior. DSP will continue to send data until the backlog has been processed, even if the pipeline is inactive or in the RESTARTING state.
Cannot log in to SCloud
When you log in to SCloud, you might see the following error:
error: failed to get session token: failed to get valid response from csrfToken endpoint: failed to get valid response from csrfToken endpoint: parse <ipaddr>/csrfToken: first path segment in URL cannot contain colon
Cause
You are using standard HTTP authentication instead of HTTPS.
Solution
Confirm that your auth-url and host-url include https. See Get Started with SCloud.
This documentation applies to the following versions of Splunk® Data Stream Processor: 1.4.0, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5