Splunk® Data Stream Processor

Use the Data Stream Processor

On April 3, 2023, Splunk Data Stream Processor reached its end of sale, and it will reach its end of life on February 28, 2025. If you are an existing DSP customer, contact your account team for more information.

All DSP releases prior to DSP 1.4.0 use Gravity, a Kubernetes orchestrator that has reached its end of life. Gravity has been replaced with an alternative component in DSP 1.4.0. Therefore, after July 1, 2023, we will no longer provide support for versions of DSP prior to 1.4.0. We advise all customers to upgrade to DSP 1.4.0 to continue receiving full product support from Splunk.

Back up, restore, and share pipelines

All Splunk Data Stream Processor (DSP) pipelines can be represented in JSON. The JSON representation of a pipeline is also known as Streams JSON. You can back up a pipeline by saving the JSON associated with it. Currently, there is no pipeline versioning mechanism, so you may find it helpful to manually back up your pipelines and save them in a version control system in case something unexpected happens.
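For example, you can inspect the top-level structure of an exported pipeline with jq. This is a minimal sketch: the file name my-pipeline.json is a placeholder, and the exact set of keys depends on your DSP version, so verify the schema in your own export.

    # List the top-level keys of an exported pipeline backup.
    # "my-pipeline.json" is a placeholder for your own exported file.
    jq 'keys' my-pipeline.json

    # The output typically includes keys such as "edges" and "nodes",
    # which describe the functions in the pipeline and the connections
    # between them, but the exact schema varies by DSP version.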

Create a backup of a pipeline

Follow these steps to create a backup of a pipeline. This is useful if you want to share a pipeline with someone outside of your organization, or if you want to save an older version of a pipeline for troubleshooting purposes.

Export a backup of your pipeline, either for storage outside of your DSP environment or to share with someone outside of your organization:

  1. From the home page, select Pipelines and find the pipeline that you want to save a copy of.
  2. Click the pipeline to view it in the Canvas View. If the pipeline is active, click Edit before continuing.
  3. Click the options (⋯) button, and click Export pipeline.
  4. Save the downloaded JSON file to your preferred location for storing backups.
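Because DSP has no built-in pipeline versioning, you might keep exported backups in a version control system, as suggested above. The following is a minimal sketch using Git, where the repository path and file names are placeholders:

    # Initialize a backup repository once, then commit each exported pipeline.
    cd ~/dsp-pipeline-backups          # placeholder path
    git init                           # only needed the first time
    cp ~/Downloads/my-pipeline.json .  # placeholder for the exported file
    git add my-pipeline.json
    git commit -m "Back up my-pipeline before editing"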

Create a backup of your pipeline by cloning it:

  1. From the home page, click Pipelines and find the pipeline that you want to save a copy of.
  2. Click the options (⋯) button in the pipeline row, and click Clone.
  3. Assign a name to the cloned pipeline and click Save. Make any modifications that you want to the cloned pipeline, and keep the original pipeline unchanged as a precaution.

You now have a backup of a pipeline that you can share with someone outside of your organization or keep in case something goes wrong with your pipeline.

Restore a pipeline

You can restore any exported pipeline by importing it.

  1. From the Pipelines page, select any source to enter the Canvas View.
  2. From the Canvas View, click the options (⋯) button, and click Import pipeline.
  3. Browse to and select the JSON file that you previously exported. Click Import.
  4. Give your pipeline a name by clicking the pencil (rename) button. Optionally, give your pipeline a description by clicking the options (⋯) button and then selecting Update Pipeline Metadata.
  5. Click Save.
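Before importing, you might want to confirm that the backup file is still valid JSON, especially if it was stored or edited outside of DSP. A quick check with jq, where the file name is a placeholder:

    # jq prints a parse error and exits non-zero if the file is not valid JSON.
    jq empty my-pipeline.json && echo "valid JSON" || echo "invalid JSON"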

Back up and restore a pipeline using the Splunk Cloud Services CLI

You can back up and restore a pipeline using the Splunk Cloud Services CLI. Use the CLI to export the Streams JSON of the pipeline, and then recreate the pipeline from the exported Streams JSON. Streams JSON is an abstract representation of a data pipeline in JSON syntax.

The following steps assume that you have jq installed.

  1. From the command line, log in to the Splunk Cloud Services CLI.
    ./scloud login
    
  2. To back up a pipeline by exporting the corresponding Streams JSON, do the following:
    1. List the pipelines that are currently in your tenant.
      ./scloud streams list-pipelines
    2. Find the pipeline that you want to back up, and copy its id.
    3. Export the Streams JSON of the pipeline. The following command saves the Streams JSON associated with your pipeline to a designated JSON file.
      ./scloud streams get-pipeline --id <ID> | jq .data > <pipeline-name>.json
  3. To restore a pipeline by recreating it based on the Streams JSON, run the following command.
    ./scloud streams create-pipeline --name "<pipeline-name>" --input-datafile <path-to-json-file> --bypass-validation true
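Extending these commands, you can back up every pipeline in your tenant in one pass. The following is a sketch, not an official scloud feature: it assumes that the output of list-pipelines is a JSON object whose items array contains an id and a name field for each pipeline, so verify the output shape in your environment before relying on it.

    # Back up every pipeline in the tenant to one JSON file per pipeline.
    # Assumes list-pipelines returns {"items": [{"id": "...", "name": "..."}, ...]}.
    ./scloud streams list-pipelines | jq -r '.items[] | "\(.id) \(.name)"' |
    while read -r id name; do
        ./scloud streams get-pipeline --id "$id" | jq .data > "${name}.json"
    done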


