Splunk® Data Stream Processor

Use the Data Stream Processor

Splunk Data Stream Processor will reach its end of sale on April 3, 2023, and its end of life on February 28, 2025. If you are an existing DSP customer, reach out to your account team for more information.

Back up, restore, and share pipelines using Streams JSON

You can back up a pipeline by saving the Streams JSON associated with it. A saved copy lets you restore the pipeline if it is accidentally changed or deleted.

Create a backup of a pipeline

To create a backup of a pipeline, follow these steps.

  1. From the Data Stream Processor UI, find the pipeline that you want to save a copy of.
  2. Choose one of the following options:
    • Click the three dots in the pipeline row, and select Clone.
    • Click the pipeline to open the pipeline canvas, click the More Options menu, and select Update Pipeline Metadata. Expand the Streams JSON Import/Export box, then copy and save the Streams JSON somewhere safe.

You can also share a pipeline with people outside of your organization by sending them the Streams JSON shown in the Import/Export box.

You now have a backup that you can use to restore your pipeline if something goes wrong.
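If you saved the Streams JSON to a file, you can confirm that your copy parses before relying on it. Here is a quick check using jq, the same tool used in the SCloud steps later in this topic. The file name backup.json is only an illustration.

    # Pretty-prints the file if it is valid JSON; prints a parse error otherwise.
    jq . backup.json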

Create or restore a pipeline using Streams JSON in the UI

Follow these steps to create or restore a pipeline using Streams JSON in the Data Stream Processor UI. If you backed up your pipeline by cloning it, you can navigate directly to the cloned pipeline instead.

  1. From the Data Stream Processor homepage, click Build pipeline and select any data source.
  2. From the Data Stream Processor canvas, click on the More Options menu, and select Update Pipeline Metadata.
  3. Expand the Streams JSON Import/Export box, and delete the text that is in the box.
  4. Paste the Streams JSON of your desired pipeline into the Streams JSON Import/Export box. For example, try the following Streams JSON, which represents a pipeline that reads data from the Splunk Firehose and sends it to the default Splunk Enterprise instance associated with the Data Stream Processor. If you want to sanity-check your own backup JSON before pasting it, see the sketch after these steps.
    {
      "edges": [
        {
          "sourceNode": "458132a3-04c4-4246-8f60-8fcb7e5d8516",
          "sourcePort": "output",
          "targetNode": "e5b5a571-95fd-486a-9654-446373b2d435",
          "targetPort": "input"
        }
      ],
      "nodes": [
        {
          "op": "read_splunk_firehose",
          "id": "458132a3-04c4-4246-8f60-8fcb7e5d8516",
          "attributes": {},
          "resolvedId": "read_splunk_firehose"
        },
        {
          "op": "write_index",
          "id": "e5b5a571-95fd-486a-9654-446373b2d435",
          "attributes": {
            "dsl": {
              "dataset": "literal(\"main\");",
              "module": "literal(\"\");"
            }
          },
          "resolvedId": "write_index:collection<record<R>>:expression<string>:expression<string>"
        }
      ],
      "rootNode": [
        "e5b5a571-95fd-486a-9654-446373b2d435"
      ]
    }
  5. Give your pipeline a name.
  6. Click Update.
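Before pasting Streams JSON from a backup, you can optionally verify that it is internally consistent. The following jq sketch, assuming your backup is saved in a file named pipeline.json (an illustrative name), checks that every edge endpoint and every root node references a declared node id:

    # Exits non-zero (via -e) if any edge or rootNode references an unknown node id.
    jq -e '(.nodes | map(.id)) as $ids
      | all(.edges[]; (.sourceNode | IN($ids[])) and (.targetNode | IN($ids[])))
        and all(.rootNode[]; IN($ids[]))' pipeline.json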

Create or restore a pipeline using SCloud

You can create or restore a pipeline using SCloud. If you want to back up every pipeline in your tenant at once, see the sketch after these steps.

  1. From the command line, log in to SCloud.
    scloud login
    
  2. If you already have the Streams JSON of a pipeline that you want to restore, skip to step 6. Otherwise, continue to the next step.
  3. List the pipelines that are currently in your tenant.
    scloud streams list-pipelines
  4. Find the pipeline that you want to export, and copy its id.
  5. Export the pipeline.
    scloud streams get-pipeline <ID> | jq .data > <pipeline-name>.json
  6. Create a pipeline using SCloud.
    scloud streams create-pipeline -name "<pipeline-name>" -data-file <path-to-json-file> -bypass-validation true
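To back up every pipeline at once, you can combine the commands above into a small shell loop. This is a sketch, not an official SCloud feature: it assumes that the output of scloud streams list-pipelines is JSON whose items array contains objects with id and name fields, so verify the actual output of your SCloud version before relying on it.

    # Sketch: export every pipeline to <name>.json in the current directory.
    # Assumes list-pipelines output has the shape {"items": [{"id": ..., "name": ...}]};
    # pipeline names containing "/" would need extra handling as file names.
    scloud streams list-pipelines | jq -r '.items[] | "\(.id) \(.name)"' |
    while read -r id name; do
        scloud streams get-pipeline "$id" | jq .data > "${name}.json"
    done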