You can back up a pipeline by saving the Streams JSON associated with it. Keeping a copy of this JSON lets you restore the pipeline if it is modified or deleted unexpectedly.
Create a backup of a pipeline
To create a backup of a pipeline, follow these steps.
- From the Data Stream Processor UI, find the pipeline that you want to save a copy of.
- Choose one of the following options:
- Click the three dots in the pipeline row, and select Clone.
- Click the desired pipeline to open the pipeline canvas, click the More options menu, and select Update Pipeline Metadata. You can also share a pipeline with people outside of your organization by sending them the Streams JSON shown in the Import/Export box.
- Expand the Streams JSON Import/Export box.
- Copy the Streams JSON and save it in a safe location to back up the pipeline.
You now have a backup that you can use to restore your pipeline if something goes wrong.
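The Streams JSON is plain text, so you can store it anywhere you keep configuration backups. As a minimal sketch, assuming you pasted the copied JSON into a file named my-pipeline.json (a hypothetical name), you can pretty-print and syntax-check the backup with jq, the same tool used in the SCloud steps later in this topic:
jq . my-pipeline.json > my-pipeline-backup.json
If the file contains malformed JSON, jq prints a parse error and exits with a nonzero status instead of writing a usable backup.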
Create or restore a pipeline using Streams JSON in the UI
Follow these steps to create or restore a pipeline using Streams JSON in the Data Stream Processor UI. If you backed up your pipeline by cloning the original, navigate to your cloned pipeline instead.
- From the Data Stream Processor homepage, click Build pipeline and select any data source.
- From the Data Stream Processor canvas, click the More options menu, and select Update Pipeline Metadata.
- Expand the Streams JSON Import/Export box, and delete any text currently in the box.
- Paste the Streams JSON of your desired pipeline into the Streams JSON Import/Export box. For example, try using the following Streams JSON. This JSON represents a pipeline that reads data from the Splunk Firehose and sends it to the default Splunk Enterprise instance associated with the Data Stream Processor.
{ "edges": [ { "sourceNode": "458132a3-04c4-4246-8f60-8fcb7e5d8516", "sourcePort": "output", "targetNode": "e5b5a571-95fd-486a-9654-446373b2d435", "targetPort": "input" } ], "nodes": [ { "op": "read_splunk_firehose", "id": "458132a3-04c4-4246-8f60-8fcb7e5d8516", "attributes": {}, "resolvedId": "read_splunk_firehose" }, { "op": "write_index", "id": "e5b5a571-95fd-486a-9654-446373b2d435", "attributes": { "dsl": { "dataset": "literal(\"main\");", "module": "literal(\"\");" } }, "resolvedId": "write_index:collection<record<R>>:expression<string>:expression<string>" } ], "rootNode": [ "e5b5a571-95fd-486a-9654-446373b2d435" ] }
- Give your pipeline a name.
- Click Update.
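Before pasting, you can optionally inspect a saved Streams JSON file to confirm which functions the pipeline contains. This is a sketch that assumes the JSON is saved in a file named pipeline.json (a hypothetical name) and uses jq to list the op field of each node:
jq '.nodes[].op' pipeline.json
For the example JSON above, this prints "read_splunk_firehose" and "write_index", matching the source and destination functions of the pipeline.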
Create or restore a pipeline using SCloud
You can also create or restore a pipeline from the command line using SCloud.
- From the command line, log in to SCloud.
scloud login
- If you already have the Streams JSON of the pipeline that you want to restore, skip to the last step. Otherwise, continue to the next step.
- List the pipelines that are currently in your tenant.
scloud streams list-pipelines
- Find the pipeline that you want to export, and copy its id.
- Export the pipeline.
scloud streams get-pipeline <ID> | jq .data > <pipeline-name>.json
- Create a pipeline using SCloud.
scloud streams create-pipeline -name "<pipeline-name>" -data-file <path-to-json-file> -bypass-validation true
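If you want to back up every pipeline in your tenant at once, you can combine the commands above in a short shell loop. This is a sketch rather than a supported tool: it assumes that scloud streams list-pipelines returns JSON with an items array containing id and name fields, so adjust the jq paths if your SCloud version formats its output differently.
#!/bin/sh
# Export every pipeline in the tenant to its own JSON file.
# Assumes you are already logged in with: scloud login
scloud streams list-pipelines | jq -r '.items[] | "\(.id) \(.name)"' |
while read -r id name; do
    scloud streams get-pipeline "$id" | jq .data > "${name}.json"
done
To restore any of the exported pipelines, pass the corresponding file to scloud streams create-pipeline as shown in the last step above.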