You can back up a Data Stream Processor (DSP) pipeline by saving the SPL2 associated with it. DSP does not currently provide a pipeline versioning mechanism, so you may find it helpful to back up your pipelines manually and store the SPL2 in a version control system in case something unexpected happens.
Create a backup of a pipeline
Follow these steps to create a backup of a pipeline.
- From the DSP homepage, click Data Management and find the pipeline that you want to save a copy of.
- Open the pipeline, and choose one of the following options. If the pipeline is active, click Edit before continuing.
- Click the three dots in the pipeline row, and click Clone. Make any modifications that you want to the cloned pipeline, and maintain the original pipeline as a precautionary measure.
- Click SPL next to the pipeline name to toggle to the SPL2 builder. Copy the SPL2 for your pipeline and save it to your preferred location for storing backups.
If you selected a previously active pipeline, do not save this pipeline after toggling to the SPL2 builder. Simply save the SPL2 for this pipeline in your preferred location, and close the pipeline without saving your changes. There is a known issue that can cause data loss when you use the toggle on previously active pipelines. See the known issues page.
You now have a backup to fall back on if something goes wrong with your pipeline.
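If you store these backups in a version control system, you can commit the saved SPL2 like any other text file. The following is a minimal sketch that assumes you use Git and have pasted the copied SPL2 into a file; the directory name, file name, and SPL2 shown are only illustrative.
mkdir -p dsp-pipeline-backups && cd dsp-pipeline-backups
git init    # only needed the first time
# Save the SPL2 that you copied from the SPL2 builder, for example:
cat > my_pipeline.spl2 <<'EOF'
| from splunk_firehose() | fields - nanos, id | into dev_null();
EOF
git add my_pipeline.spl2
git commit -m "Back up SPL2 for my_pipeline"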
Restore a pipeline using SPL2 in the UI
Follow these steps to restore a pipeline using SPL2 in the DSP UI.
- From the DSP homepage, click Data Management > Create New Pipeline.
- Select the SPL2 Builder.
- Paste the SPL2 of the pipeline that you want to restore into the SPL2 Pipeline Builder box. As an example, try pasting the following SPL2.
| from splunk_firehose() | fields - nanos, id | into dev_null();
- Click Build pipeline.
- Click Save.
- Give your pipeline a name, and click Save again.
Back up and restore a pipeline using SCloud
You can back up and restore a pipeline using SCloud. These steps assume that you have jq installed.
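If you are not sure whether jq is available on your system, you can check from the command line. The output shown is only an example.
jq --version
# prints the installed version, for example: jq-1.6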
- From the command line, log in to SCloud.
./scloud login
- If you already have the Streams JSON of the pipeline that you want to restore, skip to step 6. Otherwise, continue to the next step.
- List the pipelines that are currently in your tenant.
./scloud streams list-pipelines
- Find the pipeline that you want to back up, and copy its id.
- Export the pipeline's Streams JSON by running the following command. This saves the underlying Streams JSON associated with your pipeline in a designated JSON file.
./scloud streams get-pipeline --id <ID> | jq .data > <pipeline-name>.json
- (Optional) Restore the pipeline using SCloud.
./scloud streams create-pipeline --name "<pipeline-name>" --input-datafile <path-to-json-file> --bypass-validation true
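To back up every pipeline in your tenant in one pass, you can wrap the commands above in a small shell loop. The following is a minimal sketch: it assumes bash, that you have already run ./scloud login, that the list-pipelines output is JSON containing an items array with id and name fields for each pipeline, and that pipeline names are safe to use as file names.
# Export the Streams JSON of every pipeline to one file per pipeline.
./scloud streams list-pipelines | jq -r '.items[] | [.id, .name] | @tsv' |
while IFS=$'\t' read -r id name; do
    ./scloud streams get-pipeline --id "$id" | jq .data > "${name}.json"
    echo "Saved ${name}.json"
done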