On October 30, 2022, all 1.2.x versions of the Splunk Data Stream Processor will reach their end of support date. See the Splunk Software Support Policy for details.
Upgrade the Splunk Data Stream Processor from 1.1.0 to 1.2.0
This topic describes how to upgrade the Splunk Data Stream Processor (DSP) from 1.1.0 to 1.2.0.
Before you upgrade
Before you upgrade DSP, review the known issues related to the upgrade process. Depending on which functions you have in your pipelines, you might need to perform additional steps to restore those pipelines after the upgrade is complete. Workarounds are available for some of these known issues.
As an alternative, you can uninstall DSP 1.1.0 and do a clean install of DSP 1.2.0. To do this, see the following topics:
- Back up your Splunk Data Stream Processor deployment
- Back up, restore, and share pipelines using SPL2
- Uninstall the Splunk Data Stream Processor
- Install the Splunk Data Stream Processor
Step 1: Prepare universal forwarder pipelines for upgrading without data loss
In DSP 1.2.0, the partition key used by the Forwarders service has been updated. If you are ingesting data into DSP with a Splunk forwarder, then you must perform the following steps to upgrade gracefully from 1.1.0 to 1.2.0. If you are not using the Merge Events function in any active pipelines to process universal forwarder data, then you do not need to follow these prerequisite steps.
- Check to see how many S2S pod replicas are currently running. Note this number, because you will need it when you scale the replicas back up at the end of this procedure.
kubectl -n dsp get deploy/ingest-s2s
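If you'd rather capture the value in your shell than write it down, a jsonpath query returns just the configured replica count. This is an optional convenience built on standard kubectl output options, not part of the documented procedure; the S2S_REPLICAS variable name is only an example:
# Store the configured replica count for reuse in the later scale-up step.
S2S_REPLICAS=$(kubectl -n dsp get deploy/ingest-s2s -o jsonpath='{.spec.replicas}')
echo "$S2S_REPLICAS"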
- Scale the number of S2S pod replicas to 0. Enter the following from a master node.
kubectl -n dsp scale --replicas=0 deploy/ingest-s2s
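Optionally, confirm that all S2S pods have terminated before moving on. Pod names are normally prefixed with the deployment name, so the following filter should return nothing once the scale-down is complete; this is a general Kubernetes check, not part of the documented procedure:
kubectl -n dsp get pods | grep ingest-s2s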
- Open the DSP UI and navigate to Data Management > Pipelines.
- Open all active pipelines that are ingesting data from a Splunk Forwarder.
- Click on the Merge Events function and wait until the function metrics show that there are no records being passed through the function.
- Click on the Key By function and configure the function to key by _partition_key:
cast(map_get(attributes, "_partition_key"), "string") AS _partition_key
- In the same Key By function, remove the previously required key fields: host, source, source_type, and forwarder_channel_id.
There is a new Universal Forwarder template in 1.2.0 that adds built-in rules to split and merge universal forwarder data based on the location of timestamps. If you want to use the new template, you can replace this pipeline with the Universal Forwarder template after upgrading. See Process data from a universal forwarder in DSP for information on the Universal Forwarder template.
- Click Activate > Allow non-restored state and reactivate the pipeline.
- Scale the number of S2S pod replicas back to its original value. This is the number that you noted in step 1.
kubectl -n dsp scale --replicas=X deploy/ingest-s2s
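If you captured the replica count in a variable as sketched in step 1, you can reuse it here instead of typing the number, and then wait until the deployment reports all replicas ready. Both commands are standard kubectl usage rather than part of the documented procedure:
kubectl -n dsp scale --replicas="$S2S_REPLICAS" deploy/ingest-s2s
kubectl -n dsp rollout status deploy/ingest-s2s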
Step 2: Disable the scheduled jobs
The scheduled jobs in each Amazon CloudWatch Metrics, Amazon S3, AWS Metadata, Google Cloud Monitoring, Microsoft 365, and Microsoft Azure Monitor source connector must be disabled before you upgrade DSP. If you do not deactivate all scheduled jobs in these connectors before upgrading your DSP deployment, the Kubernetes container image name used by these connectors is not updated. See the troubleshooting topic ImagePullBackoff status shown in Kubernetes after upgrading DSP for more information.
- Open the DSP UI and navigate to Data Management > Connections.
- Deactivate the schedule for each Amazon CloudWatch Metrics, Amazon S3, AWS Metadata, Google Cloud Monitoring, Microsoft 365, and Microsoft Azure Monitor source connector.
- Select the connection you want to edit.
- Toggle the Scheduled parameter off.
- Save your changes.
Step 3: Upgrade the Splunk Data Stream Processor
- Download the new Data Stream Processor tarball on the master node of your cluster.
- Extract the tarball.
tar xf <dsp-version>.tar
- Navigate to the extracted file.
cd <dsp-version>
- (Optional) If your environment has a small root volume (6 GB or less of free space) in /tmp, your upgrade might fail when you run out of space. Choose a different directory to write temporary files to during the upgrade process:
export TMPDIR=/<directory-on-larger-volume>
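Before deciding whether to set TMPDIR, you can check how much free space the volume backing /tmp actually has; df is a standard utility. If you do redirect temporary files, creating the target directory first avoids a failure partway through (the placeholder path below is the same one used above):
df -h /tmp
mkdir -p /<directory-on-larger-volume>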
- From the extracted file directory, run the upgrade script.
./upgrade
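Optionally, you can keep a record of the upgrade output for later troubleshooting. tee is a standard shell utility that writes output to a file while still displaying it on screen; the log file name is only an example:
./upgrade 2>&1 | tee dsp-upgrade.log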
- Upgrading can take a while. Upon completion, the following message is shown.
Waiting for DSP to startup .................... DSP startup completed
- (Optional) While waiting for the upgrade to complete, you can use the following command to monitor the progress of your upgrade. Run this command after you see the Waiting for DSP to startup message.
kubectl get pods -n dsp
When the following services have status RUNNING, the upgrade is complete: dsp, ingest-hec, ingest-s2s, splunk-streaming-rest, usr-mgmt-svc.
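If you'd rather not rerun the command by hand, the standard watch utility available on most Linux distributions can refresh the pod list for you. This is an optional convenience, not part of the documented procedure:
watch -n 10 kubectl get pods -n dsp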
You are now ready to use the latest version of DSP.
Step 4: Validate the upgrade
The Splunk Data Stream Processor upgrade is now complete. Any pipelines that were active before the upgrade are reactivated. When the upgrade is completed, DSP shows the following message: DSP startup completed.
- In the browser you use to access the DSP UI, clear the browser cache.
- Log in to DSP to confirm that your upgrade was successful.
https://<DSP_HOST>:30000/
User: dsp-admin
Password: <the dsp-admin password>
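If you want a quick scripted reachability check before opening a browser, a plain curl request against the same URL works. The -k flag skips TLS certificate verification, which you might need if your DSP deployment uses a self-signed certificate (an assumption about your environment):
curl -sk -o /dev/null -w '%{http_code}\n' https://<DSP_HOST>:30000/
A 200 or a redirect status code indicates that the UI is up; this does not replace logging in to verify the upgrade.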
After upgrading
Perform the following steps after upgrading the Splunk Data Stream Processor.
- On each node, delete the directories containing the old version of the Splunk Data Stream Processor.
rm -r <dsp-version-upgraded-from>
- Re-enable the schedules for the Amazon CloudWatch Metrics, Amazon S3, AWS Metadata, Google Cloud Monitoring, Microsoft 365, and Microsoft Azure Monitor connectors that were disabled in Step 2.
- There are some known issues that occur when you upgrade from 1.1.0 to 1.2.0. Review the Known issues for DSP topic, and follow any workarounds that apply to you.
- The default version of SCloud has changed from SCloud 1.0 to SCloud 4.0. If you were previously using SCloud 1.0, you need to reconfigure the SCloud tool. See Get started with SCloud.
- The DSP SDK has been upgraded to 1.2.0 and includes backward-incompatible changes. If you are using a custom-built plugin for your functions, see the What's new in the DSP SDK and Upgrade the DSP plugin from 1.1.0 to 1.2.0 topics.
After upgrading to the latest version of the Splunk Data Stream Processor, any command-line operations must be performed in the new upgraded directory on the master node.