Splunk® Data Stream Processor

Install and administer the Data Stream Processor



DSP 1.2.0 is impacted by the CVE-2021-44228 and CVE-2021-45046 security vulnerabilities from Apache Log4j. To fix these vulnerabilities, you must upgrade to DSP 1.2.4. See Upgrade the Splunk Data Stream Processor to 1.2.4 for upgrade instructions.

On October 30, 2022, all 1.2.x versions of the Splunk Data Stream Processor will reach their end of support date. See the Splunk Software Support Policy for details.
This documentation does not apply to the most recent version of Splunk® Data Stream Processor. For documentation on the most recent version, go to the latest release.

Upgrade the Splunk Data Stream Processor from 1.1.0 to 1.2.0

This topic describes how to upgrade the Splunk Data Stream Processor (DSP) from 1.1.0 to 1.2.0.

Before you upgrade

Before you upgrade DSP, review the known issues related to the upgrade process. Depending on which functions your pipelines use, you might need to perform additional steps to restore those pipelines after the upgrade is complete. Workarounds are available for some of these known issues.

As an alternative, you can uninstall DSP 1.1.0 and do a clean install of DSP 1.2.0. To do this, see the Uninstall the Splunk Data Stream Processor and Install the Splunk Data Stream Processor topics.

Step 1: Prepare universal forwarder pipelines for upgrading without data loss

In DSP 1.2.0, the partition key used by the Forwarders service has been updated. If you are ingesting data into DSP with a Splunk forwarder, then you must perform the following steps to upgrade gracefully from 1.1.0 to 1.2.0. If you are not using the Merge Events function in any active pipelines to process universal forwarder data, then you do not need to follow these prerequisite steps.

  1. Check how many S2S pod replicas are currently running. Note this number, because you will need it in step 9. For an example of the commands in this procedure, see the sketch after this list.
    kubectl -n dsp get deploy/ingest-s2s
  2. Scale the number of S2S pod replicas to 0. Enter the following from a master node.
    kubectl -n dsp scale --replicas=0 deploy/ingest-s2s 
  3. Open the DSP UI and navigate to Data Management > Pipelines.
  4. Open all active pipelines that are ingesting data from a Splunk Forwarder.
  5. Click the Merge Events function and wait until the function metrics show that no records are passing through the function.
  6. Click the Key By function and configure the function to key by _partition_key.
    cast(map_get(attributes, "_partition_key"), "string") AS _partition_key
    
  7. In the same Key By function, remove the previously required key fields: host, source, source_type, and forwarder_channel_id.

    There is a new Universal Forwarder template in 1.2.0 that adds built-in rules to split-and-merge Universal Forwarder data based on the location of timestamps. If you want to use the new template, you can replace this pipeline with the Universal Forwarder template after upgrading. See Process data from a universal forwarder in DSP for information on the Universal Forwarder template.

  8. Click Activate > Allow non-restored state and reactivate the pipeline.
  9. Scale the number of S2S pod replicas back to the original value that you noted in step 1.
    kubectl -n dsp scale --replicas=X deploy/ingest-s2s 
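The following sketch shows how steps 1, 2, and 9 fit together. The replica count appears in the READY column of the kubectl output; the deployment name comes from the steps above, and the counts shown here are only an example.

    # Record the current replica count (READY column, for example 2/2).
    kubectl -n dsp get deploy/ingest-s2s
    # NAME         READY   UP-TO-DATE   AVAILABLE   AGE
    # ingest-s2s   2/2     2            2           45d

    # Scale S2S ingestion down to 0 while you reconfigure the pipelines.
    kubectl -n dsp scale --replicas=0 deploy/ingest-s2s

    # After reactivating the pipelines, restore the recorded count (2 in this example).
    kubectl -n dsp scale --replicas=2 deploy/ingest-s2s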

Step 2: Disable the scheduled jobs

You must disable the scheduled jobs in each Amazon CloudWatch Metrics, Amazon S3, AWS Metadata, Google Cloud Monitoring, Microsoft 365, and Microsoft Azure Monitor source connector before you upgrade DSP. If you do not deactivate all scheduled jobs in these connectors before upgrading your DSP deployment, the Kubernetes container image name used by these connectors is not updated. See the troubleshooting topic ImagePullBackoff status shown in Kubernetes after upgrading DSP for more information.
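If a schedule is left enabled and a connector's container image name is not updated, the affected pods show the ImagePullBackoff status after the upgrade. As an informal check, not an official diagnostic, you can filter the pod list for that status from the master node:

    # List any DSP pods that are stuck pulling an outdated container image.
    kubectl -n dsp get pods | grep -i imagepullbackoff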

  1. Open the DSP UI and navigate to Data Management > Connections.
  2. Deactivate the schedule for each Amazon CloudWatch Metrics, Amazon S3, AWS Metadata, Google Cloud Monitoring, Microsoft 365, and Microsoft Azure Monitor source connector.
    1. Select the connection you want to edit.
    2. Toggle the Scheduled parameter off.
    3. Save your changes.

Step 3: Upgrade the Splunk Data Stream Processor

  1. Download the new Data Stream Processor tarball on the master node of your cluster.
  2. Extract the tarball.
    tar xf <dsp-version>.tar
  3. Navigate to the extracted file.
    cd <dsp-version>
  4. (Optional) If your environment has a small root volume (6GB or less of free space) in /tmp, your upgrade might fail when you run out of space. Choose a different directory on a larger volume to write temporary files to during the upgrade process. For an example, see the sketch after this list.
    export TMPDIR=/<directory-on-larger-volume>
  5. From the extracted file directory, run the upgrade script.
    ./upgrade
  6. The upgrade can take a while. Upon completion, the following message appears.
    Waiting for DSP to startup
    ....................
    DSP startup completed
    
  7. (Optional) While waiting for the upgrade to complete, you can use the following command to monitor the progress of your upgrade. Run this command after you see the Waiting for DSP to startup message.
    kubectl get pods -n dsp
    
    When the following services have the RUNNING status, the upgrade is complete: dsp, ingest-hec, ingest-s2s, splunk-streaming-rest, usr-mgmt-svc.
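As an illustrative sketch of steps 4 and 7 combined, you might check the space backing /tmp before running the upgrade script and watch the pods from a second terminal. The temporary directory name below is hypothetical; substitute any directory on a larger volume.

    # Check how much space is available on the volume backing /tmp.
    df -h /tmp

    # If 6GB or less is free, write temporary upgrade files to a larger volume.
    export TMPDIR=/opt/dsp-tmp

    # In a second terminal, watch pod status until the listed services are RUNNING.
    kubectl get pods -n dsp -w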

You are now ready to use the latest version of DSP.

Step 4: Validate the upgrade

The Splunk Data Stream Processor upgrade is now complete, and DSP shows the DSP startup completed message. Any pipelines that were active before the upgrade are reactivated automatically.

  1. In the browser you use to access the DSP UI, clear the browser cache.
  2. Log in to DSP to confirm that your upgrade was successful. For an optional command-line connectivity check, see the sketch after this list.
    https://<DSP_HOST>:30000/
    
    User: dsp-admin
    Password: <the dsp-admin password>
    

After upgrading

Perform the following steps after upgrading the Splunk Data Stream Processor.

  1. On each node, delete the directories containing the old version of the Splunk Data Stream Processor.
    rm -r <dsp-version-upgraded-from>
    
  2. Re-enable the schedules for the Amazon CloudWatch Metrics, Amazon S3, AWS Metadata, Google Cloud Monitoring, Microsoft 365, and Microsoft Azure Monitor connectors that were disabled in Step 2.
  3. There are some known issues that occur when you upgrade from 1.1.0 to 1.2.0. Review the Known issues for DSP topic, and follow any workarounds that apply to you.
  4. The default version of SCloud has changed from SCloud 1.0 to SCloud 4.0. If you were previously using SCloud 1.0, you need to reconfigure the SCloud tool. See Get started with SCloud.
  5. The DSP SDK has been upgraded to 1.2.0 and includes backward-incompatible changes. If you use a custom-built plugin in your functions, see the What's new in the DSP SDK and Upgrade the DSP plugin from 1.1.0 to 1.2.0 topics.

After upgrading to the latest version of the Splunk Data Stream Processor, any command-line operations must be performed in the new upgraded directory on the master node.
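For example, if the old and new releases were extracted side by side on the master node, the cleanup might look like the following. The directory names are illustrative; use the actual version directories from your deployment.

    # On each node, remove the old release directory.
    rm -r dsp-1.1.0
    # Run all further DSP command-line operations from the new release directory.
    cd dsp-1.2.0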

Last modified on 13 December, 2021

This documentation applies to the following versions of Splunk® Data Stream Processor: 1.2.0

