Splunk® Data Stream Processor

Install and administer the Data Stream Processor



On April 3, 2023, Splunk Data Stream Processor reached its end of sale, and will reach its end of life on February 28, 2025. If you are an existing DSP customer, please reach out to your account team for more information.

All DSP releases prior to DSP 1.4.0 use Gravity, a Kubernetes orchestrator that has been announced as end-of-life. We have replaced Gravity with an alternative component in DSP 1.4.0. Therefore, we will no longer provide support for versions of DSP prior to DSP 1.4.0 after July 1, 2023. We advise all of our customers to upgrade to DSP 1.4.0 to continue to receive full product support from Splunk.
This documentation does not apply to the most recent version of Splunk® Data Stream Processor. For documentation on the most recent version, go to the latest release.

Upgrade the Splunk Data Stream Processor to 1.4.1

This topic describes how to upgrade the Splunk Data Stream Processor (DSP) to 1.4.1.

Upgrading to DSP 1.4.1 requires a new installation of DSP and migrating pipelines from a 1.3.x cluster to the new one. Because the old and new clusters run simultaneously during migration, you must provision new infrastructure for your DSP environment. Ensure that you adequately plan for the time and resources that this upgrade requires.

To upgrade to DSP 1.4.1 successfully, you must complete several prerequisite tasks before starting the upgrade. Make sure to read through all of the Before you upgrade sections in this topic and complete the relevant prerequisite tasks before upgrading DSP.

DSP does not provide a means of downgrading to previous versions. If you need to revert to an older DSP release, uninstall the upgraded version and reinstall the version you want.

Before you upgrade

Complete the following tasks before upgrading DSP. If you don't complete these tasks, you might encounter issues such as pipeline failures.

Make sure you are using a DSP 1.3.x version

You can upgrade to DSP 1.4.1 from any DSP 1.3.x version. If you are using DSP 1.2.4, then you must upgrade to a DSP 1.3.x version first. You can upgrade to either DSP 1.3.0 or DSP 1.3.1 from DSP 1.2.4. See Upgrade the Splunk Data Stream Processor from 1.2.4 to 1.3.0 or Upgrade the Splunk Data Stream Processor from 1.3.0 to 1.3.1.

Review known issues

Review the known issues related to the upgrade process. Depending on what functions you have in your pipelines, you might need to complete some additional steps to restore those pipelines after the upgrade is complete.

Review the features planned for deprecation or removal

Review the Features planned for deprecation or removal to see what features are scheduled for future deprecation.

Because pull-based connectors were removed in DSP 1.4.0, pipelines that use pull-based connectors cannot be activated or used in your DSP 1.4.1 cluster.

Install DSP version 1.4.1

See Install the Splunk Data Stream Processor to install DSP version 1.4.1. Review the Installation checklist for DSP to ensure that your environment meets all the requirements for installation. You must install DSP 1.4.1 on a new cluster that can also handle the same data load and configurations as your old cluster. For example, if your 1.3.x cluster has 17 total nodes, then your 1.4.1 cluster should have the same number of nodes with a similar hardware profile.

Back up and restore

Once you've installed DSP 1.4.1 on a new cluster, follow these steps to back up your existing pipelines on your 1.3.x cluster and restore them in DSP 1.4.1. You do not need to deactivate your pipelines during backup. During the following steps, you will update your data sources to begin sending data to the new cluster so that your restored pipelines begin to read data, while the old cluster finishes processing any data it has already ingested.

  1. After completing an installation of 1.4.1, copy the dsp-gravity tool from the util/ directory of your 1.4.1 tarball into the working directory of your 1.3.x cluster. You will use this tool to generate a backup on your 1.3.x cluster and restore that backup on your 1.4.1 cluster.
  2. Navigate to a controller node on your DSP 1.3.x cluster and run the following command to generate a backup tarball. For a list of all available flags, see Upgrade flags.
    ./dsp-gravity backup --output <name of backup tarball> --target-version=1.4.1
  3. Once your backup is complete, you will be prompted to create and confirm a password to encrypt your 1.3.x backup tarball.
    Please create a password to encrypt the tarball with:
    Enter password: <password>
    Confirm password: <re-entered password>
  4. Move your backup tarball from your 1.3.x cluster to the working directory of your 1.4.1 cluster.
  5. Return to your DSP 1.4.1 cluster and run the following command to restore your backup. This restores most cluster configurations and secrets, as well as connection secrets, Postgres contents (including pipeline definitions and the DSP license), lookup files, and plugins. Obsolete configurations are omitted, and a warning is printed for any unexpected configurations. For a list of all available flags, see Upgrade flags.
    ./dsp-gravity restore <tarball name>
  6. When prompted, enter your password from Step 3 to decrypt your 1.3.x backup tarball and begin restoration in your 1.4.1 cluster.
  7. Create new topics or partitions in your external data source to send data into your restored DSP 1.4.1 pipelines.
  8. Adjust your restored connections to direct data flow to a new topic or partition in your desired destination.
  9. Activate your DSP 1.4.1 pipelines.
  10. Configure your external data sources to begin sending data to the new topics or partitions one at a time. Check that the corresponding DSP pipeline correctly receives and processes your data. Continue until you verify that all pipelines are moving data to the correct location and your 1.3.x cluster does not ingest any new data.

    Both the DSP 1.4.1 and DSP 1.3.x clusters will actively send data to the same sinks during this step, so ensure that your destination can handle the extra data ingest.

  11. Continue to monitor both your DSP 1.4.1 and DSP 1.3.x clusters. Eventually your DSP 1.3.x cluster will finish processing its remaining ingested data and can be decommissioned.
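The backup-and-restore flow in the steps above can be sketched as shell commands. The tarball name, hostname, and destination path here are placeholders for illustration only; adjust them for your environment.

```shell
# On a controller node of the 1.3.x cluster, with the dsp-gravity tool
# already copied over from the util/ directory of the 1.4.1 tarball:
./dsp-gravity backup --output dsp-13x-backup.tar --target-version=1.4.1
# You are prompted to create a password that encrypts the tarball.

# Copy the encrypted tarball to the working directory of the 1.4.1 cluster.
# The hostname and path are hypothetical examples:
scp dsp-13x-backup.tar admin@dsp141-controller.example.com:~/dsp/

# On the 1.4.1 cluster, restore the backup. You are prompted for the
# password you created during the backup step:
./dsp-gravity restore dsp-13x-backup.tar
```
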

After upgrading

After successfully upgrading DSP, complete the following tasks:

Review known issues and apply workarounds

There are some known issues that can occur when upgrading. Review the Known issues for DSP topic, and follow any workarounds that apply to you.

Reference

Upgrade flags

The following table lists the main flags you can use with the backup and restore commands and a description of how to use them:

Flag | Description | Associated command
--skip-password | Skips the password encryption of your backup tarball. | backup
--target-version=<version> | Specifies which DSP 1.4.x version is the target of your backup tarball. If you do not provide a version, the target version defaults to DSP 1.4.0. | backup
--output=<name of tarball> | Gives the backup TAR file a name. If you do not enter a name, a default name based on the current date and time is used. | backup
--cluster-provider=<target cluster provider> | Specifies the target environment where your cluster restores. Cluster provider values include gke or k0s. Only use the gke value if your cluster is based in Google Kubernetes Engine. If you do not specify a cluster provider, k0s is the default value. | restore
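For example, the flags in the table combine with the backup and restore commands as follows. The tarball name is illustrative:

```shell
# Generate an unencrypted backup with an explicit name, targeting 1.4.1:
./dsp-gravity backup --output dsp-13x-backup.tar --target-version=1.4.1 --skip-password

# Restore the backup into a cluster running on Google Kubernetes Engine:
./dsp-gravity restore dsp-13x-backup.tar --cluster-provider=gke
```
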
Last modified on 14 November, 2023

This documentation applies to the following versions of Splunk® Data Stream Processor: 1.4.1

