Splunk® Data Stream Processor

Connect to Data Sources and Destinations with DSP

On April 3, 2023, Splunk Data Stream Processor reached its end of sale and will reach its end of life on February 28, 2025. If you are an existing DSP customer, reach out to your account team for more information.

All DSP releases prior to DSP 1.4.0 use Gravity, a Kubernetes orchestrator that has reached its end of life. We replaced Gravity with an alternative component in DSP 1.4.0, and we no longer provide support for versions of DSP prior to 1.4.0 after July 1, 2023. We advise all customers to upgrade to DSP 1.4.0 to continue receiving full product support from Splunk.

Connecting Amazon S3 to your DSP pipeline as a data destination

When creating a data pipeline in the Splunk Data Stream Processor (DSP), you can connect to Amazon S3 and use it as a data destination. You can get data into a data pipeline, transform it, and then send the transformed data to an Amazon S3 bucket.

To connect to Amazon S3 as a data destination, you must complete the following tasks:

  1. If the bucket that you want to send data to doesn't already exist in your Amazon S3 instance, create it. Don't include any periods ( . ) in the bucket name. For information about creating an S3 bucket, search for "How do I create an S3 Bucket?" in the Amazon Simple Storage Service Console User Guide, or see the scripted sketch after this list.

    If you try to send data to a bucket that doesn't already exist, or to a bucket that has a period ( . ) in its name, the pipeline fails to send data to Amazon S3 and restarts indefinitely.

  2. Create a connection that allows DSP to send data to your Amazon S3 bucket. See Create a DSP connection to send data to Amazon S3.
  3. Create a pipeline that ends with the Send to Amazon S3 sink function. See the Building a pipeline chapter in the Use the Data Stream Processor manual for instructions on how to build a data pipeline.
  4. Configure the Send to Amazon S3 sink function to use your Amazon S3 connection and send data to an existing S3 bucket. See Send data to Amazon S3 in the Function Reference manual.
  5. If you're planning to send data to Amazon S3 in Parquet format, make sure to extract relevant data from union-typed fields into explicitly typed top-level fields. See Formatting DSP data for Parquet files in Amazon S3.
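If you prefer to script step 1, the following is a minimal sketch using the AWS SDK for Python (boto3). The bucket name and region are hypothetical placeholders, and the period check mirrors the bucket-naming restriction described in step 1. This illustrates the AWS side of the setup only; it is not part of DSP itself.

    # Minimal sketch of creating a DSP destination bucket with boto3.
    # The bucket name and region below are placeholders; substitute your own.
    import boto3
    from botocore.exceptions import ClientError

    BUCKET_NAME = "dsp-output-example"  # hypothetical name; must not contain periods
    REGION = "us-west-2"                # placeholder region

    # DSP pipelines fail to send data and restart indefinitely if the bucket
    # name contains a period, so validate the name before creating the bucket.
    if "." in BUCKET_NAME:
        raise ValueError("S3 bucket names used with DSP must not contain periods.")

    s3 = boto3.client("s3", region_name=REGION)
    try:
        # Outside us-east-1, S3 requires an explicit LocationConstraint.
        s3.create_bucket(
            Bucket=BUCKET_NAME,
            CreateBucketConfiguration={"LocationConstraint": REGION},
        )
        print(f"Created bucket {BUCKET_NAME} in {REGION}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "BucketAlreadyOwnedByYou":
            print(f"Bucket {BUCKET_NAME} already exists in your account")
        else:
            raise

Creating the bucket up front, before you configure the connection and sink function, avoids the indefinite-restart behavior described in step 1.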

When you activate the pipeline, the sink function starts sending data from the pipeline to the specified S3 bucket.

If your data fails to get into Amazon S3, check the connection settings to make sure that you have the correct credentials. You can also exercise the same credentials outside of DSP, as in the sketch that follows.
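As a quick check, the following boto3 sketch assumes the same placeholder bucket name as the earlier example. The get_caller_identity call confirms which IAM identity the credentials resolve to, and head_bucket fails fast if the bucket is missing or inaccessible.

    # Minimal credential and bucket-access check using boto3.
    import boto3
    from botocore.exceptions import ClientError

    BUCKET_NAME = "dsp-output-example"  # placeholder; use your DSP destination bucket

    # Confirm which IAM identity the configured credentials resolve to.
    identity = boto3.client("sts").get_caller_identity()
    print(f"Authenticated as: {identity['Arn']}")

    s3 = boto3.client("s3")
    try:
        # head_bucket succeeds only if the bucket exists and is accessible.
        s3.head_bucket(Bucket=BUCKET_NAME)
        # List a few keys to see whether the pipeline has written anything yet.
        response = s3.list_objects_v2(Bucket=BUCKET_NAME, MaxKeys=5)
        for obj in response.get("Contents", []):
            print(obj["Key"])
    except ClientError as err:
        print(f"Cannot access {BUCKET_NAME}: {err.response['Error']['Code']}")

If the identity check succeeds but the bucket check fails, the cause is usually the bucket policy or region rather than the access keys themselves.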
