Splunk® Data Stream Processor

Release Notes


New features for DSP

Here's what's new in each version of the Splunk Data Stream Processor.

Version 1.1.0

What's New

The following list describes new features and enhancements in DSP 1.1.0.

  • SPL2 support: DSP now supports creating and configuring DSP pipelines using SPL2 (Search Processing Language version 2). For more information, see SPL2 for DSP.
  • SPL2 Builder: DSP now offers an additional pipeline builder experience that lets you write pipelines in SPL2. For more information, see SPL2 Pipeline Builder.
  • DSP HTTP Event Collector: You can send events and metrics to a DSP data pipeline using the DSP HTTP Event Collector (DSP HEC). The DSP HEC supports the Splunk HTTP Event Collector (HEC) /services/collector, /services/collector/event, and /services/collector/event/1.0 endpoints, letting you quickly redirect an existing Splunk HEC workflow into DSP via the DSP Firehose. For more information, see Send events to a DSP data pipeline using the DSP HTTP Event Collector.
  • Syslog support: You can now ingest syslog data into DSP using Splunk Connect for Syslog (SC4S). For more information, see Send Syslog events to a DSP data pipeline using SC4S with DSP HEC.
  • Amazon Linux 2 support: DSP now supports Amazon Linux 2. For more information, see Hardware and Software requirements.
  • Streams REST API endpoints upgraded to v3beta1: See the Splunk Data Stream Processor REST API Reference.
  • Apache Pulsar messaging bus: DSP now uses Apache Pulsar as its messaging bus for data sent through the Ingest, Collect, and Forwarders services.
  • Splunk Enterprise sink function with batching: You can now do index-based routing even while batching records. This function performs the common workflow of mapping the DSP event schema to the Splunk HEC metrics or events schema, turning records into JSON payloads, and batching the bytes of those payloads for better throughput. See Write to the Splunk platform with Batching.
  • Splunk Enterprise sink function: This function replaces Write Splunk Enterprise and adds out-of-the-box support for index-based routing while batching. See Write to the Splunk platform.
  • Batch Bytes streaming function: DSP now supports batching your data as byte payloads for increased throughput. For more information, see Batch Bytes.
  • To Splunk JSON streaming function: You can now automatically map the DSP events schema to the Splunk HEC events or metrics schema. For more information, see To Splunk JSON.
  • Write to S3-compatible storage sink function: DSP now supports sending data to an Amazon S3 bucket. For more information, see Write to S3-compatible storage.
  • Write to SignalFx sink function: DSP now supports sending data to a SignalFx endpoint. For more information, see Write to SignalFx.
  • Microsoft 365 Connector: DSP now supports collecting data from Microsoft 365 and Office 365 services using the Microsoft 365 Connector. For more information, see Use the Microsoft 365 Connector with Splunk DSP.
  • Google Cloud Monitoring Metrics Connector: DSP now supports collecting metrics data from Google Cloud Monitoring. For more information, see Use the Google Cloud Monitoring Metrics Connector with Splunk DSP.
  • Amazon S3 Connector: The Amazon S3 Connector now supports Parquet as a file type. For more information, see Use the Amazon S3 Connector with Splunk DSP.
  • Write to Azure Event Hubs Using SAS Key sink function (beta): DSP now supports sending data to an Azure Event Hubs namespace using a SAS key. This is a beta function and is not ready for production use. For more information, see Write to Azure Event Hubs.
  • DSP Plugins SDK: You can now create your own custom functions using the DSP Plugins SDK. See Create custom functions with the DSP SDK.
  • Bug fixes: The Splunk Data Stream Processor 1.1.0 includes several bug fixes. For details, see Fixed Issues for DSP.
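Because the DSP HEC accepts the same JSON event schema as the Splunk HEC endpoints listed above, existing HEC clients can be pointed at DSP with little change. The following sketch shows one way to build and send such a payload; the URL, port, and token are placeholders, not values from this document, and the schema fields shown are the standard Splunk HEC event fields.

```python
import json
from urllib import request

# Placeholder values: substitute your DSP HEC endpoint and token.
DSP_HEC_URL = "https://dsp.example.com:31000/services/collector/event"
HEC_TOKEN = "REPLACE-WITH-DSP-HEC-TOKEN"

def build_hec_event(event, source=None, sourcetype=None, index=None):
    """Build a JSON payload in the standard Splunk HEC event schema.

    Only the "event" key is required; the other keys are optional metadata.
    """
    payload = {"event": event}
    for key, value in (("source", source), ("sourcetype", sourcetype), ("index", index)):
        if value is not None:
            payload[key] = value
    return json.dumps(payload)

def send_event(payload):
    """POST the payload to the DSP HEC endpoint (requires a reachable DSP deployment)."""
    req = request.Request(
        DSP_HEC_URL,
        data=payload.encode("utf-8"),
        headers={
            "Authorization": "Splunk " + HEC_TOKEN,
            "Content-Type": "application/json",
        },
    )
    return request.urlopen(req)

payload = build_hec_event("user login", source="auth", sourcetype="syslog", index="main")
print(payload)
```

Because the payload format is unchanged, redirecting an existing HEC client to DSP is typically just a matter of swapping the endpoint URL and token.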

New endpoints

The Streams v3beta1 REST API adds the following new endpoints:

  • /streams/v3beta1/pipelines/compile
  • /streams/v3beta1/pipelines/decompile

The Ingest v1beta2 REST API adds the following new endpoint:

  • /ingest/v1beta2/collector/tokens

The SCloud version that ships with the DSP installer does not support the new Streams v3beta1 API or the new Ingest v1beta2 endpoints. To use either of these services, upgrade SCloud. See Authenticate with SCloud.

Renamed functions in version 1.1.0

The following functions were renamed in version 1.1.0.

  • Batch Events → Batch Records
  • Group → Key By
  • Projection → Select
  • Filter → Where
  • Aggregate → Stats
  • join → mvjoin
  • dedup → mvdedup
  • map_flatten → flatten
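When updating pipeline definitions by hand, a simple lookup built from the renames above can help flag stale function names. This is an illustrative helper, not part of DSP:

```python
# Mapping of pre-1.1.0 DSP function names to their 1.1.0 replacements,
# taken from the rename table in these release notes.
RENAMES = {
    "Batch Events": "Batch Records",
    "Group": "Key By",
    "Projection": "Select",
    "Filter": "Where",
    "Aggregate": "Stats",
    "join": "mvjoin",
    "dedup": "mvdedup",
    "map_flatten": "flatten",
}

def current_name(name):
    """Return the 1.1.0 name for a function, or the name unchanged if it was not renamed."""
    return RENAMES.get(name, name)

print(current_name("Projection"))  # -> Select
print(current_name("Stats"))      # -> Stats (already current)
```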

Removed features in version 1.1.0

The following features were removed in version 1.1.0.

  • literal and list scalar functions: The literal and list scalar functions have been removed.
  • Streams DSL support in the UI: The Data Stream Processor UI no longer supports Streams DSL. Saving a DSL pipeline in the UI automatically triggers a migration from DSL to SPL2.
  • Write Splunk Enterprise sink function: The Write_Splunk_Enterprise sink function has been replaced by the Splunk_Enterprise sink function.
  • Connectionless Read from Apache Kafka with SSL and Write to Kafka with SSL source and sink functions: Use the Apache Kafka Connector using SSL or the Apache Kafka Connector (no SSL) instead.

Removed endpoints

The following Streams v3beta1 endpoints have been removed:

  • /streams/v3beta1/pipelines/expand
  • /streams/v3beta1/groups
  • POST /streams/v3beta1/pipelines/merge

Version 1.0.1

  • Bug fixes. For details, see Fixed issues.

Version 1.0.0

This is the first release of the Splunk Data Stream Processor.

Last modified on 12 May, 2020

This documentation applies to the following versions of Splunk® Data Stream Processor: 1.1.0
