All DSP releases prior to 1.4.0 use Gravity, a Kubernetes orchestrator that has been announced as end-of-life. DSP 1.4.0 replaces Gravity with an alternative component. Therefore, Splunk will no longer provide support for versions of DSP prior to 1.4.0 after July 1, 2023. We advise all customers to upgrade to DSP 1.4.0 to continue receiving full product support from Splunk.
New features for DSP
Here's what's new in each version of the Splunk Data Stream Processor (DSP).
Planning to upgrade from an earlier version?
See Upgrade the Splunk Data Stream Processor to 1.4.4.
The Deprecated and removed features topic lists features for which Splunk has deprecated or removed support in this release.
Version 1.4.4
This release contains bug fixes. See Fixed Issues for DSP for more details.
What's new in the docs
- Updated instructions for uninstalling DSP. See Uninstall the Splunk Data Stream Processor for the updated steps.
Version 1.4.3
What's new in the Splunk Data Stream Processor
The following table describes new features or enhancements in 1.4.3.
New Feature or Enhancement | Description |
---|---|
Changing default password after initial UI sign-on | You are now immediately prompted to change your installer-generated password when you sign into the DSP UI for the first time. See Change default password for more information. |
Version 1.4.2
This release contains bug fixes. See Fixed Issues for DSP for more details.
What's new in the Splunk Data Stream Processor
The following table describes new features or enhancements in 1.4.2.
New Feature or Enhancement | Description |
---|---|
Proxy server option in Splunk Observability sink connector | You can now send data through the Splunk Observability connector using a proxy server. See Create a DSP connection to Splunk Observability for more information. |
Pulsar configuration for ingest support | See Configure Pulsar to expose with loadbalancer for instructions on how to expose the Apache Pulsar connector through a load balancer for ingest support on your processing cluster. |
Version 1.4.1
This release contains bug fixes. See Fixed Issues for DSP for more details.
What's new in the Splunk Data Stream Processor
The following table describes new features or enhancements in 1.4.1.
New Feature or Enhancement | Description |
---|---|
You can now install DSP on Google Kubernetes Engine (GKE). | You can now install DSP on Google Cloud Platform's Google Kubernetes Engine. See Install the Splunk Data Stream Processor on Google Kubernetes Engine. |
Google Kubernetes Engine cluster autoscaler available for use with the Data Stream Processor. | You can now use the Google Kubernetes Engine cluster autoscaling feature with the Data Stream Processor. See Cluster autoscaling for DSP on Google Kubernetes Engine for more information and disclaimers on how this feature can be used with your DSP environment on GKE. |
Version 1.4.0
This release contains bug fixes. See Fixed Issues for DSP for more details.
What's new in the Splunk Data Stream Processor
The following table describes new features or enhancements in 1.4.0.
New Feature or Enhancement | Description |
---|---|
Data Stream Processor CLI | The DSP CLI is a collection of commands that replaces the scripts previously included in the installer package. The tool allows you to administer and configure your DSP deployment. See Get started with the Data Stream Processor CLI for more information. |
Removal of select source connectors | DSP no longer supports the source connectors that were deprecated in version 1.2.1 (see the connector deprecation notice under Version 1.2.1 on this page). Amazon S3 will continue to be supported as a destination. |
Gravity removal and replacement | k0s replaces Gravity due to Gravity's scheduled end of life on June 30, 2023. See the Install and administer the Data Stream Processor manual for updated installation requirements and DSP administration processes. |
MinIO removal and replacement | SeaweedFS replaces MinIO as a backend component. |
Ingress solution in port configurations | Ports 3000, 31000, and 30002 are no longer required for DSP. All HTTP-based traffic now goes to the standard port 443. Port 9997 replaces 30001 for the Splunk forwarder service. See Port configuration requirements for more information. |
Decreased expiration time for DSP UI sessions | The expiration time for a working session in the DSP UI has been decreased to 2 hours. After 2 hours, you cannot perform any actions in the DSP UI until you log in again. |
Version 1.3.1
This release contains bug fixes. See Fixed Issues for DSP for more details.
Starting in version 1.3.0, Kubernetes has deprecated the pod_name and container_name metrics labels. If you are using these labels in any queries or dashboards, change them to pod and container, respectively. See https://github.com/kubernetes/kubernetes/pull/80376 on GitHub.
Version 1.3.0
What's new in the docs
The following content has been added to the Install and administer the Data Stream Processor manual.
- Instructions on how to install the Splunk Data Stream Processor on the Google Cloud Platform. See Preparing Google Cloud Platform to install the Splunk Data Stream Processor.
- In the next version of DSP, you will be able to perform upgrades using a blue-green upgrade model if you are running DSP on the Google Cloud Platform. A blue-green upgrade model is an upgrade technique that reduces downtime and risk by setting up a second "green" DSP environment with the version of DSP that you want to upgrade to. You can validate this second cluster, and then switch traffic from the original "blue" environment to the "green" environment when you are ready to do so. If you have a complex DSP environment and traditional upgrades are frequently too risky for you to pursue, consider installing the Splunk Data Stream Processor on the Google Cloud Platform.
What's new in the Splunk Data Stream Processor
The following table describes new features or enhancements in 1.3.0.
New Feature or Enhancement | Description |
---|---|
SASL-authenticated connections to Apache Kafka and Confluent Kafka | You can now use SASL mechanisms for authentication when connecting to your Kafka brokers. |
DSP now automatically checks for updates to CSV lookup files | You no longer need to manually restart pipelines to pick up changes to lookup files. By default, active pipelines now automatically pick up the most recent version of the lookup file. |
Source-specific and sink-specific connectors | To provide more clarity for connection management, connectors that supported both source and sink functions have been replaced by connectors that specifically support source functions only or sink functions only. |
Case scalar function | Added support for the case scalar function, which evaluates a series of conditions and returns the value paired with the first condition that is met, similar to an "if-then-else" statement. See the sketch after this table for an example. |
Export and import pipelines | You can now export pipelines to share them with other users or add them to version control. Later, you can import these pipelines to restore a backup copy or create a new pipeline from an exported pipeline. |
You can now install DSP on the Google Cloud Platform | You can now install DSP on the Google Cloud Platform. See Preparing Google Cloud Platform to install the Splunk Data Stream Processor. |
DSP UI enhancements | The DSP UI has been revamped with a more modern look and feel. |
Updates to the Splunk App for DSP | The Splunk App for DSP now includes an add-on that improves how DSP metrics are displayed in the pre-built dashboards. This add-on makes the pipeline names associated with the metrics human-readable so that it is easier to identify which pipeline the metrics are associated with. Install the latest version of the Splunk App for DSP to use this add-on. |
Changes to Kubernetes metrics labels | Kubernetes has deprecated the pod_name and container_name metrics labels. If you are using these labels in any queries or dashboards, change them to pod and container, respectively. See https://github.com/kubernetes/kubernetes/pull/80376 on GitHub. |
Common Vulnerabilities and Exposures (CVE) Fixes | This release contains several security updates. |
Removed deprecated features | The Streaming ML Plugin, and the machine learning functions included in the plugin, have been removed. |
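As an illustration of the case scalar function, here is a minimal SPL2 sketch. The splunk_firehose source, the status_code field, and the index sink arguments are illustrative assumptions, not part of this release note:

```
| from splunk_firehose()
// status_code is a hypothetical extracted field. case() returns the value
// paired with the first condition that evaluates to true; the final
// true/"info" pair acts as a catch-all default.
| eval severity = case(
    status_code >= 500, "error",
    status_code >= 400, "warning",
    true, "info")
| into index("", "main");
```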
REST API updates
This release includes these new REST API endpoints.
New endpoints:
New endpoint | Description |
---|---|
/streams/v3beta1/lookups/files | Upload a new CSV lookup file. This endpoint replaces the /streams/v3beta1/files endpoint. See Upload a CSV file to the Splunk Data Stream Processor to enrich data with a lookup. |
Deprecated endpoints:
- The /streams/v3beta1/files endpoint has been deprecated.
Version 1.2.4
What's new in the Splunk Data Stream Processor
The following table describes new features or enhancements in 1.2.4.
New Feature or Enhancement | Description |
---|---|
You can now adjust the Splunk Data Stream Processor's default CORS policy. | See Cross-Origin Resource Sharing Policy. |
Improved prevention against data loss | To provide durability in the event of a failure, the messaging bus now writes to a write-ahead log by default to prevent data loss when bookies restart. |
Common Vulnerabilities and Exposures (CVE) Fixes | This release contains several security updates, including upgrading Apache log4j to 2.17.1. |
Version 1.2.2-patch02
This version of the Splunk Data Stream Processor fixes the CVE-2021-44228 and CVE-2021-45046 product security issues for DSP 1.2.2. For more information, see Splunk Security Advisory for Apache Log4j (CVE-2021-44228 and CVE-2021-45046).
Version 1.2.1-patch02
This version of the Splunk Data Stream Processor fixes the CVE-2021-44228 and CVE-2021-45046 product security issues for DSP 1.2.1. For more information, see Splunk Security Advisory for Apache Log4j (CVE-2021-44228 and CVE-2021-45046).
Version 1.2.1
What's new in the Splunk Data Stream Processor
The following table describes new features or enhancements in 1.2.1.
New Feature or Enhancement | Description |
---|---|
Support for sending data to Google Cloud Storage | You can now send data to a Google Cloud Storage bucket using the Google Cloud Storage connector. |
Caching with Splunk Enterprise KV Stores | Caching is now enabled by default for lookups to Splunk Enterprise KV Stores. |
Support for the %s time variable | The Apply Timestamp Extraction function now supports the %s time variable, which represents a Unix epoch time timestamp (see the sketch after this table). |
Monitor JVM heap metrics | You can now monitor JVM heap metrics in the JobManager (JM) and TaskManager (TM) views in the Splunk App for DSP. |
Filter logs by cluster name | You can now filter logs by DSP cluster name in the Splunk App for DSP. |
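To put the %s variable in context: strptime-style patterns treat a digits-only string as seconds since the Unix epoch. The following is a minimal SPL2 sketch, assuming strptime is available as a scalar function and that the event body holds an epoch-seconds string:

```
| from splunk_firehose()
// "%s" interprets a string such as "1618884661" as seconds since the
// Unix epoch. The cast of body to string is illustrative; real data may
// need a different extraction.
| eval extracted_time = strptime(cast(body, "string"), "%s")
| into index("", "main");
```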
Connector deprecation notice
We are currently working on replacing the following connectors with a more efficient alternative.
- Amazon CloudWatch
- Amazon Metadata
- Amazon S3
- Microsoft Azure Monitor
- Google Cloud Monitoring
- Microsoft 365
In the meantime, the best practice is to limit your use of these connectors from DSP 1.2.1 onwards.
Version 1.2.0
What's new in the docs
The DSP documentation was refactored in version 1.2.0 to present information in a more intuitive manner and better reflect the end-to-end user experience of the product. As a result, the titles and locations of some topics have changed.
- The Getting Data In manual has been replaced by the Connecting to Data Sources and Destinations manual, which provides complete information about how to connect your DSP pipeline to a given data source or data destination.
- The contents of the Use the Data Stream Processor manual have been reorganized.
- The contents of the following chapters from the Use the Data Stream Processor manual have been moved into the new Connecting to Data Sources and Destinations manual:
- "Pipeline requirements for specific data sources in DSP"
- "Format data in DSP to send to the Splunk platform"
- "Send data from DSP to other destinations"
- The Source functions (Data Sources) and Sink functions (Data Destinations) topics in the Function Reference manual have been moved and rewritten. The source and sink functions are now in dedicated chapters of the Function Reference manual.
- The Send data from Splunk DSP to SignalFx topic has been rewritten and now includes a detailed example demonstrating how to send metrics.log data from the Splunk universal forwarder to SignalFx.
- Added TLS/Cipher suite information.
- Updated the DSP HEC examples, and added documentation about multi-metric support.
What's new in the Splunk Data Stream Processor
The following table describes new features or enhancements in DSP 1.2.0.
New Feature or Enhancement | Description |
---|---|
Support for CSV and Splunk Enterprise KV Store lookups | DSP now supports lookups to Splunk Enterprise KV Stores or CSV files for increased data enrichment. |
Support for sending data to a Splunk Enterprise KV Store collection | DSP now supports writing data from DSP into a Splunk Enterprise KV Store collection. |
Streaming ML | Streaming ML is the Splunk Enterprise machine learning framework designed specifically for online learning. This framework includes a library of operators that enable users to apply machine learning models to streaming data, without requiring offline batch training jobs. Streaming ML in DSP 1.2 includes new functions for Time Series Decomposition (STL), Pairwise Categorical Outlier Detection, Percentiles, and more. You must install the Streaming ML plugin to access these functions. |
Apply Line Break | You can now perform line breaking and merging for universal forwarder data in one function. In addition, you can now migrate and reuse existing props.conf line_breaking configurations in DSP. |
Apply Timestamp Extraction | You can now extract additional timestamp formats using strptime() and regular expressions. In addition, you can now migrate and reuse existing props.conf timestamp extraction configurations in DSP. |
Apache Pulsar Connector | DSP now supports collecting data from an Apache Pulsar topic. |
Google Cloud Pub/Sub Connector | DSP now supports collecting messages from Google Cloud Pub/Sub. |
Send to SignalFx (trace) | You can now send trace data to a SignalFx endpoint using the SignalFx connector. |
Updates to the Splunk App for DSP | The DSP Health application has been renamed to Splunk App for DSP. You can now collect additional metrics about your DSP environment and monitor those metrics in Splunk Enterprise. In addition, there are now more dashboards to help you visualize the health of your DSP environment. |
New install flavors and profiles | DSP now supports additional install flavors and node roles. In addition, DSP also supports more than five master nodes in a cluster. |
Updated Send to Microsoft Azure Event Hubs sink function | This sink function now provides improved performance and data batching controls. |
Updated Send to Amazon S3 sink function | You can now compress the data that you send to Amazon S3. When sending data in Parquet format, you can now specify the version of Parquet Writer to use, the maximum size of each row group, and how DSP handles records with invalid schemas. Files generated by this function are now given the correct filename extension based on the file format. |
SPL2 Named Arguments | DSP now supports named arguments when using SPL2 (Search Processing Language version 2) for source, sink, and scalar functions. See the sketch after this table for an example. |
Dot and bracket notation support for accessing lists and maps | It's now easier to access lists and maps, as shown in the sketch after this table. |
map_merge scalar function | You can now merge two or more maps together in DSP using the map_merge scalar function. |
Improved performance of the Forwarders Service | The Forwarders Service has been updated for better performance. |
Updated names for connectors and functions | The display names that appear in the DSP UI for connectors, source functions, and sink functions have been updated for clarity and consistency. Additionally, the SPL2 names for some functions have been updated. See the "Renamed SPL2 functions in version 1.2.0" section on this page for more information. |
SCloud 4 | SCloud 4.0 is now bundled with DSP. |
--location install flag | You can now specify a location for Gravity to save container and state information using a --location flag. |
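The following minimal SPL2 sketch combines three of the additions above: dot and bracket notation, the map_merge scalar function, and named arguments. The attribute keys, sink argument names, and all values are assumptions for illustration:

```
| from splunk_firehose()
// Dot and bracket notation: read nested values out of the standard
// attributes map (the "region" and "tags" keys are hypothetical).
| eval region = attributes.region,
       first_tag = attributes["tags"][0]
// map_merge combines two or more maps into one; create_map builds the
// second map inline.
| eval attributes = map_merge(attributes, create_map("pipeline", "demo"))
// Named arguments: function arguments can be passed by name instead of
// position (these argument names are assumptions).
| into index(module: "", dataset: "main");
```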
What's new in the DSP SDK
The following table describes new features or enhancements in the DSP SDK.
New Feature or Enhancement | Description |
---|---|
RuntimeContext#getArgument() no longer replaces dashes in argument names with underscores | Previously, scalar function arguments could be accessed from RuntimeContext using dash-cased argument names. Now, all argument names must be accessed using their underscore_cased names. |
Record#get() returns read-only view of maps and lists | Previously, functions were able to read maps or lists from Record and directly modify them. Now, maps or lists read from Record must be explicitly copied before they can be modified. |
AggregationFunction#initialState() is deprecated | Update classes that implement AggregationFunction to use AggregationFunction#initialState(RuntimeContext) instead. |
Renamed SPL2 functions in version 1.2.0
The following functions were renamed in 1.2.0.
Original SPL2 function name | Updated SPL2 function name |
---|---|
read_event_hubs | event_hubs |
read_kafka | kafka |
read_kinesis | kinesis |
read_splunk_firehose | splunk_firehose |
receive_from_forwarders | forwarders |
receive_from_ingest_rest_api | ingest_rest_api |
write_index | index |
write_kafka | kafka |
write_kinesis | kinesis |
write_null | dev_null |
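For example, a pipeline written against the 1.1.0 names maps to the new names as follows (the sink arguments are placeholders):

```
// Before DSP 1.2.0:
| from read_splunk_firehose() | into write_index("", "main");

// From DSP 1.2.0 onwards:
| from splunk_firehose() | into index("", "main");
```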
Version 1.1.0
New Feature or Enhancement | Description | Learn more link |
---|---|---|
SPL2 Support | DSP now supports creating and configuring DSP pipelines using SPL2 (Search Processing Language version 2). | SPL2 for DSP. |
SPL2 Builder | DSP now supports an additional pipeline builder experience allowing you to write pipelines in SPL2. | SPL2 Pipeline Builder. |
DSP HTTP Event Collector | You can send events and metrics to a DSP data pipeline using the DSP HTTP Event Collector (DSP HEC). The DSP HEC supports the Splunk HTTP Event Collector (HEC) /services/collector, /services/collector/event, and /services/collector/event/1.0 endpoints, allowing you to quickly redirect your existing Splunk HEC workflow into DSP via the DSP Firehose. | Send events to a DSP data pipeline using the DSP HTTP Event Collector. |
Syslog support | You can now easily ingest syslog data into DSP using Splunk Connect for Syslog (SC4S). | Send Syslog events to a DSP data pipeline using SC4S with DSP HEC. |
Amazon Linux 2 support | DSP now supports Amazon Linux 2. | Hardware and Software requirements. |
Upgraded Streams REST API | The Streams REST API endpoints have been upgraded to v3beta1. | Splunk Data Stream Processor REST API Reference. |
Apache Pulsar messaging bus | DSP now uses Apache Pulsar as its messaging bus for data sent via the Ingest, Collect, and Forwarders Services. | Increase Pulsar partitions for improved pipeline throughput |
Splunk Enterprise sink function with Batching | You can now do index-based routing even while batching records. This function performs the common workflow of mapping the DSP event schema to Splunk HEC metrics or events schema, turning records into JSON payloads, and batching the bytes of those payloads for better throughput. | Write to the Splunk platform with Batching |
Splunk Enterprise sink function | This function replaces Write Splunk Enterprise. It adds out-of-the-box support for index-based routing while batching (see the sketch after this table). | Write to the Splunk platform |
Batch Bytes streaming function | DSP now supports batching your data as byte payloads for increased throughput. | Batch Bytes |
To Splunk JSON streaming function | You can now perform automatic mapping of DSP events schema to Splunk HEC events or metrics schema. | To Splunk JSON. |
Write to S3-compatible storage sink function | DSP now supports sending data to an Amazon S3 bucket. | Write to S3-compatible storage |
Write to SignalFx sink function | DSP now supports sending data to a SignalFx Endpoint. | Write to SignalFx |
Microsoft 365 Connector | DSP now supports collecting data from Microsoft 365 and Office 365 services using the Microsoft 365 Connector. | Use the Microsoft 365 Connector with Splunk DSP. |
Google Cloud Monitoring Metrics Connector | DSP now supports collecting metrics data from Google Cloud Monitoring. | Use the Google Cloud Monitoring Metrics Connector with Splunk DSP. |
Amazon S3 Connector | The Amazon S3 Connector now supports Parquet format as a File Type. | Use the Amazon S3 Connector with Splunk DSP. |
Write to Azure Event Hubs Using SAS Key sink function (Beta) | DSP now supports sending data to an Azure Event Hubs namespace using a SAS key. This is a beta function and not ready for production. | Write to Azure Event Hubs. |
Bug fixes | The Splunk Data Stream Processor 1.1.0 includes several bug fixes. | Fixed Issues for DSP. |
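The following is a rough sketch of how the To Splunk JSON, Batch Bytes, and Splunk Enterprise sink functions above compose in an SPL2 pipeline; the argument names, the batching thresholds, and the connection ID are assumptions for illustration rather than documented signatures:

```
| from splunk_firehose()
// Map the DSP event schema to the Splunk HEC event schema; the index
// argument and value are assumptions.
| to_splunk_json index="main"
// Batch the JSON payloads into byte buffers for throughput; flush at
// roughly 1 MB or every 2 seconds, whichever comes first.
| batch_bytes size=1048576 millis=2000
// Send the batched bytes to Splunk Enterprise; the connection ID and
// payload field name are placeholders.
| into splunk_enterprise("my-connection-id", "json");
```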
Version 1.0.1
- Bug fixes. For details, see Fixed issues.
Version 1.0.0
This is the first release of the Splunk Data Stream Processor.