Splunk® Data Stream Processor

Function Reference

On April 3, 2023, Splunk Data Stream Processor reached its end of sale, and will reach its end of life on February 28, 2025. If you are an existing DSP customer, please reach out to your account team for more information.

All DSP releases prior to DSP 1.4.0 use Gravity, a Kubernetes orchestrator, which has been announced end-of-life. We have replaced Gravity with an alternative component in DSP 1.4.0. Therefore, we will no longer provide support for versions of DSP prior to DSP 1.4.0 after July 1, 2023. We advise all of our customers to upgrade to DSP 1.4.0 in order to continue to receive full product support from Splunk.

Send data to Splunk Infrastructure Monitoring

Use the Send to Splunk Infrastructure Monitoring sink function to send metric data to Splunk Infrastructure Monitoring.

When configuring this sink function, you specify which fields from your DSP records to map to the metric, value, timestamp, and dimensions fields in the Splunk Infrastructure Monitoring metric schema.

Splunk Infrastructure Monitoring field    Sink function argument used for mapping
metric                                    metric_name
value                                     metric_value
timestamp                                 metric_timestamp
dimensions                                metric_dimensions

All other fields from your DSP records are dropped.

See Formatting data into the Splunk Infrastructure Monitoring metrics schema in the Connect to Data Sources and Destinations with DSP manual for an example of how to transform your records so that they can be consistently mapped to the Splunk Infrastructure Monitoring metrics schema. See the /datapoint section of Send Metrics and Events in the Observability API Reference for more information about the supported schema.
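
For a rough sense of what this mapping produces, the following sketch shows the approximate shape of a /datapoint payload for a GAUGE metric. The metric name, value, timestamp, and dimensions shown here are hypothetical, and the authoritative request format is the one described in Send Metrics and Events.

{
  "gauge": [
    {
      "metric": "my_metric_name",
      "value": 42.0,
      "timestamp": 1583864717233,
      "dimensions": {"destIp": "52.88.24.27", "destHost": "companyhost.com"}
    }
  ]
}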

Exceeding the DPM (datapoints per minute) limit of your Splunk Infrastructure Monitoring subscription can cause this sink function to drop data. See the "Limitations of the Splunk Infrastructure Monitoring sink function" section on this page for more information.

Prerequisites

Before you can use this function, you must do the following:

  • Create a Splunk Observability connection. See Create a DSP connection to Splunk Observability in the Connect to Data Sources and Destinations with DSP manual. When configuring this sink function, set the connection_id argument to the ID of that connection.
  • Configure your local firewall to allow outgoing HTTP traffic from at least one of the ports in the range of dynamic or ephemeral ports allocated by your operating system. These ports typically range from 49152 to 65535, but the exact range can vary depending on your operating system. The sink function sends HTTP requests to the Splunk Infrastructure Monitoring endpoint via a dynamic or ephemeral port.

Function input schema

collection<record<R>>
This function takes in collections of records with schema R.

Required arguments

connection_id
Syntax: string
Description: The Splunk Observability connection ID.
Example in Canvas View: my-splunk-observability-connection
metric_name
Syntax: expression<string>
Description: The Splunk Infrastructure Monitoring metric name. You can set this argument to any of the following:
  • The metric name, enclosed in double quotation marks ( " ).
  • The name of a field that contains the metric name, without any quotation marks.
  • A scalar function that resolves to the metric name.
Examples in Canvas View:
  • Using a static value for all records: "my_metric_name"
  • Using dynamic values retrieved from a name key in an attributes field: cast(map_get(attributes, "name"), "string")
metric_value
Syntax: expression<double>
Description: The Splunk Infrastructure Monitoring metric value. You can set this argument to either of the following:
  • The name of a double field. Don't enclose the name in quotation marks.
  • A scalar function that resolves to a double value.
Examples in Canvas View:
  • Specifying the name of a double field: values
  • Using a scalar function that resolves to a double value: cast(value_int, "double")
metric_type
Syntax: expression<string>
Description: The Splunk Infrastructure Monitoring metric type. The supported metric types are COUNTER, CUMULATIVE_COUNTER, and GAUGE.

This argument is case-sensitive, and the values must be uppercase.

You can set this argument to any of the following:
  • The metric type, enclosed in double quotation marks ( " ).
  • The name of a field that contains the metric type, without any quotation marks.
  • A scalar function that resolves to the metric type.
Examples in Canvas View:
  • Using a static value for all records: "COUNTER"
  • Using dynamic values retrieved from a type key in an attributes field: cast(map_get(attributes, "type"), "string")
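
Because the metric_type values must be uppercase, one option when the type arrives in your data in mixed case is to normalize it earlier in the pipeline and then reference the resulting field in this argument. The following SPL View sketch assumes a hypothetical attributes map that contains a type key; adjust the field names to match your own records.

...| eval metric_type_normalized=upper(cast(map_get(attributes, "type"), "string")) | into signalfx("my-splunk-observability-connection", "my_metric_name", cast(body, "double"), metric_type_normalized);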

Optional arguments

metric_timestamp
Syntax: expression<long>
Description: The timestamp associated with the Splunk Infrastructure Monitoring metric. This timestamp value is interpreted as an epoch time value in milliseconds. For one way to convert a seconds-based timestamp to milliseconds, see the sketch at the end of this section. You can set this argument to any of the following:
  • An exact timestamp, enclosed in double quotation marks ( " ).
  • The name of a field that contains the timestamp, without any quotation marks.
  • A scalar function that resolves to the timestamp.
Default: Empty. The time when the metric is ingested into Splunk Infrastructure Monitoring is used as the timestamp.
Examples in Canvas View:
  • Using a static value for all records: "1583864717233L"
  • Using dynamic values retrieved from a record field named timestamp: timestamp
metric_dimensions
Syntax: expression<map<string, string>>
Description: One or more dimensions associated with the Splunk Infrastructure Monitoring metric. You can set this argument to any of the following:
  • One or more dimensions specified as key-value pairs. The key-value pairs must be separated by commas ( , ) and enclosed in braces ( { } ).
  • The name of a field that contains dimensions, without any quotation marks.
  • A scalar function that resolves to the dimensions.
Default: Empty.
Examples in Canvas View:
  • Using a static set of dimensions for all records: {"destIp":"52.88.24.27", "destHost":"companyhost.com"}
  • Using dynamic dimensions retrieved from a record field named dimensions: dimensions
parameters
Syntax: map<string, string>
Description: Key-value pairs that specify additional optional parameters in this function.
  • When working in Canvas View, specify the name and value of the property in the fields on either side of the equal sign ( = ), and click Add to specify additional properties.
  • When working in SPL View, specify each property using the format "<name>": "<value>", and separate each property with a comma ( , ). Make sure to enclose the entire argument in braces ( { } ).
See the following table for descriptions of the supported parameters.
Parameter Syntax Description Example in Canvas View
batch_size integer between 50 and 25000, inclusive. The maximum number of elements to flush. Defaults to 2000. batch_size = 10000
batch_interval_msecs integer between 50 and 100000, inclusive. The maximum amount of time to wait before flushing, in milliseconds. Defaults to 2000. batch_interval_msecs = 3000
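
If your records carry an epoch timestamp in seconds rather than milliseconds, one possible approach is to scale it in the metric_timestamp expression. The following SPL View sketch is illustrative only: timestamp_secs and dimensions are hypothetical field names, and the expression assumes that timestamp_secs already holds a numeric epoch value in seconds.

...| into signalfx(connection_id: "my-splunk-observability-connection", metric_name: "my_metric_name", metric_value: cast(body, "double"), metric_type: "GAUGE", metric_timestamp: cast(timestamp_secs * 1000, "long"), metric_dimensions: dimensions);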

SPL2 example

When working in the SPL View, you can write the function by providing the arguments in this exact order.

...| into signalfx("my-splunk-observability-connection", cast(map_get(attributes, "name"), "string"), cast(body, "double"), "COUNTER", null, {"destIp":"52.88.24.27", "destHost":"companyhost.com"}, {"batch_size": "10000", "batch_interval_msecs": "3000"});

You can omit optional arguments only if you don't specify any other arguments that must be listed after them. This example includes null as a placeholder for metric_timestamp because metric_dimensions and parameters are listed after it.

Alternatively, you can use named arguments in any order and omit any optional arguments you don't want to declare. All unprovided arguments use their default values. In the following example, parameters is the only optional argument that is declared.

...| into signalfx(connection_id: "my-splunk-observability-connection", metric_type: "COUNTER", metric_name: cast(map_get(attributes, "name"), "string"), metric_value: cast(body, "double"), parameters: {"batch_size": "10000", "batch_interval_msecs": "3000"});

If you want to use a mix of unnamed and named arguments in your functions, you need to list all unnamed arguments in the correct order before providing the named arguments.
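
For example, the following sketch lists the four required arguments unnamed and in order, then sets metric_dimensions by name. The metric_timestamp and parameters arguments are omitted and fall back to their defaults; the connection ID and field names are the same hypothetical values used in the earlier examples.

...| into signalfx("my-splunk-observability-connection", cast(map_get(attributes, "name"), "string"), cast(body, "double"), "COUNTER", metric_dimensions: dimensions);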

Limitations of the Splunk Infrastructure Monitoring sink function

If the destination Splunk Infrastructure Monitoring instance returns HTTP status code 429 (Too Many Requests), the sink function drops the current data batch instead of trying to send it again.

HTTP status code 429 can occur when you exceed the DPM limit of your Splunk Infrastructure Monitoring subscription. To prevent data from being dropped, monitor your DPM rate and avoid sending more data than what your subscription allows. See Manage resource usage with access tokens in the Splunk Infrastructure Monitoring documentation for information about setting limits and alerts for your DPM rate, and see the DPM Limits FAQ in the Splunk Infrastructure Monitoring documentation for more information about DPM limits.

This documentation applies to the following versions of Splunk® Data Stream Processor: 1.2.1, 1.2.2-patch02, 1.2.4, 1.2.5, 1.3.0, 1.3.1, 1.4.0, 1.4.1, 1.4.2, 1.4.3

