Splunk® Data Stream Processor

Function Reference

DSP 1.2.0 is impacted by the CVE-2021-44228 and CVE-2021-45046 security vulnerabilities from Apache Log4j. To fix these vulnerabilities, you must upgrade to DSP 1.2.4. See Upgrade the Splunk Data Stream Processor to 1.2.4 for upgrade instructions.

On October 30, 2022, all 1.2.x versions of the Splunk Data Stream Processor will reach their end of support date. See the Splunk Software Support Policy for details.
This documentation does not apply to the most recent version of Splunk® Data Stream Processor. For documentation on the most recent version, go to the latest release.

Send data to SignalFx (metric)

Use the Send Metrics Data to SignalFx sink function to send metric data to a SignalFx endpoint.

Exceeding the DPM (datapoints per minute) limit of your SignalFx subscription can cause this sink function to drop data. See the "Limitations of the Send Metrics Data to SignalFx sink function" section on this page for more information.

Prerequisites

Before you can use this function, you must create a connection. See Create a connection to SignalFx in the Connect to Data Sources and Destinations with the Splunk Data Stream Processor manual. When configuring this sink function, set the connection_id argument to the ID of that connection.

Function input schema

collection<record<R>>
This function takes in collections of records with schema R.

Required arguments

connection_id
Syntax: string
Description: The SignalFx connection ID.
Example in Canvas View: "576205b3-f6f5-4ab7-8ffc-a4089a95d0c4"
metric_name
Syntax: expression<string>
Description: The SignalFx metric name.
Example in Canvas View: "my_metric_name"
metric_value
Syntax: expression<double>
Description: The SignalFx metric value.
Example in Canvas View: 0.33
metric_type
Syntax: expression<string>
Description: The SignalFx metric type. This can be set to:
  • COUNTER
  • CUMULATIVE_COUNTER
  • GAUGE

This argument is case-sensitive and must be uppercase.

Example in Canvas View: "COUNTER"
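
For reference, the following SPL2 sketch calls the function with only the required arguments, reusing the example values shown above. The connection ID here is a placeholder; substitute the ID of your own SignalFx connection. Because every omitted argument is a trailing optional argument, they can be left out entirely.

...| into signalfx("576205b3-f6f5-4ab7-8ffc-a4089a95d0c4", "my_metric_name", 0.33, "COUNTER");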

Optional arguments

metric_timestamp
Syntax: expression<long>
Description: The time associated with the SignalFx metric, in Unix epoch time measured in milliseconds. If a timestamp is not available, the ingest time is used as the timestamp.
Example in Canvas View: 1583864717233L
metric_dimensions
Syntax: expression<map<string, string>>
Description: Defaults to empty { }. JSON key-value pairs that describe a SignalFx metric.
Example in Canvas View: {"my_dimension": "1"}
parameters
Syntax: map<string, string>
Description: Defaults to empty { }. Key-value pairs that can be passed to SignalFx. This can be set to:
  • batch_size: The maximum number of elements to flush. The batch size can range between 50 and 25,000 elements. The default value is 2000.
  • batch_interval_msecs: The maximum time to wait before flushing. The batch interval can range between 50 and 100,000 milliseconds. The default value is 2000.
Example in Canvas View: batch_size = 20000
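
Because metric_name, metric_value, metric_type, metric_timestamp, and metric_dimensions are expressions, they can also reference fields from the incoming records rather than literal values. The following SPL2 sketch is an illustration only: it assumes hypothetical top-level record fields named name, value, and timestamp, and a hypothetical "host" dimension. Adjust the field names to match your own data schema.

...| into signalfx(connection_id: "my-signalfx-connection", metric_name: name, metric_value: value, metric_type: "GAUGE", metric_timestamp: timestamp, metric_dimensions: {"host": "server-1"});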

SPL2 example

When working in the SPL View, you can write the function by providing the arguments in this exact order.

...| into signalfx("my-signalfx-connection", "my_metric_name", 0.33, "COUNTER", null, {"my_dimension": "1"}, {"batch_size": "20000", "batch_interval_msecs": "10000"});

You can omit optional arguments only if you don't specify any other arguments that must be listed after them. This example includes null as a placeholder for metric_timestamp because metric_dimensions and parameters are listed after it.

Alternatively, you can use named arguments in any order and omit any optional arguments you don't want to declare. All unprovided arguments use their default values. In the following example, parameters is the only optional argument that is declared.

...| into signalfx(connection_id: "my-signalfx-connection", metric_type: "COUNTER", metric_name: "my_metric_name", metric_value: 0.33, parameters: {"batch_size": "20000", "batch_interval_msecs": "10000"});

If you want to use a mix of unnamed and named arguments in your functions, you need to list all unnamed arguments in the correct order before providing the named arguments.
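
For example, the following sketch passes the four required arguments unnamed and in order, and then declares only the optional metric_dimensions argument by name. It is an illustration only, reusing the same example connection ID and values as above.

...| into signalfx("my-signalfx-connection", "my_metric_name", 0.33, "COUNTER", metric_dimensions: {"my_dimension": "1"});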

Port requirements

The Send Metrics Data to SignalFx function sends HTTP requests to the SignalFx endpoint from a dynamic or ephemeral port. Your local firewall configuration must allow outgoing HTTP traffic from at least one of the ports in the dynamic or ephemeral port range allocated by your operating system. This range is typically 49152 to 65535, but it can differ depending on the operating system you are using.

Limitations of the Send Metrics Data to SignalFx sink function

If the destination SignalFx instance returns HTTP status code 429 (Too Many Requests), the sink function drops the current data batch instead of trying to send it again.

HTTP status code 429 can occur when you exceed the DPM limit of your SignalFx subscription. To prevent data from being dropped, monitor your DPM rate and avoid sending more data than what your subscription allows. See Manage resource usage with access tokens in the SignalFx documentation for information about setting limits and alerts for your DPM rate, and see the DPM Limits FAQ in the SignalFx documentation for more information about DPM limits.
