Get metrics in from StatsD
StatsD is a network daemon that runs on the Node.js platform and listens for metrics sent over UDP or TCP. For an overview of StatsD, see Measure Anything, Measure Everything on the Code as Craft website.
StatsD has several metric protocol formats, some of which encode dimensions in different ways. The Splunk platform supports the following formats natively:
- Basic StatsD data line metric protocol, which includes metric_name, _value, and metric_type.
- Expanded StatsD data line metric protocol, which adds sample rate and dimensions.
Splunk supports two metric_type values for StatsD metric data points: g, for gauge metrics, and c, for counter metrics.
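For example, the following data lines (simplified from the examples later in this topic) report a gauge and a counter, respectively:
performance.os.disk:1099511627776|g
event.login:6|c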
For ease of use, by default the Splunk platform converts StatsD data into single-measurement metric data points, where each metric data point has a key-value pair for the metric name and another key-value pair for the metric measurement. If you need the Splunk software to convert StatsD data into a metric data point format that supports multiple metric measurements per data point, add STATSD_EMIT_SINGLE_MEASUREMENT_FORMAT=false to a stanza for the metric source type in props.conf.
See Configure special StatsD input customizations for more information.
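For example, a minimal props.conf stanza might look like the following sketch. The [statsd] stanza name is an assumption; use the stanza for your metric source type.
# Sketch only: emit multiple-measurement metric data points for the statsd source type
[statsd]
STATSD_EMIT_SINGLE_MEASUREMENT_FORMAT=false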
Basic StatsD metric protocol
The basic StatsD data line metric protocol has just three fields: the metric_name, the _value, and the metric_type.
Syntax
<metric_name>:<_value>|<metric_type>
Example metric
performance.os.disk:1099511627776|g
Expanded StatsD metric protocol
The expanded StatsD data line metric protocol supports dimensions and a sample rate. Sample rates apply only to counter metrics, which have a metric_type of c.
For more about formats for metric names and dimensions, see Best practices for metrics.
Syntax
<metric_name>:<_value>|<metric_type>|@<sample_rate>|#dim1:valueX,dim2:valueY
Example gauge metric
A gauge is a metric that represents a single numerical value that can arbitrarily go up and down. For example, you can use a gauge to represent the number of currently running search jobs, or the temperature in your server room.
performance.os.disk:1099511627776|g|#region:us-west-1,datacenter:us-west-1a,rack:63,os:Ubuntu16.10,arch:x64,team:LON,service:6,service_version:0,service_environment:test,path:/dev/sda1,fstype:ext3
Example counter metric, after processing by the Splunk platform
A counter metric counts occurrences of an event. Its value can only increase or be reset to zero. For example, you can use a counter to represent a number of requests served, tasks completed, or errors. For more information about counter metrics, see Investigate counter metrics.
Here is an example of a counter metric that has been processed by the Splunk platform.
event.login:6|c|@0.5|#region:west,dc:west-1,ip:10.1.1.1,host:valis1.buttercupgames.com,app:zoolu
Note that this counter metric has a sample rate of 0.5. This means that this counter metric is sampled only 50% of the time by the StatsD client. The Splunk platform adjusts for this by multiplying the metric value by 1/0.5, or 2. This means that the original metric sent from the StatsD client looked like this:
event.login:3|c|@0.5|#region:west,dc:west-1,ip:10.1.1.1,host:valis1.buttercupgames.com,app:zoolu
Note that the original metric event had a numeric value of 3.
About the sample rate
When large numbers of data points are being produced for a particular counter metric, it can be expensive for the Splunk platform to aggregate them. The StatsD client manages this by implementing a sample rate to reduce the network traffic that it sends to the Splunk platform.
The StatsD client puts the sample_rate value in the counter metric data point to indicate to the Splunk platform the actual downsampling percentage that it employed. The Splunk platform responds to this by multiplying the value of a downsampled counter metric by 1/<sample_rate>.
For example, say you have a counter metric named event.login with a sample_rate of 0.1. This means that only 10% of the event.login data points are passed from the StatsD client to your Splunk platform implementation. The Splunk platform multiplies the event.login values by 1/0.1, or 10, to adjust for the missed data points. So if your Splunk platform implementation receives an event.login data point with a value of 2, it changes that value to 20.
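As an illustration of this arithmetic only (this is not the Splunk platform's own code), the adjustment amounts to multiplying the received value by 1/sample_rate:
# Illustration only: how a downsampled counter value is scaled back up.
def adjust_counter_value(value, sample_rate=1.0):
    """Return the counter value multiplied by 1/sample_rate."""
    return value * (1 / sample_rate)

# The event.login example: a received value of 2 with a sample_rate of 0.1
# is adjusted to 2 * (1/0.1) = 20.
print(adjust_counter_value(2, 0.1))  # 20.0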
The Splunk platform passes a warning message for sample_rate values that are not between 0 and 1. The default setting for sample_rate is 1.
Using other StatsD formats
If your StatsD implementation uses a format for dimensions that the Splunk platform does not support natively, for example, one that embeds dimensions within the metric name, you can still use those metrics in the Splunk platform. However, you need to customize Splunk configuration files to specify how to extract dimensions from your format.
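As a rough sketch of that customization, dimension extraction from metric names is configured with a STATSD-DIM-TRANSFORMS setting in props.conf that points to a statsd-dims stanza in transforms.conf. The stanza names, the regular expression, and the metric shape below are illustrative assumptions; see Configure special StatsD input customizations for the supported syntax.
# props.conf (sketch): apply a dimension-extraction transform to the statsd source type
[statsd]
STATSD-DIM-TRANSFORMS = extract_ip_dim

# transforms.conf (sketch): extract an "ip" dimension from a metric name such as
# mem.used.10.2.3.4:33|g, using a named capture group as the dimension name
[statsd-dims:extract_ip_dim]
REGEX = (?<ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})
REMOVE_DIMS_FROM_METRIC_NAME = true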
Another option is to use StatsD to gather metrics, but use collectd to send the data to the Splunk platform over HTTP. The benefit of this method is that collectd normalizes the dimension format in the metrics data. For more, see Get metrics in from collectd.
Set up a data input for StatsD data
After you configure your data source to send data in the StatsD protocol, create a UDP or TCP data input in the Splunk platform to listen for StatsD data on an open port.
- In Splunk Web, go to Settings > Data inputs.
- Under Local inputs, click Add new next to UDP or TCP, depending on the type of input you want to create.
- For Port, enter the number of the port you are using for StatsD.
- Click Next.
- Click Select Source Type, then select Metrics > statsd.
- For Index, select an existing metrics index. Or, click Create a new index to create one.
If you choose to create an index, in the New Index dialog box:
  - Enter an Index Name. User-defined index names must consist of only numbers, lowercase letters, underscores, and hyphens. Index names cannot begin with an underscore or hyphen.
  - For Index Data Type, click Metrics.
  - Configure additional index properties as needed.
  - Click Save.
- Click Review, then click Submit.
When using UDP ports to ingest metric data, you cannot use parallel ingestion or the multiple pipeline sets feature.
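As an alternative to the Splunk Web steps above, you can define the input directly in inputs.conf. The following is a sketch only; the port number (8125, the conventional StatsD port) and the index name are assumptions.
# inputs.conf (sketch): listen for StatsD data over UDP
[udp://8125]
sourcetype = statsd
# "my_metrics_index" is an illustrative name for an existing metrics index
index = my_metrics_index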