Get metrics in from other sources
If you are gathering metrics from a source that is not natively supported, you can still add this metrics data to a metrics index.
Get metrics in from files in CSV format
If your metrics data is in CSV format, use the metrics_csv pretrained source type.
Your CSV file must have a header that starts with the metric_timestamp, metric_name, and _value fields. All other fields are considered to be dimensions.
| Field name | Required | Description | Example |
|---|---|---|---|
| metric_timestamp | X | Epoch time (elapsed time since 1/1/1970), in seconds. Can include a decimal component for millisecond precision. | 1504907933.000 |
| metric_name | X | The metric name using dotted-string notation. | os.cpu.percent |
| _value | X | A numerical value. | 42.12345 |
| dimensions | | All other fields are treated as dimensions. | ip |
To add CSV data to a metrics index, create a data input with the following:
- Source type: Metrics > metrics_csv
- Index: a metrics index
See Monitor files and directories in the Getting Data In manual, and Create metrics indexes in the Managing Indexers and Clusters of Indexers manual.
Example of a CSV file metrics input
Here is an example of a CSV file that is properly formatted for metrics. The first three columns of the table are the required fields: metric_timestamp, metric_name, and _value. The fourth column, process_object_guid, is a dimension.
```
"metric_timestamp","metric_name","_value","process_object_guid"
"1509997011","process.cpu.avg","2563454144","dbd1414b-378e-48bd-9735-bc2bab1e58fa"
"1509997011","process.cpu.min","2563454144","dbd1414b-378e-48bd-9735-bc2bab1e58fa"
"1509997011","process.cpu.max","2563454144","dbd1414b-378e-48bd-9735-bc2bab1e58fa"
"1509997011","process.cpu.last","2563454144","dbd1414b-378e-48bd-9735-bc2bab1e58fa"
"1509997011","process.ram.avg","2563454144","dbd1414b-378e-48bd-9735-bc2bab1e58fa"
"1509997011","process.ram.min","2563454144","dbd1414b-378e-48bd-9735-bc2bab1e58fa"
"1509997011","process.ram.max","2563454144","dbd1414b-378e-48bd-9735-bc2bab1e58fa"
"1509997011","process.ram.last","2563454144","dbd1414b-378e-48bd-9735-bc2bab1e58fa"
"1509997011","process.disk.avg","38750","dbd1414b-378e-48bd-9735-bc2bab1e58fa"
"1509997011","process.disk.min","38750","dbd1414b-378e-48bd-9735-bc2bab1e58fa"
"1509997011","process.disk.max","38750","dbd1414b-378e-48bd-9735-bc2bab1e58fa"
"1509997011","process.disk.last","38750","dbd1414b-378e-48bd-9735-bc2bab1e58fa"
```
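A file in this layout can be produced with Python's standard csv module. This is an illustrative sketch, not part of Splunk; the field names follow the metrics_csv header requirements, and the process_object_guid dimension is taken from the example above.

```python
import csv
import io

# Required header fields first, then any dimension columns.
FIELDS = ["metric_timestamp", "metric_name", "_value", "process_object_guid"]

def write_metrics_csv(rows, out):
    """Write rows (dicts keyed by FIELDS) as a metrics_csv-style file."""
    writer = csv.DictWriter(out, fieldnames=FIELDS, quoting=csv.QUOTE_ALL)
    writer.writeheader()
    writer.writerows(rows)

buf = io.StringIO()
write_metrics_csv(
    [{"metric_timestamp": "1509997011",
      "metric_name": "process.cpu.avg",
      "_value": "2563454144",
      "process_object_guid": "dbd1414b-378e-48bd-9735-bc2bab1e58fa"}],
    buf,
)
print(buf.getvalue())
```

Because the header row must come first and every non-required column is treated as a dimension, using DictWriter with a fixed field order keeps the file well-formed as more dimensions are added.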
To get this metrics data into your system, create an input that uses the pretrained metrics_csv source type and sends the metrics data to a metrics index.
After you set up your metrics_csv input, you should have the following inputs.conf configuration on your universal forwarder:
```
# inputs.conf
[monitor:///opt/metrics_data]
index = metrics
sourcetype = metrics_csv
```
The universal forwarder monitors the CSV data and sends it to the metrics indexer. After you set up your metrics_csv input, you should have the following indexes.conf configuration on the metrics indexer:
```
# indexes.conf
[metrics]
homePath = $SPLUNK_DB/metrics/db
coldPath = $SPLUNK_DB/metrics/colddb
thawedPath = $SPLUNK_DB/metrics/thaweddb
datatype = metric
maxTotalDataSizeMB = 512000
```
Get metrics in from clients over TCP/UDP
You can add metrics data from a client that is not natively supported to a metrics index by manually configuring a source type for your data, then defining regular expressions to specify how the Splunk software should extract the required metrics fields. See Metrics data format.
For example, let's say you are using Graphite. The Graphite plaintext protocol format is:
```
<metric path> <metric value> <metric timestamp>
```
A sample metric might be:
```
510fcbb8f755.sda2.diskio.read_time 250 1487747370
```
To index these metrics, edit Splunk configuration files to manually specify how to extract fields.
Configure field extraction by editing configuration files
- Define a custom source type for your metrics data.
- In a text editor, open the props.conf configuration file from the local directory for the location you want to use, such as the Search & Reporting app ($SPLUNK_HOME/etc/apps/search/local/) or the system ($SPLUNK_HOME/etc/system/local). If a props.conf file does not exist in this location, create a text file and save it to that location.
- Append a stanza to the props.conf file as follows:
```
# props.conf
[<metrics_sourcetype_name>]
TIME_PREFIX = <regular expression>
TIME_FORMAT = <strptime-style format>
TRANSFORMS-<class> = <transform_stanza_name>
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
pulldown_type = 1
category = Metrics
```
- <metrics_sourcetype_name>: The name of your custom metrics source type.
- TIME_PREFIX = <regular expression>: A regular expression that indicates where the timestamp is located.
- TIME_FORMAT = <strptime-style format>: A strptime format string used to extract the timestamp. For more about strptime, see Configure timestamp recognition in the Getting Data In manual.
- TRANSFORMS-<class> = <transform_stanza_name>: <class> is a unique literal string that identifies the namespace of the field to extract. <transform_stanza_name> is the name of your stanza in transforms.conf that indicates how to extract the field.
- Define a regular expression for each metrics field to extract.
- In a text editor, open the transforms.conf configuration file from the local directory for the location you want to use, such as the Search & Reporting app ($SPLUNK_HOME/etc/apps/search/local/) or the system ($SPLUNK_HOME/etc/system/local). If a transforms.conf file does not exist in this location, create a text file and save it to that location.
- Append a stanza for each regular expression as follows:
```
# transforms.conf
[<transform_stanza_name>]
REGEX = <regular expression>
FORMAT = <string>
WRITE_META = true
```
- transform_stanza_name: A unique name for this stanza.
- REGEX = <regular expression>: A regular expression that defines how to match and extract metrics fields from this metrics data.
- FORMAT = <string>: A string that specifies the format of the metrics event.
- Create a data input for this source type as described in Set up a data input for StatsD data, and select your custom source type.
For more about editing these configuration files, see About configuration files, props.conf, and transforms.conf in the Admin Manual.
Example of configuring field extraction
This example shows how to create a custom source type and regular expressions to extract fields from Graphite metrics data.
```
# props.conf.example
[graphite_plaintext]
TIME_PREFIX = \s(\d{0,10})$
TIME_FORMAT = %s
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
pulldown_type = 1
TRANSFORMS-graphite-host = graphite_host
TRANSFORMS-graphite-metricname = graphite_metric_name
TRANSFORMS-graphite-metricvalue = graphite_metric_value
category = Metrics
```
```
# transforms.conf.example
[graphite_host]
REGEX = ^(\S[^\.]+)
FORMAT = host::$1
DEST_KEY = MetaData:Host

[graphite_metric_name]
REGEX = \.(\S+)
FORMAT = metric_name::graphite.$1
WRITE_META = true

[graphite_metric_value]
REGEX = \w+\s+(\d+\.?\d+)\s+
FORMAT = _value::$1
WRITE_META = true
```
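To see what these extractions produce, you can run equivalent regular expressions against the sample Graphite line outside Splunk. This sketch uses Python's re module; the value pattern is written here with an escaped dot (\d+\.?\d+) so that "." matches a literal decimal point.

```python
import re

# Sample Graphite plaintext line from the example above.
line = "510fcbb8f755.sda2.diskio.read_time 250 1487747370"

# graphite_host: everything before the first dot becomes the host.
host = re.search(r"^(\S[^.]+)", line).group(1)

# graphite_metric_name: the rest of the dotted path, prefixed "graphite."
# as in the FORMAT setting.
metric_name = "graphite." + re.search(r"\.(\S+)", line).group(1)

# graphite_metric_value: the numeric measurement becomes _value.
value = re.search(r"\w+\s+(\d+\.?\d+)\s+", line).group(1)

print(host, metric_name, value)
# 510fcbb8f755 graphite.sda2.diskio.read_time 250
```

Splunk's regex engine (PCRE) differs from Python's in some respects, so treat this only as a sanity check of the capture groups, not as a substitute for testing the source type against real data.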
Get metrics in from clients over HTTP or HTTPS
If you want to send metrics data in JSON format from a client that is not natively supported to a metrics index over HTTP or HTTPS, use the HTTP Event Collector (HEC) and the /collector REST API endpoint.
Create a data input and token for HEC
- In Splunk Web, click Settings > Data Inputs.
- Under Local Inputs, click HTTP Event Collector.
- Verify that HEC is enabled.
- Click Global Settings.
- For All Tokens, click Enabled if this button is not already selected.
- Click Save.
- Configure an HEC token for sending data by clicking New Token.
- On the Select Source page, for Name, enter a token name, for example "Metrics token".
- Leave the other options blank or unselected.
- Click Next.
- On the Input Settings page, for Source type, click New.
- In Source Type, type a name for your new source type.
- For Source Type Category, select Metrics.
- Optionally, in Source Type Description type a description.
- Next to Default Index, select your metrics index, or click Create a new index to create one.
If you choose to create an index, in the New Index dialog box:
- Enter an Index Name.
- For Index Data Type, click Metrics.
- Configure additional index properties as needed.
- Click Save.
- Click Review, and then click Submit.
- Copy the Token Value that is displayed. This HEC token is required for sending data.
See Getting data in with HTTP Event Collector on the Splunk Developer Portal.
Send data to a metrics index over HTTP
Use the /collector REST API endpoint and your HEC token to send data directly to a metrics index as follows:
```
curl http://<splunk_host>:<HTTP_port>/services/collector \
  -H 'Authorization: Splunk <HEC_token>' \
  -d "<metrics_data>"
```
You need to provide the following values:
- Splunk host machine (IP address, host name, or load balancer name)
- HTTP port number
- HEC token value
- Metrics data, which requires an "event" field set to "metric".
For more about HEC, see Getting data in with HTTP Event Collector and Event formatting on the Splunk Developer Portal.
For more about the /collector endpoint, see /collector in the REST API Reference Manual.
Example of sending metrics using HEC
The following example shows a command that sends a metric measurement to a metrics index, with the following values:
- Splunk host machine: "localhost"
- HTTP port number: "8088"
- HEC token value: "b0221cd8-c4b4-465a-9a3c-273e3a75aa29"
```
curl https://localhost:8088/services/collector \
  -H "Authorization: Splunk b0221cd8-c4b4-465a-9a3c-273e3a75aa29" \
  -d '{"time": 1486683865.000,"event":"metric","source":"disk","host":"host_99","fields":{"region":"us-west-1","datacenter":"us-west-1a","rack":"63","os":"Ubuntu16.10","arch":"x64","team":"LON","service":"6","service_version":"0","service_environment":"test","path":"/dev/sda1","fstype":"ext3","_value":1099511627776,"metric_name":"total"}}'
```
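The same payload can be built programmatically before sending. This is an illustrative sketch, not part of Splunk: it constructs the JSON body in Python and only comments out the actual HTTP call, since sending requires a reachable HEC endpoint and a valid token.

```python
import json

# Build the HEC metric payload: "event" must be the literal string
# "metric", and the measurement itself goes in fields["_value"] with
# its name in fields["metric_name"]. Dimension fields are abbreviated.
payload = {
    "time": 1486683865.000,
    "event": "metric",
    "source": "disk",
    "host": "host_99",
    "fields": {
        "region": "us-west-1",
        "path": "/dev/sda1",
        "_value": 1099511627776,
        "metric_name": "total",
    },
}
body = json.dumps(payload)

# To send (requires the third-party requests package and a live HEC endpoint):
# import requests
# requests.post("https://localhost:8088/services/collector",
#               headers={"Authorization": "Splunk <HEC_token>"},
#               data=body)
```

Building the body with json.dumps avoids the quoting mistakes that are easy to make when embedding raw JSON in a shell command.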
This documentation applies to the following versions of Splunk® Enterprise: 7.0.0, 7.0.1, 7.0.2, 7.0.3, 7.0.4, 7.0.5, 7.0.6, 7.0.7, 7.0.8, 7.0.9, 7.0.10, 7.0.11, 7.0.13, 7.1.0, 7.1.1, 7.1.2, 7.1.3, 7.1.4, 7.1.5, 7.1.6, 7.1.7, 7.1.8, 7.1.9, 7.1.10, 7.2.0, 7.2.1, 7.2.2, 7.2.3, 7.2.4, 7.2.5, 7.2.6, 7.2.7, 7.2.8, 7.2.9, 7.2.10, 7.3.0, 7.3.1, 7.3.2, 7.3.3, 7.3.4, 7.3.5, 7.3.6, 7.3.7, 7.3.8, 7.3.9