Splunk® Enterprise

Search Reference

mcollect

Description

Converts events into metric data points and inserts the metric data points into a metric index on the search head.

If you forward data to an indexer, the data is inserted on the indexer instead of the search head.

Syntax

mcollect index=<string> [file=<string>] [split=<bool>] [spool=<bool>] [prefix_field=<string>]
[host=<string>] [source=<string>] [sourcetype=<string>] [<field-list>]

Required arguments

index
Syntax: index=<string>
Description: Name of the metric index where the collected metric data is added.
field-list
Syntax: <field>, ...
Description: A comma-separated list of dimension fields. Required if split=true. Optional if split=false. If not specified (which implies split=false), all fields are treated as dimensions for the data point except the metric_name field, the prefix_field, and internal fields.
Default: No default value

Optional arguments

file
Syntax: file=<string>
Description: The file name where you want the collected metric data to be written. Only applicable when spool=false. You can use a timestamp or a random number for the file name by specifying either file=$timestamp$ or file=$random$.
Default: $random$_metrics.csv
split
Syntax: split=<bool>
Description: If set to false, the results must include a metric_name field for the name of the metric and a _value field for the numerical value of the metric. If set to true, then <field-list> must be specified.
Default: false
spool
Syntax: spool=<bool>
Description: If set to true, the metrics data file is written to the Splunk spool directory, $SPLUNK_HOME/var/spool/splunk, where the file is indexed. Once the file is indexed, it is removed. If set to false, the file is written to the $SPLUNK_HOME/var/run/splunk directory. The file remains in this directory unless further automation or administration is done.
Default: true
prefix_field
Syntax: prefix_field=<string>
Description: Only applicable when split=true. If specified, any data point that is missing this field is ignored. When the field is present, its value is prefixed to the metric name.
Default: No default value
host
Syntax: host=<string>
Description: The name of the host that you want to specify for the collected metrics data. Only applicable when spool=true.
Default: No default value
source
Syntax: source=<string>
Description: The name of the source that you want to specify for the collected metrics data.
Default: If the search is scheduled, the name of the search. If the search is ad-hoc, the name of the file that is written to the var/spool/splunk directory containing the search results.
sourcetype
Syntax: sourcetype=<string>
Description: The name of the source type that is specified for the collected metrics data. The Splunk platform does not calculate license usage for data indexed with mcollect_stash, the default source type. If you change the value of this setting to a different source type, the Splunk platform calculates license usage for any data indexed by the mcollect command.
Default: mcollect_stash

Do not change this setting without assistance from Splunk Professional Services or Splunk Support. Changing the source type requires a change to the props.conf file.
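Taken together, the spool and file arguments control where the collected metrics file is written. As a sketch, this variant of the error-count search later in this topic (the index name my_metric_index is an assumption) writes a timestamped CSV file to $SPLUNK_HOME/var/run/splunk instead of spooling it for indexing:

error | stats count BY type | rename count AS _value, type AS metric_name | mcollect index=my_metric_index spool=false file=$timestamp$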

Usage

You use the mcollect command to convert events into metric data to be stored in a metric index on the search head. The metrics data uses a specific format for the metrics fields. See Metrics data format in Metrics.

The mcollect command causes new data to be written to a metric index for every run of the search.

Splitting

If each result contains only one metric_name field and one numeric _value field, then the result is a normalized metric data point. This result can be consumed directly and does not need to be split. Otherwise, each result is split into multiple metric data points based on the specified list of dimension fields.

For example, if you have the following data:

type=cpu usage=0.78 idle=0.22

You have two metrics, usage and idle.

If you specify in your search:

split=true prefix_field=type

The value of the field you specify is used as a prefix to the metric field names. In this case, because type is specified as the prefix_field, its value, cpu, becomes the metric name prefix.

metric_name    _value
cpu.usage      0.78
cpu.idle       0.22
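A complete search that produces those two data points might look like the following. This is a sketch: the makeresults input, the host dimension, and the index name my_metric_index are illustrative assumptions.

| makeresults | eval type="cpu", usage=0.78, idle=0.22, host="server1" | mcollect index=my_metric_index split=true prefix_field=type host

Here host is the field-list, so it is kept as a dimension, while usage and idle become metric values with the cpu prefix.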

Time

If the _time field is present in the results, it is used as the timestamp of the metric data point. If the _time field is not present, the current time is used.
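For example, you can set the timestamp explicitly before collecting. In this sketch, the event_time field name and its format are assumptions.

... | eval _time=strptime(event_time, "%Y-%m-%dT%H:%M:%S") | mcollect index=my_metric_index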

field-list

If field-list is not specified, all fields are treated as dimensions for the data point, except for the prefix_field and internal fields (fields with an underscore '_' prefix). If field-list is specified, it must appear at the end of the mcollect command arguments, and all fields are treated as metric values except for the fields in field-list, the prefix_field, and internal fields.

The name of each metric value is the field name prefixed with the prefix_field value.

Effectively, one metric data point is returned for each qualifying field that contains a numerical value. If one search result contains multiple qualifying metric name/value pairs, the result is split into multiple metric data points.
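For instance, a result with the fields host, cpu_usage, and mem_used, collected with split=true and host as the field-list, is split into two metric data points (cpu_usage and mem_used), each carrying host as a dimension. A sketch, with all field names and the index name assumed:

| makeresults | eval host="server1", cpu_usage=0.78, mem_used=0.41 | mcollect index=my_metric_index split=true host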

Examples

1: Generate a count of error events as metric data points

The following example shows you how to generate a count of error events and convert them into metric data points in a metric index called 'my_metric_index'.

ERROR | stats count BY type | rename count AS _value, type AS metric_name | mcollect index=my_metric_index

2: Split data into multiple metrics with a common dimension

The following example shows you how to generate metric data by splitting your data into two metrics (avg.bytes and avg.response_time) with a common dimension (uri).

sourcetype=access_combined | stats avg(bytes) AS avg.bytes avg(response_time) AS avg.response_time BY uri | mcollect index=my_metric_index split=t uri

See also

Commands
collect
meventcollect

This documentation applies to the following versions of Splunk® Enterprise: 7.1.2, 7.1.3, 7.1.4, 7.2.0, 7.2.1

