Converts events into metric data points and inserts those data points into a metric index on the search head. A metric index must be present on the search head for
mcollect to work properly, unless you are forwarding data to the indexer.
If you are forwarding data to the indexer, the data points are inserted on the indexer instead of the search head.
You can use the
mcollect command only if your role has the
run_mcollect capability. See Define roles on the Splunk platform with capabilities in Securing Splunk Enterprise.
mcollect index=<string> [file=<string>] [split=<bool>] [spool=<bool>] [prefix_field=<string>]
[host=<string>] [source=<string>] [sourcetype=<string>] [<field-list>]
- Syntax: index=<string>
- Description: Name of the metric index where the collected metric data is added.
- Syntax: <field>, ...
- Description: A list of dimension fields. Required when
split=true. Optional when
split=false. If unspecified, which implies that
split=false, all fields are treated as dimensions for the data point, except for the
prefix_field and internal fields.
- Default: No default value
- Syntax: file=<string>
- Description: The file name where you want the collected metric data to be written. Only applicable when
spool=false. You can use a timestamp or a random number for the file name by specifying either file=$timestamp$ or file=$random$.
- Default: $random$_metrics.csv
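For example, a search along these lines (the index name is illustrative) writes the collected metrics to a timestamped file rather than spooling it:

```
... | mcollect index=my_metric_index spool=false file=$timestamp$
```

Because spool=false, the file remains in the $SPLUNK_HOME/var/run/splunk directory until you remove it.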
- Syntax: split=<bool>
- Description: If set to false, the results must include a
metric_name field for the name of the metric and a
_value field for the numerical value of the metric. If set to true, then
<field-list> must be specified.
- Default: false
- Syntax: spool=<bool>
- Description: If set to true, the metrics data file is written to the Splunk spool directory,
$SPLUNK_HOME/var/spool/splunk, where the file is indexed. Once the file is indexed, it is removed. If set to false, the file is written to the
$SPLUNK_HOME/var/run/splunk directory. The file remains in this directory unless further automation or administration is done.
- Default: true
- Syntax: prefix_field=<string>
- Description: Only applicable when
split=true. If specified, any data point with that field missing is ignored. Otherwise, the field value is prefixed to the metric name.
- Default: No default value
- Syntax: host=<string>
- Description: The name of the host that you want to specify for the collected metrics data. Only applicable when
spool=false.
- Default: No default value
- Syntax: source=<string>
- Description: The name of the source that you want to specify for the collected metrics data.
- Default: If the search is scheduled, the name of the search. If the search is ad hoc, the name of the file that is written to the
var/spool/splunk directory containing the search results.
- Syntax: sourcetype=<string>
- Description: The name of the source type that you want to specify for the collected metrics data. The Splunk platform does not calculate license usage for data indexed with
mcollect_stash, the default source type. If you change the value of this setting to a different source type, the Splunk platform calculates license usage for any data indexed by the
mcollect command.
- Default: mcollect_stash
Do not change this setting without assistance from Splunk Professional Services or Splunk Support. Changing the source type requires a change to the limits.conf file.
You use the
mcollect command to convert events into metric data points that are stored in a metric index on the search head. The metrics data uses a specific format for the metrics fields. See
Metrics data format in Metrics.
The mcollect command causes new data to be written to a metric index for every run of the search.
If each result contains only one
metric_name field and one numeric
_value field, then the result is a normalized metric data point. This result can be consumed directly and does not need to be split. Otherwise, each result is split into multiple metric data points based on the specified list of dimension fields.
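As a sketch of the normalized case (the metric name, value, and index name here are illustrative), a result that already carries one metric_name field and one numeric _value field is consumed directly:

```
| makeresults | eval metric_name="cpu.usage", _value=0.78 | mcollect index=my_metric_index
```

No splitting occurs because the result is already a single normalized metric data point.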
For example, if you have the following data:
type=cpu usage=0.78 idle=0.22
You have two metrics, usage and idle.
If you specify prefix_field=type in your search, the value of the field you specify is used as a prefix to the metric field names. In this case, because
type is specified, its value,
cpu, becomes the metric name prefix, producing the metrics cpu.usage and cpu.idle.
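Continuing this example, a search along the following lines collects the split metrics. This sketch assumes each event also carries a host field to serve as the required dimension list when split=true; the index name is illustrative:

```
... | mcollect index=my_metric_index split=true prefix_field=type host
```

The fields usage and idle become the metric values cpu.usage and cpu.idle, while host is retained as a dimension.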
If the _time field is present in the results, it is used as the timestamp of the metric data point. If the
_time field is not present, the current time is used.
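For instance, you can set the timestamp explicitly before collecting. In this sketch, event_time is a hypothetical field holding an ISO-8601 string, and the index name is illustrative:

```
... | eval _time=strptime(event_time, "%Y-%m-%dT%H:%M:%S") | mcollect index=my_metric_index
```

If you omit the eval, mcollect falls back to the existing _time value, or to the current time if _time is absent.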
If field-list is not specified, all fields are treated as dimensions for the data point, except for the
prefix_field and internal fields (fields with an underscore '_' prefix). If
field-list is specified, the list must appear at the end of the
mcollect command arguments, and all fields are treated as metric values, except for the fields in
field-list, the prefix_field, and internal fields.
The name of each metric value is the field name prefixed with the value of the prefix_field.
Effectively, one metric data point is returned for each qualifying field that contains a numerical value. If one search result contains multiple qualifying metric name/value pairs, the result is split into multiple metric data points.
1: Generate a count of error events as metric data points
The following example shows you how to generate a count of error events and convert them into metric data points in a metric index called 'my_metric_index'.
ERROR | stats count BY type | rename count AS _value type AS metric_name | mcollect index=my_metric_index
2: Split data into multiple metrics with a common dimension
The following example shows you how to generate metric data by splitting your data into two metrics (
avg.bytes and avg.response_time) with a common dimension (uri).
sourcetype=access_combined | stats avg(bytes) AS avg.bytes avg(response_time) AS avg.response_time BY uri | mcollect index=my_metric_index split=t uri
This documentation applies to the following versions of Splunk® Enterprise: 7.1.2, 7.1.3, 7.1.4, 7.1.5, 7.1.6, 7.1.7, 7.1.8, 7.1.9, 7.1.10, 7.2.0, 7.2.1, 7.2.2, 7.2.3, 7.2.4, 7.2.5, 7.2.6, 7.2.7, 7.2.8, 7.2.9, 7.2.10, 7.3.0, 7.3.1, 7.3.2, 7.3.3, 7.3.4, 7.3.5, 7.3.6, 7.3.7, 7.3.8, 7.3.9