Metrics indexing performance
This topic summarizes the results of performance testing for metrics indexes.
Size on disk
- When ingesting typical metrics payloads with supported metrics source types (metrics_csv), a metrics index requires about 50% less disk storage space than storing the same payload in an events index.
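The shape of a metrics_csv payload can be sketched as follows. This is an illustrative example, not a definitive schema: the file name, dimension column, and sample values are assumptions, and the column set (metric_timestamp, metric_name, _value, plus optional dimension columns) reflects the conventional metrics CSV layout.

```shell
# Hypothetical metrics_csv-style payload: a timestamp, a metric name, a
# numeric value, and an optional dimension column (here, "host").
cat > metrics_sample.csv <<'EOF'
metric_timestamp,metric_name,_value,host
1625097600,cpu.idle,97.5,web-01
1625097660,cpu.idle,96.2,web-01
EOF

# Show the header row of the generated file.
head -1 metrics_sample.csv
```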
- Consider the following ingestion throughput results when deciding whether to scale horizontally by adding indexers.
- Using the collectd_http source type with an HTTP Event Collector (HEC) input, testing sustained a maximum ingestion throughput of around 55,000 events per second, and around 58,000 events per second without additional search load.
- The default batch size was 5,000 events per batch. No significant difference in ingestion performance was observed across batch sizes of 100 to 5,000 events.
- The keep-alive setting was enabled for these tests.
- A typical event size was about 214 bytes.
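The batching described above can be sketched as a single HEC request carrying multiple newline-delimited JSON events. This is a minimal sketch under stated assumptions: the endpoint URL, port 8088, the `HEC_TOKEN` variable, and the field values are illustrative, and the actual POST is left commented out because it requires a reachable HEC endpoint.

```shell
# Build a two-event batch body. HEC accepts multiple JSON events
# concatenated in one request body, which amortizes per-request overhead.
BATCH='{"event":"metric","fields":{"metric_name":"cpu.idle","_value":97.5}}
{"event":"metric","fields":{"metric_name":"cpu.idle","_value":96.2}}'

# Print the batch body (two events, one per line).
printf '%s\n' "$BATCH"

# Then POST it to HEC (commented out; assumes a local Splunk instance
# with HEC enabled on port 8088 and a valid token in $HEC_TOKEN):
# curl -k https://localhost:8088/services/collector \
#   -H "Authorization: Splunk ${HEC_TOKEN}" --data "$BATCH"
```

In the tests above, larger batches (up to the 5,000-event default) did not measurably change throughput, so batching primarily reduces connection overhead rather than indexing cost.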
- Using the statsd source type with a UDP input, throughput was highly variable depending on other network activity. For UDP inputs we recommend using a universal forwarder as close as possible to where metrics are collected.
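For context, the statsd wire format that a UDP input receives is a short `name:value|type` line per metric. The sketch below shows a gauge; the metric name and value are illustrative, and the UDP send is commented out because it assumes a statsd listener on the conventional port 8125.

```shell
# A statsd gauge in wire format: "name:value|type" ("g" = gauge).
METRIC="cpu.load.shortterm:0.42|g"
echo "$METRIC"

# Send it over UDP (commented out; assumes a listener on localhost:8125):
# echo "$METRIC" | nc -u -w1 localhost 8125
```

Because UDP delivery is unacknowledged, placing a universal forwarder near the metric source (as recommended above) shortens the lossy network segment.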
- Consider the results from the following test for running metrics queries. This test used metrics from 1,000 hosts, with a total of 6 billion events in the metrics index, where queries were representative and did not use wildcards in metric names.
Time range | Events      | Query speed
1 hour     | 35 million  | < 0.1s
1 day      | 850 million | ~3-5s
1 week     | 6 billion   | ~20-22s
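The kind of metrics query timed above can be sketched as an SPL `mstats` search submitted through the REST search API. This is an assumption-laden illustration: the index name, metric name, span, port 8089, and credentials are all hypothetical, and the submission itself is commented out because it requires a running Splunk instance.

```shell
# A representative metrics query: aggregate one named metric (no wildcard
# in the metric name) over the search time range.
QUERY='| mstats avg(_value) WHERE index=metrics_test AND metric_name=cpu.idle span=1m'
echo "$QUERY"

# Submit it as a search job (commented out; assumes local splunkd on 8089):
# curl -k -u admin:changeme https://localhost:8089/services/search/jobs \
#   --data-urlencode "search=${QUERY}"
```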
See the Capacity Planning Manual.
This documentation applies to the following versions of Splunk Cloud Platform™: 8.2.2109, 8.1.2011, 8.0.2006, 8.0.2007, 8.1.2009, 8.1.2012, 8.1.2101, 8.1.2103, 8.2.2104, 8.2.2105 (latest FedRAMP release), 8.2.2106, 8.2.2107