Optimize Splunk Enterprise for peak performance
Like many services, Splunk on Windows needs proper maintenance to run at peak performance. This topic discusses methods you can apply to keep your Splunk on Windows deployment running properly, both during the deployment and after it is complete.
To ensure peak Splunk performance:
- Designate one or more machines solely for Splunk operations. Splunk scales horizontally: adding physical computers dedicated to Splunk translates into better performance than adding resources to a single computer. Where possible, split your indexing and searching activities across a number of machines, and run only main Splunk services on those machines. With the exception of the universal forwarder, performance is reduced when you run Splunk on servers that host other services.
- Dedicate fast disks for your Splunk indexes. The faster the disks available for Splunk indexing, the faster Splunk runs. Use disks with spindle speeds faster than 10,000 RPM when possible. When dedicating redundant storage for Splunk, use hardware-based RAID 1+0 (also known as RAID 10); it offers the best balance of speed and redundancy. Software-based RAID configurations through the Windows Disk Management utility are not recommended.
- Don't allow anti-virus programs to scan disks used for Splunk operations. When anti-virus file system drivers scan files for viruses on access, performance is significantly reduced, especially when Splunk internally ages data that has recently been indexed. If you must use anti-virus programs on the servers running Splunk, make sure that all Splunk directories and programs are excluded from on-access file scans.
- Use multiple indexes, where possible. Distribute the data that Splunk indexes across different indexes; sending all data to the default index can cause I/O bottlenecks on your system. Where possible, configure your indexes so that they point to different physical volumes on your systems. For information on how to configure indexes, read "Configure your indexes" in Managing Indexers and Clusters of Indexers.
- Don't store your indexes on the same physical disk or partition as the operating system. The disk that holds your Windows OS directory (%WINDIR%) or its swap file is not recommended for Splunk data storage. Put your Splunk indexes on other disks on your system. For more information on how indexes are stored, including information on database bucket types and how Splunk stores and ages them, review "How Splunk stores indexes" in Managing Indexers and Clusters of Indexers.
- Don't store the hot and warm database buckets of your Splunk indexes on network volumes. Network latency will decrease performance significantly. Reserve fast, local disk for the hot and warm buckets of your Splunk indexes. You can specify network shares such as Distributed File System (DFS) volumes or Network File System (NFS) mounts for the cold and frozen buckets of the index, but note that searches that include data stored in the cold database buckets will be slower.
- Maintain disk availability, bandwidth, and space on your Splunk indexers. Make sure that the disk volumes that hold Splunk's indexes maintain 20% or more free space at all times. Disk performance decreases as free space shrinks, because disk seek times increase. This affects how fast Splunk indexes data, and can also determine how quickly search results, reports, and alerts are returned. In a default Splunk installation, the drive(s) that contain your indexes must have at least 5,000 megabytes (approximately 5 gigabytes) of free disk space, or indexing pauses.
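As a sketch of the index-placement guidance above, an indexes.conf stanza might keep hot and warm buckets on fast local disk while pointing the cold path at slower or network storage. The index name, drive letters, and share path below are hypothetical, not defaults:

```ini
# indexes.conf -- illustrative stanza; index name and paths are hypothetical
[web_logs]
# Hot and warm buckets on a fast, local, dedicated disk
homePath   = E:\splunk_indexes\web_logs\db
# Cold buckets can live on slower or network storage;
# searches over cold data will be slower
coldPath   = \\fileserver\splunk_cold\web_logs\colddb
thawedPath = E:\splunk_indexes\web_logs\thaweddb
```

Defining several such stanzas, each on its own physical volume, spreads indexing I/O across disks instead of funneling everything to the default index.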
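The 5,000-megabyte pause threshold mentioned above corresponds to the minFreeSpace setting under the [diskUsage] stanza of server.conf. A sketch of raising it to give indexing more headroom (the value is in megabytes; 10000 here is an illustrative choice, not a recommendation):

```ini
# server.conf -- illustrative; minFreeSpace is in megabytes (default 5000)
[diskUsage]
minFreeSpace = 10000
```

When free space on an index volume falls below this value, Splunk pauses indexing until space is freed.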
This documentation applies to the following versions of Splunk® Enterprise: 6.5.7, 7.0.0, 7.0.1, 7.0.2, 7.0.3, 7.0.4, 7.0.5, 7.0.6, 7.0.7, 7.0.8, 7.0.9, 7.0.10, 7.0.11, 7.0.13, 7.1.0, 7.1.1, 7.1.2, 7.1.3, 7.1.4, 7.1.5, 7.1.6, 7.1.7, 7.1.8, 7.1.9, 7.1.10