About workload management
This documentation applies to workload management in Splunk Enterprise only. For documentation that applies to workload management in Splunk Cloud Platform, see Workload Management overview in the Splunk Cloud Platform Admin Manual.
Workload management is a rule-based framework that lets you allocate compute and memory resources to search, indexing, and other workloads in Splunk Enterprise.
Workload management lets you create system resource pools, called workload pools, and allocate search workloads to different pools. You can also monitor long-running searches and perform automated remediation actions.
Workload management lets you:
- Assign resources to critical search workloads.
- Avoid data-ingestion latency due to heavy search load.
- Restrict allocation of resources to low-priority search workloads, such as real-time searches.
- Monitor searches and apply automated remediation, such as aborting a search or throttling resources.
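As a concrete illustration of the pool concept, workload pools are defined in workload_pools.conf. The sketch below shows what a two-pool setup might look like; the stanza and attribute names follow workload_pools.conf.spec, but the pool names, weights, and values are illustrative assumptions, not a recommended configuration:

```
# workload_pools.conf -- illustrative sketch; values are assumptions
[general]
enabled = true

# Pool reserved for business-critical search workloads.
[workload_pool:critical_searches]
cpu_weight = 70
mem_weight = 70
category = search
default_category_pool = false

# Default pool for low-priority ad hoc and real-time searches.
[workload_pool:low_priority]
cpu_weight = 10
mem_weight = 10
category = search
default_category_pool = true
```

In this sketch, the relative `cpu_weight` and `mem_weight` values steer most compute and memory toward the critical pool, while searches with no explicit placement land in the low-priority default pool.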
- To learn more about workload management, see How workload management works.
- For Linux configuration prerequisites, see Set up Linux for workload management.
- For workload pool configuration instructions, see Configure workload pools.
- To learn about automated search placement and monitoring rules, see Configure workload rules.
- For instructions on deploying workload management in a distributed environment, see Configure workload management on a distributed deployment.
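To show the shape of the rule-based placement and automated remediation described above: a workload rule pairs a predicate with an action. The sketch below assumes attribute names from workload_rules.conf.spec; the specific predicates, thresholds, and rule names are illustrative assumptions only:

```
# workload_rules.conf -- illustrative sketch; predicates and values are assumptions

# Place searches run by the admin role into a hypothetical critical pool.
[workload_rule:admin_to_critical]
predicate = role=admin
action = move
workload_pool = critical_searches

# Abort ad hoc searches that run longer than 10 minutes.
[workload_rule:abort_long_adhoc]
predicate = search_type=adhoc AND runtime > 10m
action = abort
```

The first rule handles placement at search start; the second is a monitoring rule that applies remediation to a search already in flight.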
This documentation applies to Splunk® Enterprise versions 8.0.0 through 9.3.2.