Splunk® Enterprise

Capacity Planning Manual

Reference hardware

The reference hardware specification provides a baseline for scoping and scaling the Splunk platform for your use case. The recommendations are based on the Splunk Validated Architectures (SVAs).

Reference host specification for single-instance deployments

This is the minimum instance specification for a production-grade Splunk Enterprise deployment. A single-instance deployment corresponds to the S1 architecture in the SVAs:

  • An x86 64-bit chip architecture.
  • 12 physical CPU cores, or 24 vCPU at 2 GHz or greater speed per core.
  • 12 GB RAM.
  • For storage, review the Indexer recommendation in What storage type should I use for a role?
  • A 1 Gb Ethernet NIC, with an optional second NIC for a management network.
  • A 64-bit Linux or Windows distribution. See Supported Operating Systems in the Installation Manual.

If you are planning a single-instance Splunk Enterprise installation and want additional headroom for search concurrency or more Splunk apps, consider using the mid-range or high-performance indexer specifications described below. When your search and data ingest load exceeds what a single-instance deployment can handle, review the distributed deployment models defined in the SVAs.

Reference host specifications for distributed deployments

Distributed deployments separate the indexing and search functionality into dedicated tiers that can be sized and scaled independently without disrupting the other tier. The daily data ingest volume and the concurrent search volume are the two most important factors when estimating the hardware capabilities and node counts for each tier. The search and indexing roles prioritize different compute resources: the indexing tier uses high-performance storage to store and retrieve data efficiently, while the search tier uses CPU cores and RAM to handle ad-hoc and scheduled search workloads.

An increase in search tier capacity corresponds to increased search load on the indexing tier, which in turn requires scaling the indexer nodes. You can scale either tier vertically by increasing per-instance hardware resources, or horizontally by increasing the total node count. For assistance with sizing a production Splunk Enterprise deployment, including infrastructure requirements and total cost of ownership, contact your Splunk sales team.

Search head

A search head uses CPU resources more consistently than an indexer, but does not require the same storage capacity. A search request uses up to 1 CPU core while the search is active. When you provision a search head, account for scheduled searches in addition to the ad-hoc searches that users run. More active users and higher concurrent search loads require additional CPU cores.
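
As a rough illustration of how core count relates to search concurrency, the following limits.conf sketch shows the settings that control the default concurrent search ceiling. The values shown are the defaults, and the arithmetic is only an example, not tuning guidance for your deployment.

    # limits.conf on the search head: a minimal sketch using default values.
    # The concurrent historical search ceiling is approximately:
    #   max_searches_per_cpu x <number of CPUs> + base_max_searches
    # For example, on a host with 12 CPU cores: 1 x 12 + 6 = 18 concurrent searches.
    [search]
    max_searches_per_cpu = 1
    base_max_searches = 6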

Minimum search head specification

For a review of how searches are prioritized, see Configure the priority of scheduled reports in the Reporting Manual. For information on scaling search performance, see How to maximize search performance.

Indexer

When you distribute the indexing process among many indexers, the Splunk platform can scale to consume terabytes of data in a day. Adding indexers distributes the work of search requests and data indexing across all of the indexers. This horizontal scaling of indexers increases performance significantly.
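
To illustrate how search work is distributed, the following distsearch.conf sketch lists indexers as search peers of a search head. The host names are hypothetical, and in practice peers are typically added through Splunk Web or the CLI so that authentication keys are exchanged automatically.

    # distsearch.conf on the search head: hypothetical hosts for illustration only.
    # Each server listed becomes a search peer. The search head distributes search
    # requests across all peers, and each peer searches only its own buckets.
    [distributedSearch]
    servers = https://idx01.example.com:8089,https://idx02.example.com:8089,https://idx03.example.com:8089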

Minimum indexer specification

Mid-range indexer specification

This specification adds cores and RAM to provide headroom for additional search concurrency in a distributed Splunk Enterprise deployment:

  • An x86 64-bit chip architecture.
  • 24 physical CPU cores, or 48 vCPU at 2 GHz or greater speed per core.
  • 64 GB RAM.
  • For storage, see What storage type should I use for a role?
  • A 1 Gb Ethernet NIC, with an optional second NIC for a management network.
  • A 64-bit Linux or Windows distribution. See Supported Operating Systems in the Installation Manual.

High-performance indexer specification

This specification adds cores, RAM, and storage performance to improve indexing throughput and provide headroom for additional search concurrency in use cases where sustained search performance is critical, such as premium Splunk apps.

Recommended hardware for management components

A Splunk Enterprise distributed deployment requires several management components. These components often run on their own instances, and can include:

  • Deployment Server
  • Heavy Forwarders
  • Indexer Cluster Management node
  • License Manager
  • Monitoring Console
  • Search head cluster deployer

When allocating resources for the management components, begin with the reference host specification for single-instance deployments noted above, and adjust the resource allocation to accommodate the scale of your deployment. For detailed sizing and resource allocation recommendations, contact your Splunk account team.

For guidance on management components sharing the same instance based on utilization, see Whether to colocate management components in the Distributed Deployment Manual.

If you're using heavy forwarders in an intermediate forwarding tier, and have available resources, you can configure multiple pipelines to improve data distribution. See Manage pipeline sets for index parallelization in the Managing Indexers and Clusters of Indexers manual.
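
As a minimal sketch of the pipeline set configuration that the referenced topic describes, the following server.conf stanza enables two ingestion pipelines on a heavy forwarder or indexer. The value shown is illustrative; only increase it on hosts with spare CPU cores and storage bandwidth, because each pipeline set consumes both.

    # server.conf: a minimal sketch. Each additional pipeline set consumes
    # additional CPU and I/O, so verify spare capacity before increasing this.
    [general]
    parallelIngestionPipelines = 2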

What storage type should I use for a role?

Insufficient storage I/O is the most commonly encountered limitation in a Splunk software infrastructure. For best results, review the recommended storage types before provisioning your hardware. For guidance on testing your storage system, see How to test my storage system using FIO on Splunk Answers.

Role: Search head
Recommended storage type: SSD or HDD
Notes: Search heads with high ad-hoc or scheduled search loads should use SSD. An HDD-based storage system must provide no less than 800 sustained IOPS. A search head requires at least 300 GB of dedicated storage space.

Role: Indexer (hot and warm index storage, data model storage)
Recommended storage type: SSD
Notes: The indexer role requires high-performance storage for writing and reading (searching) the hot and warm index buckets. By default, hot buckets, warm buckets, and data model acceleration storage use the same storage volume path.

Role: Indexer (SmartStore)
Recommended storage type: NVMe or SSD, with access to a remote object store
Notes: SmartStore is a hybrid storage technology that uses high-performance local storage for short-term reads and writes, and as a cache for buckets retrieved from the remote object store. For more information on SmartStore, see SmartStore advantages in the Managing Indexers and Clusters of Indexers manual.

Role: Indexer (cold index storage)
Recommended storage type: HDD, SAN, NAS, or network file systems
Notes: A cold index bucket contains data that has reached a space or time limit and has rolled from warm. The cold index can have its own storage volume path, and cold buckets are often placed on slower, cheaper storage depending on the search use case. Storage performance affects how quickly search results, reports, and alerts are returned. An unreliable cold storage volume can impact indexing operations.

Role: Indexer (frozen bucket storage)
Recommended storage type: SAN, NAS, network file systems, or HDD
Notes: A frozen index bucket contains data that has reached a space or time limit and is moved from cold to an archival state. Frozen data can have its own storage volume path. A frozen index bucket is deleted by default. See Archive indexed data in the Managing Indexers and Clusters of Indexers manual.
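
To show how the storage recommendations in the table can map to configuration, the following indexes.conf sketch places hot and warm buckets on a fast local volume, cold buckets on cheaper storage, and archives frozen buckets instead of deleting them. The mount points, index name, and size limits are placeholders, not recommendations.

    # indexes.conf: a hypothetical layout that follows the storage guidance above.
    # Fast local storage (SSD) for hot and warm buckets:
    [volume:hotwarm]
    path = /mnt/ssd/splunk
    maxVolumeDataSizeMB = 500000

    # Slower, cheaper storage (HDD, SAN, or NAS) for cold buckets:
    [volume:cold]
    path = /mnt/hdd/splunk
    maxVolumeDataSizeMB = 2000000

    # Example index that uses the volumes defined above:
    [web_proxy]
    homePath   = volume:hotwarm/web_proxy/db
    coldPath   = volume:cold/web_proxy/colddb
    thawedPath = $SPLUNK_DB/web_proxy/thaweddb
    # Archive frozen buckets to a directory instead of deleting them (the default):
    coldToFrozenDir = /mnt/archive/web_proxy/frozen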

Notes about optimizing Splunk software and storage usage

  • The storage volume where Splunk software is installed must provide no less than 800 sustained IOPS.
  • Always configure your index storage to use a separate volume from the operating system. The volume used for the operating system or its swap file is not recommended for Splunk Enterprise data storage. For more information on how indexes are stored, including information on database bucket types and how Splunk stores and ages them, see How the indexer stores indexes in the Managing Indexers and Clusters of Indexers manual.
  • Always monitor storage availability, bandwidth, and capacity for your indexers. The storage volumes or mounts used by the indexes must have some free space at all times. Storage performance decreases as available space decreases. By default, indexing stops if the volume containing the indexes has less than 5 GB of free space (see the server.conf sketch after this list).
  • Never store the hot and warm buckets of your indexes on network volumes. Network latency will dramatically decrease indexing performance. You can use network shares such as Distributed File System (DFS) volumes or Network File System (NFS) mounts for the cold index buckets. Searches that include data stored on network volumes will be slower.
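
The free space threshold mentioned in this list is controlled in server.conf. The following sketch shows the setting at its default value, expressed in megabytes.

    # server.conf: a minimal sketch of the free space threshold (value in MB).
    # Indexing pauses when free space on an index volume drops below this value.
    [diskUsage]
    minFreeSpace = 5000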

Ratio of indexers to search heads

The aggregate search and indexing load determines what Splunk instance role (search head or indexer) the infrastructure needs to scale to maintain performance. For a table with scaling guidelines, see Summary of performance recommendations.

Network latency limits for clustered deployments

A Splunk environment with search head or indexer clusters must have fast, low-latency network connectivity between clusters and cluster nodes. This is particularly important in environments that are planning for multi-site clusters.

  • For indexer cluster nodes, network latency should not exceed 100 milliseconds. Higher latencies can significantly slow indexing performance and hinder recovery from cluster node failures.
  • For search head clusters, latency should not exceed 200 milliseconds. Higher latencies can impact how fast a search head cluster elects a cluster captain.
Impact of network latency on clustered deployment operations:

  Network latency    Cluster index time (1 TB of data)    Cluster node recovery time
  < 100 ms           6202 s                               143 s
  300 ms             6255 s (+1%)                         1265 s (+884%)
  600 ms             7531 s (+21%)                        3048 s (+2131%)

Confirm with your network administrator that the networks used to support a clustered Splunk environment meet or surpass the latency guidelines.

Premium Splunk app requirements

Premium Splunk apps can demand greater hardware resources than the reference specifications in this topic provide. Before architecting a deployment for a premium app, review the app's documentation for additional scaling and hardware recommendations. Examples of premium Splunk apps with their own sizing guidance include Splunk Enterprise Security and Splunk IT Service Intelligence.

Virtualized infrastructures

Splunk supports use of its software in virtual hosting environments:

  • A hypervisor (such as VMware) must be configured to provide reserved resources that meet the hardware specifications above. An indexer in a virtual machine can consume data about 10 to 15 percent more slowly than an indexer hosted on a bare-metal machine. Search performance in a virtual hosting environment is similar to that on bare-metal machines.
  • The storage performance that a virtual infrastructure provides must account for resource contention with any other active virtual hosts that share the same hardware or storage array. It also must provide sufficient IOPS per instance of a Splunk role. For example, a shared storage array providing SSD-level performance for 10 indexers would require 40000 concurrent IOPS (4000 IOPS x 10 indexers) to service the indexers alone, while simultaneously providing additional IOPS to support any other workloads using the same shared storage.

Splunk Cloud Platform

Splunk offers its machine data platform and licensed software as a subscription service called Splunk Cloud Platform. When you subscribe to the service, you purchase capacity to index, store, and search your machine data. Splunk Cloud Platform abstracts the infrastructure specification from you and delivers high performance for the capacity you have purchased.

To learn more about Splunk Cloud Platform, visit the Splunk Cloud Platform website.

Self-managed Splunk Enterprise in the cloud

Running Splunk Enterprise in the cloud is an alternative to running it on-premises on bare-metal hardware.

If you run Splunk Enterprise on cloud-managed infrastructure:

  • Cloud vendors assign processor capacity in virtual CPUs (vCPUs). The vCPU is a logical CPU core, and might represent only a small portion of a CPU's full performance. The classification of a vCPU is determined by the cloud vendor.
  • Storage options offered by cloud vendors vary dramatically in performance and price. To maintain consistent search and indexing performance, see the storage type recommendations in What storage type should I use for a role?.


Considerations for deploying Splunk software on partner infrastructure

Many hardware vendors and cloud providers have worked to create reference architectures and solution guides that describe how to deploy Splunk Enterprise and other Splunk software on their infrastructure. For your convenience, Splunk maintains a separate page where Splunk Technology Alliance Partners (TAP) may submit reference architectures and solution guides that meet or exceed the specifications of the documented reference hardware standard. See the Splunk Partner Solutions page on the Splunk website.

While Splunk works with TAPs to ensure that their solutions meet the standard, it does not endorse any particular hardware vendor or technology.
