Splunk® Enterprise

Distributed Deployment Manual


Splunk Enterprise version 5.0 reached its End of Life on December 1, 2017. Please see the migration information.

Configure search head pooling

Important: Search head pooling is an advanced feature. It's recommended that you contact the Splunk sales team to discuss your deployment before attempting to implement it.

Search head pooling feature overview

You can set up multiple search heads so that they share configuration and user data. This is known as search head pooling. The main reason for having multiple search heads is to facilitate horizontal scaling when you have large numbers of users searching across the same data. Search head pooling can also reduce the impact if a search head becomes unavailable.

[Figure: a pool of search heads sharing configuration and user data through common shared storage]

You must enable search head pooling on each search head, so that they can share configuration and user data. Once search head pooling has been enabled, configuration and user data become common resources, available to all search heads in the pool.

For example, if you create and save a search on one search head, all the other search heads in the pool will automatically have access to it.

Search head pooling makes all files in $SPLUNK_HOME/etc/{apps,users} available for sharing. This includes *.conf files, *.meta files, view files, search scripts, lookup tables, etc.

Key implementation issues

Note the following:

  • Most shared storage solutions don't perform well across a WAN. Since search head pooling requires low-latency shared storage capable of serving a high number of operations per second, implementing search head pooling across a WAN is not supported.
  • All search heads in a pool must be running the same version of Splunk. Be sure to upgrade all of them at once. See "Upgrade your distributed deployment" in the Distributed Deployment Overview for details.
  • The purpose of search head pooling is to simplify the management of groups of dedicated search heads. Do not implement it on groups of indexers doubling as search heads; that is an unsupported configuration, because search head pooling has a significant negative effect on indexing performance.
  • The search heads in a pool cannot be search peers of each other.

Search head pooling and knowledge bundles

The set of data that a search head distributes to its search peers is known as the knowledge bundle. For details, see "What search heads send to search peers".

By default, only one search head in a search head pool sends the knowledge bundle to the set of search peers. This optimization is controllable by means of the useSHPBundleReplication attribute in distsearch.conf.
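As a sketch, the setting lives in distsearch.conf on each pool member (the stanza placement follows the distsearch.conf spec; set it identically on every member so the pool behaves consistently):

```ini
# distsearch.conf on each pool member
[replicationSettings]
# When true (the default), only one member of the search head pool
# replicates the knowledge bundle to the set of search peers.
useSHPBundleReplication = true
```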

As a further optimization, you can mount knowledge bundles on shared storage, as described in "Mount the knowledge bundle". By doing so, you eliminate the need to distribute the bundle to the search peers. For information on how to combine search head pooling with mounted knowledge bundles, read the section in that topic called "Use mounted bundles with search head pooling".

Create a pool of search heads

To create a pool of search heads, follow these steps:

1. Set up a shared storage location accessible to each search head.

2. Configure each individual search head.

3. Stop the search heads.

4. Enable pooling on each search head.

5. Copy user and app directories to the shared storage location.

6. Restart the search heads.

The steps are described below in detail:

1. Set up a shared storage location accessible to each search head

To share configurations and artifacts, the search heads in a pool need access to a common set of files via shared storage:

  • On *nix platforms, set up an NFS mount.
  • On Windows, set up a CIFS (SMB) share.
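For example, a *nix search head might mount the share at boot with an /etc/fstab entry along these lines (the server name, export path, mountpoint, and mount options are illustrative; tune them for your NFS environment):

```ini
# /etc/fstab on each *nix search head
nfsserver.example.com:/export/splunk-pool  /mnt/searchpool  nfs  rw,hard  0  0
```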

Important: The Splunk user account needs read/write access to the shared storage location. When installing a search head on Windows, be sure to install it as a user with read/write access to shared storage. The Local System user does not have this access. For more information, see "Choose the user Splunk should run as" in the Installation manual.

2. Configure each search head

a. Set up each search head individually, specifying the search peers in the usual fashion. See "Configure distributed search".

b. Make sure that each search head has a unique serverName attribute, configured in server.conf. See "Manage distributed server names" for detailed information on this requirement. If the search head does not have a unique serverName, Splunk will generate a warning at start-up. See "Warning about unique serverName attribute" for details.

c. Specify the necessary authentication. You have two choices:

  • Place a common authentication configuration on shared storage, to be used by all pool members. You must restart the pool members after any change to the authentication.

  • Configure authentication individually on each pool member. In this case, you must maintain the authentication configuration separately on each search head.

Note: Any authentication change made on an individual pool member (for example, via Splunk Web) overrides the configuration on shared storage for that pool member only. Therefore, if a common configuration already exists on shared storage, you should generally avoid making authentication changes through Splunk Web.

3. Stop the search heads

Before enabling pooling, you must stop splunkd. Do this for each search head in the pool.

4. Enable pooling on each search head

You use the pooling enable CLI command to enable pooling on a search head. The command sets certain values in server.conf. It also creates subdirectories within the shared storage location and validates that Splunk can create and move files within them.

Here's the command syntax:

  splunk pooling enable <path_to_shared_storage> [--debug]


  • On NFS, <path_to_shared_storage> should be the mountpoint of the NFS share.
  • On Windows, <path_to_shared_storage> should be the UNC path of the CIFS/SMB share.
  • The --debug parameter causes the command to log additional information to btool.log.

Execute this command on each search head in the pool.

The command sets values in the [pooling] stanza of the server.conf file in $SPLUNK_HOME/etc/system/local.

You can also directly edit the [pooling] stanza of server.conf. For detailed information on server.conf, see the server.conf spec file in the Admin manual.

Important: The [pooling] stanza must be placed in the server.conf file directly under $SPLUNK_HOME/etc/system/local/. This means that you cannot deploy the [pooling] stanza via an app, either on local disk or on shared storage. For details see the server.conf spec file.
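For reference, a minimal [pooling] stanza, of the kind written by the pooling enable command, looks like this (the storage path is an example):

```ini
# $SPLUNK_HOME/etc/system/local/server.conf
[pooling]
state = enabled
storage = /mnt/searchpool
```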

5. Copy user and app directories to the shared storage location

Copy the contents of the $SPLUNK_HOME/etc/apps and $SPLUNK_HOME/etc/users directories on an existing search head into the empty /etc/apps and /etc/users directories in the shared storage location. Those directories were created in step 4 and reside under the <path_to_shared_storage> that you specified at that time.

For example, if your NFS mount is at /tmp/nfs, copy the apps subdirectories that match this pattern:

  $SPLUNK_HOME/etc/apps/<app_name>

This results in a set of subdirectories like:

  /tmp/nfs/etc/apps/search
  /tmp/nfs/etc/apps/launcher
  ...

Similarly, copy the user subdirectories:

  $SPLUNK_HOME/etc/users/<user_name>

This results in a set of subdirectories like:

  /tmp/nfs/etc/users/admin
  ...

Important: You can choose to copy over just a subset of apps and user subdirectories; however, be sure to move them to the precise locations described above.
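The copy itself can be sketched as follows. This demo stands in temporary directories for the real $SPLUNK_HOME and the shared-storage mount (which are deployment-specific); the cp invocations at the end are the part that carries over:

```shell
# Demo: temporary directories stand in for SPLUNK_HOME and the NFS mount.
SPLUNK_HOME=$(mktemp -d)
SHARED=$(mktemp -d)

# Stand-in content on the existing search head.
mkdir -p "$SPLUNK_HOME/etc/apps/search/local" "$SPLUNK_HOME/etc/users/admin"
echo "[ui]" > "$SPLUNK_HOME/etc/apps/search/local/app.conf"

# The empty shared-storage directories created by 'splunk pooling enable'.
mkdir -p "$SHARED/etc/apps" "$SHARED/etc/users"

# Copy apps and users into shared storage, preserving permissions (-p).
cp -pr "$SPLUNK_HOME/etc/apps/." "$SHARED/etc/apps/"
cp -pr "$SPLUNK_HOME/etc/users/." "$SHARED/etc/users/"

ls "$SHARED/etc/apps"
```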

6. Restart the search heads

After running the pooling enable command, restart splunkd. Do this for each search head in the pool.

Use a load balancer

You will probably want to run a load balancer in front of your search heads. That way, users can access the pool of search heads through a single interface, without needing to specify a particular one.

Another reason for using a load balancer is to ensure access to search artifacts and results if one of the search heads goes down. Ordinarily, RSS and email alerts provide links to the search head where the search originated. If that search head goes down (and there's no load balancer), the artifacts and results become inaccessible. However, if you've got a load balancer in front, you can set the alerts so that they reference the load balancer instead of a particular search head.

Configure the load balancer

There are a couple of issues to note when selecting and configuring the load balancer:

  • The load balancer must employ layer-7 (application-level) processing.
  • Configure the load balancer so that user sessions are "sticky" or "persistent". This ensures that the user remains on a single search head throughout their session.

Generate alert links to the load balancer

To generate alert links to the load balancer, you must edit alert_actions.conf:

1. Copy alert_actions.conf from a search head to the appropriate app directory in the shared storage location. In most cases, this will be /<path_to_shared_storage>/etc/apps/search/local.

2. Edit the hostname attribute to point to the load balancer:

hostname = <proxy host>:<port>

For details, see alert_actions.conf in the Admin manual.

The alert links should now point to the load balancer, not the individual search heads.
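As a sketch, assuming the load balancer hostname shown is a placeholder and that hostname is set in the [email] stanza (it can also be set globally at the top of the file), the edited file might look like:

```ini
# /<path_to_shared_storage>/etc/apps/search/local/alert_actions.conf
[email]
hostname = loadbalancer.example.com:443
```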

Other pooling operations

Besides the pooling enable CLI command, there are several other commands that are important for managing search head pooling:

  • pooling validate
  • pooling disable
  • pooling display

You must stop splunkd before running pooling enable or pooling disable. However, you can run pooling validate and pooling display while splunkd is either stopped or running.

Validate that each search head has access to shared resources

The pooling enable command validates search head access when you initially set up search head pooling. If you ever need to revalidate the search head's access to shared resources (for example, if you change the NFS configuration), you can run the pooling validate CLI command:

  splunk pooling validate [--debug]

Disable search head pooling

You can disable search head pooling with this CLI command:

  splunk pooling disable [--debug]

Run this command for each search head that you need to disable.

Important: Before running the pooling disable command, you must stop splunkd. After running the command, you should restart splunkd.

Display pooling status

You can use the pooling display CLI command to determine whether pooling is enabled on a search head:

  splunk pooling display

This example shows how the system response varies depending on whether pooling is enabled:

$ splunk pooling enable /foo/bar
$ splunk pooling display
Search head pooling is enabled with shared storage at: /foo/bar
$ splunk pooling disable
$ splunk pooling display
Search head pooling is disabled

Manage configuration changes

Important: Once pooling is enabled on a search head, you must notify the search head whenever you directly edit a configuration file.

Specifically, if you add a stanza to any configuration file in a local directory, you must run the following command:

splunk btool fix-dangling

Note: This is not necessary if you make changes by means of Splunk Web or the CLI.

Deployment server and search head pooling

With search head pooling, all search heads access a single set of configurations, so you don't need to use a deployment server or a third-party deployment management tool like Puppet to push updates to multiple search heads. However, you might still want to use a deployment tool with search head pooling, in order to consolidate configuration operations across all Splunk instances.

If you want to use the deployment server to manage your search head configuration, do the following:

1. Designate one of the search heads as a deployment client by creating a deploymentclient.conf file in $SPLUNK_HOME/etc/system/local and specifying its deployment server. You only need to designate one search head as a deployment client.

2. In deploymentclient.conf, set the repositoryLocation attribute to the search head's shared storage mountpoint. You must also set serverRepositoryLocationPolicy=rejectAlways, so that the locally set repositoryLocation gets used as the download location.

3. In serverclass.conf on the deployment server, define a server class for the search head client.
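Putting these steps together, the two files might look like the following sketch (the shared-storage mountpoint, deployment server URI, and class, client, and app names are all illustrative):

```ini
# $SPLUNK_HOME/etc/system/local/deploymentclient.conf on the designated search head
[deployment-client]
# Download apps directly into the pool's shared apps directory.
repositoryLocation = /mnt/searchpool/etc/apps
# Reject the server-specified location so the local one above is used.
serverRepositoryLocationPolicy = rejectAlways

[target-broker:deploymentServer]
targetUri = deployserver.example.com:8089

# serverclass.conf on the deployment server
[serverClass:search_head_pool]
whitelist.0 = <search_head_client_name>

[serverClass:search_head_pool:app:<app_name>]
```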

For detailed information on the deployment server, see "About deployment server" and the topics that follow it.

Select timing for configuration refresh

In version 5.0.2 and earlier, the defaults for synchronizing from the storage location were set to very frequent intervals. This could lead to Splunk spending excessive time reading configuration changes from the pool, particularly in deployments with large numbers of users (in the hundreds or thousands).

The default settings have been changed to less frequent intervals starting with 5.0.3. In the [pooling] stanza of server.conf, the following settings affect configuration refresh timing:

  [pooling]
  # 5.0.3 defaults
  poll.interval.rebuild = 1m
  poll.interval.check = 1m

The previous defaults for these settings were 2s and 5s, respectively.

With the old default values, a change made on one search head would become available on another search head at most seven seconds later. There is usually no need for updates to be propagated that quickly. By changing the settings to values of one minute, the load on the shared storage system is greatly reduced. Depending on your business needs, you might be able to set these values to even longer intervals.
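For example, if changes only need to propagate within several minutes, you might raise the intervals along these lines (the values shown are illustrative, not recommendations):

```ini
[pooling]
poll.interval.rebuild = 5m
poll.interval.check = 5m
```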


Have questions? Visit Splunk Answers and see what questions and answers the Splunk community has about search head pooling.


This documentation applies to the following versions of Splunk® Enterprise: 5.0.3, 5.0.4, 5.0.5, 5.0.6, 5.0.7, 5.0.8, 5.0.9, 5.0.10, 5.0.11, 5.0.12, 5.0.13, 5.0.14, 5.0.15, 5.0.16, 5.0.17, 5.0.18


