Splunk® Enterprise

Distributed Deployment Manual



Configure search head pooling

Search head pooling feature overview

You can set up multiple search heads so that they share configuration and user data. This is known as search head pooling. The main reason for having multiple search heads is to facilitate horizontal scaling when you have large numbers of users searching across the same data. Search head pooling can also reduce the impact if a search head becomes unavailable.


[Diagram: a pool of search heads sharing configuration and user data through shared storage]


You must enable search head pooling on each search head, so that they can share configuration and user data. Once search head pooling has been enabled, these categories of objects will be available as common resources across all search heads in the pool:

  • Configuration data, such as saved searches and other knowledge objects defined in configuration files.
  • Search artifacts, the records and results of completed search jobs.

For example, if you create and save a search on one search head, all the other search heads in the pool will automatically have access to it.

Search head pooling makes all files in $SPLUNK_HOME/etc/{apps,users} available for sharing. This includes *.conf files, *.meta files, view files, search scripts, lookup tables, etc.

You can also combine search head pooling with mounted knowledge bundles, as described in "Mount the knowledge bundle".

Note: Search head pooling is an advanced feature. It's recommended that you contact the Splunk sales team to discuss your deployment before attempting to implement it.

Key implementation issues

Note the following:

  • Most shared storage solutions don't perform well across a WAN. Since search head pooling requires low-latency shared storage capable of serving a high number of operations per second, implementing search head pooling across a WAN is not supported.
  • All search heads in a pool must be running the same version of Splunk. Be sure to upgrade all of them at once. See "Upgrade your distributed deployment" in the Distributed Deployment Overview for details.
  • The purpose of search head pooling is to simplify the management of groups of dedicated search heads. You should ordinarily not implement it on groups of indexers doubling as search heads. Search head pooling has a significant effect on indexing performance.
  • The search heads in a pool cannot be search peers of each other.

Create a pool of search heads

To create a pool of search heads, follow these steps:

1. Configure each individual search head.

2. Set up a shared storage location accessible to each search head.

3. Stop the search heads.

4. Enable pooling on each search head.

5. Copy user and app directories to the shared storage location.

6. Restart the search heads.

The steps are described below in detail:

1. Configure each search head

Set up each search head individually, specifying the search peers in the usual fashion. See "Configure distributed search".
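
As a rough sketch, the search peers you specify typically end up in the [distributedSearch] stanza of distsearch.conf on each search head, along these lines (the peer addresses are hypothetical):

  [distributedSearch]
  servers = https://idx1.example.com:8089,https://idx2.example.com:8089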

Important: Be aware of this key issue:

  • You must specify user authentication on each search head separately. A valid user on one search head is not automatically a user on another search head in the pool. You can use LDAP to centrally manage user authentication, as described in "Set up user authentication with LDAP".

2. Set up a shared storage location accessible to each search head

So that the search heads in a pool can share configurations and artifacts, they need access to a common set of files via shared storage:

  • On *nix platforms, set up an NFS mount.
  • On Windows, set up a CIFS (SMB) share.

Important: The Splunk user account needs read/write access to the shared storage location. When installing a search head on Windows, be sure to install it as a user with read/write access to shared storage. The Local System user does not have this access. For more information, see "Choose the user Splunk should run as" in the Installation manual.
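
For example, on a *nix search head you might mount the shared export and confirm that the Splunk user can write to it (the server name, export path, and mountpoint are hypothetical):

  mount -t nfs nfsserver.example.com:/export/searchpool /mnt/searchpool
  sudo -u splunk touch /mnt/searchpool/.writetest   # verify read/write access as the Splunk user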

3. Stop the search heads

Before enabling pooling, you must stop splunkd. Do this for each search head in the pool.

4. Enable pooling on each search head

You use the pooling enable CLI command to enable pooling on a search head. The command sets certain values in server.conf. It also creates subdirectories within the shared storage location and validates that Splunk can create and move files within them.

Here's the command syntax:

  splunk pooling enable <path_to_shared_storage> [--debug]

Note:

  • On *nix, <path_to_shared_storage> should be the mountpoint of the NFS share.
  • On Windows, <path_to_shared_storage> should be the UNC path of the CIFS/SMB share.
  • The --debug parameter causes the command to log additional information to btool.log.

Execute this command on each search head in the pool.
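
For example, on a *nix search head with the (hypothetical) NFS share mounted at /mnt/searchpool:

  splunk pooling enable /mnt/searchpool

and on a Windows search head with a (hypothetical) CIFS/SMB share:

  splunk pooling enable \\fileserver\searchpool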

The command sets values in the [pooling] stanza of the server.conf file in $SPLUNK_HOME/etc/system/local. For detailed information, see server.conf in the Admin manual.
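
After the command runs, the [pooling] stanza typically looks something like this (the storage value is whatever path you passed to pooling enable; the exact set of attributes can vary by version):

  [pooling]
  state = enabled
  storage = /mnt/searchpool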

5. Copy user and app directories to the shared storage location

Copy the contents of the $SPLUNK_HOME/etc/apps and $SPLUNK_HOME/etc/users directories on an existing search head into the empty /etc/apps and /etc/users directories in the shared storage location. Those directories were created in step 4 and reside under the <path_to_shared_storage> that you specified at that time.

For example, if your NFS mount is at /tmp/nfs, copy the apps subdirectories that match this pattern:

$SPLUNK_HOME/etc/apps/*

into

/tmp/nfs/etc/apps

This results in a set of subdirectories like:

/tmp/nfs/etc/apps/search
/tmp/nfs/etc/apps/launcher
/tmp/nfs/etc/apps/unix
[...]

Similarly, copy the user subdirectories:

$SPLUNK_HOME/etc/users/*

into

/tmp/nfs/etc/users

Important: You can choose to copy over just a subset of apps and user subdirectories; however, be sure to copy them to the precise locations described above.
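
Continuing the /tmp/nfs example, the copy might look like this (running it as the Splunk user keeps file ownership on the share correct):

  cp -rp $SPLUNK_HOME/etc/apps/* /tmp/nfs/etc/apps/
  cp -rp $SPLUNK_HOME/etc/users/* /tmp/nfs/etc/users/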

6. Restart the search heads

After running the pooling enable command, restart splunkd. Do this for each search head in the pool.

Use a load balancer

You will probably want to run a load balancer in front of your search heads. That way, users can access the pool of search heads through a single interface, without needing to specify a particular one.

Another reason for using a load balancer is to ensure access to search artifacts and results if one of the search heads goes down. Ordinarily, RSS and email alerts provide links to the search head where the search originated. If that search head goes down (and there's no load balancer), the artifacts and results become inaccessible. However, if you've got a load balancer in front, you can set the alerts so that they reference the load balancer instead of a particular search head.

Configure the load balancer

There are a couple of issues to note when selecting and configuring the load balancer:

  • The load balancer must employ layer-7 (application-level) processing.
  • Configure the load balancer so that user sessions are "sticky" or "persistent", as in the sketch after this list. This ensures that the user remains on a single search head throughout their session.
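
As one illustration only, a minimal HAProxy configuration with cookie-based persistence might look something like this (HAProxy is just an example load balancer; the backend addresses are hypothetical, and 8000 stands in for the Splunk Web port):

  frontend splunkweb
      bind *:8000
      mode http
      default_backend search_head_pool

  backend search_head_pool
      mode http
      balance roundrobin
      # Insert a cookie so each user sticks to one search head for the session
      cookie SERVERID insert indirect nocache
      server sh1 10.0.0.11:8000 check cookie sh1
      server sh2 10.0.0.12:8000 check cookie sh2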

Generate alert links to the load balancer

To generate alert links to the load balancer, you must edit alert_actions.conf:

1. Copy alert_actions.conf from a search head to the appropriate app directory in the shared storage location. In most cases, this will be <path_to_shared_storage>/etc/apps/search/local.

2. Edit the hostname attribute to point to the load balancer:

hostname = <proxy host>:<port>

For details, see alert_actions.conf in the Admin manual.
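
For example, if the load balancer answers at a hypothetical address splunk-lb.example.com on port 8000, the edited setting would read:

  hostname = splunk-lb.example.com:8000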

The alert links should now point to the load balancer, not the individual search heads.

Other pooling operations

Besides the pooling enable CLI command, there are several other commands that are important for managing search head pooling:

  • pooling validate
  • pooling disable
  • pooling display

You must stop splunkd before running pooling enable or pooling disable. However, you can run pooling validate and pooling display while splunkd is either stopped or running.

Validate that each search head has access to shared resources

The pooling enable command validates search head access when you initially set up search head pooling. If you ever need to revalidate the search head's access to shared resources (for example, if you change the NFS configuration), you can run the pooling validate CLI command:

  splunk pooling validate [--debug]

Disable search head pooling

You can disable search head pooling with this CLI command:

  splunk pooling disable [--debug]

Run this command for each search head that you need to disable.

Important: Before running the pooling disable command, you must stop splunkd. After running the command, you should restart splunkd.

Display pooling status

You can use the pooling display CLI command to determine whether pooling is enabled on a search head:

  splunk pooling display

This example shows how the system response varies depending on whether pooling is enabled:

$ splunk pooling enable /foo/bar
$ splunk pooling display
Search head pooling is enabled with shared storage at: /foo/bar
$ splunk pooling disable
$ splunk pooling display
Search head pooling is disabled

Manage configuration changes

Important: Once pooling is enabled on a search head, you must notify the search head whenever you directly edit a configuration file.

Specifically, if you add a stanza to any configuration file in a local directory, you must run the following command:

splunk btool fix-dangling

Note: This is not necessary if you make changes by means of Splunk Web Manager or the CLI.

Deployment server and search head pooling

With search head pooling, all search heads access a single set of configurations, so you don't need to use a deployment server or a third-party deployment management tool such as Puppet to push updates to multiple search heads. However, you might still want to use a deployment tool with search head pooling, in order to consolidate configuration operations across all Splunk instances.

If you want to use the deployment server to manage your search head configuration, note the following:

  • Designate one of the search heads as a deployment client by creating a deploymentclient.conf file in $SPLUNK_HOME/etc/system/local. You only need to designate one search head as a deployment client.
  • In serverclass.conf on the deployment server, define a server class for the search head. Set its repositoryLocation attribute to the shared storage mountpoint on the search head. You can also specify the value in deploymentclient.conf on the search head, but in either case, the value must point to the shared storage mountpoint. See the example after this list.
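
For example, a deploymentclient.conf on the designated search head might look something like the following (the deployment server address and the shared storage mountpoint are hypothetical):

  [deployment-client]
  repositoryLocation = /mnt/searchpool/etc/apps

  [target-broker:deploymentServer]
  targetUri = deploymentserver.example.com:8089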

For detailed information on the deployment server, see "About deployment server" and the topics that follow it.

Answers

Have questions? Visit Splunk Answers and see what questions and answers the Splunk community has about search head pooling.


This documentation applies to the following versions of Splunk® Enterprise: 4.3, 4.3.1


Comments

Try running fix-dangling on the deployment server before adding files to the server.

Sgoodman
October 28, 2011

It appears that if you manage the search head via the deployment server, you still need to run splunk btool fix-dangling. This is really annoying; is there a way around it?

Tpdcops
October 28, 2011
