Search head pooling configuration issues
When implementing search head pooling, there are a few potential issues you should be aware of, mainly having to do with coordination among search heads.
Authentication and authorization changes made in Splunk Web apply only to a single search head
Authentication and authorization changes made through a search head's Splunk Web apply only to that search head and not to other search heads in that pool. Each member of the pool maintains its local configurations in $SPLUNK_HOME/etc/system/local. To share configurations across the pool, set them up in shared storage, as described in "Configure search head pooling".
Clock skew between search heads and shared storage can affect search behavior
It's important to keep the clocks on your search heads and shared storage server in sync, via NTP (network time protocol) or some similar means. If the clocks are out of sync by more than a few seconds, you can end up with search failures or premature expiration of search artifacts.
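For example, on Linux search heads you can spot-check synchronization with whichever time service the host runs. The commands below are a hedged sketch, not a required procedure:

    # Check clock synchronization status (use the tool that matches your time service)
    chronyc tracking      # chrony
    ntpq -p               # ntpd
    timedatectl status    # systemd-based systems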
Permissions on shared storage
On each search head, the user account that Splunk Enterprise runs as must have read/write permissions to the files on the shared storage server.
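A quick, hedged way to confirm this is to create and remove a test file on the shared mount as the account that runs Splunk Enterprise. The user name "splunk" and the mount point /mnt/searchpool below are hypothetical placeholders:

    # Verify that the Splunk user can create and delete files on the shared storage
    sudo -u splunk touch /mnt/searchpool/.permtest && \
    sudo -u splunk rm /mnt/searchpool/.permtest && \
    echo "read/write OK on the shared storage"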
Performance analysis
A large percentage of search head pooling issues boil down to insufficient performance in the shared storage, the network path to it, or the operating systems involved.
When deploying or investigating a search head pooling environment, it's important to consider these factors:
- Storage: The storage backing the pool must be able to handle a very high number of IOPS. IOPS under 1000 will probably never work well.
- Network: The communication path between the backing store and the search heads must be high bandwidth and extremely low latency. This probably means your storage system should be on the same switch as your search heads. WAN links are not going to work.
- Server Parallelism: Because searching results in a large number of processes requesting a large number of files, the parallelism in the system must be high. This can require tuning the NFS server to handle a larger number of requests in parallel.
- Client Parallelism: The client operating system must be able to handle a significant number of requests at the same time.
To validate an environment, a typical approach would be:
- Use a storage benchmarking tool, such as Bonnie++, while the file store is not in use to validate that the IOPS provided are robust.
- Use network testing methods to determine that the roundtrip time between search heads and the storage system is on the order of 10ms.
- Perform known simple tasks such as creating a million files and then deleting them.
- Assuming the above tests have not shown any weaknesses, perform some I/O load generation or run the actual Splunk Enterprise load while gathering NFS client statistics (for example, with nfsstat) to see what is happening with the NFS requests. Example commands for these steps follow this list.
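The commands below sketch one way to perform these checks on a Linux search head. The mount point /mnt/searchpool and the storage host name are hypothetical, and the exact tools and options vary by distribution, so treat this as a starting point rather than a prescribed procedure:

    # 1. Benchmark the idle shared volume (requires bonnie++)
    mkdir -p /mnt/searchpool/benchtest
    bonnie++ -d /mnt/searchpool/benchtest -u splunk

    # 2. Measure roundtrip latency between this search head and the storage host
    ping -c 20 nfs-server.example.com

    # 3. Time a bulk file create/delete on the shared volume
    mkdir -p /mnt/searchpool/filetest
    time sh -c 'for i in $(seq 1 1000000); do touch /mnt/searchpool/filetest/f$i; done'
    time rm -rf /mnt/searchpool/filetest

    # 4. Watch NFS client statistics while generating I/O load or running real searches
    nfsstat -c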
NFS client concurrency limits can cause search timeouts or slow search behavior
The search performance in a search head pool is a function of the throughput of the shared storage and the search workload. The combined effect of concurrent search users and concurrent scheduled searches determines the total IOPS that the shared volume must support. IOPS requirements also vary with the kinds of searches being run. To adequately provision a device shared between search heads, you need to know the number of concurrent users submitting searches and the number of jobs and apps that will be executed simultaneously.
If searches are timing out or running slowly, you might be exhausting the maximum number of concurrent requests supported by the NFS client. To solve this problem, increase your client concurrency limit. For example, on a Linux NFS client, adjust the tcp_slot_table_entries setting.
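On many Linux distributions the relevant kernel parameter is sunrpc.tcp_slot_table_entries. The value shown below is illustrative only; the parameter name, default, and persistence mechanism vary across kernel versions, so check your operating system documentation:

    # Raise the NFS client's concurrent RPC slot limit for the running kernel
    sysctl -w sunrpc.tcp_slot_table_entries=128

    # Persist the setting across reboots (file name and syntax vary by distribution)
    echo "options sunrpc tcp_slot_table_entries=128" >> /etc/modprobe.d/sunrpc.conf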
NFS latency for large user counts can cause configuration access latency or slow dispatch reaping
Splunk Enterprise synchronizes the search head pool's on-disk configuration state with its in-memory state when it detects changes. Essentially, it re-reads the configuration into memory when it detects updates. With overloaded search pool storage, or with large numbers of users, apps, and configuration files, this synchronization process can reduce performance. To mitigate this, you can reduce how often the configuration is re-read (that is, increase the polling interval), as discussed in "Select timing for configuration refresh".
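For reference, the refresh timing is governed by the poll.interval.* settings in the [pooling] stanza of server.conf. The values below are illustrative only; confirm the exact setting names and defaults against the server.conf reference for your version:

    [pooling]
    # Poll the shared storage for configuration changes less frequently
    poll.interval.check = 5m
    # Rebuild the in-memory configuration from disk less frequently
    poll.interval.rebuild = 5m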
Warning about unique serverName attribute
Each search head in the pool must have a unique serverName attribute. Splunk Enterprise validates this condition when each search head starts. If it finds a problem, it generates this error message:
serverName "<xxx>" has already been claimed by a member of this search head pool in <full path to pooling.ini on shared storage> There was an error validating your search head pooling configuration. For more information, run 'splunk pooling validate'
The most common cause of this error is that another search head in the pool is already using the current search head's serverName. To fix the problem, change the current search head's serverName attribute in $SPLUNK_HOME/etc/system/local/server.conf.
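The serverName attribute lives under the [general] stanza. For example, on the affected search head (the host name shown is hypothetical and must be unique within the pool):

    # $SPLUNK_HOME/etc/system/local/server.conf
    [general]
    serverName = searchhead02.example.com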
There are a few other conditions that also can generate this error:
- The current search head's serverName has been changed.
- The current search head's GUID has been changed. This is usually due to $SPLUNK_HOME/etc/instance.cfg being deleted.
To fix these problems, run splunk pooling replace-member. This updates the pooling.ini file with the current search head's serverName->GUID mapping, overwriting any previous mapping.
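For example, on the affected search head:

    # Rewrite this search head's serverName->GUID mapping in pooling.ini
    $SPLUNK_HOME/bin/splunk pooling replace-member

    # Confirm that the pooling configuration now validates cleanly
    $SPLUNK_HOME/bin/splunk pooling validate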
Artifacts and incorrectly displayed items in Splunk Web after upgrade
When upgrading pooled search heads, you must copy all updated apps - even those that ship with Splunk Enterprise (such as the Search app) - to the search head pool's shared storage after the upgrade is complete. If you do not, you might see artifacts or other incorrectly displayed items in Splunk Web.
To fix the problem, copy all updated apps from an upgraded search head to the shared storage for the search head pool, taking care to exclude the local sub-directory of each app.
Important: Excluding the local sub-directory of each app from the copy process prevents the overwriting of configuration files on the shared storage with local copies of configuration files.
Once the apps have been copied, restart Splunk Enterprise on all search heads in the pool.
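As a hedged sketch of this procedure on a Linux search head, where /mnt/searchpool stands in for the pool's shared storage mount point (adjust the destination to match the directory layout your pool actually uses):

    # Copy upgraded apps to the pool's shared storage, skipping each app's local/
    # directory so that shared configuration files are not overwritten
    rsync -av --exclude='local' "$SPLUNK_HOME/etc/apps/" /mnt/searchpool/etc/apps/

    # Restart Splunk Enterprise on every search head in the pool
    $SPLUNK_HOME/bin/splunk restart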