Configure search head pooling
See "Key implementation issues" in the Splunk Enterprise documentation before you set up a search head pool.
In a distributed search environment, a search head is a Splunk instance that directs search requests to one or more search peers (indexers).
To set up a search head pooled environment, see "Configure search head pooling" in the Distributed Deployment manual, which discusses how to set up your search heads and search peers. To install and configure the Splunk App for VMware in a search head pooled environment, follow the installation and configuration instructions in this manual and the requirements and limitations discussed in this topic.
Dedicated search head
Install the instance of the Splunk App for VMware that you will use to administer the Distributed Collection Scheduler on its own search head. A dedicated search head will improve performance and eliminate the possibility of conflict with other apps installed on the same Splunk deployment.
To install the app in a search head pool
- In a search head pooled environment, all apps are stored in a central location (such as an NFS mount). This shared disk space is used by all of the search heads in the pool. Ensure that you have set up your shared storage for all search heads.
- On each search head, install the Splunk App for VMware, as described in the previous topics in this manual.
- Install an instance of the Splunk App for VMware on a search head that does not belong to the search head pool. This instance is the "central configuration instance" used to configure and start the Distributed Collection Scheduler. It is the only location from which you can start or stop the Distributed Collection Scheduler and configure data collection from your sources. Use this instance for configuration purposes only, not an instance that is part of the search head pool. The dashboards on this instance remain empty because data is not sent to it.
- After you have installed the "central configuration instance", follow the instructions in this manual to set up your data collection nodes.
- Data collection nodes are managed by the Distributed Collection Scheduler. On the "central configuration instance" (the instance of the app that is not part of the search head pool), log in to Splunk Web and navigate to the Collection Configuration dashboard. Register each new data collection node individually with the Distributed Collection Scheduler, specifying the vCenter Server to connect to for data collection, and have the nodes forward data to the indexers. Then start the Distributed Collection Scheduler.
- Assuming that the app is installed on each search head in the pool, log in to Splunk Web on each search head and add the indexers as search peers. Once the indexers are set up as search peers, the data collection nodes can forward data to them. See "Configure search head pooling" in the Distributed Deployment manual for this procedure.
- Log in to the data collection nodes and check that data is being forwarded to the indexers in the pool.
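The forwarding in the steps above is standard Splunk forwarding configuration. As a sketch only, an outputs.conf on a data collection node might look like the following; the group name, host names, and port are placeholders for your environment, not values from this manual:

```
# outputs.conf on a data collection node (placeholder values).
# 9997 is the conventional Splunk receiving port; use whatever
# port your indexers are configured to listen on.
[tcpout]
defaultGroup = vmware_indexers

[tcpout:vmware_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997
```

Listing more than one indexer in the target group causes the node to load-balance forwarded data across them.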
- When you use TSIDX namespaces in a search head pool, the namespaces must be shared, because TSIDX namespaces live on the shared storage. To share the namespaces and verify that they are shared correctly:
- Set "tsidxStatsHomePath" in the splunk_for_vmware/local/indexes.conf file to the shared storage location. For example, if the shared storage is mounted under /mnt/shp/home, edit splunk_for_vmware/local/indexes.conf to include the following default stanza:
[default]
tsidxStatsHomePath = /mnt/shp/home
- Create the TSIDX namespace on one of the search heads in the pooled environment. The tscollect search in the next step populates the namespace.
- Test that the TSIDX namespace is set up correctly using this tscollect query, replacing "foo" with the name of your shared namespace:
index=_internal | head 10 | table host,source,sourcetype | `tscollect("foo")`
- If this search returns data without an error, then run this tstats query on "foo" on each of the search heads in the pool:
| `tstats` count from foo | stats count | search count=10
- If this search also returns data, your configuration is working correctly. The search verifies that you can get data from the shared namespace and that the result (count) returned by the search is the same on every search head in the pool, which shows that the search heads see identical namespaces and that search head pooling with tsidx namespaces works.
- Set up your namespace retention policy.
- To set up a default policy for all namespaces, create a local version of the $SPLUNK_HOME/etc/apps/SA-Utils/default/tsidx_retention.conf file. You can use the default settings in the file to limit the size of the TSIDX namespaces and the length of time that namespaces are retained, or you can modify the settings to values that work in your environment.
- To set up a retention policy for specific namespaces in the app, create a local version of the SA-VMW-Performance/default/tsidx_retention.conf file. Uncomment the namespaces that you want to use in the app, and the values associated with those namespaces. You can use the default values for the namespaces or modify them to values that work in your environment. A retention policy is set for each of the namespaces.
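As an illustration only, a local retention stanza might look like the following. The setting names and values shown here are assumptions, not taken from this manual; copy the actual setting names and namespace stanzas from the default tsidx_retention.conf file in SA-Utils or SA-VMW-Performance rather than from this sketch:

```
# Hypothetical local tsidx_retention.conf (illustrative only --
# take the real setting names from the default file).
[default]
# Limit each namespace's footprint on the shared storage.
maxTotalDataSizeMB = 1000
# Retain namespace data for 30 days, expressed in seconds.
retentionTimePeriodInSecs = 2592000
```

Because the namespaces live on shared storage, any size limit you set applies to the single shared copy that all search heads in the pool read from.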
When you have installed and configured the app in your environment, you can log in to Splunk Web on any of the search heads that are part of the pooled environment to view the Splunk App for VMware dashboards and use the app.
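The search-peer step in the procedure above (adding the indexers as search peers on each search head) can also be expressed as a configuration fragment. This is a minimal sketch, assuming two indexers named indexer1 and indexer2 and the default Splunk management port 8089; see "Configure search head pooling" in the Distributed Deployment manual for the full procedure, including distributing authentication between the search head and its peers:

```
# distsearch.conf on each search head (placeholder host names;
# 8089 is the default Splunk management port).
[distributedSearch]
servers = https://indexer1.example.com:8089, https://indexer2.example.com:8089
```

Each search head in the pool needs the same set of search peers so that every head searches the same indexed data.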
For more information about tsidx namespaces in the Splunk App for VMware, see "Considerations when using tsidx namespaces" in this manual.
Considerations for using vSphere syslog over TCP in a search head pool
In a search head pooled environment, if you use vSphere syslog over TCP and you configured it using Splunk Web, you must disable the syslog scripted alert, vmw_esxlog_interruption_alert. This search alerts you to interruptions in the flow of syslog data, which occur only when you collect vSphere syslog over TCP (the result of a VMware ESXi bug). The search must be able to read the collection configuration, which is not part of the search head pool, so if the search runs on a pooled search head, it fails. The search also cannot run on the separate search head because it needs to search the indexed data in addition to the collection configuration.
Note: When you retrieve syslog data over UDP, or use a different solution to get your syslog data, you can skip this step because the search is disabled by default.
To disable the saved search, vmw_esxlog_interruption_alert, in Splunk Web:
- Log in to Splunk Web, using the IP address and port number of the host running your search head.
- Navigate to Settings > Searches and reports and disable the search there.
- Restart Splunk to enable the configuration.
- To edit the savedsearches.conf configuration file directly, on the search head copy $SPLUNK_HOME/etc/apps/splunk_for_vmware/default/savedsearches.conf to $SPLUNK_HOME/etc/apps/splunk_for_vmware/local.
- Edit savedsearches.conf in your local directory to set disabled = True, if it is not already set:
# Syslog scripted alert
[vmw_esxlog_interruption_alert]
disabled = True
- Restart Splunk to enable the configuration.
Search head pooling with real-time searches turned on
Using search head pooling with real-time searches can significantly slow performance. We recommend against using real-time searches in a search head pooled environment.
This documentation applies to the following versions of Splunk® App for VMware (Legacy): 3.1