Configure search head clustering
Splunk Analytics for Hadoop reaches End of Life on January 31, 2025.
If you have at least three instances licensed for both Splunk Analytics for Hadoop and Splunk Enterprise, you can configure search head clustering. There are two ways to maintain the index configuration across the members of the cluster:
- Manually copy indexes.conf to all the instances, and keep it in sync across all the members of the search head cluster. (Not recommended)
- Use the search head cluster deployer functionality to update the index configuration. (Recommended)
To learn more about the Deployer and search head clustering architecture, see About search head clustering.
Install and configure using the Deployer
1. Install and configure a Deployer on an instance that is NOT part of your search head cluster.
2. On the Deployer, create the configuration that you want to propagate. For example, to deploy an indexes.conf configuration from a search app to all the members of a search head cluster, create a search app in the following directory on the deployer instance:
SPLUNK_HOME/etc/shcluster/apps
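For example, you might create the app directory structure on the deployer like this (a minimal sketch; $SPLUNK_HOME stands for your Splunk installation directory):
# Create the search app's local config directory in the deployer's shcluster staging area
mkdir -p $SPLUNK_HOME/etc/shcluster/apps/search/local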
3. Go to:
SPLUNK_HOME/etc/shcluster/apps/search/local/
4. Create or edit indexes.conf and props.conf (if applicable), along with any other files you have created for Splunk Analytics for Hadoop that you need to propagate across the cluster, as in the example below.
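For illustration, a minimal indexes.conf sketch for a Splunk Analytics for Hadoop provider and virtual index might look like the following. The provider name, Hadoop locations, and input path are hypothetical placeholders; substitute the values for your environment.
# Hypothetical Hadoop provider definition
[provider:MyHadoopProvider]
vix.family = hadoop
vix.env.JAVA_HOME = /usr/lib/jvm/java-8-openjdk
vix.env.HADOOP_HOME = /usr/lib/hadoop
vix.fs.default.name = hdfs://namenode.example.com:8020
vix.splunk.home.hdfs = /user/splunk/workdir
# Virtual index backed by the provider above
[my_hadoop_index]
vix.provider = MyHadoopProvider
vix.input.1.path = /data/weblogs/...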
5. Run the following command:
SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://<any_member_SHC>:<mgmt_port> -auth admin:<password>
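For example, with a hypothetical cluster member sh1.example.com listening on the default management port 8089, the command might look like this:
SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:<password>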
6. Read the warning and click OK. Splunk performs a rolling restart of the members of the search head cluster, and the members should come back up with your propagated configuration.
Note that you cannot perform another deployment until the rolling restart has completed. If you are unsure whether the rolling restart has completed, run
SPLUNK_HOME/bin/splunk show shcluster-status
on any member and check that all the instances are up and in the cluster.
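For example (hypothetical credentials):
SPLUNK_HOME/bin/splunk show shcluster-status -auth admin:<password>
The output lists the captain and each member along with its status.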
Schedule bundle replication and bundle reaping
You can set a custom replication factor for bundles on HDFS. Increasing the bundle replication factor improves performance on large clusters by decreasing the average access time for a bundle across Task Nodes.
vix.splunk.setup.bundle.replication = <positive integer between 1 and 32767>
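Like other vix.* properties, this setting goes in the provider stanza in indexes.conf. A minimal sketch, assuming a hypothetical provider named MyHadoopProvider:
[provider:MyHadoopProvider]
# Keep 10 copies of each bundle on HDFS to reduce average bundle access time
vix.splunk.setup.bundle.replication = 10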
Set a reaping time limit to specify the age at which bundles in the working directory on each data node can be deleted.
vix.splunk.setup.bundle.reap.timelimit = <positive integer in milliseconds>
The default is 24 hours, which is also the maximum value; any value greater than 24 hours is treated as 24 hours.
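For example, 24 hours expressed in milliseconds is 24 × 60 × 60 × 1000 = 86400000, so setting the limit explicitly to its default and maximum would look like this (hypothetical provider stanza):
[provider:MyHadoopProvider]
# Reap bundles from each data node's working directory after 24 hours
vix.splunk.setup.bundle.reap.timelimit = 86400000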