Configure search head clustering
If you have at least three instances licensed for both Splunk Analytics for Hadoop and Splunk Enterprise, you can configure search head clustering. You can maintain index configuration across the cluster members in one of two ways:
- Manually copy indexes.conf to all the instances, and maintain the information across all the members of the search head cluster. (Not recommended)
- Use the search head cluster deployer functionality to update the index configuration. (Recommended)
To learn more about the Deployer and search head clustering architecture, see About search head clustering.
Install and configure using the Deployer
1. Install and configure a Deployer on an instance that is not part of your search head cluster.
2. On the Deployer, create the configuration you want to propagate. For example, to deploy an indexes.conf configuration from a search app to all the members of a search head cluster, create a search app in the following directory on the instance that is the Deployer:
3. Go to:
4. Create or edit props.conf (if applicable) and any other files you have created for Splunk Analytics for Hadoop that need to propagate across the cluster.
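Steps 2 through 4 might look like the following shell sketch. It assumes the standard deployer staging directory $SPLUNK_HOME/etc/shcluster/apps; the app name "hadoop_configs", the virtual index stanza, and the provider name are all hypothetical placeholders for your own configuration.

```shell
# Hypothetical path for illustration; substitute your real Splunk home.
SPLUNK_HOME=/tmp/splunk_demo
APP_DIR="$SPLUNK_HOME/etc/shcluster/apps/hadoop_configs"   # hypothetical app name

# Create the app directory structure on the Deployer
mkdir -p "$APP_DIR/local"

# Minimal illustrative indexes.conf for a virtual index (values are examples)
cat > "$APP_DIR/local/indexes.conf" <<'EOF'
[hadoop_vix]
vix.provider = my_hadoop_provider
EOF

ls -R "$SPLUNK_HOME/etc/shcluster/apps"
```

After staging the app this way, the apply shcluster-bundle command in the next step pushes it to the cluster members.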
5. Run the following command:
SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://<any_member_SHC>:<mgmt_port> -auth admin:<password>
6. Read the warning and click OK. Splunk performs a rolling restart of the search head cluster members, after which the members run with your propagated configuration.
Note that you cannot perform another deployment until the rolling restart completes. If you are unsure whether the rolling restart has completed, run
SPLUNK_HOME/bin/splunk show shcluster-status on any member and check that all the instances are up and in the cluster.
Schedule bundle replication and bundle reaping
You can set a custom replication factor for bundles on HDFS. Increasing the bundle replication factor improves performance on large clusters by decreasing the average access time for a bundle across Task Nodes.
vix.splunk.setup.bundle.replication = <positive integer between 1 and 32767>
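For example, in the provider stanza of indexes.conf (the stanza and provider names here are hypothetical, and 3 is an illustrative replication factor):

```ini
[provider:my_hadoop_provider]
# Replicate each search bundle to 3 nodes on HDFS (illustrative value)
vix.splunk.setup.bundle.replication = 3
```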
Set a reaping time limit to specify the age at which bundles in the working directory on each data node can be deleted.
vix.splunk.setup.bundle.reap.timelimit = <positive integer in milliseconds>
Defaults to 24 hours (86400000 milliseconds), which is also the maximum value. Any value greater than 24 hours is treated as 24 hours.
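A reaping setting might look like the following conf fragment (the stanza name is hypothetical; 12 hours is an illustrative value, computed as 12 * 60 * 60 * 1000 milliseconds):

```ini
[provider:my_hadoop_provider]
# Allow bundles older than 12 hours to be reaped from each data node
vix.splunk.setup.bundle.reap.timelimit = 43200000
```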
This documentation applies to the following versions of Splunk® Enterprise: 6.5.0, 6.5.1, 6.5.2, 6.5.3, 6.5.4, 6.5.5, 6.5.6, 6.5.7, 6.5.8, 6.5.9, 6.5.10, 6.6.0, 6.6.1, 6.6.2, 6.6.3, 6.6.4, 6.6.5, 6.6.6, 6.6.7, 6.6.8, 6.6.9, 6.6.10, 6.6.11, 6.6.12, 7.0.0, 7.0.1, 7.0.2, 7.0.3, 7.0.4, 7.0.5, 7.0.6, 7.0.7, 7.0.8, 7.0.9, 7.0.10, 7.0.11, 7.0.13, 7.1.0, 7.1.1, 7.1.2, 7.1.3, 7.1.4, 7.1.5, 7.1.6, 7.1.7, 7.1.8, 7.1.9, 7.1.10, 7.2.0, 7.2.1, 7.2.2, 7.2.3, 7.2.4, 7.2.5, 7.2.6, 7.2.7, 7.2.8, 7.2.9, 7.2.10, 7.3.0, 7.3.1, 7.3.2, 7.3.3, 7.3.4, 7.3.5, 7.3.6, 7.3.7, 7.3.8, 7.3.9, 8.0.0, 8.0.1, 8.0.2, 8.0.3, 8.0.4, 8.0.5, 8.0.6, 8.0.7, 8.0.8, 8.0.9, 8.1.0, 8.1.1, 8.1.2, 8.1.3