
Upgrade a search head cluster
This topic describes how to upgrade a search head cluster. The process is the same for maintenance and major release upgrades.
Upgrade to a new release
Caution: Before performing the upgrade, note the following requirements:
- All cluster members must run the same version of Splunk Enterprise, down to the maintenance level. A quick way to verify this appears after this list.
- If the search head cluster integrates with an indexer cluster, you must upgrade both clusters at the same time. See "Upgrading an indexer cluster that integrates with a search head cluster."
- You can run search head cluster members against 5.x or 6.x search peers, so it is not necessary to upgrade standalone indexers at the same time. See "Splunk Enterprise version compatibility."
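To confirm member versions and overall cluster health before you begin, you can query each instance from the command line. This is a minimal sketch; $SPLUNK_HOME, the hostnames, and the credentials are placeholders for your environment:

    # On each cluster member, report the installed Splunk Enterprise version.
    $SPLUNK_HOME/bin/splunk version

    # From any member, view overall cluster state, including the current captain.
    $SPLUNK_HOME/bin/splunk show shcluster-status -auth admin:changeme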
Steps
1. Stop all cluster members.
2. Upgrade all members.
3. Stop the deployer.
4. Upgrade the deployer.
5. Start the deployer.
6. Start the members.
7. Wait one to two minutes for captain election to complete. The cluster then begins to function. A command-line sketch of these steps follows this list.
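The following sketch maps the steps above to CLI commands, assuming a *nix tarball installation under /opt/splunk. The package file name is a placeholder, and the exact upgrade mechanism (tarball, RPM, installer) depends on how you originally installed Splunk Enterprise:

    # Step 1: On each cluster member, stop Splunk.
    $SPLUNK_HOME/bin/splunk stop

    # Step 2: Upgrade each member, for example by expanding the new release
    # over the existing tarball installation.
    tar xvzf splunk-<new-version>-Linux-x86_64.tgz -C /opt

    # Steps 3-5: On the deployer, stop Splunk, upgrade it the same way,
    # then start it again.
    $SPLUNK_HOME/bin/splunk stop
    tar xvzf splunk-<new-version>-Linux-x86_64.tgz -C /opt
    $SPLUNK_HOME/bin/splunk start --accept-license --answer-yes

    # Step 6: On each cluster member, start Splunk. Migration runs on the
    # first start after an upgrade.
    $SPLUNK_HOME/bin/splunk start --accept-license --answer-yes

    # Step 7: After a minute or two, confirm that captain election completed.
    $SPLUNK_HOME/bin/splunk show shcluster-status -auth admin:changeme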
Deployer initiates restart after post-6.2.6 upgrade
The deployer handles user configurations differently in versions higher than 6.2.6, compared to versions 6.2.6 and below. Because of this change, the first time that you use the deployer to distribute updates after upgrading your cluster to a version higher than 6.2.6, the deployer must initiate a rolling restart of all cluster members.
This restart takes place the first time, post-upgrade, that you run the splunk apply shcluster-bundle command. The restart only occurs if you had used the deployer to push user configurations in 6.2.6 or below.
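For example, run the following on the deployer, where -target points at the management URI of any one cluster member (the hostname and credentials shown are placeholders):

    # Push the configuration bundle from the deployer to the cluster.
    # The first post-upgrade push triggers the rolling restart described above.
    $SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme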
This change in user configuration deployment means that such configurations no longer reside in default directories on the cluster members. This enables certain runtime operations on the configurations. Specifically, you can now delete or move the configurations or change their sharing levels. For more information on how the deployer handles user configurations post-6.2.6, see "User configurations."
Changed behavior for user-based and role-based search quotas
The default behavior for handling user-based and role-based concurrent search quotas has changed with version 6.3.
Starting with 6.3, the search head cluster enforces these quotas across the set of cluster members. Prior to 6.3, quotas were enforced instead on a member-by-member basis.
The new default behavior is usually preferable, because the captain, both pre- and post-6.3, does not take into account the search user when it assigns a search to a member. Combined with the pre-6.3 behavior of member-enforced quotas, this could result in unwanted and unexpected behavior. For example, if the captain happened to assign most of a particular user's searches to one cluster member, that member could quickly reach the quota for that user, even though other members had not yet reached their limit for the user.
If you need to maintain the pre-6.3 behavior, make these attribute changes in limits.conf:
shc_role_quota_enforcement = false
shc_local_quota_check = true
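For example, the resulting limits.conf on each cluster member might look like the following. This sketch assumes the attributes belong under the [search] stanza; verify the placement against the limits.conf specification for your version:

    [search]
    # Disable cluster-wide enforcement of role-based search quotas.
    shc_role_quota_enforcement = false
    # Restore the pre-6.3 member-by-member quota check.
    shc_local_quota_check = true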
The role-based quota on a cluster is calculated by multiplying the individual user's role quota by the number of cluster members.
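For example, if a role's concurrent search quota (the srchJobsQuota attribute in authorize.conf) is 10 and the cluster has three members, the effective cluster-wide quota for that role is 10 x 3 = 30 concurrent searches. The attribute named here is the standard role-based quota setting; confirm it against your version's authorize.conf specification.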
For more information on how the captain assigns searches to members, see "Job scheduling."
For more information on role-based quotas, see "Create and edit reports" in the Reporting Manual.