Migrate existing data on an indexer cluster to SmartStore
You can migrate the existing data on your indexer cluster from local storage to the remote store.
This procedure describes how to migrate all the indexes on the indexer cluster to SmartStore. You can modify the procedure if you only want to migrate some of the indexes. Indexers support a mixed environment of SmartStore and non-SmartStore indexes.
Because this process requires the cluster to upload large amounts of data, it can take a long time to complete and can have a significant impact on concurrent indexing and searching.
You cannot revert an index to non-SmartStore after you migrate it to SmartStore.
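Because indexers support a mixed environment, you can enable SmartStore on a per-index basis. A minimal indexes.conf sketch, assuming a remote volume named "remote_store" and two hypothetical index names; only the stanza with a remotePath setting becomes a SmartStore index:

[smartstore_index]
# Hypothetical SmartStore-enabled index; remotePath ties it to the remote volume.
homePath = $SPLUNK_DB/smartstore_index/db
coldPath = $SPLUNK_DB/smartstore_index/colddb
thawedPath = $SPLUNK_DB/smartstore_index/thaweddb
remotePath = volume:remote_store/$_index_name

[local_index]
# Hypothetical non-SmartStore index; with no remotePath, it remains on local storage.
homePath = $SPLUNK_DB/local_index/db
coldPath = $SPLUNK_DB/local_index/colddb
thawedPath = $SPLUNK_DB/local_index/thaweddb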
Migrate data
Perform the migration operation in two phases:
- Test the SmartStore configurations and remote connectivity on a standalone test instance.
- Run the migration by applying the configurations to your production indexer cluster.
Prerequisites
- Read:
- SmartStore system requirements
- Configure SmartStore
- Update common peer configurations and apps
- Choose the storage location for each index
- Documentation provided by the vendor of the remote storage service that you are using
- If the cluster was once migrated from single-site to multisite, you must convert any pre-existing single-site buckets to follow the multisite replication and search policies. To do so, change the constrain_singlesite_buckets setting in the manager node's server.conf file to "false" and restart the manager node. See Configure the manager to convert existing buckets to multisite. A sketch of this change follows this list item.
- The cluster should be on the smaller side, with a maximum of 20 indexers. If you want to migrate a larger cluster, consult Splunk Professional Services.
- Be aware of these configuration issues:
  - The value of the path setting for each remote volume stanza must be unique to the indexer cluster. You can share remote volumes only among indexes within a single cluster. In other words, if indexes on one cluster use a particular remote volume, no index on any other cluster or standalone indexer can use the same remote volume.
  - You must set all SmartStore indexes in an indexer cluster to use repFactor = auto.
  - Leave maxDataSize at its default value of "auto" (750MB) for each SmartStore index.
  - The coldPath setting for each SmartStore index requires a value, even though the setting is ignored except in the case of migrated indexes.
  - The thawedPath setting for each SmartStore index requires a value, even though the setting has no practical purpose because you cannot thaw data to a SmartStore index. See Thawing data and SmartStore.
- Reconfigure the cluster as necessary to conform with the lists of unsupported features, current restrictions, and incompatible settings:
  - Features not supported by SmartStore
  - Current restrictions on SmartStore use. Regarding the requirement that replication factor and search factor be equal, you can make this change post-migration.
  - Settings in indexes.conf that are incompatible with SmartStore or otherwise restricted
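Taken together, a SmartStore index stanza that conforms with these configuration requirements might look like the following sketch (the index and volume names are placeholders):

[cs_index]
homePath = $SPLUNK_DB/cs_index/db
# coldPath and thawedPath require values even though SmartStore largely ignores them.
coldPath = $SPLUNK_DB/cs_index/colddb
thawedPath = $SPLUNK_DB/cs_index/thaweddb
remotePath = volume:remote_store/$_index_name
# Required for all SmartStore indexes in an indexer cluster:
repFactor = auto
# Leave maxDataSize at its default of "auto" (750MB); shown here only for illustration:
maxDataSize = auto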
1. Test the configuration on a standalone instance
The purpose of performing the test on a standalone instance is to:
- test remote store connectivity.
- validate the configuration.
Steps
- Ensure that you have met all prerequisites relevant to this test setup. In particular, read:
- SmartStore system requirements
- The appropriate topic for configuring your remote storage type:
- Understand SmartStore security strategies and prepare to implement them as necessary during the deployment process. See the topic on security strategies for your remote storage type:
- Install a new Splunk Enterprise instance. For information on how to install Splunk Enterprise, read the Installation Manual.
- Edit indexes.conf in $SPLUNK_HOME/etc/system/local to specify the SmartStore settings for your indexes. These should be the same group of settings that you intend to use later on your production deployment.
Using an S3 remote object store:
This example configures SmartStore indexes, using an S3 remote object store. The SmartStore-related settings are configured at the global level, which means that all indexes are SmartStore-enabled, and they all use a single remote storage volume, named "remote_store". The example also creates one new index, "cs_index".
[default]
# Configure all indexes to use the SmartStore remote volume called
# "remote_store".
# Note: If you want only some of your indexes to use SmartStore,
# place this setting under the individual stanzas for each of the
# SmartStore indexes, rather than here.
remotePath = volume:remote_store/$_index_name

# Configure the remote volume.
[volume:remote_store]
storageType = remote
# The volume's 'path' setting points to the remote storage location where
# indexes reside. Each SmartStore index resides directly below the location
# specified by the 'path' setting.
path = s3://mybucket/some/path
# The following S3 settings are required only if you're using the access and secret
# keys. They are not needed if you are using AWS IAM roles.
remote.s3.access_key = <S3 access key>
remote.s3.secret_key = <S3 secret key>
remote.s3.endpoint = https|http://<S3 host>

# This example stanza configures a custom index, "cs_index".
[cs_index]
homePath = $SPLUNK_DB/cs_index/db
# SmartStore-enabled indexes do not use thawedPath or coldPath,
# but you must still specify them here.
coldPath = $SPLUNK_DB/cs_index/colddb
thawedPath = $SPLUNK_DB/cs_index/thaweddb
For details on these settings, see Configure SmartStore. Also see indexes.conf.spec in the Admin Manual.
Using a GCS remote object store:
This example configures SmartStore indexes, using a GCS remote object store. The SmartStore-related settings are configured at the global level, which means that all indexes are SmartStore-enabled, and they all use a single remote storage volume, named "remote_store". The example also creates one new index, "cs_index".
[default]
# Configure all indexes to use the SmartStore remote volume called
# "remote_store".
# Note: If you want only some of your indexes to use SmartStore,
# place this setting under the individual stanzas for each of the
# SmartStore indexes, rather than here.
remotePath = volume:remote_store/$_index_name

# Configure the remote volume.
[volume:remote_store]
storageType = remote
# The volume's 'path' setting points to the remote storage location where
# indexes reside. Each SmartStore index resides directly below the location
# specified by the 'path' setting.
path = gs://mybucket/some/path
# There are several ways to specify credentials. For details, see the topic,
# "SmartStore on GCS security strategies." One way to specify credentials
# is to point to a file, as shown here.
remote.gs.credential_file = credential.json

# This example stanza configures a custom index, "cs_index".
[cs_index]
homePath = $SPLUNK_DB/cs_index/db
# SmartStore-enabled indexes do not use thawedPath or coldPath,
# but you must still specify them here.
coldPath = $SPLUNK_DB/cs_index/colddb
thawedPath = $SPLUNK_DB/cs_index/thaweddb
For details on these settings, see Configure SmartStore. Also see indexes.conf.spec in the Admin Manual.
Using an Azure Blob remote object store:
This example configures SmartStore indexes, using an Azure Blob remote object store. The SmartStore-related settings are configured at the global level, which means that all indexes are SmartStore-enabled, and they all use a single remote storage volume, named "remote_store". The example also creates one new index, "cs_index".
[default]
# Configure all indexes to use the SmartStore remote volume called
# "remote_store".
# Note: If you want only some of your indexes to use SmartStore,
# place this setting under the individual stanzas for each of the
# SmartStore indexes, rather than here.
remotePath = volume:remote_store/$_index_name

# Configure the remote volume.
[volume:remote_store]
storageType = remote
# The volume's 'path' setting points to the remote storage location where
# indexes reside. Each SmartStore index resides directly below the location
# specified by the 'path' setting.
# There are multiple ways to fully specify the location. Here, for example, the
# Azure container is specified in its own setting, but it can also be specified as
# part of the "path" setting. See the indexes.conf.spec file for more information.
remote.azure.endpoint = https://account-name.blob.core.windows.net
remote.azure.container_name = your-container
path = azure://example/20_39/TID_01
# To authenticate with the remote storage service, you must use either hardcoded access/secret
# keys or Azure Active Directory with configured Managed Identity. See the topic, "SmartStore on
# Azure Blob security strategies."

# This example stanza configures a custom index, "cs_index".
[cs_index]
homePath = $SPLUNK_DB/cs_index/db
# SmartStore-enabled indexes do not use thawedPath or coldPath,
# but you must still specify them here.
coldPath = $SPLUNK_DB/cs_index/colddb
thawedPath = $SPLUNK_DB/cs_index/thaweddb
For details on these settings, see Configure SmartStore. Also see indexes.conf.spec in the Admin Manual.
- Restart the instance.
- Test the deployment:
- To confirm remote storage access:
- Place a sample text file in the remote store.
- On the Splunk Enterprise instance, run this command, which recursively lists any files that are present in the remote store:
splunk cmd splunkd rfs -- ls --starts-with volume:remote_store
If you see the sample file when you run the command, you have access to the remote store.
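For example, assuming an S3 remote store like the one configured above and a workstation with the AWS CLI configured, these two sub-steps might look like this:

# Place a sample text file in the remote store:
aws s3 cp sample.txt s3://mybucket/some/path/sample.txt

# From the Splunk Enterprise instance, recursively list the remote store:
splunk cmd splunkd rfs -- ls --starts-with volume:remote_store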
- Validate data transfer to the remote store:
- Send some data to the indexer.
- Wait for buckets to roll. If you don't want to wait for buckets to roll naturally, you can manually roll some buckets:
splunk _internal call /data/indexes/<index_name>/roll-hot-buckets -auth <admin>:<password>
- Look for warm buckets being uploaded to remote storage.
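A minimal sketch of this validation sequence, assuming the "cs_index" test index from the example configuration and a hypothetical local sample file:

# Send some test data to the indexer:
splunk add oneshot /tmp/sample.log -index cs_index -auth <admin>:<password>

# Manually roll the index's hot buckets to warm:
splunk _internal call /data/indexes/cs_index/roll-hot-buckets -auth <admin>:<password>

# Confirm that warm buckets are being uploaded to the remote store:
splunk cmd splunkd rfs -- ls --starts-with volume:remote_store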
- Validate data transfer from the remote store:
Note: At this point, you should be able to run normal searches against this data. In the majority of cases, you will not be transferring any data from the remote storage, because the data will already be in the local cache. Therefore, to validate data transfer from the remote store, it is recommended that you first evict a bucket from the local cache.
- Evict a bucket from the cache, with a POST to this REST endpoint:
services/admin/cacheman/<cid>/evict
where <cid> is bid|<bucketId>|. For example: "bid|cs_index~0~7D76564B-AA17-488A-BAF2-5353EA0E9CE5|"
Note: To get the bucketId for a bucket, run a search on your test index. For example:
splunk search "|rest /services/admin/cacheman | search title=*cs_index* | fields title" -auth <admin>:<password>
- Run a search that requires data from the evicted bucket.
The instance must now transfer the bucket from remote storage to run the search. After running the search, you can check that the bucket has reappeared in the cache.
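For example, you might issue the eviction POST with curl against the local management port; the pipe characters in the bucket ID must be URL-encoded as %7C (bucket ID and credentials are placeholders):

curl -k -u <admin>:<password> -X POST "https://localhost:8089/services/admin/cacheman/bid%7Ccs_index~0~7D76564B-AA17-488A-BAF2-5353EA0E9CE5%7C/evict"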
2. Run the migration on the indexer cluster
In this procedure, you configure your cluster for SmartStore. The goal of the procedure is to migrate all existing warm and cold buckets on all indexes to SmartStore. Going forward, all new warm buckets will also reside in SmartStore.
The migration process can take considerable time to complete, particularly if you have a large amount of data. Expect some degradation of indexing and search performance during the migration. For that reason, it is best to schedule the migration for a time when your indexers will be relatively idle.
Steps
- Ensure that you have met the prerequisites. In particular, read:
- SmartStore system requirements
- The appropriate topic for configuring your remote storage type:
- Features not supported by SmartStore
- Current restrictions on SmartStore use
- Understand SmartStore security strategies and prepare to implement them as necessary during the deployment process. See the topic on security strategies for your remote storage type:
- Upgrade all cluster nodes (manager node, peer nodes, search heads) to the latest version of Splunk Enterprise. See Upgrade an indexer cluster.
- Confirm that the cluster is in a valid and complete state, with both replication and search factors met. Go to the Manager Node Dashboard to confirm.
- Confirm that there are no bucket fixup tasks in progress or pending. Go to the Manager Node Dashboard, click on the Indexes tab, and then click on the Bucket Status button to confirm.
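You can also confirm cluster state from the command line. A minimal sketch, run on the manager node:

# Reports whether the replication and search factors are met, plus per-peer status:
splunk show cluster-status --verbose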
- Run splunk enable maintenance-mode on the manager node. To confirm that the manager is in maintenance mode, run splunk show maintenance-mode.
- Stop all the peer nodes. When bringing down the peers, use the splunk stop command, not splunk offline.
- On the manager node, edit the existing $SPLUNK_HOME/etc/manager-apps/_cluster/local/indexes.conf file to make the following additions. See "Structure of the configuration bundle" for information on the manager-apps directory.
Do not replace the existing indexes.conf file. You need to retain its current settings, such as its index definition settings. Instead, merge these additional settings into the existing file. Be sure to remove any other copies of these settings from the file.
  - Specify the SmartStore index global and volume settings. Assuming that you have already tested these settings on your standalone instance, you can simply copy them over, remembering to add a line for repFactor = auto. For example:
Using an S3 remote object store:
[default]
# Configure all indexes to use the SmartStore remote volume called
# "remote_store".
# Note: If you want only some of your indexes to use SmartStore,
# place this setting under the individual stanzas for each of the
# SmartStore indexes, rather than here.
remotePath = volume:remote_store/$_index_name
repFactor = auto

# Configure the remote volume.
[volume:remote_store]
storageType = remote
# The volume's 'path' setting points to the remote storage location where
# indexes reside. Each SmartStore index resides directly below the location
# specified by the 'path' setting.
path = s3://mybucket/some/path
# The following S3 settings are required only if you're using the access and secret
# keys. They are not needed if you are using AWS IAM roles.
remote.s3.access_key = <S3 access key>
remote.s3.secret_key = <S3 secret key>
remote.s3.endpoint = https|http://<S3 host>

# This example stanza configures a custom index, "cs_index".
[cs_index]
homePath = $SPLUNK_DB/cs_index/db
# SmartStore-enabled indexes do not use thawedPath or coldPath,
# but you must still specify them here.
coldPath = $SPLUNK_DB/cs_index/colddb
thawedPath = $SPLUNK_DB/cs_index/thaweddb
Using a GCS remote object store:
[default]
# Configure all indexes to use the SmartStore remote volume called
# "remote_store".
# Note: If you want only some of your indexes to use SmartStore,
# place this setting under the individual stanzas for each of the
# SmartStore indexes, rather than here.
remotePath = volume:remote_store/$_index_name
repFactor = auto

# Configure the remote volume.
[volume:remote_store]
storageType = remote
# The volume's 'path' setting points to the remote storage location where
# indexes reside. Each SmartStore index resides directly below the location
# specified by the 'path' setting.
path = gs://mybucket/some/path
# There are several ways to specify credentials. For details, see the topic,
# "SmartStore on GCS security strategies." One way to specify credentials
# is to point to a file, as shown here.
remote.gs.credential_file = credential.json

# This example stanza configures a custom index, "cs_index".
[cs_index]
homePath = $SPLUNK_DB/cs_index/db
# SmartStore-enabled indexes do not use thawedPath or coldPath,
# but you must still specify them here.
coldPath = $SPLUNK_DB/cs_index/colddb
thawedPath = $SPLUNK_DB/cs_index/thaweddb
Using an Azure Blob remote object store:
[default]
# Configure all indexes to use the SmartStore remote volume called
# "remote_store".
# Note: If you want only some of your indexes to use SmartStore,
# place this setting under the individual stanzas for each of the
# SmartStore indexes, rather than here.
remotePath = volume:remote_store/$_index_name
repFactor = auto

# Configure the remote volume.
[volume:remote_store]
storageType = remote
# The volume's 'path' setting points to the remote storage location where
# indexes reside. Each SmartStore index resides directly below the location
# specified by the 'path' setting.
# There are multiple ways to fully specify the location. Here, for example, the
# Azure container is specified in its own setting, but it can also be specified as
# part of the "path" setting. See the indexes.conf.spec file for more information.
remote.azure.endpoint = https://account-name.blob.core.windows.net
remote.azure.container_name = your-container
path = azure://example/20_39/TID_01
# To authenticate with the remote storage service, you must use either hardcoded access/secret
# keys or Azure Active Directory with configured Managed Identity. See the topic, "SmartStore on
# Azure Blob security strategies."

# This example stanza configures a custom index, "cs_index".
[cs_index]
homePath = $SPLUNK_DB/cs_index/db
# SmartStore-enabled indexes do not use thawedPath or coldPath,
# but you must still specify them here.
coldPath = $SPLUNK_DB/cs_index/colddb
thawedPath = $SPLUNK_DB/cs_index/thaweddb
  - Configure the data retention settings, as necessary, to ensure that the cluster will follow your desired freezing behavior, post-migration. See Configure data retention for SmartStore indexes.
This step is extremely important, to avoid unwanted bucket freezing and possible data loss. SmartStore bucket-freezing behavior and settings are different from the non-SmartStore behavior and settings.
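For illustration, SmartStore retention is driven primarily by time- and size-based settings such as these; a hedged sketch with placeholder values, not recommendations:

[default]
# Freeze buckets whose newest event is older than 30 days (in seconds):
frozenTimePeriodInSecs = 2592000
# Freeze the oldest buckets when an index's total size across the cluster
# exceeds this many MB:
maxGlobalDataSizeMB = 500000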
- On the manager node, edit $SPLUNK_HOME/etc/manager-apps/_cluster/local/server.conf to make any necessary changes to the SmartStore-related server.conf settings on the peer nodes. In particular, configure the cache size to fit the needs of your deployment. See Configure the SmartStore cache manager.
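For example, the cache size is controlled through the [cachemanager] stanza in server.conf. A minimal sketch with a placeholder value:

[cachemanager]
# Maximum disk space, in MB, that the cache manager can use on each peer node:
max_cache_size = 800000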
- If the rolling_restart setting in $SPLUNK_HOME/etc/system/local/server.conf on the manager node has some value other than "restart", such as "searchable_force", you must change the value to "restart" before applying the bundle. On the manager node, run:
splunk edit cluster-config -rolling_restart restart
- On the manager node, run:
splunk apply cluster-bundle --answer-yes
- If you changed the value of rolling_restart prior to applying the bundle, revert the setting to its original value. On the manager node, run:
splunk edit cluster-config -rolling_restart <original_restart_type>
- Start all the peer nodes. Wait briefly for the peer nodes to download the configuration bundle with the SmartStore settings. To view the status of the configuration bundle process, you can run the splunk show cluster-bundle-status command, described in Update common peer configurations and apps.
- Run splunk disable maintenance-mode on the manager node. To confirm that the manager is not in maintenance mode, run splunk show maintenance-mode.
- Wait briefly for the peer nodes to begin uploading their warm and cold buckets to the remote store.
Cold buckets use the cold path as their cache location, post-migration. Migrated cold buckets are functionally equivalent to warm buckets, and the cache manager manages them in the same way. The only difference is that cold buckets are fetched into the cold path location, rather than the home path location.
- To confirm remote storage access across the indexer cluster, run this command from one of the peer nodes:
splunk cmd splunkd rfs -- ls --starts-with volume:remote_store
This command recursively lists any files that are present in the remote store. It should show that the cluster is starting to upload warm buckets to the remote store. If necessary, wait a little while for the first uploads to occur.
- On the manager node, make any necessary changes to ensure that the indexer cluster's replication factor and search factor use the same values, for example, 3/3.
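A sketch of that change, assuming target values of 3/3 (adjust the numbers for your deployment; changing these values triggers bucket fixup activity):

# On the manager node:
splunk edit cluster-config -replication_factor 3 -search_factor 3
splunk restart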
- To determine that migration is complete, see Monitor the migration process, below.
- Test SmartStore functionality. At this point, you should be able to run normal searches against this data. In the majority of cases, you will not be transferring any data from the remote storage, because the data will already be in the local cache. To validate data fetching from remote storage, do the following:
- On one of the peer nodes, look for a fully populated bucket, containing both tsidx files and the rawdata file.
- Evict the bucket from the cache, using a POST to this REST endpoint:
services/admin/cacheman/<cid>/evict
where <cid> is bid|<bucketId>|. For example: "bid|cs_index~0~7D76564B-AA17-488A-BAF2-5353EA0E9CE5|"
Note: To get the bucketId for a bucket, go to a search head node and run a search on your test index. For example:
splunk search "|rest /services/admin/cacheman | search title=*cs_index* | fields splunk_server, title" -auth <admin>:<password>
The results list the set of buckets (by bucketId) in the specified test index, along with their associated peer nodes. You can use this information to evict one of the buckets from the cache of one of the peer nodes.
- Run a search locally on the peer node. The search must be one that requires data from the evicted bucket.
The peer must now transfer the bucket from remote storage to run the search. After running the search, you can check that the bucket has reappeared in the cache.
If you need to restart the cluster during migration, the migration continues from where it left off upon restart.
Refrain from rebalancing data or removing excess buckets until you have run the SmartStore-enabled cluster successfully for a while. In particular, run these operations only after you have set the replication factor and search factor to use equal values and the cluster has performed any related bucket fixup.
Monitor the migration process
You can use the monitoring console to monitor migration progress. See Troubleshoot with the monitoring console.
You can also query a REST endpoint from the manager node to determine the status of the migration:
$ splunk search "|rest /services/admin/cacheman/_metrics |fields splunk_server migration.*" -auth <admin>:<password>
The endpoint returns data on the migration, which you can use to determine how far along in the process each of the peers is. In this example, peer1 is on its 8th job, out of a total of 35, so the peer's migration is about 20-25% complete. The start_epoch field tells you when the migration began, allowing you to extrapolate an approximate completion time:
splunk_server     migration.current_job  migration.start_epoch  migration.status  migration.total_jobs
----------------  ---------------------  ---------------------  ----------------  --------------------
cluster1-manager                                                not_started
peer1.ajax.com    8                      1484942186             running           35
peer2.ajax.com    7                      1484942190             running           37
peer3.ajax.com    5                      1484942194             running           36
When migration.status reaches "finished" on all peers, the migration is complete, and current_job matches total_jobs.
If a peer restarts during migration, its migration information is lost, and this endpoint cannot be used to check the status of that peer, although the migration does, in fact, resume. The peer's reported status remains "not_started" even after migration resumes.
Instead, you can run the following search against the REST endpoint on the restarted peer:
"|rest /services/admin/cacheman |search cm:bucket.stable=0 |stats count"
The count equals the number of upload jobs remaining, where an upload job represents a single bucket to be uploaded, or, in other words, (total_jobs - current_job) from the earlier endpoint. The count decrements to zero as migration continues.
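Following the pattern of the earlier examples, you can run that search from the restarted peer's CLI:

splunk search "|rest /services/admin/cacheman |search cm:bucket.stable=0 |stats count" -auth <admin>:<password>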