Deploy SmartStore on a new indexer cluster
During this procedure, you:
- Install a new indexer cluster.
- Configure the cluster peer nodes to access the remote store.
- Test the deployment.
Note: This procedure configures SmartStore for a new indexer cluster only. It does not describe how to load existing data onto an indexer cluster. To migrate existing local indexes to SmartStore, see Migrate existing data on an indexer cluster to SmartStore. To bootstrap existing SmartStore data onto a new indexer cluster, see Bootstrap SmartStore indexes onto an indexer cluster.
Before you begin, review these topics:
- Indexer cluster deployment overview
- Update common peer configurations and apps
- SmartStore system requirements
- Configure the remote store for SmartStore
- Choose the storage location for each index
- Documentation provided by the vendor of the remote storage service that you are using
Be aware of these configuration issues:
- The value of the `path` setting for each remote volume stanza must be unique to the indexer cluster or standalone indexer. You can share remote volumes only among indexes within a single cluster or standalone indexer. For example, if indexes on one cluster use a particular remote volume, no index on any other cluster or standalone indexer can use the same remote volume.
- You must set all SmartStore indexes in an indexer cluster to use `repFactor = auto`.
- Use `maxDataSize = auto` (the default value, which is 750MB) for each SmartStore index.
- The `coldPath` setting for each SmartStore index requires a value, even though the setting is ignored except in the case of migrated indexes.
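Taken together, these constraints can be illustrated with a minimal sketch of a SmartStore index stanza. The index name "my_index", the volume name, and the paths are placeholders:

```
[my_index]
remotePath = volume:remote_store/$_index_name
repFactor = auto                        # required for all SmartStore indexes in a cluster
maxDataSize = auto                      # the default value (750MB)
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb   # required even though it is ignored (except for migrated indexes)
thawedPath = $SPLUNK_DB/my_index/thaweddb
```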
The following procedure assumes that you are deploying SmartStore on a new indexer cluster. It also assumes that you are deploying SmartStore for all indexes on the cluster, using a single remote location. If you want to deploy SmartStore for some indexes only, or if you want to use multiple remote locations for the SmartStore indexes, you can modify the procedure to fit your needs.
- Ensure that you have met the prerequisites. In particular, read the topics listed at the start of this section.
- Understand SmartStore security strategies and prepare to implement them as necessary during the deployment process. See SmartStore security strategies.
- Install a new Splunk Enterprise instance and enable it as the master node for a new indexer cluster. See Enable the indexer cluster master node.
- Set the indexer cluster's replication factor and search factor to equal values, for example, 3/3.
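For example, a 3/3 configuration might look like this in the master node's `server.conf`. This is a sketch only; other required clustering settings, such as `pass4SymmKey`, are omitted:

```
[clustering]
mode = master
replication_factor = 3
search_factor = 3
```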
On the master node, create or edit `$SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf` and specify the SmartStore settings, as shown in the example below. When the peer nodes later start up, the master automatically distributes these settings, along with the rest of the configuration bundle, to the peer nodes.
Here is an example of how to configure the SmartStore indexes on the cluster peers, using an S3 remote object store. In this example, all indexes use a single remote storage volume, and the remote volume is given the name "remote_store". In addition, the example configures one new index, "cs_index".
```
[default]
# Configure all indexes to use the SmartStore remote volume called
# "remote_store".
# Note: If you want only some of your indexes to use SmartStore,
# place this setting under the individual stanzas for each of the
# SmartStore indexes, rather than here.
remotePath = volume:remote_store/$_index_name

repFactor = auto

# Configure the remote volume
[volume:remote_store]
storageType = remote

# The volume's path setting points to the remote storage location
# where indexes reside. Each SmartStore index resides directly below the
# location specified by the path setting. The <scheme> identifies a
# supported remote storage system type, such as S3. The
# <remote-location-specifier> is a string specific to the remote storage
# system that specifies the location of the indexes inside the remote system.
# This is an S3 example: "path = s3://mybucket/some/path".
path = <scheme>://<remote-location-specifier>

# The following S3 settings are required only if you're using the access
# and secret keys. They are not needed if you are using AWS IAM roles.
remote.s3.access_key = <S3 access key>
remote.s3.secret_key = <S3 secret key>
remote.s3.endpoint = https|http://<S3 host>

# This example stanza configures a custom index, "cs_index".
[cs_index]
homePath = $SPLUNK_DB/cs_index/db
# SmartStore-enabled indexes do not use thawedPath or coldPath,
# but you must still specify them here.
coldPath = $SPLUNK_DB/cs_index/colddb
thawedPath = $SPLUNK_DB/cs_index/thaweddb
```
On the master node, run:
splunk apply cluster-bundle --answer-yes
Install and enable the peer nodes and search head, as for any new indexer cluster. See Enable the peer nodes and Enable the search head. Wait briefly for the peer nodes to download the configuration bundle with the SmartStore settings. To view the status of the configuration bundle process, you can run the `splunk show cluster-bundle-status` command, described in Update common peer configurations and apps.
Test the deployment.
You can monitor the status of the cluster start-up process from the master with this command:
splunk show cluster-status -auth <admin>:<password>
To confirm remote storage access across the indexer cluster, run this command from one of the peer nodes:
splunk cmd splunkd rfs -- ls --starts-with volume:remote_store
This command recursively lists any files that are present in the remote volume. It is recommended that you first place a sample text file in the remote volume. If you see the file when you run the command, you have access to the remote store.
Send some data to the indexers and wait for buckets to roll. If you don't want to wait for buckets to roll naturally, you can manually roll some buckets from a peer node:
splunk _internal call /data/indexes/<index_name>/roll-hot-buckets -auth <admin>:<password>
Look for warm buckets being uploaded to remote storage.
At this point, you should be able to run normal searches against this data. In the majority of cases, you will not be transferring any data from the remote storage, because the data will already be in the local cache. To validate data fetching from remote storage, do the following:
On one of the indexers, evict a bucket from the cache through the eviction REST endpoint, identifying the bucket by its cache ID, which takes the form `bid|<bucketId>|`. For example: `bid|taktaklog~0~7D76564B-AA17-488A-BAF2-5353EA0E9CE5|`
- Run a search locally on the indexer. The search must be one that requires data from the evicted bucket.
The indexer must now transfer the bucket from remote storage to run the search. After running the search, you can check that the bucket has reappeared in the cache.
Once your cluster is running with SmartStore, there are a number of configuration matters that warrant your immediate attention. In particular:
- On the master node, edit the `$SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf` file and configure the `frozenTimePeriodInSecs` setting, as necessary, to ensure that the cluster follows your desired freezing behavior. See Configure data retention for SmartStore indexes.
This step is extremely important, to avoid unwanted bucket freezing and possible data loss. SmartStore bucket-freezing behavior and settings are different from the non-SmartStore behavior and settings.
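For example, a retention sketch for the "cs_index" index from the earlier example, freezing data after 90 days (the value shown is a placeholder; choose a period that matches your retention policy):

```
[cs_index]
# Freeze data older than 90 days (90 * 86400 = 7776000 seconds).
frozenTimePeriodInSecs = 7776000
```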
- On the master node, edit `$SPLUNK_HOME/etc/master-apps/_cluster/local/server.conf` to make any necessary changes to the SmartStore-related `server.conf` settings on the peer nodes. In particular, configure the cache size to fit the needs of your deployment. See Configure the SmartStore cache manager.
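For example, a cache-size sketch in `server.conf` (the 500000 MB value is an arbitrary placeholder; size the cache for your search patterns and available disk):

```
[cachemanager]
# Maximum disk space, in MB, that the cache manager can use on each peer.
max_cache_size = 500000
```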
Once you have made these changes to the configuration bundle on the master node, apply the bundle to distribute the settings to the peer nodes:
splunk apply cluster-bundle --answer-yes
For details on other SmartStore settings, see Configure SmartStore.
This documentation applies to the following versions of Splunk® Enterprise: 7.2.0, 7.2.1, 7.2.2, 7.2.3, 7.2.4, 7.2.5, 7.2.6, 7.2.7, 7.2.8, 7.2.9, 7.2.10