Splunk® Enterprise

Managing Indexers and Clusters of Indexers

Deploy SmartStore on a new standalone indexer

During this procedure, you:

  1. Install a new indexer.
  2. Configure the indexer to access the remote store.
  3. Test the deployment.

Note: This procedure configures SmartStore for a new standalone indexer only. It does not describe how to load existing data onto an indexer. To migrate existing local indexes to SmartStore, see Migrate existing data on a standalone indexer to SmartStore. To bootstrap existing SmartStore data onto a new standalone indexer, see Bootstrap SmartStore indexes.

To deploy SmartStore on a cluster, see Deploy SmartStore on a new indexer cluster.

Prerequisites

Read:

Cautions

Be aware of these configuration issues:

  • The value of the path setting for each remote volume stanza must be unique to the indexer. You can share remote volumes only among indexes within a single standalone indexer. In other words, if indexes on one indexer use a particular remote volume, no index on any other standalone indexer or indexer cluster can use the same remote volume.
  • Leave maxDataSize at its default value of "auto" (750MB) for each SmartStore index.
  • The coldPath setting for each SmartStore index requires a value, even though the setting is ignored except in the case of migrated indexes.
  • The thawedPath setting for each SmartStore index requires a value, even though the setting has no practical purpose because you cannot thaw data to a SmartStore index. See Thawing data and SmartStore.
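Because remote volume paths must be unique to the indexer, two standalone indexers can share a bucket only if they point at different paths within it. A minimal sketch (the bucket name and path segments are hypothetical placeholders):

```
# indexes.conf on indexer A
[volume:remote_store]
storageType = remote
path = s3://mybucket/indexerA

# indexes.conf on indexer B -- same bucket, but a different, unique path
[volume:remote_store]
storageType = remote
path = s3://mybucket/indexerB
```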

Deploy SmartStore

The following procedure assumes that you are deploying SmartStore on a new indexer. It also assumes that you are deploying SmartStore for all indexes on the indexer, using a single remote location. If you want to deploy SmartStore for some indexes only, or if you want to use multiple remote locations for the SmartStore indexes, you can modify the procedure to fit your needs.

  1. Ensure that you have met the prerequisites. In particular, read:
  2. Understand SmartStore security strategies and prepare to implement them as necessary during the deployment process. See SmartStore security strategies.
  3. Install a new Splunk Enterprise instance to serve as the standalone indexer. For information on how to install Splunk Enterprise, read the Installation Manual.
  4. Create or edit $SPLUNK_HOME/etc/system/local/indexes.conf and specify the SmartStore settings, as shown in the example below.

    This example configures SmartStore indexes, using an S3 remote object store. The SmartStore-related settings are configured at the global level, which means that all indexes are SmartStore-enabled, and they all use a single remote storage volume, named "remote_store". The example also creates one new index, "cs_index".

    [default]
    # Configure all indexes to use the SmartStore remote volume called
    # "remote_store".
    # Note: If you want only some of your indexes to use SmartStore, 
    # place this setting under the individual stanzas for each of the 
    # SmartStore indexes, rather than here.
    remotePath = volume:remote_store/$_index_name
    
    # Configure the remote volume
    [volume:remote_store]
    storageType = remote
    
    # On the next line, the volume's path setting points to the remote storage location 
    # where indexes reside. Each SmartStore index resides directly below the location 
    # specified by the path setting. The <scheme> identifies a supported remote 
    # storage system type, such as S3. The <remote-location-specifier> is a 
    # string specific to the remote storage system that specifies the location 
    # of the indexes inside the remote system. 
    # This is an S3 example: "path = s3://mybucket/some/path". 
    
    path = <scheme>://<remote-location-specifier>
    
    # The following S3 settings are required only if you're using the access and secret 
    # keys. They are not needed if you are using AWS IAM roles.
    
    remote.s3.access_key = <S3 access key>
    remote.s3.secret_key = <S3 secret key>
    remote.s3.endpoint = <https|http>://<S3 host>
    
    # This example stanza configures a custom index, "cs_index".
    [cs_index]
    homePath = $SPLUNK_DB/cs_index/db
    # SmartStore-enabled indexes do not use thawedPath or coldPath, but you must still specify them here.
    coldPath = $SPLUNK_DB/cs_index/colddb
    thawedPath = $SPLUNK_DB/cs_index/thaweddb
    

    For details on SmartStore-specific settings, see Configure SmartStore. Also see indexes.conf.spec in the Admin Manual.

  5. Restart the indexer.
  6. Test the deployment.
    1. To confirm remote storage access:
      1. Place a sample text file in the remote store.
      2. On the indexer, run this command, which recursively lists any files that are present in the remote store:
        splunk cmd splunkd rfs -- ls --starts-with volume:remote_store
        

      If you see the sample file when you run the command, you have access to the remote store.
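      For example, if the remote store is S3 and the AWS CLI is configured with write access to the bucket, the round trip might look like this (the bucket and path are hypothetical placeholders; substitute the value of your volume's path setting):

```shell
# Place a sample text file in the remote store (hypothetical bucket/path,
# matching the volume's "path" setting in indexes.conf).
echo "smartstore connectivity test" > sample.txt
aws s3 cp sample.txt s3://mybucket/some/path/sample.txt

# On the indexer, recursively list the files under the remote volume.
splunk cmd splunkd rfs -- ls --starts-with volume:remote_store
```

      If sample.txt appears in the listing, the indexer can reach the remote store.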

    2. Validate data transfer to the remote store:
      1. Send some data to the indexer.
      2. Wait for buckets to roll. If you don't want to wait for buckets to roll naturally, you can manually roll some buckets:
        splunk _internal call /data/indexes/<index_name>/roll-hot-buckets -auth <admin>:<password>
        
      3. Look for warm buckets being uploaded to remote storage.
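      Using the custom index from the example above, the two steps might look like this on the indexer (the sample file and credentials are hypothetical placeholders):

```shell
# Index a sample file into cs_index (hypothetical file and credentials).
splunk add oneshot /var/log/messages -index cs_index -auth admin:changeme

# Roll the index's hot buckets to warm, which triggers upload to the remote store.
splunk _internal call /data/indexes/cs_index/roll-hot-buckets -auth admin:changeme
```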
    3. Validate data transfer from the remote store:

      Note: At this point, you should be able to run normal searches against this data. In the majority of cases, you will not be transferring any data from the remote storage, because the data will already be in the local cache. Therefore, to validate data transfer from the remote store, it is recommended that you first evict a bucket from the local cache.
      1. Evict a bucket from the cache, with a POST to this REST endpoint:
        services/admin/cacheman/<cid>/evict
        

        where <cid> is bid|<bucketId>|. For example: "bid|taktaklog~0~7D76564B-AA17-488A-BAF2-5353EA0E9CE5|"

      2. Run a search that requires data from the evicted bucket.

      The indexer must now transfer the bucket from remote storage to run the search. After running the search, you can check that the bucket has reappeared in the cache.
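      Because the <cid> contains literal pipe characters, it must be percent-encoded when you address the REST endpoint over HTTP. A minimal sketch, using the example bucket ID above (the host and credentials in the commented curl call are hypothetical):

```shell
# Build the cache ID (cid) for the bucket to evict: "bid|<bucketId>|",
# with each "|" percent-encoded as %7C for use in a URL.
bucket_id="taktaklog~0~7D76564B-AA17-488A-BAF2-5353EA0E9CE5"
encoded_cid=$(printf 'bid|%s|' "$bucket_id" | sed 's/|/%7C/g')
echo "$encoded_cid"

# POST the eviction request (hypothetical host and credentials):
# curl -k -u admin:changeme -X POST \
#     "https://localhost:8089/services/admin/cacheman/${encoded_cid}/evict"
```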

  7. Perform all other configurations necessary for any deployment of a standalone indexer. For example, configure the connections between indexer and forwarders and between indexer and search head.

Follow-on steps

Once your indexer is running with SmartStore, several configuration matters warrant your immediate attention. In particular:

  • Edit $SPLUNK_HOME/etc/system/local/indexes.conf to configure data retention settings for the SmartStore indexes. This step is extremely important, to avoid unwanted bucket freezing and possible data loss. SmartStore bucket-freezing behavior and settings are different from the non-SmartStore behavior and settings.

  • Edit $SPLUNK_HOME/etc/system/local/server.conf to make any necessary changes to the SmartStore-related server.conf settings. In particular, configure the cache size to fit the needs of your deployment. See Configure the SmartStore cache manager.
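For example, a cache size cap might look like this in server.conf (the 300000 MB figure is purely illustrative; size the cache for your own data volume and search patterns):

```
# server.conf on the indexer
[cachemanager]
# Maximum local cache size, in megabytes.
max_cache_size = 300000
```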

After you make these changes, restart the indexer.

For details on other SmartStore settings, see Configure SmartStore.


This documentation applies to the following versions of Splunk® Enterprise: 7.3.0, 7.3.1, 7.3.2, 7.3.3, 8.0.0

