Splunk® Enterprise

Managing Indexers and Clusters of Indexers

Deploy SmartStore on a new standalone indexer

During this procedure, you:

  1. Install a new indexer.
  2. Configure the indexer to access the remote store.
  3. Test the deployment.

Note: This procedure configures SmartStore for a new standalone indexer only. It does not describe how to load existing data onto an indexer. To migrate existing local indexes to SmartStore, see Migrate existing data on a standalone indexer to SmartStore. To bootstrap existing SmartStore data onto a new standalone indexer, see Bootstrap SmartStore indexes.

To deploy SmartStore on a cluster, see Deploy SmartStore on a new indexer cluster.

Prerequisites

Before you deploy, read the SmartStore overview, architecture, and system requirements topics in this manual.

Cautions

Be aware of these configuration issues:

  • The value of the path setting for each remote volume stanza must be unique to the indexer. You can share remote volumes only among indexes within a single standalone indexer. In other words, if indexes on one indexer use a particular remote volume, no index on any other standalone indexer or indexer cluster can use the same remote volume.
  • Leave maxDataSize at its default value of "auto" (750MB) for each SmartStore index.
  • The coldPath setting for each SmartStore index requires a value, even though the setting is ignored except in the case of migrated indexes.
  • The thawedPath setting for each SmartStore index requires a value, even though the setting has no practical purpose because you cannot thaw data to a SmartStore index. See Thawing data and SmartStore.

Deploy SmartStore

The following procedure assumes that you are deploying SmartStore on a new indexer. It also assumes that you are deploying SmartStore for all indexes on the indexer, using a single remote location. If you want to deploy SmartStore for only some indexes, or if you want to use multiple remote locations for the SmartStore indexes, you can modify the procedure to fit your needs.

  1. Ensure that you have met the prerequisites described earlier in this topic.
  2. Understand SmartStore security strategies and prepare to implement them as necessary during the deployment process. See the topic on security strategies for your remote storage type.
  3. Install a new Splunk Enterprise instance to serve as the standalone indexer. For information on how to install Splunk Enterprise, read the Installation Manual.
  4. Create or edit $SPLUNK_HOME/etc/system/local/indexes.conf and specify the SmartStore settings, as shown in the examples below.

    Using an S3 remote object store:
    This example configures SmartStore indexes, using an S3 remote object store. The SmartStore-related settings are configured at the global level, which means that all indexes are SmartStore-enabled, and they all use a single remote storage volume, named "remote_store". The example also creates one new index, "cs_index".
    [default]
    # Configure all indexes to use the SmartStore remote volume called
    # "remote_store".
    # Note: If you want only some of your indexes to use SmartStore, 
    # place this setting under the individual stanzas for each of the 
    # SmartStore indexes, rather than here.
    remotePath = volume:remote_store/$_index_name
    
    # Configure the remote volume.
    [volume:remote_store]
    storageType = remote
    
    # The volume's 'path' setting points to the remote storage location where
    # indexes reside. Each SmartStore index resides directly below the location 
    # specified by the 'path' setting.   
    path = s3://mybucket/some/path
    
    # The following S3 settings are required only if you're using the access and secret 
    # keys. They are not needed if you are using AWS IAM roles.
    
    remote.s3.access_key = <S3 access key>
    remote.s3.secret_key = <S3 secret key>
     remote.s3.endpoint = https|http://<S3 host>
    
    # This example stanza configures a custom index, "cs_index".
    [cs_index]
    homePath = $SPLUNK_DB/cs_index/db
    # SmartStore-enabled indexes do not use thawedPath or coldPath, but you must still specify them here.
    coldPath = $SPLUNK_DB/cs_index/colddb
    thawedPath = $SPLUNK_DB/cs_index/thaweddb
    

    For details on these settings, see Configure SmartStore. Also see indexes.conf.spec in the Admin Manual.

    Using a GCS remote object store:
    This example configures SmartStore indexes, using a GCS remote object store. The SmartStore-related settings are configured at the global level, which means that all indexes are SmartStore-enabled, and they all use a single remote storage volume, named "remote_store". The example also creates one new index, "cs_index".

    [default]
    # Configure all indexes to use the SmartStore remote volume called
    # "remote_store".
    # Note: If you want only some of your indexes to use SmartStore, 
    # place this setting under the individual stanzas for each of the 
    # SmartStore indexes, rather than here.
    remotePath = volume:remote_store/$_index_name
    
    # Configure the remote volume.
    [volume:remote_store]
    storageType = remote
    
    # The volume's 'path' setting points to the remote storage location where
    # indexes reside. Each SmartStore index resides directly below the location 
    # specified by the 'path' setting. 
    path = gs://mybucket/some/path
    
    # There are several ways to specify credentials. For details, see the topic, 
    # "SmartStore on GCS security strategies." One way to specify credentials 
    # is to point to a file, as shown here.
    remote.gs.credential_file = credential.json
    
    # This example stanza configures a custom index, "cs_index".
    [cs_index]
    homePath = $SPLUNK_DB/cs_index/db
    # SmartStore-enabled indexes do not use thawedPath or coldPath, but you must still specify them here.
    coldPath = $SPLUNK_DB/cs_index/colddb
    thawedPath = $SPLUNK_DB/cs_index/thaweddb
    

    For details on these settings, see Configure SmartStore. Also see indexes.conf.spec in the Admin Manual.

    Using an Azure Blob remote object store:
    This example configures SmartStore indexes, using an Azure Blob remote object store. The SmartStore-related settings are configured at the global level, which means that all indexes are SmartStore-enabled, and they all use a single remote storage volume, named "remote_store". The example also creates one new index, "cs_index".

    [default]
    # Configure all indexes to use the SmartStore remote volume called
    # "remote_store".
    # Note: If you want only some of your indexes to use SmartStore, 
    # place this setting under the individual stanzas for each of the 
    # SmartStore indexes, rather than here.
    remotePath = volume:remote_store/$_index_name
    
    # Configure the remote volume.
    [volume:remote_store]
    storageType = remote
    
    # The volume's 'path' setting points to the remote storage location where
    # indexes reside. Each SmartStore index resides directly below the location 
    # specified by the 'path' setting. 
    # There are multiple ways to fully specify the location. Here, for example, the
    # Azure container is specified in its own setting, but it can also be specified as 
    # part of the "path" setting. See the indexes.conf.spec file for more information.
    remote.azure.endpoint = https://account-name.blob.core.windows.net
    remote.azure.container_name = your-container
    path = azure://example/20_39/TID_01
    
    # To authenticate with the remote storage service, you must use either hardcoded access/secret 
    # keys or Azure Active Directory with configured Managed Identity. See the topic, "SmartStore on 
    # Azure Blob security strategies."  
    
    # This example stanza configures a custom index, "cs_index".
    [cs_index]
    homePath = $SPLUNK_DB/cs_index/db
    # SmartStore-enabled indexes do not use thawedPath or coldPath, but you must still specify them here.
    coldPath = $SPLUNK_DB/cs_index/colddb
    thawedPath = $SPLUNK_DB/cs_index/thaweddb
    

    For details on these settings, see Configure SmartStore. Also see indexes.conf.spec in the Admin Manual.
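
     As the comments in the examples above note, you can also scope SmartStore to specific indexes rather than enabling it globally. A minimal variation, using the same hypothetical "remote_store" volume and "cs_index" index from the examples, moves remotePath out of [default] and into the individual index stanza:

     # Configure the remote volume as shown in the examples above.
     [volume:remote_store]
     storageType = remote
     path = s3://mybucket/some/path

     # Only this index is SmartStore-enabled. Indexes without a remotePath
     # setting remain local, non-SmartStore indexes.
     [cs_index]
     remotePath = volume:remote_store/$_index_name
     homePath = $SPLUNK_DB/cs_index/db
     coldPath = $SPLUNK_DB/cs_index/colddb
     thawedPath = $SPLUNK_DB/cs_index/thaweddb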

  5. Restart the indexer.
  6. Test the deployment.
    1. To confirm remote storage access:
      1. Place a sample text file in the remote store.
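        For example, if your remote store is the S3 bucket from the earlier example and the AWS CLI is installed and configured (an assumption; any method of uploading an object to your remote store works equally well), you can copy a file like this:
        aws s3 cp sample.txt s3://mybucket/some/path/sample.txt
        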
      2. On the indexer, run this command, which recursively lists any files that are present in the remote store:
        splunk cmd splunkd rfs -- ls --starts-with volume:remote_store
        

      If you see the sample file when you run the command, you have access to the remote store.

    2. Validate data transfer to the remote store:
      1. Send some data to the indexer.
      2. Wait for buckets to roll. If you don't want to wait for buckets to roll naturally, you can manually roll some buckets:
        splunk _internal call /data/indexes/<index_name>/roll-hot-buckets -auth <admin>:<password>
        
      3. Look for warm buckets being uploaded to remote storage.
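        One way to check is to rerun the listing command from the earlier access test and look for bucket directories under the index's name. For the example "cs_index" index (filtering the output with grep for convenience):
        splunk cmd splunkd rfs -- ls --starts-with volume:remote_store | grep cs_index
        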
    3. Validate data transfer from the remote store:

      Note: At this point, you should be able to run normal searches against this data. In most cases, no data transfers from the remote store, because the data is already in the local cache. Therefore, to validate data transfer from the remote store, first evict a bucket from the local cache.
      1. Evict a bucket from the cache, with a POST to this REST endpoint:
        services/admin/cacheman/<cid>/evict
        

        where <cid> is bid|<bucketId>|. For example: "bid|cs_index~0~7D76564B-AA17-488A-BAF2-5353EA0E9CE5|"

        Note: To get the bucketId for a bucket, run a search on your test index. For example:

        splunk search "|rest /services/admin/cacheman | search title=*cs_index*  | fields title" -auth <admin>:<password>
        
      2. Run a search that requires data from the evicted bucket.

      The indexer must now transfer the bucket from remote storage to run the search. After running the search, you can check that the bucket has reappeared in the cache.

  7. Perform all other configurations necessary for any deployment of a standalone indexer. For example, configure the connections between indexer and forwarders and between indexer and search head.
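
     For example, a minimal way to connect a forwarder to this indexer is to enable receiving on the indexer and point the forwarder's outputs at it. The following fragments are a sketch only; the host name is a placeholder for your environment, and 9997 is the conventional receiving port:

     # On the indexer: $SPLUNK_HOME/etc/system/local/inputs.conf
     [splunktcp://9997]
     disabled = 0

     # On each forwarder: $SPLUNK_HOME/etc/system/local/outputs.conf
     [tcpout]
     defaultGroup = primary_indexers

     [tcpout:primary_indexers]
     server = indexer1.example.com:9997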

Follow-on steps

Once your indexer is running with SmartStore, there are a number of configuration matters that warrant your immediate attention. In particular:

  • Configure data retention for your SmartStore indexes by editing the retention settings in indexes.conf (see the sketch after this list). This step is extremely important, to avoid unwanted bucket freezing and possible data loss: SmartStore bucket-freezing behavior and settings are different from the non-SmartStore behavior and settings. See Configure data retention for SmartStore indexes.

  • Edit $SPLUNK_HOME/etc/system/local/server.conf to make any necessary changes to the SmartStore-related server.conf settings. In particular, configure the cache size to fit the needs of your deployment. See Configure the SmartStore cache manager.
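
As an illustration of the two items above, the following fragments show the kinds of settings involved. The values are placeholders, not sizing recommendations; see the linked topics for guidance on choosing them:

    # $SPLUNK_HOME/etc/system/local/indexes.conf
    [cs_index]
    # SmartStore retention is driven primarily by these settings.
    maxGlobalDataSizeMB = 500000
    frozenTimePeriodInSecs = 7776000

    # $SPLUNK_HOME/etc/system/local/server.conf
    [cachemanager]
    # Maximum disk space, in MB, that the cache manager uses for cached buckets.
    max_cache_size = 300000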

After you make these changes, restart the indexer.

For details on other SmartStore settings, see Configure SmartStore.


