Splunk® Enterprise

Managing Indexers and Clusters of Indexers


SmartStore security strategies

SmartStore security strategies vary according to the type of remote storage service. This topic covers security when using S3 as the remote storage service.

Authenticate with the remote storage service

If the indexer or indexer cluster is running on EC2, use the access and secret keys from its IAM role.

If the indexer or indexer cluster is not running on EC2, use hardcoded keys in indexes.conf. These are the relevant settings for hardcoding the S3 keys:

  • remote.s3.access_key. Specifies the access key to use when authenticating with the remote storage system.
  • remote.s3.secret_key. Specifies the secret key to use when authenticating with the remote storage system.
  • remote.s3.endpoint. Specifies the URL of the remote storage system. This setting tells the indexer where to go for S3 authentication. Use the value for the S3 bucket region. For example, https://s3-us-west-2.amazonaws.com.

For more information on these attributes, see the indexes.conf spec file.
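For example, a remote volume stanza with hardcoded keys might look like the following sketch. The volume name, bucket path, and key values are placeholders; substitute your own.

# Example remote volume with hardcoded S3 credentials (illustrative values)
[volume:remote_store]
storageType = remote
path = s3://<bucket_name>/<prefix>
remote.s3.access_key = <access_key>
remote.s3.secret_key = <secret_key>
remote.s3.endpoint = https://s3-us-west-2.amazonaws.com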

The credentials you use, whether from the IAM role or from indexes.conf, need permission to perform S3 operations. They also need permission to perform KMS operations, if you are encrypting data-at-rest on the remote store.

Manage SSL certificates for the remote store

The SSL certificate settings vary according to the remote storage service type. This section provides information for managing SSL for an S3 remote store, using the settings provided in indexes.conf. For more details on any of these settings, as well as for information on additional S3-related SSL settings, see the indexes.conf spec file.

The S3 SSL settings are overlaid on the sslConfig stanza in server.conf, except for sslVerifyServerCert, sslAltNameToCheck, and sslCommonNameToCheck. Therefore, if you run into issues, consult the server.conf SSL settings, in addition to the remote-storage-specific settings.
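Because of this overlay, it can help during troubleshooting to review the [sslConfig] stanza in server.conf alongside the remote.s3.* settings. The following is a minimal sketch of the kind of server.conf settings involved; the values shown are illustrative only, not required values.

# server.conf (illustrative sketch of the overlaid SSL settings)
[sslConfig]
sslVersions = tls1.2
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem
cipherSuite = <your approved cipher list>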

Specify SSL settings on a per-remote-volume basis.

The following table includes common attributes and their recommended values.

SSL setting | Description | Recommended value
remote.s3.sslVerifyServerCert | Specifies whether to check the server certificate provided by the S3 endpoint. | true
remote.s3.sslVersions | The SSL version to use. | tls1.2
remote.s3.sslAltNameToCheck | List of alternative names in the certificate presented by the server to match against. For example, s3.<region>.amazonaws.com. | N/A
remote.s3.sslRootCAPath | Absolute path to the PEM format file containing the list of root certificates. | N/A
remote.s3.cipherSuite | Ciphers to use to connect with S3. | Check with your security experts. See the example value below.
remote.s3.ecdhCurves | ECDH curves to send. | Check with your security experts. See the example value below.

Example values for remote.s3.cipherSuite and remote.s3.ecdhCurves:

remote.s3.cipherSuite = ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256
remote.s3.ecdhCurves = prime256v1, secp384r1, secp521r1
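Putting these settings together, a per-volume SSL configuration might look like the following sketch. The volume name, bucket path, region, and certificate path are placeholders.

# Example remote volume with SSL verification enabled (illustrative values)
[volume:remote_store]
storageType = remote
path = s3://<bucket_name>/<prefix>
remote.s3.sslVerifyServerCert = true
remote.s3.sslVersions = tls1.2
remote.s3.sslAltNameToCheck = s3.<region>.amazonaws.com
remote.s3.sslRootCAPath = $SPLUNK_HOME/etc/auth/s3_rootcert.pem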

Encrypt the data on the remote store

SmartStore supports client-side and server-side encryption of data at rest (data that is neither in use nor in transit) on S3. SmartStore supports four encryption schemes through the remote.s3.encryption setting in the indexes.conf configuration file. You must use configuration files to encrypt data on S3 volumes; there is no other method to perform this configuration.

When you enable encryption, depending on the encryption method you use, the Splunk platform generates an encryption key and encrypts data that you upload to the target volume with this key. After a certain interval, the platform generates a new key.

With most encryption methods, the Splunk platform instance that does the encryption must maintain a connection to AWS or KMS. Failure to maintain this connection can cause problems with generating new keys for encryption.

The high-level procedure to encrypt data on a SmartStore volume on a single Splunk platform instance follows. If you want to encrypt data on a SmartStore volume on an indexer cluster, do not follow this procedure. Instead, see Update common peer configurations and apps. When you perform that procedure, supply the settings that appear in this procedure.

  1. Choose the encryption method that you want to use on a SmartStore volume.

    Choosing the encryption method is a one-time decision. You cannot change the encryption method later.

  2. On the Splunk platform instance where you want to encrypt data on a SmartStore volume, open the $SPLUNK_HOME/etc/system/local/indexes.conf file for editing.
  3. Specify the type of encryption method you want to apply to each SmartStore volume by using the remote.s3.encryption setting under the [volume:<volume_name>] stanza for that volume:
    [volume:myvolume]
    remote.s3.encryption = sse-s3 | sse-kms | sse-c | cse | none 
    
  4. Depending on the type of encryption you use, specify additional settings that are required to interact with AWS or KMS to do the encryption. See the encryption examples later in this topic for information on the settings to use.
  5. Save the indexes.conf file and close it.
  6. Restart the Splunk platform.

Encryption occurs at the time of data upload to the volumes that you specify, and remains in effect until you change the encryption scheme again. When you configure encryption for the remote volume, you do not cause data that is already on the volume to be encrypted.

If you disable encryption, you do not cause existing encrypted data to be decrypted. Any encrypted data becomes unusable, because the Splunk platform cannot decrypt it.

If you do not already know which encryption scheme you want to use, the best choice for server-side encryption is sse-c (server-side encryption with customer keys). This method avoids running into potential throttling issues from KMS. For client-side encryption, cse is the best and only choice.

  • For detailed information on the settings to use for encryption, see the indexes.conf spec file.
  • For information on configuring server-side encryption in AWS, see the Amazon documentation.

Server-side encryption with customer-provided encryption keys (sse-c)

Here is an example of setting server-side encryption with customer keys. All of these settings go into the indexes.conf configuration file.

[volume:example_volume]
remote.s3.encryption = sse-c
remote.s3.encryption.sse-c.key_type = kms 
remote.s3.encryption.sse-c.key_refresh_interval = 86400
# 86400 equals 24 hours. This is the default and recommended value. The minimum value is 3600.
# Setting a very low value can degrade performance.
remote.s3.kms.auth_region = <aws_region>
remote.s3.kms.key_id = <kms_keyid> 
# The kms_keyid must be a unique key ID, the Amazon Resource Name (ARN) of the CMK,
# or the name or ARN of an alias that points to the CMK.

# SSL settings for KMS communication
remote.s3.kms.sslVerifyServerCert = true
remote.s3.kms.sslVersions = tls1.2
remote.s3.kms.sslAltNameToCheck = kms.<aws_region>.amazonaws.com
remote.s3.kms.sslRootCAPath = $SPLUNK_HOME/etc/auth/kms_rootcert.pem  
remote.s3.kms.cipherSuite = ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256
remote.s3.kms.ecdhCurves = prime256v1, secp384r1, secp521r1

Server-side encryption with Amazon S3-managed encryption keys (sse-s3)

Here is an example of setting server-side encryption with AES256. All of these settings go into the indexes.conf configuration file.

[volume:example_volume]
remote.s3.encryption = sse-s3 

Server-side encryption with customer master keys stored in AWS KMS (sse-kms)

Here is an example of setting server-side encryption with KMS-managed keys. All of these settings go into the indexes.conf configuration file.

[volume:example_volume]
remote.s3.encryption = sse-kms
remote.s3.kms.key_id = <kms_keyid> 

Client-side encryption (cse)

Client-side encryption of data ensures that the cloud service provider (CSP) cannot read the encrypted data in any way.

SmartStore contacts the AWS KMS service on an interval that you specify. It uses KMS to generate data encryption keys (DEKs) based on the Customer Master Key (CMK) that KMS stores. When you upload data after you enable CSE, SmartStore stores the data encryption keys with the uploaded data buckets.

There is no support for key revocation of any kind in the software. You are responsible for managing the customer master key and any DEKs. Also, you cannot encrypt data that has already been encrypted with a new DEK. You must decrypt the data first, then have the platform generate a new DEK before uploading the data again.

You must use AWS KMS to take advantage of this feature. It does not work with any other type of key management service.

There are some caveats to enabling client-side encryption on an index in SmartStore:

  • Performance can degrade by up to 20% due to the data encryption.
  • There are limitations to some of the remote file system CLI commands that you can run on a SmartStore volume for maintenance or troubleshooting purposes:
    • The splunk cmd splunkd rfs getF command only lets you download the receipt.json file, and assumes the file has not been encrypted.
    • The splunk cmd splunkd rfs putF command only lets you upload the receipt.json file, and CSE does not encrypt this file on upload.
    • The splunk cmd splunkd rfs get command must include receipt.json as an argument, otherwise it fails.

Here is an example of setting client-side encryption. All of these settings go into the indexes.conf configuration file.

# The volume stanza and path specify the name of the remote volume. 
# This is the volume that the index that will store the encrypted data uses.
# The Splunk platform expects the path to be in the remote storage format.
[volume:<VOLUME_NAME>]
path = <S3_BUCKET_PATH>
storageType = remote

# The following settings facilitate interaction with AWS KMS. You need
# KMS to generate the client-side encryption key. These settings
# specify the KMS endpoint and authentication region.
#
# You must configure one of the following two settings.
# If you configure neither setting, the Splunk platform attempts
# to use the 'remote.s3.endpoint' and 'remote.s3.auth_region' settings
# before it fails to start.
remote.s3.kms.endpoint = <KMS_ENDPOINT>
remote.s3.kms.auth_region = <KMS_AUTH_REGION>

# The unique ID or Amazon Resource Name (ARN) of the 
# customer master key to use, or the alias name or ARN
# of the alias that refers to this key. 
remote.s3.kms.key_id = <KEY_ID>

# The signature version to use when authenticating the remote storage
# system supporting the S3 API. Since CSE uses KMS to manage encryption
# keys, you must set this to "v4".
remote.s3.signature_version = v4

# Enable client-side encryption on the remote volume.
remote.s3.encryption = cse

# The bucket encryption algorithm to use for CSE.
# Currently, "aes-256-gcm" is the only acceptable value.
remote.s3.encryption.cse.algorithm = aes-256-gcm

# How long to wait, in seconds, between generation of
# keys to encrypt data that is uploaded to S3 when CSE
# is active.
remote.s3.encryption.cse.key_refresh_interval = 86400

# The key mechanism to use for CSE. Currently, "kms"
# is the only acceptable value. You must configure this
# setting for the Splunk platform to start.
remote.s3.encryption.cse.key_type = kms


[<INDEX_NAME>]
# The index you specify for CSE of data. When the Splunk
# platform adds data to this index, it encrypts the data with
# the key that you provide. The index references
# the remote S3 storage volume.
homePath = $SPLUNK_DB/<INDEX_NAME>/db
coldPath = $SPLUNK_DB/<INDEX_NAME>/colddb
thawedPath = $SPLUNK_DB/<INDEX_NAME>/thaweddb
remotePath = volume:<VOLUME_NAME>/$_index_name