Manage DDSS self storage locations
The Admin Config Service (ACS) API lets you manage DDSS self storage locations for your Splunk Cloud Platform indexes. You can use the ACS API to view and configure self storage locations on both AWS and GCP programmatically without using the Splunk Web UI.
This topic covers how to manage DDSS self storage locations only. For instructions on how to configure indexes for use with DDSS/DDAA, see Manage indexes in Splunk Cloud Platform.
For more information on how to configure DDSS self storage locations, see Store expired Splunk Cloud Platform data in your private archive in the Splunk Cloud Platform Admin Manual.
Requirements
To manage DDSS self storage locations using the ACS API:
- You must have Splunk Cloud Platform version 8.2.2106 or higher.
- You must have the sc_admin role.
- Your deployment must have one or more separate search heads or a search head cluster. ACS is not supported on single-instance deployments.
- You must have an S3 or GCS bucket created in the same AWS or GCP region as your Splunk Cloud Platform deployment.
Set up the ACS API
Before using the ACS API, you must download the ACS Open API 3.0 specification, which includes the parameters, response codes, and other data you need to work with the ACS API. You must also create an authentication token in Splunk Cloud Platform for use with ACS endpoint requests. For details on how to set up the ACS API for index management, see Set up the ACS API.
Manage DDSS self storage locations using the ACS API
You can use the ACS API to list, describe, and configure DDSS self storage locations.
Before you can configure a self storage location, you must create an S3 or GCS bucket in the same AWS or GCP region as your Splunk Cloud Platform deployment.
ACS does not support modifying or deleting self storage locations.
List self storage locations
To view a list of all valid self storage locations configured for your deployment, send an HTTP GET request to the cloud-resources/self-storage-locations/buckets endpoint. For example:

curl 'https://admin.splunk.com/{stack}/adminconfig/v2/cloud-resources/self-storage-locations/buckets' \
--header 'Authorization: Bearer eyJraWQiOiJzcGx1bmsuc2VjcmV0Iiwi…'
The request returns information about each self storage location, including bucketName, bucketPath, title, and uri. For example:

{
    "selfStorageLocations": [
        {
            "bucketName": "<bucket-name>",
            "bucketPath": "<bucket-name>/dup-title-ui",
            "description": "Test duplicate title from UI",
            "folder": "dup-title-ui",
            "title": "test-bucket-1-with-message",
            "uri": "s3://<bucket-name>/dup-title-ui"
        },
        {
            "bucketName": "<bucket-name>",
            "bucketPath": "<bucket-name>/some-folder",
            "description": "Test",
            "folder": "some-folder",
            "title": "test-bucket-1-with-message",
            "uri": "s3://<bucket-name>/some-folder"
        },
        {
            "bucketName": "<bucket-name>",
            "bucketPath": "<bucket-name>/with-message",
            "description": "Test configuring ddss with ACS and show async message",
            "folder": "with-message",
            "title": "test-bucket-1-with-message",
            "uri": "s3://<bucket-name>/with-message"
        }
    ]
}
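If you are scripting against this endpoint, the response is plain JSON and is easy to post-process. The following is a minimal Python sketch (the bucket name and values are hypothetical placeholders, and the response is pasted in rather than fetched from the API) that collects the bucketPath identifier of each configured location:

```python
import json

# Hypothetical sample response body from the
# cloud-resources/self-storage-locations/buckets endpoint.
response_body = """
{
  "selfStorageLocations": [
    {"bucketName": "my-bucket", "bucketPath": "my-bucket/some-folder",
     "description": "Test", "folder": "some-folder",
     "title": "test-bucket-1-with-message",
     "uri": "s3://my-bucket/some-folder"}
  ]
}
"""

def list_bucket_paths(body: str) -> list[str]:
    """Return the bucketPath identifier of every configured location."""
    return [loc["bucketPath"]
            for loc in json.loads(body)["selfStorageLocations"]]

print(list_bucket_paths(response_body))  # ['my-bucket/some-folder']
```

The bucketPath values returned here are the identifiers you pass to the describe endpoint, shown next.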
Describe self storage locations
To view information about a specific self storage location, send an HTTP GET request to the cloud-resources/self-storage-locations/buckets/{bucketPath} endpoint, specifying the bucketPath value in the request URL. bucketPath is a unique identifier that combines <bucket-name>/<bucket-folder>. The bucket path must be URL encoded. For example, <bucket-name>/some-folder must be passed as <bucket-name>%2Fsome-folder, as shown:

curl 'https://admin.splunk.com/{stack}/adminconfig/v2/cloud-resources/self-storage-locations/buckets/<bucket-name>%2Fsome-folder' \
--header 'Authorization: Bearer eyJraWQiOiJzcGx1bmsuc2VjcmV0Iiwi…'
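The URL-encoding step is the part most likely to trip up a script. A minimal Python sketch of building the encoded request URL, using only the standard library (the bucket name and stack are hypothetical placeholders):

```python
from urllib.parse import quote

def encode_bucket_path(bucket_name: str, folder: str) -> str:
    """URL-encode the bucketPath so the '/' separator becomes %2F."""
    return quote(f"{bucket_name}/{folder}", safe="")

path = encode_bucket_path("my-bucket", "some-folder")
url = ("https://admin.splunk.com/{stack}/adminconfig/v2/"
       "cloud-resources/self-storage-locations/buckets/" + path)
print(path)  # my-bucket%2Fsome-folder
```

Passing safe="" is what forces the slash to be percent-encoded; with the default safe="/", quote would leave it intact and the request would target the wrong path.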
The response shows information about the specific self storage location. For example:
For AWS deployments:
{
    "bucketName": "<bucket-name>",
    "bucketPath": "<bucket-name>/some-folder",
    "description": "Test",
    "folder": "some-folder",
    "title": "test-bucket-1-with-message",
    "uri": "s3://<bucket-name>/some-folder"
}
For GCP deployments:
{
    "bucketName": "<bucket-name>",
    "bucketPath": "<bucket-name>/some-folder",
    "description": "Test",
    "folder": "some-folder",
    "title": "test-bucket-with-message",
    "uri": "gs://<bucket-name>/some-folder"
}
The bucketPath value combines bucketName and folder and is used as an identifier for the self storage location.
Get prefix to configure a bucket
To configure a new self storage location, you must have the predefined bucket name prefix provided by Splunk Cloud Platform. This prefix contains your organization's Splunk Cloud ID, which is the first part of your Splunk Cloud URL, and a 12-character string (for AWS) or 4-character string (for GCP). For example, an AWS S3 bucket name has the following syntax:
{Splunk Cloud ID}-{12-character string}-{your bucket name}
To retrieve the prefix for your AWS or GCP bucket, send an HTTP GET request to the cloud-resources/self-storage-locations/configs/prefix endpoint. For example:

curl 'https://admin.splunk.com/{stack}/adminconfig/v2/cloud-resources/self-storage-locations/configs/prefix' \
--header 'Authorization: Bearer eyJraWQiOiJzcGx1bmsuc2VjcmV0Iiwi…'
The response includes the prefix. For example:
{
    "message": "Please create a bucket in the same region as your Splunk Cloud environment. The bucket must have 'acs-play-noah-aws-iycf10l9z5nl-' as the prefix in the name",
    "prefix": "acs-play-noah-aws-iycf10l9z5nl-"
}
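A script that creates buckets can use the returned prefix to validate candidate names before calling the cloud provider. A minimal sketch (the prefix is taken from the example response above; the bucket names are hypothetical):

```python
def is_valid_bucket_name(name: str, prefix: str) -> bool:
    """A bucket qualifies for DDSS only if its name starts with the
    prefix returned by the configs/prefix endpoint."""
    return name.startswith(prefix)

prefix = "acs-play-noah-aws-iycf10l9z5nl-"
print(is_valid_bucket_name(prefix + "my-ddss-archive", prefix))  # True
print(is_valid_bucket_name("my-ddss-archive", prefix))           # False
```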
For more information on bucket name prefixes, see Configure self storage locations in the Splunk Cloud Platform Admin Manual.
Get IAM policy for AWS S3 bucket
When you configure a new self storage location for AWS, you must specify an S3 bucket policy. This policy allows your Splunk Cloud Platform deployment to access your AWS S3 bucket.
To generate an AWS S3 bucket policy, send an HTTP GET request to the cloud-resources/self-storage-locations/buckets/{bucketName}/policy endpoint, specifying the AWS S3 bucket name in the URL. For example:

curl 'https://admin.splunk.com/{stack}/adminconfig/v2/cloud-resources/self-storage-locations/buckets/<bucket-name>/policy' \
--header 'Authorization: Bearer eyJraWQiOiJzcGx1bmsuc2VjcmV0Iiwi…'
The response includes the AWS S3 bucket policy, which you must apply to your S3 bucket in AWS. For example:
{
    "message": "Please copy and apply this bucket policy to your S3 bucket in AWS. Please refer to https://docs.splunk.com/Documentation/SplunkCloud/latest/Admin/DataSelfStorage for more info.",
    "policy": {
        "Statement": [
            {
                "Action": [
                    "s3:PutObject",
                    "s3:ListBucket"
                ],
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::594195655983:role/acs-play-noah-aws"
                },
                "Resource": [
                    "arn:aws:s3:::acs-play-noah-aws-iycf10l9z5nl-some-bucket",
                    "arn:aws:s3:::acs-play-noah-aws-iycf10l9z5nl-some-bucket/*"
                ]
            }
        ],
        "Version": "2012-10-17"
    }
}
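Because the policy is nested inside the response under a "policy" key, a script must extract it before applying it to the bucket. A minimal Python sketch (the sample response is abbreviated from the example above and pasted in rather than fetched; whether you then apply the file with the AWS CLI or console is up to you):

```python
import json

# Abbreviated sample response from the buckets/{bucketName}/policy endpoint.
response_body = json.dumps({
    "message": "Please copy and apply this bucket policy to your S3 bucket in AWS.",
    "policy": {
        "Statement": [{
            "Action": ["s3:PutObject", "s3:ListBucket"],
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::594195655983:role/acs-play-noah-aws"},
            "Resource": [
                "arn:aws:s3:::acs-play-noah-aws-iycf10l9z5nl-some-bucket",
                "arn:aws:s3:::acs-play-noah-aws-iycf10l9z5nl-some-bucket/*"
            ]
        }],
        "Version": "2012-10-17"
    }
})

def extract_policy(body: str) -> str:
    """Pull the 'policy' object out of the response and serialize it
    to a standalone JSON document ready to apply to the S3 bucket."""
    return json.dumps(json.loads(body)["policy"], indent=2)

policy_json = extract_policy(response_body)
```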
Get service accounts for GCP GCS bucket
When you create a new GCS bucket in your GCP environment, you must configure proper permissions for the two service accounts associated with your Splunk Cloud Platform deployment. For more information on how to configure permissions for GCP GCS buckets, see Configure self storage in GCP.
To retrieve the two service accounts for your GCP GCS bucket, send an HTTP GET request to the cloud-resources/self-storage-locations/configs/service-accounts endpoint. For example:

curl 'https://admin.splunk.com/{stack}/adminconfig/v2/cloud-resources/self-storage-locations/configs/service-accounts' \
--header 'Authorization: Bearer eyJraWQiOiJzcGx1bmsuc2VjcmV0Iiwi…'
The response shows the two service accounts for the GCP GCS bucket. For example:
{
    "message": "Please configure proper permissions for the GCP service accounts. Please refer to https://docs.splunk.com/Documentation/SplunkCloud/latest/Admin/DataSelfStorage for more info.",
    "serviceAccounts": {
        "clusterMaster": "indexes-acs-gcp-c0m1@indexes-acs-gcp-cdf8.iam.gserviceaccount.com",
        "indexer": "indexes-acs-gcp-idx@indexes-acs-gcp-cdf8.iam.gserviceaccount.com"
    }
}
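A script can turn this response into the gsutil commands that grant bucket access to both accounts. A minimal sketch, with two explicit assumptions: the response is pasted in rather than fetched, and the IAM role to grant is a placeholder argument, since the exact permissions required are defined in Configure self storage in GCP, not here:

```python
import json

# Sample response from the configs/service-accounts endpoint.
response_body = json.dumps({
    "serviceAccounts": {
        "clusterMaster": "indexes-acs-gcp-c0m1@indexes-acs-gcp-cdf8.iam.gserviceaccount.com",
        "indexer": "indexes-acs-gcp-idx@indexes-acs-gcp-cdf8.iam.gserviceaccount.com"
    }
})

def grant_commands(body: str, bucket: str, role: str) -> list[str]:
    """Build one 'gsutil iam ch' command per service account.
    The role is deployment-specific; consult the Admin Manual."""
    accounts = json.loads(body)["serviceAccounts"]
    return [f"gsutil iam ch serviceAccount:{email}:{role} gs://{bucket}"
            for email in (accounts["clusterMaster"], accounts["indexer"])]
```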
Configure a new self storage location
To configure a new self storage location for DDSS, send an HTTP POST request to the cloud-resources/self-storage-locations/buckets endpoint, specifying the title and bucketName parameters in the request body. You can optionally include values for the folder and description parameters.
Before you configure a new self storage location, you must create the storage bucket in your AWS or GCP environment and apply the correct AWS IAM policy or GCP service accounts. For more information, see Store expired Splunk Cloud Platform data in your private archive in the Splunk Cloud Platform Admin Manual.
For example:
curl -X POST 'https://admin.splunk.com/{stack}/adminconfig/v2/cloud-resources/self-storage-locations/buckets' \
--header 'Authorization: Bearer eyJraWQiOiJzcGx1bmsuc2VjcmV0Iiwi…' \
--header 'Content-Type: application/json' \
--data-raw '{
    "title": "test-title",
    "bucketName": "<bucket-name>",
    "folder": "string",
    "description": "string"
}'
For AWS S3 storage locations, the response appears as follows:
{
    "bucketName": "<bucket-name>",
    "bucketPath": "<bucket-name>/with-message",
    "description": "Test configuring ddss with ACS and show async message",
    "folder": "with-message",
    "title": "test-bucket-1-with-message",
    "uri": "s3://<bucket-name>/with-message"
}
For GCP GCS storage locations, the response appears as follows:
{
    "bucketName": "<bucket-name>",
    "bucketPath": "<bucket-name>/untitled-folder",
    "description": "Test configuring ddss with ACS on GCP Stack",
    "folder": "untitled-folder",
    "title": "test-bucket-for-gcp",
    "uri": "gs://<bucket-name>/untitled-folder"
}
The bucketPath value combines bucketName and folder and is used as an identifier for the self storage location.
This documentation applies to the following versions of Splunk Cloud Platform™: 8.2.2112, 8.2.2201, 8.2.2202, 8.2.2203, 9.0.2205, 9.0.2208, 9.0.2209, 9.0.2303, 9.0.2305, 9.1.2308, 9.1.2312, 9.2.2403, 9.2.2406 (latest FedRAMP release)