
Configure Security Lake inputs for the Splunk Add-on for AWS

Complete the steps to configure Security Lake inputs for the Splunk Add-on for Amazon Web Services (AWS):

  1. Manage accounts for the add-on as a prerequisite. See Manage accounts for the Splunk Add-on for AWS.
  2. Configure AWS services for the Amazon Security Lake input.
  3. Configure IAM policies for the Amazon Security Lake input.
  4. Configure Amazon Security Lake inputs either through Splunk Web or configuration files.

The Safari web browser is not supported for configuring an Amazon Security Lake input using Splunk Web at this time. Use Google Chrome or Firefox for your configurations instead.

Configuration prerequisites

  • SQS-based S3 inputs currently support a maximum age of 1 hour for SNS messages. If a message has not been processed or consumed by the input within 1 hour, it expires, which might lead to data loss.

Configure AWS services for the Amazon Security Lake input

After completing all the required configuration prerequisites, configure a subscriber with data access in your Amazon Security Lake service. This creates the resources needed to make the Amazon Security Lake events available to be consumed into your Splunk platform deployment.

Ensure all AWS prerequisites for setting up Subscriber data access are met. For more information, see the Managing data access for Security Lake subscribers topic in the Amazon Security Lake documentation.

Set up a subscriber

Perform the following steps to set up a subscriber for the Splunk Add-on for AWS.

  1. Log into the AWS console.
  2. Navigate to the Security Lake service Summary page.
  3. In the navigation pane on the left side, choose Subscribers.
  4. Click Create subscriber.
  5. On the Create subscriber page, fill out the details that apply to your deployment.
    1. Add a name for your subscriber.
    2. Add an optional description for your subscriber.
    3. For Log and event sources, select the event sources that you want to subscribe to for your data collection. Sources that are not selected will not be collected into your Splunk platform deployment.
    4. Select S3 as your data collection method.
    5. Enter your Account ID from where you want to collect events.
    6. Enter a placeholder value for External ID. External ID is not supported, but the field must be populated when creating a subscriber. For example, enter placeholder-value-splunk.
    7. For Notification details, select SQS queue.
    8. Click the Create button.
  6. On the subscribers Details page, confirm that the subscriber has been created with the appropriate parameters.
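
If you prefer to script the subscriber setup rather than use the console, the following sketch shows roughly how it could be done with boto3. This is a minimal sketch, not the add-on's own method: the account ID, external ID placeholder, and log sources are illustrative values, and the create_subscriber parameter names reflect the Security Lake API at the time of writing, so confirm them against your boto3 version before use.

# Sketch only: creates a Security Lake subscriber with S3 data access.
# All identifiers below are placeholders; parameter names are assumptions
# based on the Security Lake CreateSubscriber API and may differ by SDK version.
import boto3

securitylake = boto3.client("securitylake", region_name="us-east-2")

response = securitylake.create_subscriber(
    subscriberName="splunk-add-on-for-aws",
    subscriberDescription="Subscriber for the Splunk Add-on for AWS",
    accessTypes=["S3"],  # S3 data access with SQS notification details
    subscriberIdentity={
        # External ID is not used by the add-on, but the API requires a value.
        "externalId": "placeholder-value-splunk",
        "principal": "123456789012",  # AWS account that collects the events
    },
    sources=[
        {"awsLogSource": {"sourceName": "CLOUD_TRAIL_MGMT", "sourceVersion": "2.0"}},
        {"awsLogSource": {"sourceName": "VPC_FLOW", "sourceVersion": "2.0"}},
    ],
)
print(response)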

Verify information in SQS Queue

Perform the following steps in your Amazon deployment to verify the information in the SQS Queue that Security Lake creates.

  1. In your AWS console, navigate to the Amazon SQS service.
  2. In the Queues section, navigate to the SQS Queue that Security Lake created, and click on the name.
  3. On the information page for the SQS Queue that Security Lake created, perform the following validation steps.
    1. Click on the Monitoring tab to verify that events are flowing into the SQS Queue.
    2. Click on the Dead-letter queue tab to verify that a dead-letter queue (DLQ) has been created. If a DLQ has not been created, see the Configuring a dead-letter queue (console) topic in the AWS documentation.
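
If you prefer to verify the queue from a script instead of the console, the following minimal boto3 sketch performs the same checks. The queue name and region are placeholders; substitute the values for the queue that Security Lake created.

import boto3

# Placeholder name for the queue that Security Lake created.
QUEUE_NAME = "AmazonSecurityLake-example-queue"

sqs = boto3.client("sqs", region_name="us-east-2")
queue_url = sqs.get_queue_url(QueueName=QUEUE_NAME)["QueueUrl"]

attrs = sqs.get_queue_attributes(
    QueueUrl=queue_url,
    AttributeNames=["ApproximateNumberOfMessages", "RedrivePolicy"],
)["Attributes"]

# A non-zero message count indicates that events are flowing into the queue.
print("Messages waiting:", attrs.get("ApproximateNumberOfMessages"))

# RedrivePolicy is present only when a dead-letter queue is attached.
print("Dead-letter queue configured:", "RedrivePolicy" in attrs)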

Verify events are flowing into S3 bucket

Perform the following steps in your Amazon deployment to verify that parquet events are flowing into your configured S3 buckets.

  1. In your AWS console, navigate to the Amazon S3 service.
  2. Navigate to the Buckets section, and click on the S3 bucket that Security Lake created for each applicable region.
  3. In each applicable bucket, navigate to the Objects tab, and click through the directories to verify that Security Lake has available events flowing into the S3 bucket. If Security Lake is enabled on more than one AWS account, check to see if each applicable account number is listed, and that parquet files exist inside each account.
  4. In each applicable S3 bucket, navigate to the Properties tab.
  5. Navigate to Event notifications, and verify that the Security Lake SQS Queue that was created has event notifications turned on, and the data destination is the Security Lake SQS queue.
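
The following minimal boto3 sketch performs the same checks from a script. The bucket name is a placeholder for the bucket that Security Lake created in the applicable region.

import boto3

# Placeholder bucket name; use the bucket Security Lake created in your region.
BUCKET = "aws-security-data-lake-us-east-2-example"

s3 = boto3.client("s3", region_name="us-east-2")

# List a few objects to confirm that parquet files are arriving.
listing = s3.list_objects_v2(Bucket=BUCKET, Prefix="aws/", MaxKeys=10)
for obj in listing.get("Contents", []):
    print(obj["Key"], obj["LastModified"])

# Confirm that the bucket sends event notifications to the Security Lake SQS queue.
notifications = s3.get_bucket_notification_configuration(Bucket=BUCKET)
print(notifications.get("QueueConfigurations", []))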

Configure IAM policies

After you set up and configure a subscriber in the Amazon Security Lake service, make the following modifications to your IAM policies so that the Amazon Security Lake input works:

  1. Update a user to assume a role. Then modify the assumed role so that it doesn't reference an External ID.
  2. Update your boundary policy to work with the Splunk Add-on for AWS.

Update a user to assume a role

Modify your Security Lake subscriber role to associate an existing user with a role, and modify the assumed role so that it doesn't reference an External ID. You must get access to the subscription role notification that was created as part of the Amazon Security Lake subscriber provisioning.

  1. In your AWS console, navigate to the Amazon IAM service.
  2. In your Amazon IAM service, navigate to the Roles page.
  3. On the Roles page, select the Role name of the subscription role notification that was created as part of the Security Lake subscriber provisioning process.
  4. On the Summary page, navigate to the Trust relationships tab.
  5. Modify the Trusted entity policy with the following updates:
    1. Remove any reference to the External ID that was created during the Security Lake subscriber provisioning process.
    2. On the stanza containing the ARN, attach the user name from your desired user account to the end of the ARN. For example, "arn:aws:iam::772039352793:user/jdoe", where jdoe is the user name.
      For more information, see the following example Trust entity:
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "1",
                  "Effect": "Allow",
                  "Principal": {
                      "AWS": "arn:aws:iam::772039352793:user/jdoe"
                  },
                  "Action": "sts:AssumeRole"
              }
          ]
      }
      


      This step connects a user to the role that was created, and lets that user use their access key and secret key to configure the Security Lake service. You can verify the connection with the sketch that follows these steps.
  6. In your Amazon IAM service, navigate to the Users page.
  7. On the Users page, select the User name of the user who has been connected to the role that was created.
  8. On the Summary page, navigate to the Access keys section, and copy the user's Access key ID. If no access keys currently exist, first click the Create access key button.
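
As a quick check that the modified trust policy works without an External ID, you can assume the role with the credentials of the user you added. This is a minimal sketch; the role ARN is a placeholder, and the user's access key and secret key are expected to come from your usual credential configuration (for example, a profile or environment variables).

import boto3

# Placeholder role ARN; use the Security Lake subscriber role from your account.
ROLE_ARN = "arn:aws:iam::772039352793:role/AmazonSecurityLake-example-subscriber-role"

# The default credentials must belong to the user added to the trust policy.
sts = boto3.client("sts")

# No ExternalId argument is passed, which matches the modified trust policy.
creds = sts.assume_role(
    RoleArn=ROLE_ARN,
    RoleSessionName="splunk-add-on-verification",
)["Credentials"]

print("Assumed role successfully; credentials expire at", creds["Expiration"])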

Update your boundary policy to work with the Splunk Add-on for AWS

  1. In your Amazon IAM service, navigate to the Roles page.
  2. On the Roles page, select the Role name of the subscription role notification that was created as part of the Security Lake subscriber provisioning process.
  3. On the Summary page, navigate to the Permissions policies tab, and click on the Policy name for your Amazon Security Lake subscription role, in order to modify the role policy.
  4. On the Edit policy page, click on the JSON tab.
  5. Navigate to the Resource column of the role policy.
  6. Under the existing S3 resources stanzas, add a stanza containing the Amazon Resource Name (ARN) of the SQS Queue that was created during the Security Lake service subscriber provisioning process.
  7. Navigate to the Action column of the role policy.
  8. Review the contents of the Action column, and add the following stanzas, if they do not already exist:
    "sqs:GetQueueUrl",
    "sqs:ReceiveMessage",
    "sqs:SendMessage",
    "sqs:DeleteMessage",
    "sqs:GetQueueAttributes",
    "sqs:ListQueues",
    "sqs:ChangeMessageVisibility",
    


    For more information, see the following example:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "1",
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:GetObjectVersion",
                    "sqs:GetQueueUrl",
                    "sqs:ReceiveMessage",
                    "sqs:SendMessage",
                    "sqs:DeleteMessage",
                    "sqs:GetQueueAttributes",
                    "sqs:ListQueues",
                    "s3:ListBucket",
                    "s3:ListBucketVersions",
                    "sqs:ChangeMessageVisibility",
                    "kms:Decrypt"
                ],
                "Resource": [
                    "arn:aws:s3:::aws-security-data-lake-us-east-2-o-w5jts1954e/aws/CLOUD_TRAIL/*",
                    "arn:aws:s3:::aws-security-data-lake-us-east-2-o-w5jts1954e/aws/VPC_FLOW/*",
                    "arn:aws:s3:::aws-security-data-lake-us-east-2-o-w5jts1954e/aws/ROUTE53/*",
                    "arn:aws:s3:::aws-security-data-lake-us-east-2-o-w5jts1954e",
                    "arn:aws:sqs:us-east-2:772039352793:moose-public-sqs"
                ]
            }
        ]
    }
    
  9. Save your changes.
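
After saving the policy, you can confirm that the assumed role can now read from the Security Lake queue. This is a minimal sketch using placeholder values; replace the role ARN and queue URL with the ones from your deployment.

import boto3

# Placeholders; use your subscriber role ARN and the Security Lake queue URL.
ROLE_ARN = "arn:aws:iam::772039352793:role/AmazonSecurityLake-example-subscriber-role"
QUEUE_URL = "https://sqs.us-east-2.amazonaws.com/772039352793/moose-public-sqs"

# Assume the subscriber role, then call SQS with the temporary credentials.
creds = boto3.client("sts").assume_role(
    RoleArn=ROLE_ARN,
    RoleSessionName="splunk-policy-check",
)["Credentials"]

sqs = boto3.client(
    "sqs",
    region_name="us-east-2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# An AccessDenied error here usually means the role policy is missing one of
# the sqs actions listed in the example above.
messages = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1)
print(messages.get("Messages", []))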

Configure Amazon Security Lake inputs for the Splunk Add-on for AWS

Complete the steps to configure Amazon Security Lake inputs for the Splunk Add-on for AWS:

  1. Configure AWS accounts for the Amazon Security Lake input.
  2. Configure Amazon Security Lake inputs either through Splunk Web or configuration files.

Configuration prerequisites

This data input supports the following file format:

  • Apache Parquet

Configure AWS accounts for the Amazon Security Lake input

Add your AWS account to the Splunk Add-on for AWS

  1. On the Splunk Web home page, click on Splunk Add-on for AWS in the navigation bar.
  2. Navigate to the Configuration page.
  3. On the Configuration page, navigate to the Account tab.
  4. Click the Add button.
  5. On the Add Account page, add a Name, the Key ID of the user who was given Security Lake configuration privileges, Secret Key, and Region Category.
  6. Click the Add button.
  7. Navigate to the IAM Role tab.
  8. Click the Add button.
  9. Add the role ARN that was created during the Security Lake service provisioning process.
  10. Click the Add button.
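
You can optionally confirm that the Key ID and Secret Key you entered are valid with a quick boto3 check. The key values below are placeholders.

import boto3

# Placeholder credentials for the user with Security Lake configuration privileges.
session = boto3.Session(
    aws_access_key_id="AKIAEXAMPLEKEYID",
    aws_secret_access_key="example-secret-key",
)

# A successful call confirms the credentials are valid and shows the account
# and user ARN that the add-on will use.
print(session.client("sts").get_caller_identity())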


Configure an Amazon Security Lake input using Splunk Web

To configure inputs in Splunk Web, click Splunk Add-on for AWS in the navigation bar on the Splunk Web home page, then choose the following menu path:

  • Create New Input > Security Lake > SQS-Based S3

You must have the admin_all_objects capability in order to add new inputs.

Choose the menu path that corresponds to the data type you want to collect. The system automatically sets the source type and displays the relevant field settings on the subsequent configuration page.

Use the following table to complete the fields for the new input in the .conf file or in Splunk Web:

Argument in configuration file Field in Splunk Web Description
aws_account AWS Account The AWS account or EC2 IAM role the Splunk platform uses to access the keys in your S3 buckets. In Splunk Web, select an account from the drop-down list. In inputs.conf, enter the friendly name of one of the AWS accounts that you configured on the Configuration page or the name of the automatically discovered EC2 IAM role.

If the region of the AWS account you select is GovCloud, you might encounter errors such as "Failed to load options for S3 Bucket". You need to manually add the AWS GovCloud endpoint in the S3 Host Name field. See http://docs.aws.amazon.com/govcloud-us/latest/UserGuide/using-govcloud-endpoints.html for more information.

aws_iam_role Assume Role The IAM role to assume.
using_dlq Force using DLQ (Recommended) Select the check box to verify that a dead-letter queue (DLQ) is configured for the SQS queue used for ingestion; clear it to skip that check. In inputs.conf, enter 0 or 1 to disable or enable the check, respectively. The default value is 1.
sqs_queue_region AWS Region AWS region that the SQS queue is in.
private_endpoint_enabled Use Private Endpoints Select the check box to use private endpoints of the AWS Security Token Service (STS) and Amazon Simple Storage Service (S3) for authentication and data collection. In inputs.conf, enter 0 or 1 to disable or enable the use of private endpoints, respectively.
sqs_private_endpoint_url Private Endpoint (SQS) Private Endpoint (Interface VPC Endpoint) of your SQS service, which can be configured from your AWS console.


Supported formats:
<http/https>://vpce-<endpoint_id>-<unique_id>.sqs.<region_id>.vpce.amazonaws.com
<http/https>://vpce-<endpoint_id>-<unique_id>-<availability_zone>.sqs.<region_id>.vpce.amazonaws.com

sqs_sns_validation SNS Signature Validation SNS validation of your SQS messages, which can be configured from your AWS console. If selected, all messages are validated. If unselected, messages are not validated until a signed message is received; thereafter, all messages are validated for an SNS signature. For new SQS-based S3 inputs, this feature is enabled by default.


Supported values:
1 (enabled) or 0 (disabled). The default is 0.

s3_private_endpoint_url Private Endpoint (S3) Private Endpoint (Interface VPC Endpoint) of your S3 service, which can be configured from your AWS console.


Supported formats:
<http/https>://vpce-<endpoint_id>-<unique_id>.s3.<region_id>.vpce.amazonaws.com
<http/https>://vpce-<endpoint_id>-<unique_id>-<availability_zone>.s3.<region_id>.vpce.amazonaws.com

sts_private_endpoint_url Private Endpoint (STS) Private Endpoint (Interface VPC Endpoint) of your STS service, which can be configured from your AWS console.


Supported formats:
<http/https>://vpce-<endpoint_id>-<unique_id>.sts.<region_id>.vpce.amazonaws.com
<http/https>://vpce-<endpoint_id>-<unique_id>-<availability_zone>.sts.<region_id>.vpce.amazonaws.com

sqs_queue_url SQS Queue Name The SQS queue URL.
sqs_batch_size SQS Batch Size The maximum number of messages to pull from the SQS queue in one batch. Enter an integer between 1 and 10 inclusive. Set a larger value for small files, and a smaller value for large files. The default SQS batch size is 10. If you are dealing with large files and your system memory is limited, set this to a smaller value.
s3_file_decoder S3 File Decoder The decoder to use to parse the corresponding log files. The decoder is set according to the Data Type you select. If you select a Custom Data Type, choose one from CloudTrail, Config, ELB Access Logs, S3 Access Logs, CloudFront Access Logs, or Amazon Security Lake.
sourcetype Source Type The source type for the events to collect, automatically filled in based on the decoder chosen for the input.
interval Interval The length of time in seconds between two data collection runs. The default is 300 seconds.
index Index The index name where the Splunk platform puts the Amazon Security Lake data. The default is main.
sns_max_age SNS message max age The maximum age of the SNS message, in hours. The SNS message max age must be between 1 and 336 hours (14 days). The default value is 96 hours (4 days). Only messages within the specified maximum age are ingested.

Configure an Amazon Security Lake input using configuration files

When you configure inputs manually in inputs.conf, create a stanza using the following template and add it to $SPLUNK_HOME/etc/apps/Splunk_TA_aws/local/inputs.conf. If the file or path does not exist, create it.

[aws_sqs_based_s3://test_input]
aws_account = test-account
interval = 300
private_endpoint_enabled = 0
s3_file_decoder = AmazonSecurityLake
sourcetype = aws:asl
sqs_batch_size = 10
sqs_queue_region = us-west-1
sqs_queue_url = https://sqs.us-west-1.amazonaws.com/<account-id>/parquet-test-queue
sqs_sns_validation = 0
using_dlq = 1
sns_max_age = 96

Some of these settings have default values that can be found in $SPLUNK_HOME/etc/apps/Splunk_TA_aws/default/inputs.conf:

[aws_sqs_based_s3]
using_dlq = 1

The previous values correspond to the default values in Splunk Web, as well as some internal values that are not exposed in Splunk Web for configuration. If you copy this stanza to your $SPLUNK_HOME/etc/apps/Splunk_TA_aws/local and use it as a starting point to configure your inputs.conf manually, change the [aws_sqs_based_s3] stanza title from aws_sqs_based_s3 to aws_sqs_based_s3://<name> and add the additional parameters that you need for your deployment.

Valid values for s3_file_decoder are CustomLogs, CloudTrail, ELBAccessLogs, CloudFrontAccessLogs, S3AccessLogs, Config, and AmazonSecurityLake.

If you want to ingest custom logs other than the natively supported AWS log types, you must set s3_file_decoder = CustomLogs. This setting lets you ingest custom logs into the Splunk platform instance, but it does not parse the data. To process custom logs into meaningful events, you need to perform additional configurations in props.conf and transforms.conf to parse the collected data to meet your specific requirements.

For more information on these settings, see /README/inputs.conf.spec under your add-on directory.

Automatically scale data collection with Amazon Security Lake inputs

With the Amazon Security Lake input type, you can take full advantage of the auto-scaling capability of the AWS infrastructure to scale out data collection by configuring multiple inputs to ingest logs from the same S3 bucket without creating duplicate events. This is particularly useful if you are ingesting logs from a very large S3 bucket and hit a bottleneck in your data collection inputs.

  1. Create an AWS auto scaling group for the heavy forwarder instances where your SQS-based S3 inputs run.
    To create an auto scaling group, you can either specify a launch configuration or create an AMI to provision new EC2 instances that host heavy forwarders, and use a bootstrap script to install the Splunk Add-on for AWS and configure SQS-based S3 Amazon Security Lake inputs. For detailed information about auto scaling groups and how to create them, see http://docs.aws.amazon.com/autoscaling/latest/userguide/AutoScalingGroup.html.
  2. Set CloudWatch alarms for one of the following Amazon SQS metrics:
    • ApproximateNumberOfMessagesVisible: The number of messages available for retrieval from the queue.
    • ApproximateAgeOfOldestMessage: The approximate age (in seconds) of the oldest non-deleted message in the queue.
    For instructions on setting CloudWatch alarms for Amazon SQS metrics, see http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/SQS_AlarmMetrics.html. A scripted example follows these steps.
  3. Use the CloudWatch alarm as a trigger to provision new heavy forwarder instances with SQS-based S3 inputs configured to consume messages from the same SQS queue to improve ingestion performance.
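
For example, the following minimal boto3 sketch creates a CloudWatch alarm on the queue depth. The queue name, threshold, and scaling policy ARN are placeholders; the alarm action would normally point at the scale-out policy of your auto scaling group.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-2")

# Placeholders: the Security Lake queue name and your scaling policy ARN.
QUEUE_NAME = "AmazonSecurityLake-example-queue"
SCALING_POLICY_ARN = "arn:aws:autoscaling:us-east-2:772039352793:scalingPolicy:example"

cloudwatch.put_metric_alarm(
    AlarmName="security-lake-queue-backlog",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": QUEUE_NAME}],
    Statistic="Average",
    Period=300,                      # evaluate 5-minute windows
    EvaluationPeriods=2,
    Threshold=10000,                 # messages waiting before scaling out
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[SCALING_POLICY_ARN],
)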