Hardware and software requirements for the Splunk Add-on for AWS

To install and configure the Splunk Add-on for Amazon Web Services (AWS), you must have admin or sc_admin role permissions.


Version 7.0.0 of the Splunk Add-on for AWS added support for ingesting data from the Amazon Security Lake service. If you use the Splunk Add-on for Amazon Security Lake to ingest Amazon Security Lake data, you must remove it from your Splunk platform deployment before installing version 7.0.0 or higher of this add-on, because objects in the Splunk Add-on for Amazon Security Lake conflict with the Splunk Add-on for AWS.

Splunk platform requirements

There are no Splunk platform requirements specific to the Splunk Add-on for AWS.

For Splunk Enterprise system requirements, see System requirements for use of Splunk Enterprise on-premises in the Splunk Enterprise Installation Manual.

For information about installation locations and environments, see Install the Splunk Add-on for AWS.

The field alias functionality is compatible with the current version of this add-on, which does not support older field alias configurations.

For more information about the field alias configuration change, refer to the Splunk Enterprise Release Notes.

AWS account prerequisites

To set up your AWS configuration to work with your Splunk platform instance, make sure you have the following AWS account privileges:

  • A valid AWS account with permissions to configure the AWS services that provide your data.
  • Permission to create Identity and Access Management (IAM) roles and users. This lets you set up AWS account IAM roles or Amazon Elastic Compute Cloud (EC2) IAM roles to collect data from your AWS services.

When configuring your AWS account to send data to your Splunk platform deployment, do not allow "*" (all resources) statements in the Action elements of your policies. That level of access can grant unwanted and unregulated access to anyone covered by the policy document. The best practice is to write a refined policy that describes the specific actions allowed for the specific users, accounts, or policy holders that require them.

For more information, see the Basic examples of Amazon SQS policies topic in the Amazon Simple Queue Service Developer Guide.
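
The following is a minimal sketch of creating such a refined policy with the AWS SDK for Python (Boto3). The policy name, bucket ARN, and granted actions are placeholders for illustration; scope them to the services and resources your inputs actually read.

    import json
    import boto3  # AWS SDK for Python

    # Hypothetical names for illustration; replace with your own resources.
    POLICY_NAME = "splunk-addon-aws-s3-read"
    BUCKET_ARN = "arn:aws:s3:::example-log-bucket"

    # A refined policy grants specific actions on specific resources,
    # rather than using "*" in the Action or Resource elements.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [BUCKET_ARN, BUCKET_ARN + "/*"],
            }
        ],
    }

    iam = boto3.client("iam")
    response = iam.create_policy(
        PolicyName=POLICY_NAME,
        PolicyDocument=json.dumps(policy_document),
    )
    print("Created policy:", response["Policy"]["Arn"])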

AWS region limitations

The Splunk Add-on for AWS supports all services offered by AWS in each region. To learn which worldwide geographic regions support which AWS services, see the Region Table in the AWS global infrastructure documentation.

In the AWS China region, the add-on supports only the services that AWS supports in that region. For an up-to-date list of what products and services are supported in this region, see https://www.amazonaws.cn/en/products/.

For an up-to-date list of what services and endpoints are supported in the AWS GovCloud region, see https://docs.aws.amazon.com/govcloud-us/latest/UserGuide/using-services.html.

Network configuration requirements

The Splunk Add-on for AWS makes REST API calls using HTTPS on port 443. Data inputs for this add-on use large amounts of memory. See Sizing, performance, and cost considerations for the Splunk Add-on for AWS for more information.
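
As a quick way to verify that your network configuration permits this traffic, you can test outbound connectivity on port 443. The following is an illustrative sketch; the endpoint shown is a placeholder for whichever regional AWS service endpoint your inputs use.

    import socket

    # Hypothetical endpoint for illustration; substitute the regional
    # endpoint of the AWS service you collect data from.
    HOST = "ec2.us-east-1.amazonaws.com"
    PORT = 443  # the add-on makes all REST API calls over HTTPS

    # Confirm that outbound connections on port 443 are allowed
    # before configuring data inputs.
    try:
        with socket.create_connection((HOST, PORT), timeout=5):
            print("Outbound 443/tcp to " + HOST + " is reachable")
    except OSError as err:
        print("Cannot reach " + HOST + ":443: " + str(err))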

AWS encryption requirements

Amazon Web Services supports the following server-side encryption types:

  • Server-side encryption with Amazon S3-managed encryption keys (SSE-S3). Amazon S3 encrypts each object with a unique key.
  • Server-side encryption with AWS Key Management Service (SSE-KMS). AWS KMS manages the encryption and the master key.
  • Server-side encryption with customer-provided encryption keys (SSE-C). Amazon S3 performs the encryption and decryption, and the client provides and manages the master key.

The Splunk Add-on for AWS supports all of these server-side encryption types, which are handled entirely by AWS. Client-side encryption is not supported because the AWS SDK for Python does not support it.
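
As an illustration of the three server-side encryption types, the following sketch uploads objects with each one using Boto3. The bucket and object keys are placeholders; the SSE-KMS call uses the AWS managed key for Amazon S3 because no SSEKMSKeyId is supplied.

    import os
    import boto3

    # Hypothetical bucket and object keys for illustration.
    BUCKET = "example-log-bucket"
    s3 = boto3.client("s3")

    # SSE-S3: Amazon S3 encrypts the object with an S3-managed key.
    s3.put_object(Bucket=BUCKET, Key="sse-s3/object.log",
                  Body=b"example", ServerSideEncryption="AES256")

    # SSE-KMS: AWS KMS manages the master key. Pass SSEKMSKeyId to use
    # a specific customer managed key instead of the AWS managed key.
    s3.put_object(Bucket=BUCKET, Key="sse-kms/object.log",
                  Body=b"example", ServerSideEncryption="aws:kms")

    # SSE-C: the client provides a 256-bit key; Amazon S3 performs the
    # encryption but never stores the key, so you must supply the same
    # key again to read the object back.
    customer_key = os.urandom(32)
    s3.put_object(Bucket=BUCKET, Key="sse-c/object.log",
                  Body=b"example",
                  SSECustomerAlgorithm="AES256",
                  SSECustomerKey=customer_key)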

Requirements for Amazon Kinesis Firehose

The Splunk Add-on for Amazon Web Services requires specific configurations for Amazon Kinesis Firehose push-based data collection. See What is Amazon Kinesis Firehose? in the AWS documentation.

SSL requirements

Amazon Kinesis Firehose requires the HTTP Event Collector (HEC) endpoint to be terminated with a valid CA-signed certificate matching the DNS hostname used to connect to your HEC endpoint.

You must use a trusted CA-signed certificate. Self-signed certificates are not supported.

If you are sending data directly to Splunk Enterprise indexers in your own internal network or AWS VPC, a CA-signed certificate must be installed on each of the indexers. If you are using an Elastic Load Balancer (ELB) to send data, you must install a CA-signed certificate on the load balancer.
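
You can check in advance whether an endpoint's certificate would pass this validation. The following sketch uses Python's standard library to perform the same checks that Kinesis Firehose performs: chain validation against trusted CAs plus hostname matching. The hostname is a placeholder.

    import socket
    import ssl

    # Hypothetical HEC endpoint for illustration; use the DNS hostname
    # that Kinesis Firehose connects to.
    HEC_HOST = "hec.example.com"

    # The default context verifies the CA chain and the hostname,
    # mirroring the checks Kinesis Firehose applies. Self-signed
    # certificates fail here just as they fail for Firehose.
    context = ssl.create_default_context()
    try:
        with socket.create_connection((HEC_HOST, 443), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=HEC_HOST) as tls:
                print("Certificate verified for", HEC_HOST)
    except ssl.SSLCertVerificationError as err:
        print("Certificate would be rejected:", err)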

Paid Splunk Cloud users are provided an ELB with a proper CA-signed certificate and a hostname for each stack. For ELB users on distributed Splunk Enterprise deployments, see the Configure an Elastic Load Balancer for the Splunk Add-on for Amazon Web Services topic in this manual for information on how to configure an ELB with proper SSL certificates.

Event formatting requirements

The Splunk Add-on for Amazon Web Services supports data collection using either of the two HTTP Event Collector endpoint types: raw and event. If you collect data using the raw endpoint, no special formatting is required for most source types. However, the aws:cloudwatchlogs:vpcflow source type contains a nested JSON array of events that the HTTP Event Collector cannot parse. Prepare this data for the Splunk platform using an AWS Lambda function that extracts the nested JSON events into a newline-delimited set of events. All other source types can be sent directly to the raw endpoint without any preprocessing.

For an example Kinesis Firehose Lambda function that removes the JSON wrapper around VPC Flow Logs before the data reaches the Splunk platform, see https://github.com/ranjitkk/ranjit_aws_repo_public/blob/main/Splunk_FlowLogs_Firehose_processor.py.
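
For reference, a transformation of this kind typically looks like the following sketch. It assumes VPC Flow Logs reach Firehose through a CloudWatch Logs subscription, whose records arrive base64-encoded and gzip-compressed; it is an illustration of the pattern, not the linked function.

    import base64
    import gzip
    import json

    def lambda_handler(event, context):
        """Kinesis Firehose data-transformation handler that flattens the
        nested CloudWatch Logs 'logEvents' array into newline-delimited
        events that the HEC raw endpoint can parse."""
        output = []
        for record in event["records"]:
            payload = gzip.decompress(base64.b64decode(record["data"]))
            message = json.loads(payload)

            if message.get("messageType") == "DATA_MESSAGE":
                # Keep only the raw flow-log lines, one event per line.
                flattened = "\n".join(
                    e["message"] for e in message["logEvents"]) + "\n"
                output.append({
                    "recordId": record["recordId"],
                    "result": "Ok",
                    "data": base64.b64encode(flattened.encode()).decode(),
                })
            else:
                # Control messages carry no log data.
                output.append({"recordId": record["recordId"],
                               "result": "Dropped"})
        return {"records": output}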

If you collect data using the event endpoint, format your events into the JSON structure expected by the HTTP Event Collector before sending them from Amazon Kinesis Firehose to the Splunk platform. You can apply an AWS Lambda blueprint to preprocess your events into that JSON structure and set event-specific fields, which gives you greater control over how the Splunk platform handles your events. For example, you can create and apply a Lambda blueprint that sends data from the same Firehose stream to different indexes depending on event type.

For information about the required JSON structure, see Format events for HTTP Event Collector.
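
As an illustrative sketch, a Lambda transformation for the event endpoint might wrap each record in the HEC JSON envelope and choose an index per event type. The field names, source type, and index names here are hypothetical.

    import base64
    import json
    import time

    # Hypothetical routing table: send different event types from the
    # same Firehose stream to different Splunk indexes.
    INDEX_BY_TYPE = {"alert": "aws_alerts", "audit": "aws_audit"}

    def to_hec_event(raw):
        """Wrap a raw event in the JSON envelope that the HEC event
        endpoint expects."""
        return json.dumps({
            "time": raw.get("timestamp", time.time()),
            "host": "firehose",
            "source": "aws",
            "sourcetype": "aws:example",  # assumed source type
            "index": INDEX_BY_TYPE.get(raw.get("type"), "main"),
            "event": raw,
        })

    def lambda_handler(event, context):
        output = []
        for record in event["records"]:
            raw = json.loads(base64.b64decode(record["data"]))
            output.append({
                "recordId": record["recordId"],
                "result": "Ok",
                "data": base64.b64encode(to_hec_event(raw).encode()).decode(),
            })
        return {"records": output}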
