Splunk® Supported Add-ons

Splunk Add-on for AWS


Sizing, performance, and cost considerations for the Splunk Add-on for AWS

Before you configure the Splunk Add-on for Amazon Web Services (AWS), review these sizing, performance, and cost considerations.

General

See the following table for the recommended maximum daily indexing volume on a clustered indexer for different AWS source types. This information is based on a generic Splunk hardware configuration. Adjust the number of indexers in your cluster based on your actual system performance: add indexers to improve indexing and search retrieval performance, or remove indexers to reduce within-cluster data replication traffic.

Source type Daily indexing volume per indexer (GB)
aws:cloudwatchlogs:vpcflow 25-30
aws:s3:accesslogs 80-120
aws:cloudtrail 150-200
aws:billing 50-100

These sizing recommendations are based on the Splunk platform hardware configurations in the following table. You can also use the System requirements for use of Splunk Enterprise on-premises in the Splunk Enterprise Installation Manual as a reference.

Splunk platform type CPU cores RAM EC2 instance type
Search head 8 16 GB c4.xlarge
Indexer 16 64 GB m4.4xlarge
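As a rough sizing aid, you can estimate the number of indexers needed by dividing your expected daily volume by the per-indexer figure for the relevant source type. The helper below is a sketch of that arithmetic, not a Splunk-provided tool:

```python
import math

def indexers_needed(daily_volume_gb: float, per_indexer_gb_per_day: float) -> int:
    """Estimate how many clustered indexers a daily volume requires."""
    return math.ceil(daily_volume_gb / per_indexer_gb_per_day)

# Example: 600 GB/day of aws:cloudtrail at ~150 GB per indexer per day
cloudtrail_indexers = indexers_needed(600, 150)  # → 4
```

Treat the result as a starting point only; adjust based on your observed indexing and search performance.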

Input configuration screens require data transfer from AWS to populate the services, queues, and buckets available to your accounts. If your network connection to AWS is slow, these screens might be slow to load. If you encounter timeout issues, you can manually type in resource names.

Performance for the Splunk Add-on for AWS data inputs

The rate of data ingestion for this add-on depends on several factors: deployment topology, number of keys in a bucket, file size, file compression format, number of events in a file, event size, and hardware and networking conditions.

See the following tables for measured throughput data achieved under certain operating conditions. Use this information to optimize the Splunk Add-on for AWS in your own production environment. Because performance varies based on user characteristics, application usage, server configurations, and other factors, specific performance results cannot be guaranteed. Contact Splunk Support for accurate performance tuning and sizing.

The Kinesis input for the Splunk Add-on for AWS has its own performance data. See Configure Kinesis inputs for the Splunk Add-on for AWS.

Reference hardware and software environment

Throughput data and conclusions are based on performance testing using Splunk platform instances (dedicated heavy forwarders and indexers) running on the following environment:

Instance type M4 Quadruple Extra Large (m4.4xlarge)
Memory 64 GB
Compute Units (ECU) 53.5
vCPU 16
Storage (GB) 0 (EBS only)
Arch 64-bit
EBS optimized (max bandwidth) 2000 Mbps
Network performance High

The following settings are configured in the outputs.conf file on the heavy forwarder:

useACK = true

maxQueueSize = 15MB
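For reference, these settings correspond to an outputs.conf stanza like the following. The group name and server addresses are placeholders; only the useACK and maxQueueSize values come from the test environment:

```ini
[tcpout:primary_indexers]
server = 10.0.0.1:9997, 10.0.0.2:9997
useACK = true
maxQueueSize = 15MB
```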

Measured performance data

The throughput data is the maximum performance for each single input achieved in performance testing under specific operating conditions and is subject to change when any of the hardware and software variables changes. Use this data as a rough reference only.

Single-input max throughput

Data input Source type Max throughput (KB/s) Max EPS (events/s) Max throughput (GB/day)
Generic S3 aws:elb:accesslogs
(plain text, syslog, event size 250 B, S3 key size 2 MB)
17,000 86,000 1,470
Generic S3 aws:cloudtrail
(gz, json, event size 720 B, S3 key size 2 MB)
11,000 35,000 950
Incremental S3 aws:elb:accesslogs
(plain text, syslog, event size 250 B, S3 key size 2 MB)
11,000 43,000 950
Incremental S3 aws:cloudtrail
(gz, json, event size 720 B, S3 key size 2 MB)
7,000 10,000 600
SQS-based S3 aws:elb:accesslogs
(plain text, syslog, event size 250 B, S3 key size 2 MB)
12,000 50,000 1,000
SQS-based S3 aws:elb:accesslogs
(gz, syslog, event size 250 B, S3 key size 2 MB)
24,000 100,000 2,000
SQS-based S3 aws:cloudtrail
(gz, json, event size 720 B, S3 key size 2 MB)
13,000 19,000 1,100
CloudWatch logs [1] aws:cloudwatchlogs:vpcflow 1,000 6,700 100
CloudWatch
(ListMetric, 10,000 metrics)
aws:cloudwatch 240 (metrics/s) N/A N/A
CloudTrail aws:cloudtrail
(gz, json, sqs=1,000, 9,000 events/key)
5,000 7,000 400
Kinesis aws:cloudwatchlogs:vpcflow
(json, 10 shards)
15,000 125,000 1,200
SQS aws:sqs
(json, event size 2,800)
N/A 160 N/A

[1] API throttling error occurs if input streams are greater than 1,000.

Multi-inputs max throughput

The following throughput data was measured with multiple inputs configured on a heavy forwarder in an indexer cluster distributed environment.

Consolidate AWS accounts during add-on configuration to reduce CPU usage and increase throughput performance.

Data input Source type Max throughput (KB/s) Max EPS (events/s) Max throughput (GB/day)
Generic S3 aws:elb:accesslogs
(plain text, syslog, event size 250 B, S3 key size 2 MB)
23,000 108,000 1,980
Generic S3 aws:cloudtrail
(gz, json, event size 720 B, S3 key size 2 MB)
45,000 130,000 3,880
Incremental S3 aws:elb:accesslogs
(plain text, syslog, event size 250 B, S3 key size 2 MB)
34,000 140,000 2,930
Incremental S3 aws:cloudtrail
(gz, json, event size 720 B, S3 key size 2 MB)
45,000 65,000 3,880
SQS-based S3 [1] aws:elb:accesslogs
(plain text, syslog, event size 250 B, S3 key size 2 MB)
35,000 144,000 3,000
SQS-based S3 [1] aws:elb:accesslogs
(gz, syslog, event size 250 B, S3 key size 2 MB)
42,000 190,000 3,600
SQS-based S3 [1] aws:cloudtrail
(gz, json, event size 720 B, S3 key size 2 MB)
45,000 68,000 3,900
CloudWatch logs aws:cloudwatchlogs:vpcflow 1,000 6,700 100
CloudWatch (ListMetric) aws:cloudwatch
(10,000 metrics)
240 (metrics/s) N/A N/A
CloudTrail aws:cloudtrail
(gz, json, sqs=100, 9,000 events/key)
20,000 15,000 1,700
Kinesis aws:cloudwatchlogs:vpcflow
(json, 10 shards)
18,000 154,000 1,500
SQS aws:sqs
(json, event size 2.8K)
N/A 670 N/A

[1] Performance testing of the SQS-based S3 input indicates that throughput peaks at four inputs on a single heavy forwarder instance. To scale beyond this bottleneck, create multiple heavy forwarder instances, each configured with up to four SQS-based S3 inputs, that ingest data concurrently by consuming messages from the same SQS queue.
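As a sketch, the scale-out pattern might look like the following inputs.conf fragment on each heavy forwarder, with up to four stanzas per instance all consuming the same queue. The stanza and parameter names below are illustrative assumptions; confirm them against the add-on's inputs.conf.spec:

```ini
# Up to four SQS-based S3 inputs per heavy forwarder, all reading the same queue
[aws_sqs_based_s3://elb_logs_1]
aws_account = my_aws_account
sqs_queue_url = https://sqs.us-east-1.amazonaws.com/123456789012/elb-logs
sqs_queue_region = us-east-1

[aws_sqs_based_s3://elb_logs_2]
aws_account = my_aws_account
sqs_queue_url = https://sqs.us-east-1.amazonaws.com/123456789012/elb-logs
sqs_queue_region = us-east-1
```

Repeat the same fragment on additional heavy forwarders to raise aggregate throughput.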

Max inputs benchmark per heavy forwarder

The following input number ceiling was measured with multiple inputs configured on a heavy forwarder in an indexer cluster distributed environment, with CPU and memory resources fully utilized.

It is possible to configure more inputs than the maximum number indicated in the table if you have a smaller event size, fewer keys per bucket, or more available CPU and memory resources in your environment.

Data input Source type Format Number of keys/bucket Event size Max inputs
S3 aws:s3 zip, syslog 100,000 100 B 300
S3 aws:cloudtrail gz, json 1,300,000 1 KB 30
Incremental S3 aws:cloudtrail gz, json 1,300,000 1 KB 20
SQS-based S3 aws:cloudtrail, aws:config gz, json 1,000,000 1 KB 50

Memory usage benchmark for generic S3 inputs

Event size Number of events per key Total number of keys Archive type Number of inputs Memory used
1,000 B 1,000 10,000 zip 20 20 GB
1,000 B 1,000 1,000 zip 20 12 GB
1,000 B 1,000 10,000 zip 10 18 GB
100 B 1,000 10,000 zip 10 15 GB

If you do not achieve the expected AWS data ingestion throughput, see Troubleshoot the Splunk Add-on for AWS.

CloudTrail

The following table provides general guidance on sizing, performance, and cost considerations for the CloudTrail data input:

Consideration Notes
Sizing and performance None.
AWS cost Using CloudTrail itself does not incur charges, but standard S3, SNS, and SQS charges apply.
See https://aws.amazon.com/pricing/services/.

Config

The following table provides general guidance on sizing, performance, and cost considerations for the Config data input:

Consideration Notes
Sizing and performance None.
AWS cost Using Config incurs charges from AWS. See http://aws.amazon.com/config/pricing/.
In addition, standard S3, SNS, and SQS charges apply. See http://aws.amazon.com/pricing/services/.

Config Rules

The following table provides general guidance on sizing, performance, and cost considerations for the Config Rules data input:

Consideration Notes
Sizing and performance None.
AWS cost None.

CloudWatch

The following table provides general guidance on sizing, performance, and cost considerations for the CloudWatch data input:

Consideration Notes
Sizing and performance The smaller the granularity you configure, the more events you collect.
Create separate inputs that match your needs for different regions, services, and metrics. For each input, configure a granularity that matches the precision that you require, setting a larger granularity value in cases where indexing fewer, less-granular events is acceptable. You can increase granularity temporarily when a problem is detected.

AWS rate-limits the number of free API calls against the CloudWatch API. With a period of 300 seconds and a polling interval of 1,800 seconds, collecting data for 2 million metrics does not, by itself, exceed the current default rate limit, but collecting 4 million metrics does. If you have millions of metrics to collect in your environment, consider paying to have your API limit raised, or remove less essential metrics from your input and configure larger granularities to make fewer API calls.

AWS cost Using CloudWatch and making requests against the CloudWatch API incurs charges from AWS.
See https://aws.amazon.com/cloudwatch/pricing/.
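The rate-limit guidance above can be checked with simple arithmetic. Assuming one API call per metric per polling interval (an illustrative model, not an exact accounting of the add-on's call pattern), the average call rate is:

```python
def avg_api_calls_per_second(num_metrics: int, polling_interval_s: int) -> float:
    """Average CloudWatch API calls per second, assuming one call per metric per poll."""
    return num_metrics / polling_interval_s

calls_2m = avg_api_calls_per_second(2_000_000, 1_800)  # ~1,111 calls/s
calls_4m = avg_api_calls_per_second(4_000_000, 1_800)  # ~2,222 calls/s
```

Doubling the metric count doubles the sustained call rate, which is why 4 million metrics exceeds a limit that 2 million does not.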

CloudWatch Logs (VPC Flow Logs)

The following table provides general guidance on sizing, performance, and cost considerations for the CloudWatch Logs (VPC Flow Logs) data input:

Consideration Notes
Sizing and performance AWS limits each account to 10 requests per second, each of which returns no more than 1 MB of data. In other words, the data ingestion and indexing rate is no more than 10 MB/s. The add-on modular input can process up to 4,000 events per second in a single log stream.
Best practices:
  • If volume is a concern, configure the only_after parameter to limit the amount of historical data you collect.
  • If you have high volume VPC Flow Logs, configure one or more Kinesis inputs to collect them instead of using the CloudWatch Logs input.
AWS cost Using CloudWatch Logs incurs charges from AWS. See https://aws.amazon.com/cloudwatch/pricing/.
Transferring data out of CloudWatch Logs incurs charges from AWS. See https://aws.amazon.com/ec2/pricing/.
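The per-account limit stated above implies a hard daily ceiling for this input, regardless of how many inputs you configure. The conversion is straightforward:

```python
requests_per_second = 10   # AWS per-account request limit
mb_per_request = 1         # maximum data returned per request
seconds_per_day = 86_400

max_mb_per_day = requests_per_second * mb_per_request * seconds_per_day  # 864,000 MB
max_gb_per_day = max_mb_per_day / 1024                                   # 843.75 GB
```

If your VPC Flow Logs volume approaches this ceiling, the Kinesis input recommended above is the better collection path.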

Inspector

The following table provides general guidance on sizing, performance, and cost considerations for the Inspector data input:

Consideration Notes
Sizing and performance None.
AWS cost Using Amazon Inspector incurs charges from AWS. See https://aws.amazon.com/inspector/pricing/.

Kinesis

The following table provides general guidance on sizing, performance, and cost considerations for the Kinesis data input:

Consideration Notes
Sizing and performance See Performance reference for the Kinesis input in the Splunk Add-on for AWS.
AWS cost Using Amazon Kinesis incurs charges from AWS. See https://aws.amazon.com/kinesis/streams/pricing/.

S3

The following table provides general guidance on sizing, performance, and cost considerations for the S3 data input:

Consideration Notes
Sizing and performance AWS throttles S3 data collection at the bucket level, so expect some delay before all data arrives in your Splunk platform.
You can configure multiple S3 inputs for a single S3 bucket to improve performance. The Splunk platform dedicates one process for each data input, so provided that your system has sufficient processing power, performance improves with multiple inputs. See Performance reference for the S3 input in the Splunk Add-on for AWS.
AWS cost Using S3 incurs charges from AWS. See https://aws.amazon.com/s3/pricing/.
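For example, you might split one bucket across inputs by key prefix, as in this illustrative inputs.conf fragment. The stanza and parameter names are assumptions; check the add-on's inputs.conf.spec for the exact names:

```ini
# Two generic S3 inputs reading different prefixes of the same bucket
[aws_s3://access_logs_h1]
aws_account = my_aws_account
bucket_name = my-access-logs
key_name = logs/2020/01

[aws_s3://access_logs_h2]
aws_account = my_aws_account
bucket_name = my-access-logs
key_name = logs/2020/07
```

Each stanza runs as its own process, so throughput scales with available CPU until the bucket-level throttle dominates.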

Billing

The following table provides general guidance on sizing, performance, and cost considerations for the Billing data input:

Consideration Notes
Sizing and performance Detailed billing reports can be very large, depending on your environment. If you configure the add-on to collect detailed reports, it collects all historical reports available in the bucket by default. In addition, for each newly finalized monthly and detailed report, the add-on collects a new copy of the report once per interval until its etag stops changing.
Configure separate inputs for each billing report type that you want to collect. Use the regex and interval parameters in the input configuration page of the add-on to limit the number of reports that you collect with each input.
AWS cost Billing reports themselves do not incur charges, but standard S3 charges apply.
See https://aws.amazon.com/s3/pricing/.

SQS

The following table provides general guidance on sizing, performance, and cost considerations for the SQS data input:

Consideration Notes
Sizing and performance None.
AWS cost Using SQS incurs charges from AWS. See https://aws.amazon.com/sqs/pricing/.

SNS

The following table provides general guidance on sizing, performance, and cost considerations for the SNS data input:

Consideration Notes
Sizing and performance None.
AWS cost Using SNS incurs charges from AWS. See https://aws.amazon.com/sns/pricing/.
Last modified on 28 August, 2020