
Collect logs from your AWS services

When setting up an AWS connection, you can choose to import logs from CloudWatch log groups or S3 buckets.

To set up log collection, follow these steps:

  1. Open the link to a CloudFormation template.

  2. Adjust the settings.

  3. Deploy the template to create splunk-aws-logs-collector, an AWS Lambda function that transforms log entries, enriches them with metadata, and sends them to Splunk Observability Cloud.

How does log collection work?

The Splunk Observability Cloud back end runs a periodic job that goes through the CloudWatch log groups and services in your account. This job adds the appropriate subscriptions and notifications to trigger the splunk-aws-logs-collector function.

Splunk Observability Cloud adds subscription filters to log groups for the selected services in the integration, or for all of the supported services when none is selected. For instance, if you select AWS/Lambda in the integration, Observability Cloud will add subscription filters to /aws/lambda/* log groups only. Splunk Observability Cloud doesn't capture logs from all CloudWatch log groups.
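The targeting behavior described above can be sketched as pure logic. The function below is illustrative only, not the actual back-end implementation; the namespace-to-prefix mapping is an assumption based on the documented AWS/Lambda → /aws/lambda/* example:

```python
# Sketch of subscription-filter targeting, assuming the documented behavior:
# a selected namespace such as "AWS/Lambda" maps to the "/aws/lambda/" prefix,
# and an empty selection means "all supported services".
def log_groups_to_subscribe(log_groups, selected_namespaces, supported_prefixes):
    """Return the log groups that would receive a subscription filter."""
    if selected_namespaces:
        # "AWS/Lambda" -> "/aws/lambda/"
        prefixes = {
            "/aws/" + ns.split("/", 1)[1].lower() + "/" for ns in selected_namespaces
        }
    else:
        prefixes = supported_prefixes
    return [g for g in log_groups if any(g.startswith(p) for p in prefixes)]

groups = ["/aws/lambda/my_function", "/aws/rds/instance1", "/custom/app"]
print(log_groups_to_subscribe(groups, {"AWS/Lambda"}, set()))
# -> ['/aws/lambda/my_function']
```

Note that the /custom/app group is never selected, matching the statement that not all CloudWatch log groups are captured.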

Managing subscriptions

Subscriptions are reconciled every 5 minutes; this interval is currently not configurable. If you turn off the integration or a particular service, the job attempts to remove the corresponding subscriptions.

If a new log group is created for a service in the integration, Observability Cloud adds a subscription filter to the newly created log group. Afterward, whenever new log events are added to the log group, AWS triggers the splunk-aws-logs-collector Lambda function automatically in near real time.

Which services can you collect logs from?

You can collect logs from the following services:

  • Services storing their logs in CloudWatch. Logs are stored in log groups that start with /aws/<servicename>. For example: /aws/lambda, /aws/rds, or /aws/eks

  • WAF CloudWatch logs

  • API Gateway execution logs

  • AWS Glue continuous logs if a default log group name is used

  • Network and Application Load Balancer access logs from S3 (Classic Load Balancers are not supported)

  • S3 access logs from S3

  • Redshift access logs from S3

  • CloudFront access logs from S3
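The list above can be summarized as a lookup table. The dictionary and function below are illustrative only, not part of any Splunk tooling:

```python
# Where Splunk Observability Cloud picks up logs for each supported service,
# assembled from the supported-services list above. Names are illustrative.
LOG_SOURCE = {
    "lambda": "cloudwatch",
    "rds": "cloudwatch",
    "eks": "cloudwatch",
    "waf": "cloudwatch",
    "apigateway": "cloudwatch",
    "glue": "cloudwatch",
    "alb": "s3",
    "nlb": "s3",
    "s3": "s3",
    "redshift": "s3",
    "cloudfront": "s3",
}

def log_destination(service):
    """Return 'cloudwatch', 's3', or 'unsupported' for a service key."""
    return LOG_SOURCE.get(service, "unsupported")
```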

Limitations

The following restrictions apply:

  • Log sync can only be enabled for a single AWS integration per AWS account. Note that a single integration may cover multiple services and regions.

  • Deployment in China or GovCloud regions requires additional manual steps. See the available CloudFormation templates on GitHub.

Collect logs from unsupported services

CloudWatch log groups also store logs from unsupported services. To capture those logs, add /aws/<namespace> to the list of custom namespaces in the integration object. This option is not available in the Splunk Observability Cloud UI, but you can do it through the API or by adding subscription filters manually.

Collect logs via API

To capture logs from unsupported services via the API, follow these steps:

  1. Use a GET request to retrieve the existing integration object:

curl https://app.<realm>.signalfx.com/v2/integration/<integrationId> \
  -H 'x-sf-token: <user API access token>'
  2. Update the retrieved object by adding or modifying the customNamespaceSyncRules field:

    {
        "customNamespaceSyncRules": [
            {
                "namespace": "aws/<namespace>"
            }
        ],
        "enabled": true,
        "id": "E1c1_huAAAA",
        ...
    }

  • Namespaces must use lowercase only

  • Some fields are omitted for brevity

  3. Use a PUT request to update your integration:

curl https://app.<realm>.signalfx.com/v2/integration/<integrationId> \
  -X PUT \
  -H 'x-sf-token: <user API access token>' \
  -H 'content-type: application/json' \
  --data-raw '<updated integration JSON here>'
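The same retrieve, modify, and update cycle can be sketched in Python. The helper names below are hypothetical, and the endpoint mirrors the placeholders in the curl examples above:

```python
import json
import urllib.request

def add_custom_namespace(integration, namespace):
    """Add a custom namespace sync rule to an integration object (in place).

    Hypothetical helper; namespaces must use lowercase only, as noted above.
    """
    if namespace != namespace.lower():
        raise ValueError("namespaces must use lowercase only")
    rules = integration.setdefault("customNamespaceSyncRules", [])
    if not any(r.get("namespace") == namespace for r in rules):
        rules.append({"namespace": namespace})
    return integration

def put_integration(realm, integration_id, token, integration):
    """Send the updated object back, mirroring the curl PUT request above."""
    url = f"https://app.{realm}.signalfx.com/v2/integration/{integration_id}"
    req = urllib.request.Request(
        url,
        data=json.dumps(integration).encode(),
        method="PUT",
        headers={"x-sf-token": token, "content-type": "application/json"},
    )
    return urllib.request.urlopen(req)
```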

Collect logs manually with subscription filters

Alternatively, you can add a subscription filter to selected CloudWatch log groups yourself by manually adding the splunk-aws-logs-collector Lambda function as a log group subscriber. You can use any subscription filter name except Splunk Log Collector: subscriptions with that name are managed by Splunk Observability Cloud and would be removed automatically.
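For example, a minimal guard you might apply in your own tooling before creating a filter. The function is illustrative, not part of any Splunk tooling:

```python
# Name reserved for subscriptions managed by Splunk Observability Cloud.
RESERVED_FILTER_NAME = "Splunk Log Collector"

def validate_filter_name(name):
    """Reject the reserved name; such subscriptions are removed automatically."""
    if name == RESERVED_FILTER_NAME:
        raise ValueError(
            "choose another name: filters named 'Splunk Log Collector' are "
            "managed by Splunk Observability Cloud and would be removed"
        )
    return name
```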

Metadata

Log events from AWS services are enriched with relevant metadata. Some of the metadata is common to all services, while the rest is service-specific.

Common metadata

Field name | Description | Example
---------- | ----------- | -------
awsAccountId | The AWS Account ID of the resource that produced the logs | awsAccountId: 123456790
region | The AWS region of the resource that produced the logs | region: us-east-1
logForwarder | The name and version of the aws-log-collector function that sends these logs | logForwarder: splunk_aws_log_forwarder:1.0.1

Service-specific metadata

Services that store logs in CloudWatch Logs

Field name | Description | Example
---------- | ----------- | -------
host | Same as logGroup, unless overridden by a service-specific host | host: /aws/lambda/my_function
logGroup | Source CloudWatch log group name | logGroup: /aws/lambda/my_function
logStream | Source CloudWatch log stream name | logStream: 2020/07/31/[1]e46fcdcac7094436bd846edb431a3f1
source | Service name | source: lambda
sourcetype | aws: prefixed service name | sourcetype: aws:lambda
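These CloudWatch-derived fields relate to each other in a simple way, which can be sketched as follows. The function is illustrative only, assuming the /aws/<servicename> log group naming described earlier:

```python
def cloudwatch_metadata(log_group, log_stream):
    """Derive the CloudWatch metadata fields from a log group and stream name.

    Illustrative sketch, assuming the /aws/<service>/... naming convention.
    """
    service = log_group.split("/")[2] if log_group.startswith("/aws/") else "unknown"
    return {
        "host": log_group,   # same as logGroup unless a service overrides it
        "logGroup": log_group,
        "logStream": log_stream,
        "source": service,
        "sourcetype": f"aws:{service}",
    }
```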

API Gateway, ApplicationELB, CloudFront, EKS, Lambda, NetworkELB, RDS, Redshift, S3

Field name | Description | Example
---------- | ----------- | -------
<tag key> | AWS tags associated with the resource that generated the logs; each tag becomes a field named after its key | name: my_func_name, env: prod, myCustomTag: someValue

API Gateway

Field name | Description | Example
---------- | ----------- | -------
arn | API Gateway ARN | arn: arn:aws:apigateway:us-east-1::/restapis/kgiqlx3nok/stages/prod
host | API Gateway host | host: arn:aws:apigateway:us-east-1::/restapis/kgiqlx3nok/stages/prod
apiGatewayStage | The API Gateway stage name | apiGatewayStage: prod
apiGatewayId | The API Gateway ID | apiGatewayId: kgiqlx3nok

Application Load Balancer

Field name | Description | Example
---------- | ----------- | -------
elbArn | Load balancer ARN | elbArn: arn:aws:elasticloadbalancing:us-east-1:1234567890:loadbalancer/app/my-loadbalancer/50dc6c495c0c9188
targetGroupArn | Target group ARN (when available) | targetGroupArn: arn:aws:elasticloadbalancing:us-east-1:1234567890:targetgroup/my-targets/73e2d6bc24d8a067

CloudFront

Field name | Description | Example
---------- | ----------- | -------
distributionArn | CloudFront distribution ARN | distributionArn: arn:aws:cloudfront::1234567890:distribution/EMLARXS9EXAMPLE

EKS

Field name | Description | Example
---------- | ----------- | -------
arn | EKS cluster ARN | arn: arn:aws:eks:us-east-1:123456790:cluster/test-eks-cluster
host | EKS cluster host | host: test-eks-cluster
eksClusterName | The EKS cluster name | eksClusterName: test-eks-cluster

Lambda

Field name | Description | Example
---------- | ----------- | -------
arn | The ARN of the Lambda function that generated the logs | arn: arn:aws:lambda:us-east-1:123456790:function:my_function
host | Lambda host | host: arn:aws:lambda:us-east-1:123456790:function:my_function
functionName | The name of the Lambda function | functionName: my_function

Network Load Balancer

Field name | Description | Example
---------- | ----------- | -------
elbArn | Load balancer ARN | elbArn: arn:aws:elasticloadbalancing:us-east-1:1234567890:loadbalancer/net/my-netlb/c6e77e28c25b2234

RDS PostgreSQL

Field name | Description | Example
---------- | ----------- | -------
arn | DB host ARN | arn: arn:aws:rds:us-east-1:123456790:db:druid-lab0
host | The RDS host | host: druid-lab0
dbType | The type of DB | dbType: postgresql

RDS, other than PostgreSQL

Field name | Description | Example
---------- | ----------- | -------
arn | DB host ARN | arn: arn:aws:rds:us-east-1:123456790:db:test-database-1
host | The RDS host | host: test-database-1
dbLogName | The name of the RDS log | dbLogName: error

Redshift

Field name | Description | Example
---------- | ----------- | -------
clusterArn | Redshift cluster ARN | clusterArn: arn:aws:redshift:us-east-1:1234567890:cluster:redshift-cluster-1
logType | Redshift log type. Possible values: connectionlog, useractivitylog, or userlog | logType: userlog

S3

Field name | Description | Example
---------- | ----------- | -------
bucketArn | S3 bucket ARN | bucketArn: arn:aws:s3:::my-bucket
objectArn | S3 object ARN (when available) | objectArn: arn:aws:s3:::my-bucket/sample.jpeg

Troubleshooting

CloudFormation stack was not created

You fully control the process of creating the CloudFormation stack, which is executed with the permissions associated with your user. The template contains a Lambda function and a role required to forward logs from CloudWatch and S3 buckets. If any errors occur, AWS displays a specific error message.

To learn more about supported templates, see the README on GitHub.

I created an integration, but I don't see any logs

If you created the integration recently, it may take some time for the logs to appear in your account. The job that subscribes the Splunk AWS Log Collector to your log sources runs every 5 minutes, so it might take that long to subscribe to a new resource. AWS log delivery inside AWS (to CloudWatch log groups, or to S3 buckets) and AWS Lambda triggering can introduce additional delay. Check the AWS documentation for more details.

If you still don't see any logs after 15 minutes, check the IAM policy you used to set up the AWS connection. We recommend using the provided IAM policy. If you still don't see the logs, contact our support.

You can enable debug mode on the log forwarding Lambda function: add LOG_LEVEL=DEBUG in the Configuration > Environment variables section. If log forwarding calls fail with a 503 HTTP error, you may be exceeding your logs ingest limit. To fix this, contact our support.

CloudFront access logs are not being collected

CloudFront is a global service, and its logs can be stored in any of the standard AWS regions. Each CloudFront distribution can have an S3 target bucket configured for its access logs. Splunk AWS log collection can only grab the logs if the S3 bucket is located in a region Splunk AWS log collection can access. Use the provided IAM policy to ensure the Splunk Observability Cloud back end has the required permissions.

I don't see logs from some instances

Make sure your IAM policy allows access to the instances, their regions, or the regions where they send logs. If the service instance was recently created, it might take up to 15 minutes for the Splunk Observability Cloud back end to start gathering logs from it.

AWS allows only one notification of a given type to be configured for S3 object-creation events. If the bucket where an instance's logs are stored already notifies another Lambda function of file creation, Observability Cloud cannot add its subscription on top of that. You can either remove the pre-existing notification configuration, or narrow it by specifying a prefix and a suffix so that the log files don't trigger your pre-existing Lambda function. If that's not possible, contact us for assistance in modifying your AWS architecture to work around the limitation.
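For example, narrowing a pre-existing notification with a prefix and suffix filter might look like the fragment below, following the S3 bucket notification configuration format. The bucket contents, Lambda ARN, and prefix/suffix values are placeholders:

```json
{
  "LambdaFunctionConfigurations": [
    {
      "Id": "pre-existing-notification",
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456790:function:my_function",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {
        "Key": {
          "FilterRules": [
            { "Name": "prefix", "Value": "app-data/" },
            { "Name": "suffix", "Value": ".json" }
          ]
        }
      }
    }
  ]
}
```

With the filter in place, object-creation events for the log files no longer match the pre-existing notification, leaving room for the Observability Cloud subscription.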

I don't see logs from some of my S3 buckets

Some AWS services use S3 buckets to store their logs, and sometimes the S3 bucket is located in a different region from the service that produces those logs. In such cases, make sure to deploy the splunk-aws-logs-collector Lambda function using the CloudFormation template in all AWS regions where the S3 buckets with logs are located.

I have disabled logs collection, but logs are still gathered by Observability Cloud

It may take up to 15 minutes for the Observability Cloud back end to cancel log subscriptions. The AWS log delivery process may introduce additional delays.

The back end needs log-related permissions to cancel log subscriptions. If log-related permissions are removed from the AWS IAM policy (or the entire policy is removed), the back end cannot run the cleanup procedure. Make sure to disable log collection on the Observability Cloud side first, and clean up on the AWS side later.

I disabled the integration or changed its settings, but logs are still being collected!

If you disable part or all of the integration, the back-end job attempts to clear all notifications and subscriptions it previously created, which might take up to 15 minutes. However, if you also remove IAM permissions, the attempt may fail.

To stop sending any logs to Observability Cloud, delete the splunk-aws-logs-collector Lambda function from each region where you want to stop collecting logs.