Troubleshoot AWS CloudWatch Log data ingestion
Troubleshoot the AWS CloudWatch log data ingestion process.
AWS CloudWatch log data cannot be found
AWS CloudWatch log data cannot be found.
Cause
AWS CloudWatch is not configured correctly and AWS CloudWatch log data is not being ingested.
Solution
- Verify that each AWS resource being monitored is configured to send its logs to an Amazon CloudWatch log group for the accounts and regions that you onboarded. The following table shows the format of the Log group names in AWS CloudWatch.
Service           Log group pattern
API Gateway       API-Gateway-Execution-Logs_<rest-api-id>/<stage_name>
Lambda Function   /aws/lambda/<lambda-function-name>
EKS               /aws/eks/<cluster-name>/cluster
Cloud HSM         /aws/cloudhsm/<cluster-name>
Document DB       /aws/docdb/<db-cluster-name>/audit
                  /aws/docdb/<db-cluster-name>/profiler
RDS               /aws/rds/cluster/<db-name>/error
- If you don't see the log groups, verify that logging has been enabled on the resources. Refer to the AWS CloudWatch documentation for more information. Logging to CloudWatch is enabled by default for Lambda and Cloud HSM.
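If you prefer to run this check from a script instead of the CloudWatch console, the following minimal sketch uses boto3 to list the log groups that match the prefixes from the table above. The region is a placeholder; substitute a region you onboarded.

```python
# Minimal sketch: list CloudWatch log groups that match the patterns from the
# table above. Assumes boto3 is installed and AWS credentials are configured;
# the region is a placeholder.
import boto3

logs = boto3.client("logs", region_name="us-east-1")  # placeholder region

prefixes = [
    "API-Gateway-Execution-Logs_",
    "/aws/lambda/",
    "/aws/eks/",
    "/aws/cloudhsm/",
    "/aws/docdb/",
    "/aws/rds/cluster/",
]

for prefix in prefixes:
    pages = logs.get_paginator("describe_log_groups").paginate(logGroupNamePrefix=prefix)
    names = [group["logGroupName"] for page in pages for group in page["logGroups"]]
    print(f"{prefix}: {len(names)} log group(s)")
    for name in names:
        print(f"  {name}")
```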
- Navigate to Data Management. Click the Data Input Details tab, and go to the Account Establishment Details section.
- If a stack is in FAILED state, refer to Deployment Status: Failed for more troubleshooting steps.
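If it is easier to check from a script, the following minimal sketch reads the stack status and recent stack events with boto3. The stack name and region are placeholders; use the stack name shown in the Account Establishment Details section.

```python
# Minimal sketch: print the status of a CloudFormation stack and its most
# recent stack events, which usually explain why a deployment failed.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")  # placeholder region
STACK_NAME = "SplunkDMDataIngest-example"  # placeholder; use your Data Manager stack name

stack = cfn.describe_stacks(StackName=STACK_NAME)["Stacks"][0]
print(stack["StackName"], stack["StackStatus"])

if "FAILED" in stack["StackStatus"] or "ROLLBACK" in stack["StackStatus"]:
    events = cfn.describe_stack_events(StackName=STACK_NAME)["StackEvents"]
    for event in events[:10]:  # newest events first
        print(event["LogicalResourceId"], event["ResourceStatus"],
              event.get("ResourceStatusReason", ""))
```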
- Verify that the Splunk HTTP Event Collector (HEC) configuration is correct. Refer to Troubleshoot the HEC Configuration for more troubleshooting steps. Make sure the indexer acknowledgement is disabled for the HEC token of the input you are troubleshooting.
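As a quick connectivity check, you can send a test event directly to the HEC endpoint. This is a minimal sketch; the URL and token are placeholders, and the behavior noted in the comments is general HEC behavior rather than something specific to Data Manager.

```python
# Minimal sketch: send a test event to a Splunk HEC endpoint. The URL and
# token below are placeholders. A response complaining about a missing data
# channel usually indicates that indexer acknowledgement is still enabled on
# the token.
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                    # placeholder

response = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    json={"event": "Data Manager HEC connectivity test", "sourcetype": "httpevent"},
    timeout=10,
)
print(response.status_code, response.text)
```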
- Verify that the data ingestion pipeline has been set up correctly in the account and region.
  - Navigate to the Amazon EventBridge console and, under Rules, verify that the SplunkDMCWLogsEventsScheduleRule rule exists.
  - Verify that the target for the rule is set to the SplunkDMCWLogsSubscriptionFilterManage Lambda function and that its status is Enabled.
  - Verify that the Event pattern for SplunkDMCWLogsEventsScheduleRule is correct.
  - Click Metrics for the rule and verify when the event rule was last invoked or triggered. Select the appropriate time range.
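The same EventBridge checks can be scripted. This is a minimal sketch with boto3; the rule and function names come from the steps above, and the region is a placeholder.

```python
# Minimal sketch: confirm the EventBridge rule exists, is enabled, and targets
# the subscription filter Lambda function.
import boto3

events = boto3.client("events", region_name="us-east-1")  # placeholder region
RULE_NAME = "SplunkDMCWLogsEventsScheduleRule"

rule = events.describe_rule(Name=RULE_NAME)
print("State:", rule["State"])  # expect ENABLED
print("Schedule or event pattern:", rule.get("ScheduleExpression") or rule.get("EventPattern"))

for target in events.list_targets_by_rule(Rule=RULE_NAME)["Targets"]:
    # Expect the ARN of the SplunkDMCWLogsSubscriptionFilterManage Lambda function.
    print("Target:", target["Arn"])
```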
  - Navigate to the CloudWatch log group and click the Details section under Subscription filters. Verify that the destination ARN target is set to a Kinesis Firehose delivery stream called SplunkDMCloudWatchLogsDeliveryStream. For example, assume you have a log group for the Document DB instance, /aws/docdb/docdb-2021-10-26-03-02/profiler, and it has one subscription filter configured. When you click the 1 subscription link, it shows a destination ARN target that is set to a Kinesis Firehose delivery stream called SplunkDMCloudWatchLogsDeliveryStream.
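Here is a minimal sketch of the same subscription filter check with boto3; the log group name is the example from this step, and the region is a placeholder.

```python
# Minimal sketch: list the subscription filters on a log group and print each
# destination ARN, which should point at the Data Manager Firehose delivery stream.
import boto3

logs = boto3.client("logs", region_name="us-east-1")  # placeholder region
LOG_GROUP = "/aws/docdb/docdb-2021-10-26-03-02/profiler"  # example from this step

filters = logs.describe_subscription_filters(logGroupName=LOG_GROUP)["subscriptionFilters"]
if not filters:
    print("No subscription filters configured on", LOG_GROUP)
for f in filters:
    # Expect an ARN ending in deliverystream/SplunkDMCloudWatchLogsDeliveryStream
    print(f["filterName"], "->", f["destinationArn"])
```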
  - Navigate to the Amazon Kinesis console for the account and region you are troubleshooting. Click Delivery Streams and select SplunkDMCloudWatchLogsDeliveryStream. Verify that the status is Active under Delivery stream details.
  - Click Configuration and verify that source record transformation is enabled and that the Lambda function is set to SplunkDMCloudWatchLogsEventProcessor.
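You can also read the delivery stream status and its processing configuration with boto3. A minimal sketch follows; the stream name is taken from the steps above (the name in your account may carry a suffix), and the region is a placeholder.

```python
# Minimal sketch: check that the Firehose delivery stream is ACTIVE and that
# record transformation is handled by the expected Lambda processor.
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")  # placeholder region
STREAM_NAME = "SplunkDMCloudWatchLogsDeliveryStream"  # may include a suffix in your account

description = firehose.describe_delivery_stream(
    DeliveryStreamName=STREAM_NAME
)["DeliveryStreamDescription"]
print("Status:", description["DeliveryStreamStatus"])  # expect ACTIVE

for destination in description["Destinations"]:
    processing = destination.get("SplunkDestinationDescription", {}).get(
        "ProcessingConfiguration", {}
    )
    print("Record transformation enabled:", processing.get("Enabled"))
    for processor in processing.get("Processors", []):
        for parameter in processor.get("Parameters", []):
            if parameter["ParameterName"] == "LambdaArn":
                # Expect the SplunkDMCloudWatchLogsEventProcessor function ARN.
                print("Processor Lambda:", parameter["ParameterValue"])
```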
- If any AWS resource is missing or misconfigured, delete the CloudFormation stack, go to Data Manager, and select the input. Click the Setup AWS Account tab and follow the instructions to recreate the stack in this account and region.
- Check the logs and metrics on the Kinesis Firehose Delivery Stream to see if the data is getting ingested into Splunk. Refer to Troubleshoot AWS Kinesis Firehose data ingestion for more details.
- If there are no failures seen on the Kinesis Firehose Delivery Stream but your data still cannot be found, troubleshoot the HEC token metrics. Refer to Per-token metrics in the Splunk Enterprise Getting Data In manual for more information.
- If the configuration is correct and your data still cannot be found, debug the SplunkDMCloudWatchLogsEventProcessor Lambda function.
  - Navigate to the Lambda console for the account and region you are troubleshooting and click SplunkDMCloudWatchLogsEventProcessor.
  - Select Monitor and verify that the Lambda function was invoked by looking at the invocation metrics. Make sure to select the appropriate time range.
  - If the Lambda function was invoked in that time interval, check the Throttles and Error count metrics. If either metric is non-zero, check the logs of the Lambda function by clicking View logs in CloudWatch.
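The same invocation, error, and throttle counts can be pulled from CloudWatch metrics with boto3. This is a minimal sketch; the function name comes from the steps above, and the region and time range are placeholders.

```python
# Minimal sketch: sum the Invocations, Errors, and Throttles metrics for the
# event processor Lambda function over the last hour.
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region
FUNCTION_NAME = "SplunkDMCloudWatchLogsEventProcessor"
now = datetime.now(timezone.utc)

for metric in ("Invocations", "Errors", "Throttles"):
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/Lambda",
        MetricName=metric,
        Dimensions=[{"Name": "FunctionName", "Value": FUNCTION_NAME}],
        StartTime=now - timedelta(hours=1),
        EndTime=now,
        Period=300,
        Statistics=["Sum"],
    )
    total = sum(point["Sum"] for point in stats["Datapoints"])
    print(f"{metric}: {total:g}")
```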
- If the configuration is correct and your data still cannot be found, contact Splunk Support.
This documentation applies to the following versions of Data Manager: 1.3.1