Data Manager

User Manual

This documentation does not apply to the most recent version of Data Manager. For documentation on the most recent version, go to the latest release.

Troubleshooting for Amazon CloudWatch Log data in Data Manager

The following troubleshooting tips, while not exhaustive, can assist throughout the onboarding process.

Prerequisite troubleshooting

The following tips are specific to prerequisite troubleshooting.

AWS account validation failure

During single account onboarding, use the following troubleshooting tips if you see messages about account validation.

Message: Required role does not exist
Tips: From the AWS console, verify that the SplunkDMReadOnly IAM role exists in the chosen AWS account.

Message: Incorrect IAM role name
Tips: From the AWS console, double-check the spelling of SplunkDMReadOnly.

Message: Incorrect trust relationship of the SplunkDMReadOnly IAM role
Tips: From the AWS console, verify that the trust relationship of the SplunkDMReadOnly IAM role matches the relationship displayed in the Data Manager UI (see the sketch after this table).

  1. From the Splunk Cloud menu bar, click Apps > Data Manager.
  2. Click New Configuration.
  3. For the Cloud Data Platform, select Amazon Web Services.
  4. Select the AWS data sources to onboard and click Single Account.
  5. From prerequisite step #1 Create a role, click View Trust Relationship.
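
If you prefer to verify this from a terminal, the following Python (boto3) sketch checks that the SplunkDMReadOnly role exists and prints its trust relationship so that you can compare it with the one shown by View Trust Relationship. It assumes your local AWS credentials point at the account being onboarded; adjust the profile or region handling for your environment.

  import json
  import boto3
  from botocore.exceptions import ClientError

  iam = boto3.client("iam")

  try:
      role = iam.get_role(RoleName="SplunkDMReadOnly")["Role"]
  except ClientError as err:
      if err.response["Error"]["Code"] == "NoSuchEntity":
          print("SplunkDMReadOnly does not exist in this account")
      else:
          raise
  else:
      # Compare this output with the trust relationship shown in the Data Manager UI.
      print(json.dumps(role["AssumeRolePolicyDocument"], indent=2))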

AWS control account validation failure

During multiple account onboarding, use the following troubleshooting tips if you see messages about control account validation.

Message: Required roles do not exist
Tips: From the AWS console, verify that the SplunkDMReadOnly and AWSCloudFormationStackSetAdministrationRole IAM roles exist in the AWS account chosen as the control account.

Message: Incorrect IAM role name
Tips: From the AWS console, double-check the spelling of the role names SplunkDMReadOnly and AWSCloudFormationStackSetAdministrationRole.

Message: Incorrect trust relationship of the SplunkDMReadOnly or AWSCloudFormationStackSetAdministrationRole IAM role
Tips: From the AWS console, verify that the trust relationships of the SplunkDMReadOnly and AWSCloudFormationStackSetAdministrationRole IAM roles match the relationships displayed in the Data Manager UI.

  1. From the Splunk Cloud menu bar, click Apps > Data Manager.
  2. Click New Configuration.
  3. For the Cloud Data Platform, select Amazon Web Services.
  4. Select the AWS data sources to onboard and click Multiple Accounts.
  5. From prerequisite step #2, Create the AWSCloudFormationStackSetAdministrationRole in the control account, and step #3, Create the SplunkDMReadOnly role in the control account, click View Trust Relationship.
Message: Incorrect policy in the AWSCloudFormationStackSetAdministrationRole IAM role
Tips: From the AWS console, verify that the policy attached to or inline with the AWSCloudFormationStackSetAdministrationRole IAM role is the same as the policy displayed in the Data Manager UI.

  1. From the Splunk Cloud menu bar, click Apps > Data Manager.
  2. Click New Configuration.
  3. For the Cloud Data Platform, select Amazon Web Services.
  4. Select the AWS data sources to onboard and click Multiple Accounts.
  5. From prerequisite step #2, Create the AWSCloudFormationStackSetAdministrationRole in the control account, click View Role Policy.
Message: Incorrect permissions in the SplunkDMReadOnly IAM role
Tips: From the AWS console, verify that the policy attached to or inline with the SplunkDMReadOnly IAM role is the same as the policy displayed in the Data Manager UI. The sketch after this table shows one way to list these policies.
  1. From the Splunk Cloud menu bar, click Apps > Data Manager.
  2. Click New Configuration.
  3. For the Cloud Data Platform, select Amazon Web Services.
  4. Select the AWS data sources to onboard and click Multiple Accounts.
  5. From prerequisite step #3, Create the SplunkDMReadOnly role in the control account, click View Role Policy.
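
As a rough cross-check, this Python (boto3) sketch confirms that both control account roles exist and lists their attached and inline policies so that you can compare them with the policies shown by View Role Policy. It assumes you run it with credentials for the control account.

  import boto3
  from botocore.exceptions import ClientError

  iam = boto3.client("iam")

  for name in ("SplunkDMReadOnly", "AWSCloudFormationStackSetAdministrationRole"):
      try:
          iam.get_role(RoleName=name)
      except ClientError:
          print(f"{name}: missing or not accessible")
          continue
      attached = iam.list_attached_role_policies(RoleName=name)["AttachedPolicies"]
      inline = iam.list_role_policies(RoleName=name)["PolicyNames"]
      print(name)
      print("  attached policies:", [p["PolicyName"] for p in attached])
      print("  inline policies:  ", inline)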

AWS data account validation failure

During multiple account onboarding, use the following troubleshooting tips if you see messages about data account validation.

Message: Required roles do not exist
Tips: From the AWS console, verify that the SplunkDMReadOnly and AWSCloudFormationStackSetExecutionRole IAM roles exist in the AWS accounts chosen as data accounts.

Message: Incorrect IAM role name
Tips: From the AWS console, double-check the spelling of the role names SplunkDMReadOnly and AWSCloudFormationStackSetExecutionRole.

Message: Incorrect trust relationship of the SplunkDMReadOnly or AWSCloudFormationStackSetExecutionRole IAM role
Tips: From the AWS console, verify that the trust relationships of the SplunkDMReadOnly and AWSCloudFormationStackSetExecutionRole IAM roles match the relationships displayed in the Data Manager UI (see the sketch after this table).

  1. From the Splunk Cloud menu bar, click Apps > Data Manager.
  2. Click New Configuration.
  3. For the Cloud Data Platform, select Amazon Web Services.
  4. Select the AWS data sources to onboard and click Multiple Accounts.
  5. From prerequisite step #5, Create the AWSCloudFormationStackSetExecutionRole in the data accounts, and step #6, Create the SplunkDMReadOnly role in the data accounts, click View Trust Relationship.
Message: Incorrect policy in the AWSCloudFormationStackSetExecutionRole IAM role
Tips: From the AWS console, verify that the policy attached to or inline with the AWSCloudFormationStackSetExecutionRole IAM role is the same as the policy displayed in the Data Manager UI.

  1. From the Splunk Cloud menu bar, click Apps > Data Manager.
  2. Click New Configuration.
  3. For the Cloud Data Platform, select Amazon Web Services.
  4. Select the AWS data sources to onboard and click Multiple Accounts.
  5. From prerequisite step #5, Create the AWSCloudFormationStackSetExecutionRole in the data accounts, click View Role Policy.
Message: Incorrect permissions in the SplunkDMReadOnly IAM role
Tips: From the AWS console, verify that the policy attached to or inline with the SplunkDMReadOnly IAM role is the same as the policy displayed in the Data Manager UI.
  1. From the Splunk Cloud menu bar, click Apps > Data Manager.
  2. Click New Configuration.
  3. For the Cloud Data Platform, select Amazon Web Services.
  4. Select the AWS data sources to onboard and click Multiple Accounts.
  5. From prerequisite step #6, Create the SplunkDMReadOnly role in the data accounts, click View Role Policy.
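
The data account roles can be checked the same way. The sketch below fetches the AWSCloudFormationStackSetExecutionRole trust relationship and warns if it does not mention your control account; it assumes you run it with credentials for one data account, and CONTROL_ACCOUNT_ID is a placeholder you must replace.

  import json
  import boto3

  CONTROL_ACCOUNT_ID = "111111111111"  # placeholder: your control account ID

  iam = boto3.client("iam")
  role = iam.get_role(RoleName="AWSCloudFormationStackSetExecutionRole")["Role"]
  trust = role["AssumeRolePolicyDocument"]
  print(json.dumps(trust, indent=2))

  # The execution role is assumed from the control account, so its trust policy
  # should reference that account. Compare the output with the Data Manager UI.
  if CONTROL_ACCOUNT_ID not in json.dumps(trust):
      print("Warning: the trust policy does not mention the expected control account")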

CloudFormation template deployment troubleshooting

The following tips are specific to CloudFormation template deployment.

Failure to create SplunkCloudConnectS3BucketTemplate stack

During bucket creation, use the following troubleshooting tips if you see messages about failure to create buckets.

Message: Stack creation fails with "Bucket with name XXX already exists" or "Resource already exists" when the bucket is removed and recreated
Tips: Sometimes it takes time for a bucket name to become available again after deletion. Wait up to two hours and try creating the stack again (see the sketch after this table for one way to check whether the name has been released).
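
To check whether a deleted bucket name has been released before you retry, you can probe the name with a HEAD request. The following Python (boto3) sketch is one way to do that; the bucket name is a placeholder taken from the error message.

  import boto3
  from botocore.exceptions import ClientError

  bucket = "your-splunk-dm-bucket-name"  # placeholder: the name from the error message

  s3 = boto3.client("s3")
  try:
      s3.head_bucket(Bucket=bucket)
      print("Bucket name is still taken; wait and retry the stack later")
  except ClientError as err:
      code = err.response["Error"]["Code"]
      if code == "404":
          print("Bucket name appears to be free; retry creating the stack")
      elif code == "403":
          print("Bucket exists but belongs to another account")
      else:
          raise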

Stack or StackSet deployment failure

During Stack or StackSet deployment, use the following troubleshooting tips if you see messages about failures due to SplunkCC<data-source>FailedEventsS3BucketLambda or SplunkCC<data-source>DataSrcDiscLambda.

Message: Stack creation fails with "Bucket with name XXX already exists" or "Resource already exists" when the bucket is removed
Tips: Sometimes it takes time for a bucket name to become available again after deletion. Wait up to two hours and try creating the stack again. To find the resource that caused the failure, see the sketch after this table.
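
To find out which resource actually caused the failure, you can list the failed stack events. The sketch below is Python (boto3); STACK_NAME is a placeholder for the name of the failed Stack, and for a StackSet you would run it against the individual stack instance in the target account and region.

  import boto3

  STACK_NAME = "SplunkCloudConnect"  # placeholder: the failed Stack's name

  cfn = boto3.client("cloudformation")
  events = cfn.describe_stack_events(StackName=STACK_NAME)["StackEvents"]
  for event in events:
      if event["ResourceStatus"].endswith("FAILED"):
          print(event["LogicalResourceId"], event.get("ResourceStatusReason", ""))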

Mismatch deployment status

During deployment, use the following troubleshooting tips if you see messages about mismatch status.

Message: StackSet version mismatch
Tips: The tag for SplunkDMVersion was not created, or the correct version value was not provided, when the StackSet was created.

In Data Manager, complete the following steps:

  1. Select the input with the mismatch status.
  2. Take note of Splunk StackSet Version.

In the AWS console, complete the following steps:

  1. Navigate to the Stack or StackSet details page in AWS CloudFormation service, and click the Actions dropdown menu.
  2. Select one of the following options:
    • For a Stack, click Update.
    • For a StackSet, click Edit StackSet details.
  3. Revise the SplunkCloudConnectVersion value to the Splunk StackSet Version value that you previously took note of.
  4. After the Stack or StackSet update operation completes, refresh the deployment status and the new status should be Success.
Message: Stack or Stack instance count mismatch
Tips: Not all of the Stacks or Stack instances were created, so the expected Stack or Stack instance count does not match the count that is present in the AWS accounts.

Complete the following steps for a single account scenario:

  1. In the AWS console, go to the data account.
  2. In the AWS console, navigate to AWS CloudFormation service.
  3. In the AWS console, go to the region where the stack has not been created.
  4. In Data Manager, create a Stack by following the instructions on the "Setup AWS Accounts" tab of the input details panel.

Complete the following steps for a multiple account scenario:

  1. In the AWS console, go to the control account.
  2. In the AWS console, open the StackSet details page from the control account, select Actions > Add stacks to StackSet, and create stack instances in the problem regions where stack instances are missing by following the instructions on the "Setup AWS Accounts" tab of the SCC input details panel.
  3. After the Stack or StackSet update operation completes, refresh the deployment status and the new status should be Success.
Message: Stack or Stack instance version mismatch
Tips: The tag for SplunkDMVersion was not created, or the correct version value was not provided, when the stack instance was created (see the sketch after this table).

In Data Manager, complete the following steps:

  1. Select the input with the mismatch status.
  2. Take note of Splunk StackSet Version.

In the AWS console, complete the following steps:

  1. Navigate to the Stack or StackSet details page in AWS CloudFormation service, and click the Actions dropdown menu.
  2. Select one of the following options:
    • For a Stack, click Update.
    • For a StackSet, click Edit StackSet details.
  3. Revise the SplunkCloudConnectVersion value to the Splunk StackSet Version value that you previously took note of.
  4. After the Stack or StackSet update operation completes, refresh the deployment status and the new status should be Success.
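
If you would rather check the version tag and instance count from a script, the following Python (boto3) sketch reads the SplunkDMVersion tag from the StackSet and counts its stack instances so you can compare both with the values that Data Manager expects. STACKSET_NAME is a placeholder, the sketch assumes control account credentials, and it ignores pagination, which is fine for small StackSets.

  import boto3

  STACKSET_NAME = "SplunkCloudConnect"  # placeholder: your StackSet name

  cfn = boto3.client("cloudformation")
  stackset = cfn.describe_stack_set(StackSetName=STACKSET_NAME)["StackSet"]

  tags = {t["Key"]: t["Value"] for t in stackset.get("Tags", [])}
  print("SplunkDMVersion tag:", tags.get("SplunkDMVersion", "<missing>"))

  # Compare this count with the expected count shown by Data Manager.
  instances = cfn.list_stack_instances(StackSetName=STACKSET_NAME)["Summaries"]
  print("Stack instance count:", len(instances))
  for instance in instances:
      print(" ", instance["Account"], instance["Region"], instance["Status"])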

Data ingestion troubleshooting

The following tips are specific to data ingestion.

CloudTrail data ingestion problems

During data ingestion, use the following troubleshooting tips if you see messages about CloudTrail.

Message: CloudTrail does not exist in the AWS region selected
Tips: In the AWS console for the data accounts, navigate to the CloudTrail service, open the list of trails, and verify that trails exist in the regions selected as Home Regions. If no trails are available in the selected AWS regions, create a new multi-region trail with CloudWatch Logs enabled. See https://docs.aws.amazon.com/awscloudtrail/latest/userguide/send-cloudtrail-events-to-cloudwatch-logs.html. After trails exist in the region, delete the whole StackSet and create it again using the same CloudFormation template from the control account.

Message: CloudTrail CloudWatch Logs option is disabled
Tips: If trails are available in the selected AWS regions, verify that each trail has the CloudWatch Logs setting enabled. If it is enabled, a Log group and IAM Role appear in the CloudWatch Logs section of each trail details page. If it is disabled, enable it. See https://docs.aws.amazon.com/awscloudtrail/latest/userguide/send-cloudtrail-events-to-cloudwatch-logs.html. After the CloudWatch Logs setting is enabled, delete the whole StackSet and create it again using the same CloudFormation template from the control account.

Message: Problem with AWS resources that are part of the data pipeline
Tips: If the previous CloudTrail and CloudWatch Logs troubleshooting did not resolve the data ingestion issues, carry out a more detailed diagnosis (a scripted version appears after this table):
  1. From the AWS console of the data accounts, navigate to the CloudTrail service.
  2. Pick a trail whose Home region is one of the AWS regions you selected and view the trail's details.
  3. Take note of the Log group name in the CloudWatch Logs section.
  4. Navigate to the CloudWatch service in the trail's Home region, and then go to Logs > Log groups.
  5. Find the Log group name you noted in step 3 and view the details of the Log group.
  6. Check the Subscription filters tab and verify that a filter targeting a CloudTrail Firehose delivery stream ARN exists.
  7. From the trail's Home region, navigate to Kinesis service > Delivery Streams, select SplunkCCCloudTrailDeliveryStream, and verify that the status is active and that Source record transformation is enabled with SplunkCCCloudWatchLogProcessor as the Lambda function.
  8. Go to the Monitoring tab and inspect the graphs to verify that events are arriving, being processed by Lambda, and being delivered to Splunk. Useful graphs include: Incoming XXX per second, Lambda function processing success, and Delivery to Splunk success.
  9. Change the graph time interval to find plots, and check whether there are error logs in the Splunk logs and Amazon S3 logs tabs.
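
The subscription filter and delivery stream checks in steps 6 and 7 can also be scripted. The following Python (boto3) sketch assumes it runs in the trail's Home region with data account credentials; LOG_GROUP is a placeholder for the Log group name you noted in step 3.

  import boto3

  LOG_GROUP = "your-cloudtrail-log-group"  # placeholder: the name noted in step 3

  logs = boto3.client("logs")
  filters = logs.describe_subscription_filters(logGroupName=LOG_GROUP)["subscriptionFilters"]
  for f in filters:
      # Expect a filter whose destination is the CloudTrail Firehose delivery stream ARN.
      print("subscription filter target:", f["destinationArn"])

  firehose = boto3.client("firehose")
  stream = firehose.describe_delivery_stream(
      DeliveryStreamName="SplunkCCCloudTrailDeliveryStream"
  )["DeliveryStreamDescription"]
  print("delivery stream status:", stream["DeliveryStreamStatus"])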

GuardDuty data ingestion problems

During data ingestion, use the following troubleshooting tips if you see messages about GuardDuty.

Message: GuardDuty is disabled or suspended in the AWS regions selected
Tips: In the AWS console, complete the following steps:
  1. From the AWS console of the data accounts, navigate to GuardDuty service in the AWS regions selected.
  2. Verify that the service is enabled.
  3. If it is disabled, click Get Started.
  4. If it is suspended, click Re-enable GuardDuty.
Message: If the service is enabled and not suspended, but data ingestion issues remain
Tips: In the AWS console, complete the following steps (a scripted version appears after this table):
  1. From the AWS console of the data accounts, navigate to GuardDuty service in the AWS regions selected.
  2. Verify that there are findings.
  3. Navigate to EventBridge service > Rules in the same region and select SplunkDMGuardDutyEventBridgePatternRule.
  4. From rule details page, verify the following settings:
    • status = Enabled
    • event pattern = { "source": ["aws.guardduty"] }
    • Target = SplunkDMGuardDutyDeliveryStream
    • Monitoring = Metrics for the rule graph has TriggeredRules and Invocation metrics available. If not, revise the graph time interval and metric period. This rule is triggered only when new GuardDuty findings are available in the event bus. Therefore, the invocation graph can be empty depending on how many new findings the GuardDuty service in the region produces.
  5. From the region, navigate to Kinesis service > Delivery Streams, select SplunkDMGuardDutyDeliveryStream, and verify that status is active.
  6. Go to the Monitoring tab and inspect the graphs to verify that events are arriving and being delivered to Splunk. Useful graphs include: Incoming XXX per second, Lambda function processing success, and Delivery to Splunk success.
  7. Change the graph time interval to find plots and check if there are error logs in Splunk logs and Amazon S3 logs tabs.
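
A quick scripted version of the first few checks: the Python (boto3) sketch below reports whether GuardDuty is enabled and prints the state and event pattern of the SplunkDMGuardDutyEventBridgePatternRule rule. It assumes it runs in each selected region with data account credentials.

  import boto3

  guardduty = boto3.client("guardduty")
  detector_ids = guardduty.list_detectors()["DetectorIds"]
  if not detector_ids:
      print("GuardDuty is not enabled in this region")
  for detector_id in detector_ids:
      print("detector", detector_id, guardduty.get_detector(DetectorId=detector_id)["Status"])

  events = boto3.client("events")
  rule = events.describe_rule(Name="SplunkDMGuardDutyEventBridgePatternRule")
  print("rule state:", rule["State"])
  print("event pattern:", rule.get("EventPattern"))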

Security Hub data ingestion problems

During data ingestion, use the following troubleshooting tips if you see messages about Security Hub.

Message: Security Hub is disabled or suspended in the AWS regions selected
Tips: In the AWS console, complete the following steps:
  1. From the AWS console of the data accounts, navigate to Security Hub service in the AWS regions selected.
  2. Verify that the service is enabled.
  3. If it is disabled, click Go to Security Hub.
  4. Enable the service.
Message: Problem with AWS resources that are part of the data pipeline
Tips: If the previous Security Hub troubleshooting did not resolve the data ingestion issues, carry out a more detailed diagnosis (a scripted version appears after this table):
  1. From the AWS console of the data accounts, navigate to Security Hub service in the AWS regions selected.
  2. Verify that there are findings.
  3. Navigate to EventBridge service > Rules in the same region and select SplunkDMSecurityHubEventBridgePatternRule.
  4. From rule details page, verify the following settings:
    • status = Enabled
    • Monitoring = Metrics for the rule graph has TriggeredRules and Invocation metrics available. If not, revise the graph time interval and metric period. This rule is triggered only when new Security Hub findings are available in the event bus. Therefore, the invocation graph can be empty depending on how many new findings the Security Hub service in the region produces.
    • Target = SplunkDMSecurityHubDeliveryStream
    • event pattern = { "source": ["aws.securityhub"] }
  5. From the region, navigate to Kinesis service > Delivery Streams, select SplunkDMSecurityHubDeliveryStream, and verify that the status is active.
  6. Go to the Monitoring tab and inspect the graphs to verify that events are arriving and being delivered to Splunk. Useful graphs include: Incoming XXX per second, Lambda function processing success, and Delivery to Splunk success.
  7. Change the graph time interval to find plots and check if there are error logs in Splunk logs and Amazon S3 logs tabs.
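
Similarly, this Python (boto3) sketch checks that Security Hub is enabled and that the SplunkDMSecurityHubEventBridgePatternRule rule is enabled and targets the delivery stream. It assumes it runs in each selected region with data account credentials.

  import boto3
  from botocore.exceptions import ClientError

  securityhub = boto3.client("securityhub")
  try:
      print("Security Hub enabled:", securityhub.describe_hub()["HubArn"])
  except ClientError:
      print("Security Hub does not appear to be enabled in this region")

  events = boto3.client("events")
  rule = events.describe_rule(Name="SplunkDMSecurityHubEventBridgePatternRule")
  print("rule state:", rule["State"], "pattern:", rule.get("EventPattern"))
  targets = events.list_targets_by_rule(Rule="SplunkDMSecurityHubEventBridgePatternRule")["Targets"]
  print("targets:", [t["Arn"] for t in targets])  # expect SplunkDMSecurityHubDeliveryStream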

IAM Access Analyzer data ingestion problems

During data ingestion, use the following troubleshooting tips if you see messages about IAM Access Analyzer.

Message: IAM Access Analyzer is not created in the AWS regions selected
Tips: In the AWS console, complete the following steps:
  1. From the AWS console of the data accounts, navigate to Access reports > Analyzers within IAM service in the AWS regions selected.
  2. Verify that there is an access analyzer.
  3. If you cannot navigate to Access reports > Analyzers, there is no access analyzer. Create a new analyzer by clicking Create analyzer.
Message: Problem with AWS resources that are part of the data pipeline
Tips: If the previous IAM Access Analyzer troubleshooting did not resolve the data ingestion issues, carry out a more detailed diagnosis (a scripted version appears after this table):
  1. From the AWS console of the data accounts, navigate to IAM service > Access reports > Access Analyzers in the AWS regions selected.
  2. Verify that there are active findings.
  3. Navigate to EventBridge service > Rules in the same region and select SplunkDMIAMAccessAnalyzerEventBridgeRule.
  4. From rule details page, verify the following settings:
    • status = Enabled
    • event pattern = { "detail-type": [ "Access Analyzer Finding" ], "source": [ "aws.access-analyzer" ] }
    • Target = SplunkDMIAMAccessAnalyzerDeliveryStream
    • Monitoring = Metrics for the rule graph has TriggeredRules and Invocation metrics available. If not, revise the graph time interval and metric period. This rule is triggered only when new Access Analyzer findings are available in the event bus. Therefore, the invocation graph can be empty depending on how many new findings the Access Analyzer service in the region produces.
  5. From the region, navigate to Kinesis service > Delivery Streams, select SplunkDMIAMAccessAnalyzerDeliveryStream, and verify that the status is active.
  6. Go to the Monitoring tab and inspect the graphs to verify that events are arriving and being delivered to Splunk. Useful graphs include: Incoming XXX per second, Lambda function processing success, and Delivery to Splunk success.
  7. Change the graph time interval to find plots and check if there are error logs in Splunk logs and Amazon S3 logs tabs.
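
This Python (boto3) sketch lists the analyzers in the region and the state of the SplunkDMIAMAccessAnalyzerEventBridgeRule rule. It assumes it runs in each selected region with data account credentials.

  import boto3

  accessanalyzer = boto3.client("accessanalyzer")
  analyzers = accessanalyzer.list_analyzers()["analyzers"]
  if not analyzers:
      print("No access analyzer exists in this region")
  for analyzer in analyzers:
      print(analyzer["name"], analyzer["status"])

  events = boto3.client("events")
  rule = events.describe_rule(Name="SplunkDMIAMAccessAnalyzerEventBridgeRule")
  print("rule state:", rule["State"])
  print("event pattern:", rule.get("EventPattern"))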

IAM Credential Report data ingestion problems

During data ingestion, use the following troubleshooting tips if you see messages about IAM Credential Report.

Message: Problem with AWS resources that are part of the data pipeline
Tips: In the AWS console, complete the following steps (a scripted version appears after this table):
  1. From the AWS console of the data accounts, navigate to EventBridge service > Rules in the us-east-1 region and select SplunkDMIAMCredentialReportScheduleRule.
  2. From rule details page, verify the following settings:
    • status = Enabled
    • Target = SplunkDMIAMCredentialReport Lambda function
    • Monitoring = Metrics for the rule graph has TriggeredRules and Invocation metrics available. If not, revise the graph time interval and metric period. This rule periodically triggers the lambda function, so the graph should show invocations at the same interval.
  3. Navigate to Lambda service > Functions in the same region and select SplunkDMIAMCredentialReport.
  4. Go to the Monitor > Metrics tab and inspect the graphs to verify that the function is getting invoked using Invocations graph.
  5. Go to the Monitor > Logs, check out a log stream by selecting the log stream name, and verify that events are being successfully sent and that there is no error.
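
The schedule rule and Lambda invocations can be checked from a script as well. This Python (boto3) sketch runs against us-east-1 with data account credentials and sums the SplunkDMIAMCredentialReport function's Invocations metric over the last 24 hours; an empty result suggests the schedule rule is not triggering the function.

  from datetime import datetime, timedelta
  import boto3

  events = boto3.client("events", region_name="us-east-1")
  rule = events.describe_rule(Name="SplunkDMIAMCredentialReportScheduleRule")
  print("rule state:", rule["State"], "schedule:", rule.get("ScheduleExpression"))

  cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
  stats = cloudwatch.get_metric_statistics(
      Namespace="AWS/Lambda",
      MetricName="Invocations",
      Dimensions=[{"Name": "FunctionName", "Value": "SplunkDMIAMCredentialReport"}],
      StartTime=datetime.utcnow() - timedelta(days=1),
      EndTime=datetime.utcnow(),
      Period=3600,
      Statistics=["Sum"],
  )
  print("invocations in the last 24 hours:", sum(p["Sum"] for p in stats["Datapoints"]))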

Metadata ingestion problems

During data ingestion, use the following troubleshooting tips if you see messages about Metadata.

Message: Problem with AWS resources that are part of the data pipeline
Tips: In the AWS console, complete the following steps (a scripted version appears after this table):
  1. From the AWS console of the data accounts, navigate to EventBridge service > Rules in the us-east-1 region and select SplunkDMMetadataIAMUsersScheduleRule.
  2. From rule details page, verify the following settings:
    • status = Enabled
    • Target = SplunkDMMetadataIAMUsers Lambda function
    • Monitoring = Metrics for the rule graph has TriggeredRules and Invocation metrics available. If not, revise graph time interval and metric period. This rule periodically triggers the lambda function, so the graph should show invocations at the same interval.
  3. Navigate to Lambda service > Functions in the same region and select SplunkDMMetadataIAMUsers.
  4. Go to the Monitor > Metrics tab and inspect the graphs to verify that the function is getting invoked using Invocations graph.
  5. Go to the Monitor > Logs tab, check out a log stream by selecting the log stream name, and verify that events are being successfully sent and that there is no error.
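
For the metadata schedule rule, the same kind of check applies. This short Python (boto3) sketch prints the rule state, its schedule, and its targets so you can confirm that the SplunkDMMetadataIAMUsers function is attached; it assumes us-east-1 and data account credentials.

  import boto3

  events = boto3.client("events", region_name="us-east-1")
  rule = events.describe_rule(Name="SplunkDMMetadataIAMUsersScheduleRule")
  print("rule state:", rule["State"], "schedule:", rule.get("ScheduleExpression"))

  targets = events.list_targets_by_rule(Rule="SplunkDMMetadataIAMUsersScheduleRule")["Targets"]
  print("targets:", [t["Arn"] for t in targets])  # expect the SplunkDMMetadataIAMUsers function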

EC2 Security Groups data ingestion problems

During data ingestion, use the following troubleshooting tips if you see messages about EC2 Security Groups.

Message: Problem with AWS resources that are part of the data pipeline
Tips: In the AWS console, complete the following steps (a scripted version appears after this table):
  1. From the AWS console of the data accounts, navigate to EC2 service > Network & Security > Security Groups in the AWS regions selected.
  2. Verify that there are Security Groups in the regions.
  3. Navigate to EventBridge service > Rules in the same region and select SplunkDMMetadataEC2SGPatternRule.
  4. From rule details page, verify the following settings:
    • status = Enabled
    • event pattern = { "detail-type": [ "AWS API Call via CloudTrail" ], "source": [ "aws.ec2" ], "detail": { "eventSource": [ "ec2.amazonaws.com" ], "eventName": [ "CreateSecurityGroup" ] } }
    • Target = SplunkDMMetadataEC2SGPatternRule Lambda function
    • Monitoring = Metrics for the rule graph has TriggeredRules and Invocation metrics available. If not, revise graph time interval and metric period. This rule is triggered only when new Security Groups are created, so the invocation graph can be empty depending on the last time a new security group was created.
  5. Go back to EventBridge service > Rules in the region and select SplunkDMMetadataEC2SGScheduleRule.
  6. From rule details page, verify the following settings:
    • status = Enabled
    • Target = SplunkDMMetadataEC2SGPatternRule Lambda function
    • Monitoring = Metrics for the rule graph has TriggeredRules and Invocation metrics available. If not, revise the graph time interval and metric period. This rule periodically triggers the lambda function, so the graph should show invocations at the same interval.
  7. Navigate to Lambda service > Functions in the same region and select SplunkDMMetadataEC2SGPatternRule.
  8. Go to the Monitor > Metrics tab and inspect the graphs to verify that the function is getting invoked using Invocations graph.
  9. Go to the Monitor > Logs tab, check out a log stream by selecting the log stream name, and verify that events are being successfully sent and that there is no error.
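
Because the EC2 metadata pipelines all follow the same shape, which is a pattern rule, a schedule rule, and a Lambda function, a single helper can perform the EventBridge and Lambda portions of these checks. The Python (boto3) sketch below uses the Security Groups names from this section; the region is a placeholder, and the sketch assumes data account credentials.

  from datetime import datetime, timedelta
  import boto3

  def check_metadata_pipeline(pattern_rule, schedule_rule, function_name, region):
      """Print EventBridge rule states and recent Lambda invocations for one pipeline."""
      events = boto3.client("events", region_name=region)
      for rule_name in (pattern_rule, schedule_rule):
          rule = events.describe_rule(Name=rule_name)
          print(rule_name, "state:", rule["State"])

      cloudwatch = boto3.client("cloudwatch", region_name=region)
      stats = cloudwatch.get_metric_statistics(
          Namespace="AWS/Lambda",
          MetricName="Invocations",
          Dimensions=[{"Name": "FunctionName", "Value": function_name}],
          StartTime=datetime.utcnow() - timedelta(days=1),
          EndTime=datetime.utcnow(),
          Period=3600,
          Statistics=["Sum"],
      )
      print(function_name, "invocations in the last 24 hours:",
            sum(p["Sum"] for p in stats["Datapoints"]))

  # Security Groups pipeline; "us-west-2" is a placeholder region.
  check_metadata_pipeline(
      "SplunkDMMetadataEC2SGPatternRule",
      "SplunkDMMetadataEC2SGScheduleRule",
      "SplunkDMMetadataEC2SGPatternRule",  # Lambda function name as listed in the steps above
      "us-west-2",
  )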

EC2 Network ACLs data ingestion problems

During data ingestion, use the following troubleshooting tips if you see messages about EC2 Network ACLs.

Message: Problem with AWS resources that are part of the data pipeline
Tips: In the AWS console, complete the following steps (a scripted version appears after this table):
  1. From the AWS console of the data accounts, navigate to VPC service > Security > Network ACLs in the AWS regions selected.
  2. Verify that there are Network ACLs in the regions.
  3. Navigate to EventBridge service > Rules in the same region and select SplunkDMMetadataEC2NetworkAclPatternRule.
  4. From rule details page, verify the following settings:
    • status = Enabled
    • event pattern = { "detail-type": [ "AWS API Call via CloudTrail" ], "source": [ "aws.ec2" ], "detail": { "eventSource": [ "ec2.amazonaws.com" ], "eventName": [ "CreateNetworkAcl", "CreateNetworkAclEntry" ] } }
    • Target = SplunkDMMetadataEC2NetworkAcl Lambda function
    • Monitoring = Metrics for the rule graph has TriggeredRules and Invocation metrics available. If not, revise graph time interval and metric period. This rule is triggered only when new Network ACLs are created, so the invocation graph can be empty depending on the last time a new Network ACL was created.
  5. Go back to EventBridge service > Rules in the region and select SplunkDMMetadataEC2NetworkAclScheduleRule.
  6. From rule details page, verify the following settings:
    • status = Enabled
    • Target = SplunkDMMetadataEC2NetworkAcl Lambda function
    • Monitoring = Metrics for the rule graph has TriggeredRules and Invocation metrics available. If not, revise graph time interval and metric period. This rule periodically triggers the lambda function, so the graph should show invocations at the same interval.
  7. Navigate to Lambda service > Functions in the same region and select SplunkDMMetadataEC2NetworkAcl.
  8. Go to the Monitor > Metrics tab and inspect the graphs to verify that the function is getting invoked using Invocations graph.
  9. Go to the Monitor > Logs tab, check out a log stream by selecting the log stream name, and verify that events are being successfully sent and that there is no error.
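
The same check_metadata_pipeline sketch from the Security Groups section applies here with the Network ACL names from the steps above; the region remains a placeholder.

  check_metadata_pipeline(
      "SplunkDMMetadataEC2NetworkAclPatternRule",
      "SplunkDMMetadataEC2NetworkAclScheduleRule",
      "SplunkDMMetadataEC2NetworkAcl",
      "us-west-2",  # placeholder region
  )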

EC2 Instances data ingestion problems

During data ingestion, use the following troubleshooting tips if you see messages about EC2 Instances.

Message: Problem with AWS resources that are part of the data pipeline
Tips: In the AWS console, complete the following steps (a scripted version appears after this table):
  1. From the AWS console of the data accounts, navigate to EC2 service > Instances > Instances in the AWS regions selected.
  2. Verify that there are EC2 Instances in the regions.
  3. Navigate to EventBridge service > Rules in the same region and select SplunkDMMetadataEC2InstPatternRule.
  4. From rule details page, verify the following settings:
    • status = Enabled
    • event pattern = { "detail-type": [ "EC2 Instance State-change Notification" ], "source": [ "aws.ec2" ], "detail": { "state": [ "running" ] } }
    • Target = SplunkDMMetadataEC2Inst Lambda function
    • Monitoring = Metrics for the rule graph has TriggeredRules and Invocation metrics available. If not, revise the graph time interval and metric period. This rule is triggered only when EC2 instances change their state to "running," so the invocation graph can be empty depending on the last time an EC2 instance state changed to "running."
  5. Go back to EventBridge service > Rules in the region and select SplunkDMMetadataEC2InstScheduleRule.
  6. From rule details page, verify the following settings:
    • status = Enabled
    • Target = SplunkDMMetadataEC2Inst Lambda function
    • Monitoring = Metrics for the rule graph has TriggeredRules and Invocation metrics available. If not, revise the graph time interval and metric period. This rule periodically triggers the lambda function, so the graph should show invocations at the same interval.
  7. Navigate to Lambda service > Functions in the same region and select SplunkDMMetadataEC2Inst.
  8. Go to the Monitor > Metrics tab and inspect the graphs to verify that the function is getting invoked using Invocations graph.
  9. Go to the Monitor > Logs tab, check out a log stream by selecting the log stream name, and verify that events are being successfully sent and that there is no error.
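
Likewise, the check_metadata_pipeline sketch from the Security Groups section can be reused with the EC2 Instances names from the steps above; the region remains a placeholder.

  check_metadata_pipeline(
      "SplunkDMMetadataEC2InstPatternRule",
      "SplunkDMMetadataEC2InstScheduleRule",
      "SplunkDMMetadataEC2Inst",
      "us-west-2",  # placeholder region
  )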

Search for events and logs

Use the following searches to find events and logs. From the Splunk Cloud menu bar, click Apps > Search & Reporting.

Search for AWS events associated with a specific input ID.

index=aws_security datamanager_input_id=<input_id>

Search for Data Manager logs.

index=_internal source=/opt/splunk/var/log/splunk/data_manager_app.log

Last modified on 03 November, 2021
 

This documentation applies to the following versions of Data Manager: 1.4.0

