Splunk® Content Packs for ITSI and IT Essentials Work

Install and configure the Content Pack for Splunk Observability Cloud

Follow these high-level steps to configure the Content Pack for Splunk Observability Cloud:

  1. Install and configure the Splunk Infrastructure Monitoring and Splunk Synthetic Monitoring Add-ons. In a distributed deployment, install these add-ons on the search heads.
  2. Install the content pack.
  3. Import your entities.
  4. Review and tune KPI thresholds.

Prerequisites

Step 1: Install and configure the Splunk Infrastructure Monitoring and Splunk Synthetic Monitoring Add-ons

This content pack depends on data collected by the Splunk Infrastructure Monitoring Add-on and the Splunk Synthetic Monitoring Add-on. While you can safely install these add-ons on any tier of a distributed Splunk platform deployment, including heavy forwarders, indexers, and search heads, they must be installed on the search heads for this content pack to work. Download the latest versions of the Splunk Synthetic Monitoring Add-on and the Splunk Infrastructure Monitoring Add-on from Splunkbase.

Step 2: Create modular inputs for Splunk Infrastructure Monitoring and Splunk Application Performance Monitoring

After you've configured the Splunk Infrastructure Monitoring and Splunk Synthetic Monitoring Add-ons, create the modular inputs that feed data to this content pack. Prebuilt modular input queries for use with this content pack are provided in the Modular input queries section that follows these steps. See Configure inputs in the Splunk Infrastructure Monitoring Add-on in the Splunk® Infrastructure Monitoring Add-on manual to learn more about modular inputs.

Follow these steps to create a modular input for each data input you want to set up:

  1. Go to Settings > Data inputs.
  2. Locate the Splunk Infrastructure Monitoring Data Streams.
  3. Select +Add new.
  4. Enter a name for the input.
  5. Enter an Organization ID. This is the ID of the organization used to authenticate and fetch data from Splunk Infrastructure Monitoring. You can find your Organization ID on the Account Settings page in Splunk Observability Cloud or the Configuration page in the Splunk Infrastructure Monitoring Add-on.
  6. In the SignalFlow Program field, enter a modular input query from the list of modular input queries that follows these steps.
  7. Adjust the Metric Resolution and Additional Metadata Flag for your needs.
  8. Set Interval to 300.
  9. For Set sourcetype, select Manual.
  10. Enter stash for Source type.
  11. Enter $decideOnStartup for Host.
  12. Select sim_metrics for Index.

Repeat these steps for each data input you want to set up.
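If you set up several inputs, it can help to collect the settings from steps 4 through 12 in one place, for example when scripting input creation. The Python sketch below is illustrative only; the function name and dictionary keys are our own shorthand, not the add-on's actual inputs.conf attribute names, so verify them against your installed add-on before automating anything.

```python
def build_sim_input(name: str, org_id: str, signalflow: str) -> dict:
    """Assemble the input settings described in steps 4-12 (illustrative shorthand)."""
    return {
        "name": name,                      # step 4: a name for the input
        "organization_id": org_id,         # step 5: Observability Cloud organization ID
        "signalflow_program": signalflow,  # step 6: a prebuilt modular input query
        "metric_resolution": "300",        # step 7: placeholder; adjust for your needs
        "interval": "300",                 # step 8: run every 300 seconds
        "sourcetype": "stash",             # steps 9-10: sourcetype set manually to stash
        "host": "$decideOnStartup",        # step 11
        "index": "sim_metrics",            # step 12
    }

# Example: settings for a hypothetical AWS EC2 input.
settings = build_sim_input("sim_aws_ec2", "EXAMPLE_ORG",
                           "data('CPUUtilization').publish()")
```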

Modular input queries

Splunk Application Performance Monitoring (APM) Errors

"error_durations_p99 = data('service.request' + '.duration.ns.' + 'p99', filter=filter('sf_environment', '*') and filter('sf_service', '*') and filter('sf_error','*') and not filter('sf_dimensionalized', '*') and filter('sf_error', 'true'), rollup='max').mean(by=['sf_service', 'sf_environment', 'sf_error'], allow_missing=['sf_httpMethod']).publish(label='error_durations_p99') | non_error_durations_p99 = data('service.request' + '.duration.ns.' + 'p99', filter=filter('sf_environment', '*') and filter('sf_service', '*') and filter('sf_error','*') and not filter('sf_dimensionalized', '*') and filter('sf_error', 'false'), rollup='max').mean(by=['sf_service', 'sf_environment', 'sf_error'], allow_missing=['sf_httpMethod']).publish(label='non_error_durations_p99') | error_durations = data('service.request' + '.duration.ns.' + 'median', filter=filter('sf_environment', '*') and filter('sf_service', '*') and filter('sf_error','*') and not filter('sf_dimensionalized', '*') and filter('sf_error', 'true'), rollup='max').mean(by=['sf_service', 'sf_environment', 'sf_error'], allow_missing=['sf_httpMethod']).publish(label='error_durations') | non_error_durations = data('service.request' + '.duration.ns.' + 'median', filter=filter('sf_environment', '*') and filter('sf_service', '*') and filter('sf_error','*') and not filter('sf_dimensionalized', '*') and filter('sf_error', 'false'), rollup='max').mean(by=['sf_service', 'sf_environment', 'sf_error'], allow_missing=['sf_httpMethod']).publish(label='non_error_durations') | error_counts = data('service.request' + '.count', filter=filter('sf_environment', '*') and filter('sf_service', '*') and filter('sf_error','*') and not filter('sf_dimensionalized', '*') and filter('sf_error', 'true'), rollup='sum').sum(by=['sf_service', 'sf_environment', 'sf_error'], allow_missing=['sf_httpMethod']).publish(label='error_counts') | non_error_counts = data('service.request' + '.count', filter=filter('sf_environment', '*') and filter('sf_service', '*') and filter('sf_error','*') and not filter('sf_dimensionalized', '*') and filter('sf_error', 'false'), rollup='sum').sum(by=['sf_service', 'sf_environment', 'sf_error'], allow_missing=['sf_httpMethod']).publish(label='non_error_counts')"
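This query publishes paired error_counts and non_error_counts streams per service and environment. Downstream, an error-rate KPI can be derived from those two streams; the underlying arithmetic is simple enough to sketch in plain Python (illustrative only, not part of the content pack):

```python
def error_rate(error_count: float, non_error_count: float) -> float:
    """Percentage of requests that ended in error over one interval."""
    total = error_count + non_error_count
    if total == 0:
        # No traffic in the interval: report zero rather than divide by zero.
        return 0.0
    return 100.0 * error_count / total

# Example: 5 errors out of 100 total requests is a 5% error rate.
rate = error_rate(5, 95)
```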

Splunk Application Performance Monitoring (APM) Rate (Throughput)

"thruput_avg_rate = data('service.request.count', filter=filter('sf_environment', '*') and filter('sf_service', '*') and (not filter('sf_dimensionalized', '*')), rollup='rate').sum(by=['sf_service', 'sf_environment']).publish(label='thruput_avg_rate')"
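In this query, rollup='rate' converts the raw service.request.count counter into a per-second rate before it's summed across service instances. The arithmetic behind that rollup, sketched in Python for illustration:

```python
def per_second_rate(count_delta: float, interval_seconds: float) -> float:
    """Convert a count accumulated over one reporting interval into a per-second rate."""
    return count_delta / interval_seconds

# Example: 1500 requests observed over a 300-second interval.
throughput = per_second_rate(1500, 300)
```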

Splunk Infrastructure Monitoring AWS EC2 input stream

"data('CPUUtilization', filter=filter('stat', 'mean') and filter('namespace', 'AWS/EC2') and filter('InstanceId', '*'), rollup='average').publish(); data('NetworkIn', filter=filter('stat', 'sum') and filter('namespace', 'AWS/EC2') and filter('InstanceId', '*'), rollup='sum').publish(); data('NetworkOut', filter=filter('stat', 'sum') and filter('namespace', 'AWS/EC2') and filter('InstanceId', '*'), rollup='sum').publish(); data('NetworkPacketsIn', filter=filter('stat', 'sum') and filter('namespace', 'AWS/EC2') and filter('InstanceId', '*'), rollup='sum').publish(); data('NetworkPacketsOut', filter=filter('stat', 'sum') and filter('namespace', 'AWS/EC2') and filter('InstanceId', '*'), rollup='sum').publish(); data('DiskReadBytes', filter=filter('stat', 'sum') and filter('namespace', 'AWS/EC2') and filter('InstanceId', '*'), rollup='sum').publish(); data('DiskWriteBytes', filter=filter('stat', 'sum') and filter('namespace', 'AWS/EC2') and filter('InstanceId', '*'), rollup='sum').publish(); data('DiskReadOps', filter=filter('stat', 'sum') and filter('namespace', 'AWS/EC2') and filter('InstanceId', '*'), rollup='sum').publish(); data('DiskWriteOps', filter=filter('stat', 'sum') and filter('namespace', 'AWS/EC2') and filter('InstanceId', '*'), rollup='sum').publish(); data('StatusCheckFailed', filter=filter('stat', 'count') and filter('namespace', 'AWS/EC2') and filter('InstanceId', '*'), rollup='sum').publish(); " 

Splunk Infrastructure Monitoring AWS Lambda input stream

"data('Duration', filter=filter('stat', 'mean') and filter('namespace', 'AWS/Lambda') and filter('Resource', '*'), rollup='average').publish(); data('Errors', filter=filter('stat', 'sum') and filter('namespace', 'AWS/Lambda') and filter('Resource', '*'), rollup='sum').publish(); data('ConcurrentExecutions', filter=filter('stat', 'sum') and filter('namespace', 'AWS/Lambda') and filter('Resource', '*'), rollup='sum').publish(); data('Invocations', filter=filter('stat', 'sum') and filter('namespace', 'AWS/Lambda') and filter('Resource', '*'), rollup='sum').publish(); data('Throttles', filter=filter('stat', 'sum') and filter('namespace', 'AWS/Lambda') and filter('Resource', '*'), rollup='sum').publish(); "

Splunk Infrastructure Monitoring Azure input stream

"data('Percentage CPU', filter=filter('primary_aggregation_type', 'true') and filter('aggregation_type', 'average'), rollup='average').promote('azure_resource_name').publish(); data('Network In', filter=filter('primary_aggregation_type', 'true') and filter('aggregation_type', 'total'), rollup='sum' ).promote('azure_resource_name').publish(); data('Network Out', filter=filter('primary_aggregation_type', 'true') and filter('aggregation_type', 'total'), rollup='sum').promote('azure_resource_name').publish(); data('Inbound Flows', filter=filter('primary_aggregation_type', 'true') and filter('aggregation_type', 'average'), rollup='average').promote('azure_resource_name').publish(); data('Outbound Flows', filter=filter('primary_aggregation_type', 'true') and filter('aggregation_type', 'average'), rollup='average').promote('azure_resource_name').publish(); data('Disk Write Operations/Sec', filter=filter('primary_aggregation_type', 'true') and filter('aggregation_type', 'average'), rollup='average').promote('azure_resource_name').publish(); data('Disk Read Operations/Sec', filter=filter('primary_aggregation_type', 'true') and filter('aggregation_type', 'average'), rollup='average').promote('azure_resource_name').publish(); data('Disk Read Bytes', filter=filter('primary_aggregation_type', 'true') and filter('aggregation_type', 'total'), rollup='sum' ).promote('azure_resource_name').publish(); data('Disk Write Bytes', filter=filter('primary_aggregation_type', 'true') and filter('aggregation_type', 'total'), rollup='sum').promote('azure_resource_name').publish();" | "data('FunctionExecutionCount', filter=filter('primary_aggregation_type', 'true') and filter('aggregation_type', 'total') and filter('is_Azure_Function', 'true'), rollup='sum').publish(); data('Requests', filter=filter('primary_aggregation_type', 'true') and filter('aggregation_type', 'total') and filter('is_Azure_Function', 'true'), rollup='sum').publish(); data('FunctionExecutionUnits', filter=filter('primary_aggregation_type', 'true') and filter('aggregation_type', 'total') and filter('is_Azure_Function', 'true'), rollup='sum').publish(); data('AverageMemoryWorkingSet', filter=filter('primary_aggregation_type', 'true') and filter('aggregation_type', 'average') and filter('is_Azure_Function', 'true'), rollup='Average').publish(); data('AverageResponseTime', filter=filter('primary_aggregation_type', 'true') and filter('aggregation_type', 'average') and filter('is_Azure_Function', 'true'), rollup='Average').publish(); data('BytesSent', filter=filter('primary_aggregation_type', 'true') and filter('aggregation_type', 'total') and filter('is_Azure_Function', 'true'), rollup='sum').publish(); data('BytesReceived', filter=filter('primary_aggregation_type', 'true') and filter('aggregation_type', 'total') and filter('is_Azure_Function', 'true'), rollup='sum').publish(); data('CpuTime', filter=filter('primary_aggregation_type', 'true') and filter('aggregation_type', 'total') and filter('is_Azure_Function', 'true'), rollup='sum').publish(); data('Http5xx', filter=filter('primary_aggregation_type', 'true') and filter('aggregation_type', 'total') and filter('is_Azure_Function', 'true'), rollup='sum').publish();"

Splunk Infrastructure Monitoring Docker Containers input stream

"data('cpu.usage.total', filter=filter('plugin', 'docker'), rollup='rate').promote('plugin-instance', allow_missing=True).publish('DSIM:Docker Containers'); data('cpu.usage.system', filter=filter('plugin', 'docker'), rollup='rate').promote('plugin-instance', allow_missing=True).publish('DSIM:Docker Containers'); data('memory.usage.total', filter=filter('plugin', 'docker')).promote('plugin-instance', allow_missing=True).publish('DSIM:Docker Containers'); data('memory.usage.limit', filter=filter('plugin', 'docker')).promote('plugin-instance', allow_missing=True).publish('DSIM:Docker Containers'); data('blkio.io_service_bytes_recursive.write', filter=filter('plugin', 'docker'), rollup='rate').promote('plugin-instance', allow_missing=True).publish('DSIM:Docker Containers'); data('blkio.io_service_bytes_recursive.read', filter=filter('plugin', 'docker'), rollup='rate').promote('plugin-instance', allow_missing=True).publish('DSIM:Docker Containers'); data('network.usage.tx_bytes', filter=filter('plugin', 'docker'), rollup='rate').scale(8).promote('plugin-instance', allow_missing=True).publish('DSIM:Docker Containers'); data('network.usage.rx_bytes', filter=filter('plugin', 'docker'), rollup='rate').scale(8).promote('plugin-instance', allow_missing=True).publish('DSIM:Docker Containers');"
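Note the .scale(8) applied to the network.usage.tx_bytes and rx_bytes streams: it converts a bytes-per-second rate into bits per second, the conventional unit for network throughput. The equivalent arithmetic in Python, for illustration:

```python
def bytes_to_bits(bytes_per_second: float) -> float:
    """Convert a bytes/sec rate to bits/sec, as SignalFlow's .scale(8) does here."""
    return bytes_per_second * 8

# Example: a container transmitting 125 bytes/sec is moving 1000 bits/sec.
bits = bytes_to_bits(125)
```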

Splunk Infrastructure Monitoring GCP input stream

"data('instance/cpu/utilization', filter=filter('instance_id', '*'), rollup='average').publish(); data('instance/network/sent_packets_count', filter=filter('instance_id', '*'), rollup='sum').publish(); data('instance/network/received_packets_count', filter=filter('instance_id', '*'), rollup='sum').publish(); data('instance/network/received_bytes_count', filter=filter('instance_id', '*'), rollup='sum').publish(); data('instance/network/sent_bytes_count', filter=filter('instance_id', '*'), rollup='sum').publish(); data('instance/disk/write_bytes_count', filter=filter('instance_id', '*'), rollup='sum').publish(); data('instance/disk/write_ops_count', filter=filter('instance_id', '*'), rollup='sum').publish(); data('instance/disk/read_bytes_count', filter=filter('instance_id', '*'), rollup='sum').publish();data('instance/disk/read_ops_count', filter=filter('instance_id', '*'), rollup='sum').publish();" | "data('function/execution_times', rollup='latest').scale(0.000001).publish(); data('function/user_memory_bytes', rollup='average').publish(); data('function/execution_count', rollup='sum').publish(); data('function/active_instances', rollup='latest').publish(); data('function/network_egress', rollup='sum').publish();"
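The .scale(0.000001) applied to function/execution_times converts the reported value from nanoseconds to milliseconds (1 ms = 1,000,000 ns). The same conversion in Python, for illustration:

```python
def ns_to_ms(nanoseconds: float) -> float:
    """Convert nanoseconds to milliseconds, as SignalFlow's .scale(0.000001) does here."""
    return nanoseconds * 0.000001

# Example: a 2,500,000 ns execution is 2.5 ms.
ms = ns_to_ms(2_500_000)
```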

Splunk Infrastructure Monitoring Kubernetes input stream

"data('container_cpu_utilization', filter=filter('k8s.pod.name', '*'), rollup='rate').promote('plugin-instance', allow_missing=True).publish('DSIM:Kubernetes'); data('container.memory.usage', filter=filter('k8s.pod.name', '*')).promote('plugin-instance', allow_missing=True).publish('DSIM:Kubernetes'); data('kubernetes.container_memory_limit', filter=filter('k8s.pod.name', '*')).promote('plugin-instance', allow_missing=True).publish('DSIM:Kubernetes'); data('pod_network_receive_errors_total', filter=filter('k8s.pod.name', '*'), rollup='rate').publish('DSIM:Kubernetes'); data('pod_network_transmit_errors_total', filter=filter('k8s.pod.name', '*'), rollup='rate').publish('DSIM:Kubernetes');"

Splunk Infrastructure Monitoring OS Hosts input stream

"data('cpu.utilization', filter=(not filter('agent', '*'))).promote('host','host_kernel_name','host_linux_version','host_mem_total','host_cpu_cores', allow_missing=True).publish('DSIM:Hosts (Smart Agent/collectd)'); data('memory.free', filter=(not filter('agent', '*'))).sum(by=['host']).publish('DSIM:Hosts (Smart Agent/collectd)'); data('disk_ops.read', rollup='rate').sum(by=['host.name']).publish('DSIM:Hosts (Smart Agent/collectd)'); data('disk_ops.write', rollup='rate').sum(by=['host.name']).publish('DSIM:Hosts (Smart Agent/collectd)'); data('memory.available', filter=(not filter('agent', '*'))).sum(by=['host']).publish('DSIM:Hosts (Smart Agent/collectd)'); data('memory.used', filter=(not filter('agent', '*'))).sum(by=['host']).publish('DSIM:Hosts (Smart Agent/collectd)'); data('memory.buffered', filter=(not filter('agent', '*'))).sum(by=['host']).publish('DSIM:Hosts (Smart Agent/collectd)'); data('memory.cached', filter=(not filter('agent', '*'))).sum(by=['host']).publish('DSIM:Hosts (Smart Agent/collectd)'); data('memory.active', filter=(not filter('agent', '*'))).sum(by=['host']).publish('DSIM:Hosts (Smart Agent/collectd)'); data('memory.inactive', filter=(not filter('agent', '*'))).sum(by=['host']).publish('DSIM:Hosts (Smart Agent/collectd)'); data('memory.wired', filter=(not filter('agent', '*'))).sum(by=['host']).publish('DSIM:Hosts (Smart Agent/collectd)'); data('df_complex.used', filter=(not filter('agent', '*'))).sum(by=['host']).publish('DSIM:Hosts (Smart Agent/collectd)'); data('df_complex.free', filter=(not filter('agent', '*'))).sum(by=['host']).publish('DSIM:Hosts (Smart Agent/collectd)'); data('memory.utilization', filter=(not filter('agent', '*'))).promote('host',allow_missing=True).publish('DSIM:Hosts (Smart Agent/collectd)'); data('vmpage_io.swap.in', filter=(not filter('agent', '*')), rollup='rate').promote('host',allow_missing=True).publish('DSIM:Hosts (Smart Agent/collectd)'); data('vmpage_io.swap.out', filter=(not filter('agent', '*')), rollup='rate').promote('host',allow_missing=True).publish('DSIM:Hosts (Smart Agent/collectd)'); data('if_octets.tx', rollup='rate').scale(8).mean(by=['host.name']).publish('DSIM:Hosts (Smart Agent/collectd)'); data('if_octets.rx', rollup='rate').scale(8).mean(by=['host.name']).publish('DSIM:Hosts (Smart Agent/collectd)'); data('if_errors.rx', rollup='delta').sum(by=['host.name']).publish('DSIM:Hosts (Smart Agent/collectd)'); data('if_errors.tx', rollup='delta').sum(by=['host.name']).publish('DSIM:Hosts (Smart Agent/collectd)');"


Step 3: Install the Content Pack for Splunk Observability Cloud

The Content Pack for Splunk Observability Cloud is automatically available for installation once you've installed the Splunk App for Content Packs on a search head running ITSI 4.9.0 or higher or IT Essentials Work 4.9.0 or higher. For setup steps, go to the installation instructions for the Splunk App for Content Packs. After you install the Splunk App for Content Packs, follow these steps to install the content pack:

  1. From the ITSI main menu, click Configuration > Data Integrations.
  2. Click Add structure to your data.
  3. Select the Splunk Observability Cloud content pack.
  4. Review what's included in the content pack and then click Proceed.
  5. Configure the settings:
    • Choose which objects to install: For a first-time installation, select the items you want to install and deselect any you're not interested in. For an upgrade, the installer identifies which objects from the content pack are new and which ones already exist in your environment from a previous installation. You can selectively choose which objects to install from the new version or install all objects.
    • Choose a conflict resolution rule for the objects you install: For upgrades or subsequent installs, decide what happens to duplicate objects introduced from the content pack. Choose from these options:
      • Install as new: Any existing identical objects in your environment remain intact.
      • Replace existing: Existing identical objects are replaced with those from the new installation. Any changes you previously made to these objects are overwritten.
    • Import as enabled: Select whether to install objects as enabled or leave them in their original state. We recommend that you import objects as disabled to ensure your environment doesn't break from the addition of new content. This setting only applies to services, correlation searches, and aggregation policies. All other objects such as KPI base searches and saved searches are installed in their original state regardless of the option you choose.
    • Add a prefix to your new objects: Optionally, append a custom prefix to each object installed from the content pack. For example, you might prefix your objects with CP- to indicate they came from a content pack. This option can help you locate and manage the objects after installation.
    • Backfill service KPIs: Optionally backfill your ITSI environment with the previous seven days of KPI data. Consider enabling backfill if you want to configure adaptive thresholding and predictive analytics for the new services. This setting only applies to KPIs, not service health scores.
  6. When you've made your selections, click Install selected.
  7. Click Install to confirm the installation. When the installation completes you can view all objects that were installed in your environment. A green checkmark on the Data Integrations page indicates which content packs you've already installed.

Step 4: Enable your Splunk Observability Cloud entity searches

This content pack includes nine entity discovery searches, all disabled by default. When you're ready to get your data in, follow these steps to enable the entity discovery searches for Splunk Observability Cloud.

  1. In Splunk Enterprise go to Settings > Searches, reports, and alerts.
  2. In the Type dropdown, select All.
  3. In the App dropdown, select Splunk Observability Cloud (DA-ITSI-CP-splunk-observability).
  4. In the Owner dropdown, select All.
  5. Depending on the searches you selected to install, these entity discovery searches are available to enable:
    • ITSI Import Objects - Get_OS_Hosts
    • ITSI Import Objects - Get_SIM_AWS_EC2
    • ITSI Import Objects - Get_SIM_AWS_Lambdas
    • ITSI Import Objects - Get_SIM_Azure_Functions
    • ITSI Import Objects - Get_SIM_Azure_VM
    • ITSI Import Objects - Get_SIM_GCP_Compute
    • ITSI Import Objects - Get_SIM_GCP_Functions
    • ITSI Import Objects - Get_SSM_Entities
    • ITSI Import Objects - Splunk-APM Application Entity Search
  6. Select Edit > Enable for each search you want to enable.

When you've finished enabling the entity searches to import your entities, go to the Service Analyzer > Default Analyzer to see your services and KPIs light up.

Content pack objects are disabled by default on import. If you didn't toggle the option to import the content pack objects as enabled, you have to enable them under Configuration > Services. Once you've enabled the services, the Service Analyzer lights up.

Step 5: Configure KPI thresholds

Some KPIs in this content pack have predetermined aggregate and per-entity thresholds configured. Go through the KPIs in each service and set the aggregate and per-entity threshold values to reasonable defaults for your use case. For steps to configure KPI thresholds, see Configure KPI thresholds in ITSI in the Service Insights manual.

For a full list of the KPIs in this content pack, see the KPI reference for the Content Pack for Splunk Observability Cloud.

KPI alerting

Because acceptable application performance varies widely per use case, KPI alerting isn't enabled by default in this content pack. To receive alerts for KPIs when aggregate KPI threshold values change, see Receive alerts when KPI severity changes in ITSI. ITSI generates notable events on the Episode Review page based on the alerting rules you configure.

Step 6: (Optional) Configure navigation suggestions for custom subdomains

If your Observability Cloud instance uses a custom subdomain, you must update the navigation suggestion links for the SplunkAPM and Splunk Infrastructure Monitoring entity types with your custom subdomain. Follow these steps for both entity types.

  1. Go to Configuration > Entities from the ITSI menu.
  2. Go to the Entity Types tab.
  3. Follow these steps to update the navigation suggestions for both the SplunkAPM and Splunk Infrastructure Monitoring entity types.
    1. Edit the entity type.
    2. Expand the Navigations section.
    3. Edit the navigation.
    4. Change the app.{sf_realm}.signalfx.com portion of the URL to {custom-subdomain}.signalfx.com.
    5. Click Save navigation.
    6. Repeat these steps for each navigation.
  4. Select Save to save the entity type when you've finished updating each navigation.
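The URL edit in the steps above amounts to replacing the app.{sf_realm}.signalfx.com host with your custom subdomain. If you have many navigations to update, a small script can perform the substitution on resolved URLs; the sketch below is illustrative only, and the acme subdomain is hypothetical:

```python
import re

def apply_custom_subdomain(url: str, subdomain: str) -> str:
    """Swap the default app.<realm>.signalfx.com host for a custom subdomain."""
    return re.sub(r"app\.[^./]+\.signalfx\.com",
                  f"{subdomain}.signalfx.com", url)

# Example: rewrite a navigation link for a hypothetical "acme" subdomain.
new_url = apply_custom_subdomain("https://app.us1.signalfx.com/#/apm", "acme")
```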

Next steps

Now that you've installed and configured the Content Pack for Splunk Observability Cloud, you can start using its dashboards and visualizations to monitor your environment. For instructions on using the content pack, see Use the Content Pack for Splunk Observability Cloud.

Last modified on 27 October, 2021

This documentation applies to the following versions of Splunk® Content Packs for ITSI and IT Essentials Work: current

