Splunk® Content Packs for ITSI and IT Essentials Work

About the correlation searches in the Content Pack for Monitoring and Alerting

This topic describes the correlation searches included in the IT Service Intelligence (ITSI) Content Pack for Monitoring and Alerting. Enable the searches that are appropriate for the monitoring you want to conduct across your IT environment. To access these searches, you must first install and configure the Content Pack for Monitoring and Alerting. For instructions, see Install and configure the Content Pack for Monitoring and Alerting.

This content pack provides the following types of correlation searches:

  • Episode Monitoring: Monitor the episodes in your environment. When an episode meets the conditions of the search, ITSI creates a notable event and the alert actions in the aggregation policy run.
  • Service Monitoring: Monitor the health of the services and KPIs within your ITSI environment. These searches generate notable events based on various conditions affecting services, KPIs, and entities.
  • Universal Alerting: Simplify and speed up the onboarding of external alert sources (such as Nagios or SolarWinds) into ITSI.

Episode Monitoring - All Services and KPIs Return to Normal - Deprecated

Although you can still use this correlation search, it has been deprecated in favor of a newer, more flexible episode monitoring correlation search called "Episode Monitoring - Set Episode to Highest Alarm Severity." This search was originally designed to create an "all clear" episode monitoring notable event when services and KPIs return to a healthy state, but it does not support the same all-clear behavior when the episode contains notable events from external alerts, which limits its usefulness. The "Set Episode to Highest Alarm Severity" correlation search supports both service and KPI notable events as well as notable events from external sources.

This correlation search creates a notable event within an episode when every service and KPI in the episode returns to a healthy state. This notable event confirms that the services within the episode are all healthy. If you enable this correlation search, you can also choose to configure an action in one of your enabled aggregation policies to close the episode when this notable event is detected.

Which notable events qualify for this search

All notable events associated with a service health score or KPI result qualify for this correlation search.

This search excludes episodes created by the Default aggregation policy.

What to do before enabling

Review the search capacity and utilization in your environment to ensure the system has enough resources to run an additional search.

For more information about KPI importance values, see Set KPI importance values in ITSI in the Service Insights manual.

How to reduce noise

To reduce the number of alerts created by this search, increase the amount of time the services and KPIs have to be unhealthy before an event is created.

Episode Monitoring - Concentration of High and Critical Notable Events added to Episode

This correlation search creates an alert when multiple high and critical notable events across multiple services and KPIs are added to an episode in a short period of time. This alert can provide early warning of a potentially serious problem as it develops.

Which episodes, notable events, and KPIs qualify for this search

Only notable events with a severity level of High or Critical qualify for this search. Only KPIs with an importance value of Medium (5) or higher qualify for this search.

This search excludes episodes created by the Default aggregation policy.

What to do before enabling

Review the search capacity and utilization in your environment to ensure the system has enough resources to run an additional search.

How to reduce noise

Consider the following suggestions if this correlation search generates too many alerts in Episode Review:

  • Alter the search to only look for Critical severity levels.
  • Alter the search to look for higher volumes of notable events before creating an alert.
  • Alter the search to look for notable events across a larger number of services and KPIs before creating an alert.
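These adjustments typically amount to raising the thresholds in the search's final filtering clause. Conceptually, the change resembles the following sketch; the field names (max_severity, notable_count, distinct_service_count) and thresholds are illustrative, not the shipped search's actual fields:

```
... existing correlation search logic ...
| where max_severity == 6 AND notable_count >= 10 AND distinct_service_count >= 4
```

Here max_severity == 6 restricts results to Critical (6 is critical, 5 is high in ITSI), while the higher counts require more events across more services before an alert is created.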

Episode Monitoring - Critical Notable Event Added to Episode

This correlation search alerts you when a notable event with critical severity is added to an episode if the notable event is associated with a service health score or a KPI with high importance.

Which notable events, KPIs, and episodes qualify for this search

This correlation search supports the following event types:

  • All notable events associated with service health score results.
  • Notable events associated with aggregate KPI results. Only applies if the KPI importance value is High (6 or greater).
  • Notable events associated with per-entity KPI results. Only applies if the KPI importance value is High (6 or greater).

This search excludes episodes created by the Default aggregation policy.

What to do before enabling

To reduce overall noise, this correlation search only includes notable events from KPIs whose importance value is above 5, which is the default setting. Before enabling this search, perform the following steps:

  • Review the importance values of KPIs that you know are strong indicators of the health of a service. Determine whether to increase their importance values above 5 so they qualify for this search.
  • Review the severity distribution for any KPIs whose importance value is already greater than the default setting (5) to ensure they're not producing excessive critical notable events.
  • Review the search capacity and utilization in your environment to ensure the system has enough resources to run an additional search.

For more information about KPI importance values, see Set KPI importance values in ITSI in the Service Insights manual.

How to reduce noise

Consider the following suggestions if this correlation search generates too many alerts in Episode Review:

  • Reduce the importance value of any KPI that's excessively critical.
  • Modify the search to increase the count of critical events that must be added to an episode before the correlation search generates an alert. For example, you might alert if the count of critical notable events in the episode is 5 or more.
  • Alter the search to look only for service health scores and aggregate KPI results.

For more information about configuring correlation searches, see Generate events with correlation searches in ITSI in the Service Insights manual.

Episode Monitoring - Episode Risk Well Above Historical Average

This correlation search creates an alert when the severity level of an episode increases to more than three standard deviations above the historical severity levels of similar episodes. A historical episode is considered similar if it has the same title.

For this correlation search, "risk" represents a moment-in-time value. The search calculates risk by monitoring the number of notable events flowing into an episode, the severity of each event, and the importance values of the KPIs the events are tied to. For example, if an episode is amassing higher volumes of increasingly critical events, the entire episode's risk is on the rise. The search logic calculates a numeric measure of the current risk and compares it to historical risk scores of episodes with the same title. The overall goal of this correlation search is to determine if a particular issue is worse than it's ever been before.
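Conceptually, the comparison resembles the following SPL sketch. The itsi_episode_historical_risk_levels lookup is the one this search depends on, but the field names current_risk, avg_risk, and stdev_risk are illustrative; the actual lookup schema may differ:

```
... risk scoring logic ...
| stats latest(risk_score) as current_risk by itsi_group_title
| lookup itsi_episode_historical_risk_levels itsi_group_title OUTPUT avg_risk stdev_risk
| where current_risk > avg_risk + (3 * stdev_risk)
```

The final where clause implements the "more than three standard deviations above the historical average" condition for episodes with the same title.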

Which episodes qualify for this search

All episodes generated in your environment qualify for this correlation search, except those created by the default aggregation policy. For more information on this policy, see About the default aggregation policy in ITSI in the Event Analytics manual.

What to do before enabling

This search depends on the itsi_episode_historical_risk_levels lookup that's dynamically populated by the ITSI Historical Episode Risk Levels Generator scheduled saved search. Before enabling this correlation search, you must schedule the saved search to regularly run.

To schedule the saved search, perform the following steps:

  1. On the ITSI navigation bar, click Search > Reports.
  2. Locate the ITSI Historical Episode Risk Levels Generator search and click Edit > Edit Schedule.
  3. Select Enable and Schedule Report.
  4. Click Save.
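If you manage Splunk configuration as files rather than through Splunk Web, the equivalent change is a scheduled-search stanza in savedsearches.conf. The cron schedule below is only an example; choose an interval appropriate for your environment:

```
# savedsearches.conf, in the app that owns the search
[ITSI Historical Episode Risk Levels Generator]
enableSched = 1
cron_schedule = 0 * * * *
```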

In addition, review the search capacity and utilization in your environment to ensure the system has enough resources to run an additional search.

How to reduce noise

Consider the following suggestions if this correlation search generates too many alerts in Episode Review:

  • Increase the upper bound to something beyond three standard deviations above historical severity levels.
  • Alter the search to filter out episode titles (itsi_group_title) that are known to be noisy.

Episode Monitoring - First Time Seen Episode

This correlation search creates a notable event when a high or critical notable event is added to an episode and there have been no other episodes with the same title in the last 30 days.

Which episodes and notable events qualify for this search

All episodes with a severity level of High or Critical qualify for this correlation search, except for those created by the default aggregation policy. For more information, see About the default aggregation policy in ITSI in the Event Analytics manual.

This search doesn't attempt to identify and exclude newly introduced systems and services that produce never-before-seen episode titles. Such episodes always qualify for the search the first time they occur.

What to do before enabling

Review the search capacity and utilization in your environment to ensure the system has enough resources to run an additional search.

How to reduce noise

If this correlation search generates too many alerts in Episode Review, consider altering the search to look back longer than 30 days when determining if an episode has ever been seen.

Episode Monitoring - Notable Event with Alert Attribute added to Episode

This correlation search creates an alert when an episode contains a notable event with the field alert=1. This search lets you define explicit alert conditions in your environment by building logic into correlation searches that set the alert field to 1 when a notable event should produce an alert.

Which notable events and episodes qualify for this search

All notable event types qualify for this search. However, this search excludes episodes created by the Default aggregation policy.

What to do before enabling

By default, no correlation search in this content pack sets an alert field on notable events. Therefore, before enabling this search, enhance the logic of one or more correlation searches, or define new custom correlation searches, that set alert=1 whenever the resulting notable event must produce an alert.
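For example, a custom correlation search might end with an eval similar to the following sketch, which flags only critical results from a hypothetical high-priority host group. The host naming pattern is purely illustrative; 6 is the critical severity level in ITSI:

```
... your correlation search logic ...
| eval alert=if(severity_id>=6 AND match(host, "^prod-"), 1, 0)
```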

For more information about configuring correlation searches, see Overview of correlation searches in ITSI in the Event Analytics manual.

In addition, review the search capacity and utilization in your environment to ensure the system has enough resources to run an additional search.

How to reduce noise

If this correlation search produces too many false positive alerts, review and tune the custom logic you defined to set alert=1 in your correlation searches.

Episode Monitoring - Set Episode to Highest Alarm Severity

This correlation search creates a notable event for an episode when one of these conditions is met:

  • For each Event Type swim lane, if the highest swim lane severity differs from the overall episode severity, the episode severity is adjusted to match the severity of the "hottest" swim lane.
  • If all Event Type swim lanes have cleared (severity is normal/green), and if no new notable events have been added in about 10 minutes, the episode is closed.

Unlike the other Episode Monitoring correlation searches, Episode Monitoring - Set Episode to Highest Alarm Severity modifies the severity and status of episodes directly, rather than providing a human alert point.

Which notable events and episodes qualify for this search

All episodes generated in your environment qualify for this correlation search. It can be used with any episodes that are expected to have multiple Event Type "swim lanes". The four "Episodes by" Notable Event Aggregation Policies included in the Content Pack for Monitoring and Alerting are configured to use this correlation search.

What to do before enabling

This correlation search is recommended, especially if any of the four "Episodes by" Notable Event Aggregation Policies are enabled. It is configured to run Every 5 Minutes, and should not be configured to run on a shorter interval.

As always, review the search capacity and utilization in your environment to ensure the system has enough resources to run an additional search.

How to reduce noise

Noise reduction is not applicable for this correlation search.


Service Monitoring - Degraded Service or KPI Returns to Normal

This correlation search creates a notable event when the severity of a service or KPI which previously created a notable event in an episode has returned to a healthy state. This event provides visual affirmation within the episode timeline that a previously unhealthy service or KPI is healthy again.

Which services and KPIs qualify for this search

To qualify for this correlation search, a service or KPI must have previously created a notable event within an episode. The notable event must have been produced by one of the Service Monitoring correlation searches from this content pack.

What to do before enabling

Review the search capacity and utilization in your environment to ensure the system has enough resources to run an additional search.

Service Monitoring - Entity Degraded

This correlation search creates a notable event when an entity's severity is high or critical for any KPI that's split by entity. This correlation search requires you to configure per-entity thresholds. For more information about per-entity thresholds, see Step 7: Set thresholds in Overview of creating KPIs in ITSI in the Service Insights manual.

Which entities qualify for this search

To qualify for this search, entities must be associated with a KPI with an importance value of medium (5) or higher and be associated with a KPI that has per-entity thresholds configured.

For more information about KPI importance values, see Set KPI importance values in ITSI in the Service Insights manual.

What to do before enabling

Perform the following steps before you enable this correlation search:

  1. Configure several KPIs to use per-entity thresholds, as the correlation search depends on per-entity thresholds.
  2. Review historical entity severity levels to identify and investigate entities which have been abnormal for long periods of time. Excessively unhealthy entities produce a large volume of notable events with this correlation search.
    1. From the ITSI main menu click Dashboards > Dashboards.
    2. Open the ITSI Service and KPI Severity Analytics dashboard.
    3. Use the result type filter to filter on Per-Entity Results.
    4. Review the historical performance of entity-level KPI results and determine if certain entities have performed abnormally for a long period of time.
  3. Review the search capacity and utilization in your environment to ensure the system has enough resources to run an additional search.

How to reduce noise

Consider the following suggestions if this correlation search generates too many notable events in Episode Review:

  • Make sure per-entity thresholds are tuned and accurate.
  • From the service configuration, lower the importance level of noisy KPIs below medium (5) so they no longer qualify for this search.
  • Configure a maintenance window for entities with known and expected degradations. For instructions, see Schedule maintenance downtime in ITSI.
  • Modify the correlation search to only alert on critical results.
  • Schedule the search to run less often. For example, once every 15 minutes.
  • Modify the search to filter out entities that you know produce a lot of alerts.
  • Modify the search to exclude results from KPIs and services that aren't considered high priority.

For more information about configuring correlation searches, see Generate events with correlation searches in ITSI in the Service Insights manual.

Service Monitoring - Entity for KPI with Highest (11) Importance Degraded

This correlation search creates a notable event when ALL of the following are true:

  • An entity's severity is high or critical for a KPI that's split by entity.
  • The KPI has configured per-entity thresholds.
  • The KPI has an importance value of 11.

Entity degradation like this can have a profound impact on the overall health score of a service.

For more information about KPI importance values, see Set KPI importance values in ITSI in the Service Insights manual.

KPIs with an importance value of 11 are a special case that represents a minimum health indicator for the service. When a KPI with an importance value of 11 reaches the Critical severity level, the overall service health score turns critical, regardless of the status of other KPIs in the service.

Which KPIs qualify for this search

Only KPIs with an importance value of 11 qualify for this search.

What to do before enabling

Perform the following steps before you enable this correlation search:

  1. Configure several KPIs to use per-entity thresholds, as the correlation search depends on per-entity thresholds.
  2. Review historical entity severity levels to identify and investigate entities which have been abnormal for long periods of time. Excessively unhealthy entities produce a large volume of notable events with this correlation search.
    1. From the ITSI main menu click Dashboards > Dashboards.
    2. Open the ITSI Service and KPI Severity Analytics dashboard.
    3. Use the result type filter to filter on Per-Entity Results.
    4. Review the historical performance of entity-level KPI results and determine if certain entities have performed abnormally for a long period of time.
  3. Review the search capacity and utilization in your environment to ensure the system has enough resources to run an additional search.

How to reduce noise

Consider the following suggestions if this correlation search generates too many notable events in Episode Review:

  • Make sure per-entity thresholds are tuned and accurate.
  • From the service configuration, lower the importance level of noisy KPIs below medium (5) so they no longer qualify for this search.
  • Configure a maintenance window for entities with known and expected degradations. For instructions, see Schedule maintenance downtime in ITSI.
  • Modify the correlation search to only alert on critical results.
  • Schedule the search to run less often. For example, once every 15 minutes.
  • Modify the search to filter out entities that you know produce a lot of alerts.
  • Modify the search to exclude results from KPIs and services that aren't considered high priority.

For more information about configuring correlation searches, see Generate events with correlation searches in ITSI in the Event Analytics manual.

Service Monitoring - KPI Degraded

This correlation search creates a notable event when the aggregate KPI severity is high or critical for any KPI with an importance value of medium (5) or higher.

Which KPIs qualify for this search

Only KPIs with importance values of medium (5) or higher qualify for this correlation search.

What to do before enabling

Review historical KPI severity levels to identify and investigate KPIs which have been abnormal for long periods of time. Excessively unhealthy KPIs produce a large volume of notable events with this correlation search.

  1. From the ITSI main menu, click Dashboards > Dashboards.
  2. Open the ITSI Service and KPI Severity Analytics dashboard.
  3. Use the result type filter to view KPI Results.
  4. Review the historical performance of KPI results and determine if certain KPIs have performed abnormally for a long period of time.

In addition, review the search capacity and utilization in your environment to ensure the system has enough resources to run an additional search.

How to reduce noise

Consider the following suggestions if this correlation search generates too many notable events in Episode Review:

  • Make sure KPI thresholds are tuned and accurate. For more information about thresholding, see Configure KPI thresholds in ITSI.
  • From the service configuration, lower the importance level of noisy KPIs below medium (5) so they no longer qualify for this search.
  • Modify the correlation search to only alert on critical results.
  • For poorly tuned thresholds or KPIs that are difficult to threshold, consider using the INFO level severity only.
  • Schedule the search to run less often. For example, once every 15 minutes.
  • Modify the search to exclude results from KPIs and services that aren't considered high priority.

For more information about configuring correlation searches, see Generate events with correlation searches in ITSI in the Event Analytics manual.

Service Monitoring - Rarely Degraded Service or KPI

This correlation search creates a notable event when the severity of a service health score or KPI is not normal but the service or KPI has otherwise been normal for the last seven days.

Which services and KPIs qualify for this search

Services and KPIs must have the following characteristics to qualify for this search:

  • Services must have a severity level of Medium or higher.
  • Aggregate KPIs must have a severity level of High or Critical. Per-entity KPI values do not qualify for this search.
  • Services and KPIs must have seven or more days of historical data.

What to do before enabling

This search depends on the itsi_summary data model shipped with this content pack. Before enabling this correlation search, you must accelerate the data model.

To accelerate the data model, perform the following steps:

  1. Within ITSI, click Settings > Data models.
  2. Find the itsi_summary model and click Edit > Edit Acceleration.
  3. Check the Accelerate box.
  4. Click Save.
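Alternatively, if you deploy configuration as files, you can enable acceleration with a datamodels.conf stanza such as the following sketch. The summary range is an assumption; the search needs at least seven days of history, so size it to your retention needs:

```
# datamodels.conf, in a local directory of the app that ships the model
[itsi_summary]
acceleration = 1
# Keep at least 7 days of accelerated data for this search; 14 adds headroom.
acceleration.earliest_time = -14d
```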

In addition, review the search capacity and utilization in your environment to ensure the system has enough resources to run an additional search.

How to reduce noise

Consider the following suggestions if this correlation search generates too many notable events in Episode Review:

  • Alter the search to only alert on critical severities.
  • Alter the search to filter out results from KPIs with importance values of Medium (5) or lower.

Service Monitoring - Service Health Degraded

This correlation search creates a notable event when the severity of a service's health score is anything but normal.

What to do before enabling

Review historical service health scores to identify and investigate services which have been abnormal for long periods of time. Excessively unhealthy services produce a large volume of notable events with this correlation search.

  1. From the ITSI main menu, click Dashboards > Dashboards.
  2. Open the ITSI Service and KPI Severity Analytics dashboard.
  3. Use the result type filter to view Service Health Scores.
  4. Review the historical performance of service health scores and determine if any services have performed abnormally for a long period of time.

For more information about how service health scores are calculated, see How service health scores are calculated in the Service Insights manual.

In addition, review the search capacity and utilization in your environment to ensure the system has enough resources to run an additional search.

How to reduce noise

Consider the following suggestions if this correlation search generates too many notable events in Episode Review:

  • For perpetually degraded services, review the service KPIs and dependencies to validate the source of degradations. For more information about service dependencies, see Add service dependencies in ITSI.
  • Modify the search to only alert on medium, high, or critical severities.
  • Configure a maintenance window for services with known and expected degradations. For instructions, see Schedule maintenance downtime in ITSI.
  • Modify the search to exclude services that you know produce a lot of alerts.

For more information about configuring correlation searches, see Generate events with correlation searches in ITSI in the Event Analytics manual.

Service Monitoring - Sustained Entity Degradation

This correlation search creates a notable event when all of the following are true:

  • The severity of the entity value for a KPI has been high or critical for 90% or more of the search time range.
  • The KPI has configured per-entity thresholds and the most recent severity is high or critical.
  • The KPI has an importance value of medium (5) or higher. For more information about KPI importance values, see Set KPI importance values in ITSI in the Service Insights manual.

Which KPIs qualify for this search

Only KPIs whose importance is medium (5) or higher qualify for this search. Because of the search time range, only KPIs configured to run once per minute or once every five minutes qualify.

What to do before enabling

Perform the following steps before you enable this correlation search:

  1. Configure several KPIs to use per-entity thresholds, as the correlation search depends on per-entity thresholds.
  2. Review historical entity severity levels to identify and investigate entities which have been abnormal for long periods of time. Excessively unhealthy entities produce a large volume of notable events with this correlation search.
    1. From the ITSI main menu click Dashboards > Dashboards.
    2. Open the ITSI Service and KPI Severity Analytics dashboard.
    3. Use the result type filter to filter on Per-Entity Results.
    4. Review the historical performance of entity-level KPI results and determine if certain entities have performed abnormally for a long period of time.
  3. Review the search capacity and utilization in your environment to ensure the system has enough resources to run an additional search.

How to reduce noise

Consider the following suggestions if this correlation search generates too many notable events in Episode Review:

  • Make sure per-entity thresholds are tuned and accurate.
  • From the service configuration, lower the importance level of noisy KPIs below medium (5) so they no longer qualify for this search.
  • Configure a maintenance window for entities with known and expected degradations. For instructions, see Schedule maintenance downtime in ITSI.
  • Modify the correlation search to only alert on critical results.
  • Schedule the search to run less often. For example, once every 15 minutes.
  • Modify the search to filter out entities that you know produce a lot of alerts.
  • Modify the search to exclude results from KPIs and services that aren't considered high priority.

For more information about configuring correlation searches, see Generate events with correlation searches in ITSI in the Event Analytics manual.

Service Monitoring - Sustained KPI Degradation

This correlation search creates a notable event when the aggregate KPI severity has been high or critical for 90% or more of the search time range and the most recent severity is high or critical.
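The underlying test can be pictured as the following SPL sketch. The field names (alert_severity, kpiid) and the exact aggregation are illustrative; the shipped search's actual fields differ. Severity 5 is high and 6 is critical in ITSI:

```
... KPI summary results over the search time range ...
| stats count as total_results
        count(eval(alert_severity>=5)) as degraded_results
        latest(alert_severity) as latest_severity
        by kpiid
| where (degraded_results / total_results) >= 0.9 AND latest_severity >= 5
```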

Which KPIs qualify for this search

KPIs must have the following characteristics to qualify for this search:

  • The KPI must have an importance value of medium (5) or higher. For more information about KPI importance values, see Set KPI importance values in ITSI in the Service Insights manual.
  • Because of the search time range, the KPI must run once per minute or once every 5 minutes.

What to do before enabling

Determine whether any services in your environment have experienced sustained degradations. Unhealthy services produce a large volume of notable events with this correlation search.

  1. From the ITSI main menu, click Dashboards > Dashboards.
  2. Open the ITSI Service and KPI Severity Analytics dashboard.
  3. Review the historical performance of service health scores and determine if any services have performed abnormally for a long period of time.

For more information about how service health scores are calculated, see How service health scores are calculated in the Service Insights manual.

In addition, review the search capacity and utilization in your environment to ensure the system has enough resources to run an additional search.

How to reduce noise

Consider the following suggestions if this correlation search generates too many notable events in Episode Review:

  • Make sure KPI thresholds are tuned and accurate. For more information about thresholding, see Configure KPI thresholds in ITSI.
  • From the service configuration, lower the importance level of noisy KPIs below medium (5) so they no longer qualify for this search.
  • Modify the correlation search to only alert on critical results.
  • For poorly tuned thresholds or KPIs that are difficult to threshold, consider using the INFO level severity only.
  • Schedule the search to run less often. For example, once every 15 minutes.
  • Modify the search to exclude results from KPIs and services that aren't considered high priority.

For more information about configuring correlation searches, see Generate events with correlation searches in ITSI in the Event Analytics manual.

Service Monitoring - Sustained Service Health Degradation

This correlation search creates a notable event when the severity of the Service Health Score has been abnormal for 90% or more of the search time range and the most recent severity is not normal.

What to do before enabling

Review any services in your environment that have been chronically unhealthy. Unhealthy services produce a large volume of notable events with this correlation search.

  1. From the ITSI main menu, click Dashboards > Dashboards.
  2. Open the ITSI Service and KPI Severity Analytics dashboard.
  3. Use the result type filter to view Service Health Scores.
  4. Review the historical performance of services in your environment and determine if any services have performed abnormally for a long period of time.

For more information about how service health scores are calculated, see How service health scores are calculated in the Service Insights manual.

In addition, review the search capacity and utilization in your environment to ensure the system has enough resources to run an additional search.

How to reduce noise

Consider the following suggestions if this correlation search generates too many notable events in Episode Review:

  • For perpetually degraded services, review the service KPIs and dependencies to validate the source of degradations. For more information about service dependencies, see Add service dependencies in ITSI.
  • Modify the search to only alert on medium, high, or critical severities.
  • Configure a maintenance window for services with known and expected degradations. For instructions, see Schedule maintenance downtime in ITSI.
  • Modify the search to exclude services that you know produce a lot of alerts.

For more information about configuring correlation searches, see Generate events with correlation searches in ITSI in the Event Analytics manual.

Universal Correlation Search

This correlation search finds external alerts that have been normalized as Universal Alerts and onboards them as notable events. It replaces correlation searches created for individual alert sources, such as a correlation search created to onboard Nagios alerts. Features include the following:

  • Deduplication across the last hour's worth of raw alerts
  • Raw alert backfill, to catch missed alerts over the previous hour
  • A consistent Alarm State structure across all notable events (shown as Event Type "swim lanes" within an episode)

For more details, see About Universal Alerting in the Content Pack for Monitoring and Alerting.

Which external alerts qualify for this search

This correlation search finds Splunk events with the following characteristics:

  • Events are located in an index included in the macro get_itsi_universal_index (default is index=*)
  • Events include the following fields, and they have values:
    • src
    • signature
    • vendor_severity
    • severity_id

Other fields might be included, but are optional. See Universal Alerting Normalized Fields for more details.
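As an illustration, the required fields could be populated at search time for a hypothetical Nagios feed as in the following sketch. The sourcetype name and the Nagios fields state and service_description are assumptions about your source data, and the severity mapping is only an example:

```
sourcetype=nagios:alert
| eval src=host
| eval signature=service_description
| eval vendor_severity=state
| eval severity_id=case(state=="CRITICAL", 6, state=="WARNING", 4, state=="OK", 2, true(), 3)
```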

What to do before enabling

Disable all existing ITSI correlation searches that onboard external alerts, especially custom-built searches for the alert sources that will be normalized as Universal Alerts. If those searches remain enabled, you might get duplicate, competing, or otherwise confusing notable events.

Disable all Notable Event Aggregation Policies (NEAPs) that were custom-built for the alert sources that will be normalized as Universal Alerts. If those policies remain enabled, you might get duplicate, competing, or otherwise confusing episodes. The Content Pack for Monitoring and Alerting includes several Notable Event Aggregation Policies specifically designed to create useful episodes from Universal Alerts.

You can later re-enable correlation searches and NEAPs that don't overlap with the new Universal Alerting components.

How to improve performance

The Universal Correlation Search already includes noise-reduction methods such as deduplication, but you can improve its performance by modifying the macro get_itsi_universal_index. The Universal Correlation Search (and certain drilldown searches) requires a very broad ad hoc search to find all normalized alerts. By default, this is index=*, which can be expensive in some environments.

To improve search performance, modify the macro get_itsi_universal_index to provide an explicit list of indexes rather than index=*. To change the macro, perform the following steps:

  1. Click Settings > Advanced Search > Search Macros.
  2. Edit the macro get_itsi_universal_index. The default definition is index=* (index!=itsi_tracked_alerts AND index!=itsi_grouped_alerts).
  3. Change the definition to the list of indexes that contain normalized alerts. For example: (index=nagios* OR index=solarwinds OR index=SplunkInfraMon), or ((index=alerts AND (sourcetype=nagios* OR sourcetype=solarwinds)) OR index=SplunkInfraMon).

The macro will need to be updated whenever new alert sources are added.
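If you deploy configuration as files, the same change can be made in macros.conf. The index names below are the examples from this topic, not defaults; keep the exclusions for the ITSI internal indexes:

```
# macros.conf
[get_itsi_universal_index]
definition = (index=nagios* OR index=solarwinds OR index=SplunkInfraMon) index!=itsi_tracked_alerts index!=itsi_grouped_alerts
```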

Last modified on 14 October, 2021

This documentation applies to the following versions of Splunk® Content Packs for ITSI and IT Essentials Work: current


Was this documentation topic helpful?

You must be logged into splunk.com in order to post comments. Log in now.

Please try to keep this discussion focused on the content covered in this documentation topic. If you have a more general question about Splunk functionality or are experiencing a difficulty with Splunk, consider posting a question to Splunkbase Answers.

0 out of 1000 Characters