Overview of Event Analytics in ITSI
Event Analytics will eventually use the Splunk Data Stream Processor (DSP) to process data in real time. The Rules Engine will be replatformed from an indexed real-time search into a single DSP pipeline. To try it out early, sign up to participate on the Splunk ITSI 5.0 (beta) page.
Splunk IT Service Intelligence (ITSI) Event Analytics ingests events from across your IT landscape and from other monitoring silos to provide a unified operational console of all your events and service-impacting issues. You can also integrate with incident management tools and helpdesk applications to accelerate incident investigation and automate remedial actions.
Event Analytics is equipped to handle huge volumes of events arriving in ITSI at once. Because these events might be related to each other, grouping them together helps you identify the underlying problem. Event Analytics provides a way to manage this volume and variety of events.
Aggregation policies reduce your event noise by grouping notable events based on their similarity and displaying them in Episode Review. An episode is a collection of notable events grouped together based on a set of predefined rules; it represents a group of events occurring as part of a larger sequence, or an incident or period considered in isolation. Aggregation policies let you focus on key event groups and perform actions based on certain trigger conditions, such as consolidating duplicate events, suppressing alerts, or closing episodes when a clearing event is received.
Event Analytics workflow
ITSI Event Analytics is designed to make event storms manageable and actionable. After data is ingested into ITSI from multiple data sources, it's processed through correlation searches to create notable events. Notable event aggregation policies group the events into meaningful episodes in Episode Review. You can then take actions on the episodes such as running a script, pinging a host, or creating tickets in external systems.
The following image illustrates the Event Analytics workflow:
You can also leverage Event Analytics to monitor your internal services and KPIs. Service and KPI data is ingested through correlation searches or multi-KPI alerts. Once events are created, they proceed through the following workflow:
Step 1: Ingest events through correlation searches
The data itself comes from Splunk indexes, but ITSI focuses only on a subset of all Splunk Enterprise data. This subset is generated by correlation searches. A correlation search is a specific type of saved search that generates notable events from its search results. For instructions, see Overview of correlation searches in ITSI.
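Conceptually, a correlation search runs on a schedule and turns each matching search result into a notable event. The following Python sketch illustrates that transformation only; the `NotableEvent` type and the field names (`host`, `message`, `severity`) are hypothetical stand-ins, not ITSI's actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class NotableEvent:
    """Illustrative stand-in for an ITSI notable event; fields are assumptions."""
    title: str
    severity: str
    fields: Dict[str, str] = field(default_factory=dict)

def results_to_notable_events(results: List[Dict[str, str]]) -> List[NotableEvent]:
    """Wrap raw search results as notable events, roughly as a
    correlation search does with the rows its saved search returns."""
    return [
        NotableEvent(
            title=f'{r.get("host", "unknown")}: {r.get("message", "")}',
            severity=r.get("severity", "normal"),
            fields=r,
        )
        for r in results
    ]
```

In ITSI itself this mapping is configured in the correlation search's notable event settings rather than written as code.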
Step 2: Configure aggregation policies to group events into episodes
Once notable events start coming in, they need to be organized so you can start gaining value from them. Configure an aggregation policy to define which notable events are related to each other and group them into episodes. An episode contains a chronological sequence of events that tells the story of a problem or issue. In the backend, a component called the Rules Engine executes the aggregation policies you configure. For more information, see Overview of aggregation policies in ITSI.
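The grouping step can be pictured as the Rules Engine bucketing incoming events by a split-by field from the policy, in time order. This is a minimal sketch under assumed field names (`_time`, `host`); real aggregation policies support richer similarity criteria, breaking conditions, and time windows.

```python
from collections import defaultdict
from typing import Dict, List

def group_into_episodes(events: List[dict], split_by: str = "host") -> Dict[str, List[dict]]:
    """Group notable events into episodes keyed on a single split-by field,
    a simplified model of how an aggregation policy relates events.
    The single-key policy and field names are illustrative assumptions."""
    episodes: Dict[str, List[dict]] = defaultdict(list)
    # Process events chronologically so each episode reads as a sequence.
    for event in sorted(events, key=lambda e: e["_time"]):
        episodes[event.get(split_by, "unknown")].append(event)
    return dict(episodes)
```

Each resulting list is the chronological "story" of one episode, which is what Episode Review then displays.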
Step 3: Set up automated actions to take on episodes
You can run actions on episodes either automatically using aggregation policies or manually in Episode Review. Some actions, like sending an email or pinging a host, are shipped with ITSI. You can also create tickets in external ticketing systems like ServiceNow, Remedy, or VictorOps. Finally, actions can also be modular alerts that are shipped with Splunk add-ons or apps, or custom actions that you configure. For more information, see Configure episode action rules in ITSI.
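The automatic path can be modeled as rules that pair a trigger condition with an action, evaluated against an episode. The sketch below is an assumption-laden simplification: the rule structure, the `clearing` event type, and the episode dictionary are all hypothetical, standing in for ITSI's actual action rule configuration.

```python
from typing import Callable, List

def run_action_rules(episode: dict, rules: List[dict]) -> List[str]:
    """Apply action rules to an episode. Each rule pairs a trigger
    predicate ("when") with an action callable; both are illustrative."""
    fired = []
    for rule in rules:
        if rule["when"](episode):
            rule["action"](episode)
            fired.append(rule["name"])
    return fired

# Example rule: close the episode when a clearing event is received.
close_on_clear = {
    "name": "close_on_clear",
    "when": lambda ep: any(e.get("type") == "clearing" for e in ep["events"]),
    "action": lambda ep: ep.update(status="closed"),
}
```

A ticket-creating or host-pinging action would slot into the same shape, with the callable invoking the external system instead of mutating the episode.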
Overview of correlation searches in ITSI
This documentation applies to the following versions of Splunk® IT Service Intelligence: 4.5.0 Cloud only, 4.5.1 Cloud only, 4.6.0 Cloud only, 4.6.1 Cloud only, 4.6.2 Cloud only, 4.7.0