
Prompt guide and library for AI Assistant in Observability Cloud

This document provides clear guidance on how to prompt the AI Assistant in Observability Cloud to get the best results. It offers general advice on writing effective questions and prompts, along with examples that illustrate poor, good, and excellent prompts.

How the AI Assistant in Observability Cloud works

Splunk Observability Cloud collects and correlates logs, metrics, and traces, allowing you to observe your stack from end to end. The AI Assistant in Observability Cloud has access to the following data:

  • Metric time series data, including infrastructure navigators, metrics and metadata, dashboards, and SignalFlow

  • APM data, including service metrics and dependencies, tags, exemplar traces, and trace details

  • Logs through Log Observer Connect

The Assistant searches the data in the previous list and makes use of data correlations to find the answers to your questions. In the next section, you will learn how to efficiently prompt the Assistant.

General guidelines for prompting the Assistant

General guidance for making the most of any AI assistant includes providing questions or instructions that are as clear and specific as possible. The more you can narrow the possible results, the better. Providing trusted information as reference text also helps an AI assistant to better understand which specific information you need.

Due to the kinds of data it accesses, the AI Assistant in Observability Cloud has specific guidelines to improve your results. Follow these guidelines to receive the best results from the Assistant:

  • Guide the Assistant to use specific tools or data

  • Provide entity names and types

  • Provide context and filters

  • Provide the time range

Guide the Assistant to use specific tools or data

You can steer the Assistant to use particular data or tools by appending certain keywords or phrases to your prompt. The following are examples:

  • “Use SignalFlow to find…”

  • “Look in APM data.”

  • “Check the logs for…”

When your prompt specifies which tools or data the Assistant should use, the Assistant uses those hints to drill down to what you care about faster. Hints are not required, but they narrow your results, making your prompt more efficient and your answer faster and less overwhelming.

Provide entity names and types

Providing entity names and types will generally lead to more focused, better, and faster Assistant responses. The following table shows examples of prompts and their quality:

Prompt quality   Prompt example
Poor             What’s wrong with the api-gateway?
Poor             How’s i-09182882 doing?
Good             What’s wrong with the api-gateway service?
Good             How’s instance i-09182882?
Excellent        What’s wrong with the api-gateway service in prod environment?
Excellent        How’s EC2 instance i-09182882?

What makes a prompt poor quality?

Not providing entity names and types in your prompt makes the prompt unhelpful. The prompt “What’s wrong with the api-gateway?” does not provide entity names or types. The level of specificity is low and returns an overwhelming set of responses that might or might not be useful in your troubleshooting.

What makes a prompt good quality?

The more information you include about your environment and the entities in it, the better your Assistant results are.

An example of a decent prompt is “What’s wrong with the api-gateway service?” It specifies that the Assistant should examine the API gateway service and returns results that are more specific and more helpful. If you suspect a problem with a particular service, you should name the actual service in your prompt to the Assistant.

What makes a prompt excellent quality?

The more specific your prompt, the better the results. Naming both a service and the environment helps the Assistant to narrow its results to only what you care about. An example of an excellent prompt is “What’s wrong with the api-gateway service in prod environment?” Because this example tells the Assistant which service and which environment, your results will be specific enough to allow you to identify the specific problems you are experiencing and take action.

Provide context and filters

Another way you can give the Assistant the information it needs to respond with relevant and accurate information is by providing context and filters. The following table shows examples of prompts with and without context and filters, along with the quality of the prompts.

Prompt quality   Prompt example
Poor             I got paged, what’s wrong?
Good             I got paged for api-gateway latency in prod2, what’s wrong?
Excellent        I got paged for incident <incident_id> what’s wrong?

Poor prompts

The prompt, “I got paged, what’s wrong?” does not help the Assistant to help you. With this prompt, the Assistant doesn’t know which alert you’re responding to, so it can’t provide you with more information about your page.

Good prompts

The prompt, “I got paged for api-gateway latency in prod2, what’s wrong?” is a good prompt because the Assistant is able to identify the relevant alert or alerts and collect the related information. The Assistant can then provide an evaluation of the information contained in or related to the alert, which tells you what you might want to do next to resolve the problem.

Excellent prompts

The prompt, “I got paged for incident <incident_id> what’s wrong?” is excellent because there is no ambiguity. The Assistant knows exactly which alert and incident you want information about. With that information, the Assistant can suggest probable solutions.

Provide the time range

To focus your investigation, you can provide a time range in your prompt. A time range is not required, but it narrows down the relevant information and lets the Assistant suggest more specific problems and solutions. If you do not give a time range, most tools default to analyzing the last 15 minutes.

The most reliable way to express a time range in natural language is with relative times, such as “in the past hour” or “from 8 hours ago until 2 hours ago.” You can also use standard shorthand, such as [-1h, now] or [-8h, -2h], or datetime strings, such as “Did any alert fire after 2024-11-06T19:15:00+00:00?”

Scenarios for using the AI Assistant in Observability Cloud

This section shows examples of situations in which you can use the Assistant to resolve issues faster.

You receive an alert

When you receive an alert, possible prompts you might use in the Assistant to help resolve the incident include the following:

  • I received an alert related to the paymentservice. What’s happening?

  • I received an alert with incident ID Ggn_D1TA4BU. What’s going on?

  • Can you look at my APM data and logs to understand the root cause of this issue?

Example prompt 1: Poor

I received an alert related to the paymentservice. What’s happening?

This example is poor because it does not give the Assistant enough specific information to prompt a useful or actionable response. While this prompt mentions paymentservice, it does not provide an incident ID or environment. The Assistant is likely to return a summary of everything related to paymentservice, which will be overwhelming and potentially irrelevant. For example, the Assistant might give a summary of paymentservice in a development environment when you wanted information about a production environment. To make this prompt better, add an incident ID or an environment.

Example prompt 2: Good

I received an alert with incident ID Ggn_D1TA4BU. What’s going on?

This example is good because it is focused. It states that you received an alert and gives the incident ID. The Assistant is likely to give a summary of the incident. You can then ask a follow-up question based on the summary to get more information.

Example prompt 3: Excellent

Can you look at my APM data and logs to understand the root cause of this issue?

If you are looking at an alert in the UI, this prompt is excellent because the Assistant knows exactly what you mean by “this issue” and can reference all of the information in the alert. Using the page context, the Assistant pulls in all information from the alert and can help you narrow down the probable root cause quickly.

A service is having issues

When a service is experiencing problems, possible prompts you might use to help resolve the incident include the following:

  • Show me the last 3 traces for apm-classic errors.

  • Paymentservice in online boutique env is having issues in past 15 mins. What’s going on?

  • Paymentservice in online boutique env is having issues in past 15 mins. Look for any relevant error exemplar traces. Once you’ve identified the exemplar traces, analyze each full trace by its trace ID.

Example prompt 1: Poor

Show me the last 3 traces for apm-classic errors.

This prompt is poor because you do not give the Assistant a time range or environment. To improve this prompt, tell the Assistant which environment you are interested in. Then you can even ask the Assistant to analyze the traces and suggest potential root causes of the errors.

Example prompt 2: Good

Paymentservice in online boutique env is having issues in past 15 mins. What’s going on?

This prompt is good because it gives the service, the environment, and the time range. Telling the Assistant which environment you mean prevents it from giving you seemingly confident answers about the wrong environment. The default time range is the past 15 minutes, so mentioning it doesn’t help or hurt the prompt.

Example prompt 3: Excellent

Paymentservice in online boutique env is having issues in past 15 mins. Look for any relevant error exemplar traces. After identifying the exemplar traces, analyze each full trace by its trace ID.

The third example prompt expands on the second example prompt. The second example prompt was good, but the third is excellent. One way to improve a prompt when you don’t know more specific information is to instruct the Assistant to extract certain details, then you can further prompt the Assistant using the extracted details. In this excellent example prompt, the Assistant extracts traces. Then it examines the traces and provides you with its analysis. From there, you can ask more and more specific questions based on information in the Assistant’s analysis. You might want to tell the Assistant how many exemplar traces to analyze so that the Assistant does not overwhelm you with a very large response and exceed context limitations of the conversation.

A Kubernetes cluster is having issues

When a Kubernetes cluster is having problems, a possible prompt you might use in the Assistant to help resolve the situation is the following:

It looks like k8s pod prod50 has a high CPU utilization. When did it start?

The preceding prompt is good because you give the Assistant a specific entity, the Kubernetes pod prod50. This is an example of a situation in which you might not have much information to begin your troubleshooting journey. In this case, give the Assistant any specific information you can to prompt a response that gives you more information. You can then use important details in the Assistant’s response to ask more specific questions until you narrow your exploration down to a potential root cause.

Creating a chart

When you want to create a chart in Splunk Observability Cloud, you might prompt the Assistant with the following:

Can you share SignalFlow to monitor the top 5 K8s nodes with the highest CPU utilization?

The preceding prompt is excellent because it gives the Assistant a fair amount of detail on what you want to know. The Assistant can make a functional chart based on the information you provide. You can follow up to adjust your chart after you see it. For example, you can then tell the Assistant to adjust the chart to a particular 30-minute window.
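As a rough illustration, a SignalFlow program the Assistant might return for this request could resemble the following sketch. The metric name cpu.utilization and the chart label are assumptions; substitute the CPU metric your Kubernetes integration actually reports.

```
# Hypothetical sketch; the metric name 'cpu.utilization' is an assumption.
# Select the CPU utilization time series, keep only the 5 highest values,
# and publish the result so the chart can display it.
data('cpu.utilization').top(count=5).publish(label='Top 5 K8s nodes by CPU')
```

If the Assistant returns a program like this, you can paste it into the SignalFlow view of the chart builder, then ask follow-up questions to refine it, such as adjusting the time window.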

Other resources

For specific instructions on how to access and use the AI Assistant, see AI Assistant in Observability Cloud.

To learn about Splunk’s commitment to responsible AI, see Responsible AI for AI Assistant in Observability Cloud.

This page was last updated on Feb 04, 2025.