LLM-RAG use cases

The large language model retrieval-augmented generation (LLM-RAG) functionality, together with its assistive guidance dashboards, handles the following use cases:

Standalone LLM

Use Standalone LLM to interact directly with the LLM for Q&A inference or chat. For additional details, see Using Standalone LLM.

As shown in the following image, when using Standalone LLM, you initialize a search with a prompt, along with text data retrieved from the Splunk platform. The prompt is sent to the DSDL container, where the LLM module's API is called to generate responses. The responses are then returned to the Splunk platform search as search results.

This image is a diagram of the Standalone LLM process. The DSDL search head sends the prompt to the Docker host, where the DSDL container calls the LLM module and returns the responses to the Splunk platform as search results.
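
A minimal SPL sketch of this flow follows. The algo name llm_standalone_query and the model name are illustrative placeholders, not the actual notebook names shipped with the app; the assistive guidance dashboards generate the exact searches for your installation.

  | makeresults
  ``` The prompt can also be built from event text returned by an earlier search ```
  | eval prompt="Explain the likely root cause of this error message."
  ``` algo and model names are placeholders for this sketch ```
  | fit MLTKContainer algo=llm_standalone_query prompt into app:llm_standalone_query

The response is returned with the search results, so you can pipe it into further SPL processing.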

Standalone VectorDB

Use Standalone VectorDB when you want to encode machine data and run similarity searches. For additional details, see Using Standalone VectorDB.

As shown in the following image, when using Standalone VectorDB, you first encode Splunk log data into a collection within the vector database. When unknown log data occurs, you can run a vector search against the pre-encoded data to find similar recorded log data.

This image is a diagram of the Standalone VectorDB process. The DSDL search head sends log data to the Docker host, where the DSDL container encodes it into the VectorDB module and runs similarity searches against the encoded data.
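
As a sketch, the two phases might look like the following searches. The algo names vectordb_encode and vectordb_search, and the collection_name and top_k parameters, are assumptions for illustration; use the assistive guidance dashboards to generate the exact searches for your installation.

  ``` Phase 1: encode historical log messages into a VectorDB collection ```
  index=_internal sourcetype=splunkd log_level=ERROR
  | table _time message
  | fit MLTKContainer algo=vectordb_encode collection_name="error_logs" message into app:vectordb_encode

  ``` Phase 2: find previously encoded logs similar to an unknown message ```
  | makeresults
  | eval message="<new, unknown log line>"
  | fit MLTKContainer algo=vectordb_search collection_name="error_logs" top_k=5 message into app:vectordb_search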

Document-based LLM-RAG

Use Document-based LLM-RAG when you want to encode arbitrary documents and use them as additional context when prompting LLM models. Document-based LLM-RAG provides results based on an internal knowledge database. For additional details, see Using Document-based LLM-RAG.

Document-based LLM-RAG has two steps:

  1. Document encoding
  2. Retrieving and appending document pieces

In the first step, you encode any documents stored in a directory on the Docker host into a VectorDB collection.

The following image shows the document encoding step:

This image is a diagram of the document encoding step in the Document-based LLM-RAG process. The DSDL container on the Docker host encodes documents into a collection in the VectorDB module.
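
A sketch of the encoding search follows. Because the documents are read from a directory on the Docker host rather than from search results, the search mostly passes configuration. The algo name document_encoder, the directory path, and the collection_name parameter are illustrative assumptions.

  | makeresults
  ``` The documents are read from the Docker host directory, not from these results ```
  | eval path="/srv/notebooks/data/documents"
  | fit MLTKContainer algo=document_encoder collection_name="internal_docs" path into app:document_encoder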

When you initialize a search that requires knowledge from the documents, the DSDL container conducts a vector search on the encoded document collection to find related pieces of those documents.

In the second step, the related document pieces are appended to the original search as additional context, and the search is sent to the LLM. The LLM responses are then returned to the Splunk platform search as search results.

The following image shows the DSDL container vector search of the encoded documents:

This image is a diagram of the vector search step in the Document-based LLM-RAG process. The DSDL container on the Docker host searches the encoded document collection in the VectorDB module, appends the matching document pieces to the prompt, and sends the prompt to the LLM module.
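
A query against the encoded collection might then look like the following sketch, where llm_rag_query, collection_name, and top_k are placeholders for the actual notebook name and parameters:

  | makeresults
  | eval query="How do I rotate the service account credentials?"
  ``` The container retrieves the top 3 matching document pieces, appends them to the prompt, and calls the LLM ```
  | fit MLTKContainer algo=llm_rag_query collection_name="internal_docs" top_k=3 query into app:llm_rag_query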

Function Calling LLM-RAG

Use Function Calling LLM-RAG when you want the LLM to run customizable function tools to obtain contextual information for response generation. Function Calling LLM-RAG provides example tools for searching Splunk data and searching VectorDB collections. For additional details, see Using Function Calling LLM-RAG.

Like Document-based LLM-RAG, Function Calling LLM-RAG obtains additional information before generating the final response. The difference is that with function calling, a set of function tools is made accessible to the LLM.

When additional context is needed, such as how many error logs are in a Splunk platform instance, the LLM automatically runs the functions to obtain the information and uses it to generate responses.

The following image shows the Function Calling LLM-RAG architecture:

This image is a diagram of the Function Calling LLM-RAG process. The DSDL container on the Docker host exposes function tools to the LLM module, which runs them against the Splunk platform and the VectorDB module to gather context before generating responses.
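
As a sketch, a function calling search might look like the following, where llm_function_calling is a placeholder algo name. The LLM decides at inference time whether to run a registered tool, such as the example Splunk search tool, to gather the context it needs before answering.

  | makeresults
  | eval query="How many error logs occurred on this instance in the last 24 hours?"
  ``` The LLM can invoke the example Splunk search tool to count the error logs, then answer using that result ```
  | fit MLTKContainer algo=llm_function_calling query into app:llm_function_calling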
