Splunk® App for Data Science and Deep Learning

Use the Splunk App for Data Science and Deep Learning


Splunk App for Data Science and Deep Learning architecture overview

The Splunk App for Data Science and Deep Learning (DSDL) allows you to integrate advanced custom machine learning and deep learning systems with the Splunk platform.

The following image shows where DSDL fits into a machine learning workflow:

Diagram: business use case types such as IoT, IT, and Security appear on the left; DSDL sits in the middle, where you can use Jupyter, TensorFlow, or PyTorch to build models; Splunk platform operational options, such as validating results and getting alerts, appear on the right.

The Splunk App for Data Science and Deep Learning offers an integrated architecture with the Splunk platform. In the following diagram, the architecture is represented as follows:

  • The Splunk platform is represented on the far left in the black box.
  • The container environment is represented in the center of the diagram in the light blue box.
  • The Splunk Observability Suite is represented in a labeled box in the top right of the diagram.

Architecture diagram showing the Splunk platform, the container environment, and the Splunk Observability Suite, arranged as described in the preceding list.

DSDL connects your Splunk platform deployment to a container environment such as Docker, Kubernetes, or OpenShift. Through the container management interface, DSDL uses the configured container environment APIs to start and stop development or production containers. DSDL users can deploy containers on demand and access them over external URLs from JupyterLab, or from other browser-based tools such as TensorBoard, MLflow, or the Spark user interface for job tracking.
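As a rough illustration of what the container management interface does behind the scenes, the start and stop operations can be sketched as Docker commands. This is a hedged sketch only: the image name, container name, and port below are hypothetical placeholders, not the values DSDL actually uses, and DSDL talks to the container environment API rather than shelling out:

```python
import subprocess

def docker_run_command(image, name, port):
    """Build a `docker run` command that starts a detached development
    container and publishes the JupyterLab port. All values are illustrative."""
    return [
        "docker", "run", "-d",
        "--name", name,
        "-p", f"{port}:8888",  # map JupyterLab's default port to the host
        image,
    ]

def docker_stop_command(name):
    """Build the matching `docker stop` command for the same container."""
    return ["docker", "stop", name]

def start_dev_container(image="example/dsdl-dev-image", name="dsdl-dev", port=8888):
    # Hypothetical image name; DSDL resolves the real image from its configuration.
    subprocess.run(docker_run_command(image, name, port), check=True)

if __name__ == "__main__":
    print(" ".join(docker_run_command("example/dsdl-dev-image", "dsdl-dev", 8888)))
```

In a Kubernetes or OpenShift environment, the equivalent lifecycle calls go through that platform's API instead of the Docker CLI.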

The development and production containers expose additional interfaces, most importantly endpoint URLs that enable bi-directional data transfer between the Splunk platform and the algorithms running in the containers. Optionally, containers can send data to the Splunk HTTP Event Collector (HEC), or app users can send ad hoc search queries to the Splunk REST API to retrieve data interactively in Jupyter for experimentation, analysis, or modeling.
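To make the HEC path concrete, the following sketch builds a standard HEC event payload and posts it to the collector endpoint. The URL, token, index, and sourcetype values are hypothetical placeholders for illustration; substitute the HEC endpoint and token from your own deployment:

```python
import json
import urllib.request

def build_hec_payload(event, sourcetype="dsdl:container", index="main"):
    """Build the JSON body that HEC expects: the event itself plus optional
    routing metadata. The sourcetype and index here are illustrative."""
    return json.dumps({
        "event": event,
        "sourcetype": sourcetype,
        "index": index,
    })

def send_to_hec(payload,
                hec_url="https://splunk.example.com:8088/services/collector/event",
                token="YOUR-HEC-TOKEN"):
    # Hypothetical URL and token; HEC authenticates with the
    # "Authorization: Splunk <token>" header.
    req = urllib.request.Request(
        hec_url,
        data=payload.encode("utf-8"),
        headers={
            "Authorization": f"Splunk {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

if __name__ == "__main__":
    # For example, a container could report model metrics back to Splunk:
    payload = build_hec_payload({"model": "forecast_v1", "rmse": 0.42})
    print(payload)
```

The reverse direction, sending an ad hoc search from Jupyter to the Splunk REST API, follows the same pattern with a POST to the search jobs endpoint of your Splunk instance.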

Optionally, all container endpoints can be automatically instrumented with OpenTelemetry and analyzed in the Splunk Observability Suite. Additionally, the container environment itself can be monitored in the Splunk Observability Suite for further operational insights such as memory load and CPU utilization.

Last modified on 03 January, 2023

This documentation applies to the following versions of Splunk® App for Data Science and Deep Learning: 5.0.0, 5.1.0, 5.1.1

