Splunk® App for Data Science and Deep Learning

Use the Splunk App for Data Science and Deep Learning


Troubleshoot the Splunk App for Data Science and Deep Learning

The following are issues you might experience when using the Splunk App for Data Science and Deep Learning and how to resolve them.

First launch of container not allowing access to JupyterLab

Cause

When you launch a container for the first time, the selected container image is automatically downloaded from Dockerhub in the background. Depending on your network, this initial download can take a while, because image sizes range from 2 to 12 GB.

Solution

Allow time for the images to pull from Dockerhub initially. You can check which Docker images are available locally by running `docker images` on your CLI.
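For example, you can pull an image ahead of time and then confirm that it is available locally. The image name below is an illustrative assumption; check the app's Setup page or Dockerhub for the exact image and tag your DSDL version uses.

```shell
# Hypothetical image name -- substitute the image your DSDL version uses.
IMAGE="phdrieger/mltk-container-golden-image-cpu"

# Pull the image ahead of time so the first container launch does not block
# on a multi-gigabyte download. Requires Docker to be installed and running.
docker pull "$IMAGE" || echo "pull failed -- is Docker running?"

# List the images now available locally.
docker images || true
```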

Browser showing an insecure connection

Cause

The Splunk App for Data Science and Deep Learning version 3.5 and higher includes container images that use HTTPS by default, with self-signed certificates for JupyterLab and the data-transfer-related API endpoints. Many browsers show "insecure connection" warnings, and some let you suppress them for the localhost connections used during development.

Solution

For production use, work with your Splunk administrator to secure your setup and build containers with your own certificates, or use more advanced container environment setups.
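As a minimal sketch of the "own certificates" approach, you can generate a self-signed certificate and key with OpenSSL and copy them into a custom container image build. The file names and subject below are illustrative assumptions; align them with your organization's PKI.

```shell
# Generate a private key and self-signed certificate (illustrative file
# names and subject -- replace with values from your own PKI).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout dltk.key -out dltk.pem -days 365 \
  -subj "/CN=dsdl.example.com"

# The resulting files can then be copied into a custom image build, for
# example with a COPY instruction in your Dockerfile.
ls -l dltk.key dltk.pem
```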

The Example dashboards don't show results or show errors

Cause

Viewing the dashboard examples depends on the presence of a Container Image, the existence of Notebook code associated with the Example in JupyterLab, and MLTK permissions being set to Global.

Solution

Perform the following steps to troubleshoot the Example dashboards:

  1. Make sure that the right Container Image is downloaded and up and running for the specific Example. For example, TensorFlow examples require a TensorFlow container.
  2. Verify that the associated Notebook code exists in JupyterLab and that you have explicitly saved the Notebook by selecting the Save button. Saving writes a Python module to the /app/model folder in JupyterLab. This module is required to run the Examples and populate the dashboards.
  3. Confirm that the MLTK app permissions are set to Global so that DSDL can use the lookup files required for most of the Examples.
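To double-check step 2 from the command line, you can list the saved modules inside the running container. The container name below is an assumption; find yours with `docker ps`.

```shell
# Placeholder name -- substitute the container name shown by `docker ps`.
CONTAINER="mltk-container"

# Show the names of running containers so you can identify the DSDL one.
docker ps --format '{{.Names}}' || true

# List the saved notebook modules in the /app/model folder from the text above.
docker exec "$CONTAINER" ls /app/model \
  || echo "container not running, or notebook module not saved yet"
```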

Containers suddenly stop

About 1 minute after starting, my container suddenly stops.

Cause

Most likely you have two or more DSDL apps installed and configured to use the same container environment. In DSDL 3.x and higher, there is a scheduled search called the MLTK Container Sync that ensures synchronization of running containers and associated models for the app. If more than one DSDL app is running, there can be synchronization collisions and containers get stopped.

Solution

When using DSDL 3.x or higher, connect each DSDL app in a one-to-one relationship with your Docker or Kubernetes environment.

Error following an app version update

After a version update I see the error "unable to read JSON response" when running a DSDL related search.

Cause

This error can indicate that some part of the local configuration of DSDL is out of sync.

Solution

Resolve this error by opening the Setup page with the existing settings and selecting Test and Save again to reconfirm the configuration.

Where are my Notebooks stored in the Docker environment?

By default, three Docker volumes are automatically mounted for persistence in your Docker environment: "mltk-container-app", "mltk-container-notebooks", and, for DSDL version 3.1 and higher, "mltk-container-data". You can verify this by running `docker volume ls` on your CLI.
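For instance, you can list the volumes and see where Docker stores the notebooks volume on the host (the volume names come from the text above; `docker volume inspect` reports the mount point).

```shell
# Volume name from the default DSDL Docker setup described above.
VOLUME="mltk-container-notebooks"

# List the DSDL-related volumes.
docker volume ls --filter name=mltk-container || true

# Show where the notebooks volume lives on the Docker host.
docker volume inspect "$VOLUME" \
  || echo "volume not found -- has a container been started yet?"
```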

What container environments are supported?

The Splunk App for Data Science and Deep Learning architecture supports Docker, Kubernetes, and OpenShift as target container environments.

Does the app provide Indexer distribution?

No. Data is processed on the search head and sent to the Splunk App for Data Science and Deep Learning. It cannot be processed in a distributed manner, such as streaming data in parallel from indexers to one or many containers. However, all advantages of search in a distributed Splunk platform deployment still apply.

How does the app manage security?

Data is sent from a search head to an MLTK Container uncompressed and unencrypted over the HTTP protocol. To meet security requirements, Splunk administrators must take steps to harden or secure the setup of the app and their container environment accordingly. There are ways to configure the container environment so that it supports secured communication.

Last modified on 11 December, 2023

This documentation applies to the following versions of Splunk® App for Data Science and Deep Learning: 5.0.0, 5.1.0, 5.1.1

