
Using multi-GPU computing for heavily parallelized processing

Use the multi-GPU computing option for heavily parallelized processing, such as training deep neural network models. You can leverage GPU infrastructure if you use NVIDIA GPUs and have the required hardware in place. For more information on NVIDIA GPU management and deployment, see https://docs.nvidia.com/deploy/index.html.

The Splunk App for Data Science and Deep Learning (DSDL) allows containers to run with GPU resource flags added, so that the NVIDIA container runtime (nvidia-docker) is used, or so that GPU resources in a Kubernetes cluster are attached to the container.
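
Once a container starts with GPU support, the attached devices are visible to the deep learning framework inside it. The following is a minimal sketch for checking this from a notebook cell, assuming a DSDL container image with TensorFlow installed:

    import tensorflow as tf

    # List the GPU devices that the container runtime exposed to this process.
    gpus = tf.config.list_physical_devices("GPU")
    print("Visible GPUs:", len(gpus))
    for gpu in gpus:
        print(gpu.name)

An empty list typically means the container was started without the NVIDIA runtime or that no GPU resources were attached.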

To start your development or production container with GPU support, you must select NVIDIA as the runtime for your chosen image. From the Configurations > Containers dashboard, you can set up the runtime for each container you run.
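
Selecting NVIDIA as the runtime tells DSDL to start the container with the NVIDIA runtime flag. As a rough, hypothetical illustration of what this corresponds to on a Docker host with the NVIDIA container runtime installed (the image name is a placeholder, and this is not how DSDL itself launches containers):

    import subprocess

    # Roughly equivalent to what selecting the NVIDIA runtime does:
    # start the container with --runtime=nvidia so GPUs are exposed to it.
    subprocess.run(
        [
            "docker", "run", "--runtime=nvidia", "--rm", "-d",
            "<your-dsdl-gpu-image>",
        ],
        check=True,
    )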

This image shows the Containers page in DSDL. In this view, multiple containers are set up with sample information.

The following image shows an example console leveraging four GPU devices for model training.
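
To confirm that all devices are actually in use during training, you can query the driver from a notebook cell. A minimal sketch, assuming the nvidia-smi utility is available inside the container:

    import subprocess

    # Query per-GPU utilization and memory use through the NVIDIA driver.
    result = subprocess.run(
        [
            "nvidia-smi",
            "--query-gpu=index,name,utilization.gpu,memory.used",
            "--format=csv",
        ],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)

Each row of the CSV output reports one GPU's utilization and memory use, so during multi-GPU training every index should show activity.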

If you want to use multi-GPU computing, review the multi-GPU training strategies documented for your chosen deep learning framework.
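
For example, TensorFlow documents tf.distribute.MirroredStrategy for synchronous training across all GPUs visible to a process, and PyTorch documents comparable mechanisms such as DistributedDataParallel. The following is a minimal sketch using TensorFlow's MirroredStrategy; the model architecture and the random training data are purely illustrative:

    import numpy as np
    import tensorflow as tf

    # MirroredStrategy replicates the model on every visible GPU and
    # splits each batch across the replicas.
    strategy = tf.distribute.MirroredStrategy()
    print("Replicas in sync:", strategy.num_replicas_in_sync)

    # Variables and the model must be created inside the strategy scope.
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(20,)),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")

    # Illustrative random data; each batch is sharded across the GPUs.
    x = np.random.rand(1024, 20).astype("float32")
    y = np.random.rand(1024, 1).astype("float32")
    model.fit(x, y, batch_size=256, epochs=2)

Because the strategy shards each batch across replicas, the effective per-GPU batch size is the global batch size divided by the number of visible GPUs.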


This documentation applies to the following versions of Splunk® App for Data Science and Deep Learning: 5.0.0, 5.1.0, 5.1.1, 5.1.2

