Use Docker containers with Splunk Edge Hub OS
You can deploy your own client applications to the Splunk Edge Hub device with Docker containers. After configuring a container, set up the Splunk Edge Hub SDK to facilitate client interactions with your Splunk Edge Hub device.
Splunk Edge Hub version 2.0 security update
In Splunk Edge Hub version 2.0, the Docker daemon and the Docker containers it runs no longer run as the root user.
See the Docker documentation for more details about this security update.
If you have existing containers running on Splunk Edge Hub version 1.9 or lower and update to version 2.0, you won't be able to view or manage your Docker containers in the Edge Hub advanced configuration page.
To comply with the security update in Splunk Edge Hub version 2.0, unregister your Splunk Edge Hub device to remove all Docker containers running on the device. To restore full functionality, reconfigure your Docker containers after upgrading to Splunk Edge Hub version 2.0. If unregistering your device does not remove all Docker containers, perform a factory reset.
See Unregister your Splunk Edge Hub device and Perform a factory reset on your Splunk Edge Hub.
Prerequisites
- Make sure you're using Splunk Edge Hub version 1.8.0 or higher.
- Turn on the Splunk Edge Hub advanced settings page. See Access the Edge Hub Advanced Settings page.
- Install Docker. Follow the documentation at https://docs.docker.com/get-docker/.
Configure the container
Follow the documentation at https://docs.docker.com/ to perform tasks using Docker. Here's how to configure a Docker container with Splunk Edge Hub OS:
- Create a Docker image. Here are example files that you can use to create an image:
Dockerfile
FROM python:3-slim
WORKDIR /usr/src/app
COPY ./hello.py .
CMD [ "python", "./hello.py" ]
hello.py
from time import sleep
while True:
    print("hello, world", flush=True)
    sleep(5)
docker build --platform=linux/arm64 -t hello-python .
Note that the --platform=linux/arm64 flag targets the Splunk Edge Hub OS platform.
docker save -o hello.tar hello-python
- Create a manifest file for the container image called edge.json. The name must match the tag used when building the image, and the containerArchive must match the name of the .tar file of the image. For example:
{
    "name": "hello-python",
    "containerArchive": "hello.tar"
}
- Bundle the .tar file and the edge.json file into a .tar.gz file or an uncompressed .tar file, such as hello_pkg.tar.gz. You'll upload this file to Splunk Edge Hub OS. See the example command after these steps.
- Navigate to the Containers tab.
- Upload the hello_pkg.tar.gz file that you created in the bundling step. The bundle appears in the Container list and the container launches automatically.
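Here's one way to create the hello_pkg.tar.gz bundle from the bundling step. This is a minimal sketch that assumes the hello.tar image archive and the edge.json manifest from the previous steps are in your current directory:
tar -czf hello_pkg.tar.gz hello.tar edge.json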
Verify the container is running
To verify that the container is running, navigate to the Tools tab in the Splunk Edge Hub advanced settings page and select Download logs. The splunk-container-client@hello-python.log file should contain messages if the container is running.
Additional container configuration options
You can expose ports, share additional files between the host and the container, set environment variables, and set the run command for the container.
Expose ports
Splunk Edge Hub OS reserves the port range 51000-52000 for you to expose or map ports between the container and the host. Specify the port mapping in the edge.json file you created when configuring the container:
"portMap": ["51080:8000", "51089:8089", "51097:9997"],
The first value in each pair is the Splunk Edge Hub device port, and the second value is the container port.
Map files
To share additional files such as configuration files between the host and container, specify the path where the files are mapped in the container. Include the following line in the edge.json file you created when configuring the container:
"mappedStorage": "/your/files",
To specify the space for mapped files, include the following line in the edge.json file you created when configuring the container:
"mappedStorageMb: 500"
mappedStorageMb
is optional. If not specified, the default space allocated to mapped files is 100MB. This storage is persistent over multiple runs of the container and the container has read and write access to it.
In the Splunk Edge Hub advanced settings page, navigate to the Containers tab and select Configure files to upload or download files.
Set environment variables
You can set environment variables for the container:
"envVars": ["VAR_NAME=var_value", "VAR2=value2"]
Set run command
You can set the command that will run in the container:
"runCommand": ["bash", "/zeek_scripts/run.sh"]
Configure the Splunk Universal Forwarder using a container
Splunk provides a Splunk Universal Forwarder package for containerized deployments. The package contains a sample edge.json file that you can modify to suit your needs.
- Download the Splunk Universal Forwarder package from the Edge Hub Central website and extract it.
- Open the sample edge.json file and modify it to specify how to launch the container, such as environment variables and port mappings (see the sketch after these steps). Do not modify the "mappedStorage": "/tmp/defaults" line.
- Repackage the edge.json file with the uf.tar container image.
- Navigate to the Containers tab and upload the package in the Container upload section.
- Create a default.yml file to configure the universal forwarder. See https://splunk.github.io/docker-splunk/ADVANCED.html#runtime-configuration for configuration options.
- Navigate to Splunk Edge Hub advanced settings and select Configure files to upload the default.yml file.
- Restart the container.
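As a rough sketch, a modified edge.json for the universal forwarder container might look like the following. The name value is inferred from the log file name below, and the port mappings reuse the earlier Expose ports example; your package's sample edge.json file is the authoritative starting point:
{
    "name": "uf-redhat-8-arm64",
    "containerArchive": "uf.tar",
    "portMap": ["51089:8089", "51097:9997"],
    "mappedStorage": "/tmp/defaults"
}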
To verify that the container is running, download the logs in the Tools tab. The splunk-container-client@uf-redhat-8-arm64.log file should contain messages if the container is running.
Access USB devices from a container
With Splunk Edge Hub OS version 2.1.0 or higher, Docker containers can access USB devices.
- Navigate to the Containers tab in the Splunk Edge Hub advanced settings page.
- Under Tools, select Splunk Edge Hub's connected USB device.
- Plug the USB device into your Splunk Edge Hub device.
- Select Scan for USB devices. Your USB device loads with a selection for device files. The device files are symbolic links or paths that you can allow access to from the container.
- Select the path you want to allow access to from the container. The feature generates a usbDevices field. If you're working with multiple USB devices, repeat the preceding steps for each device you want to provide container access to.
- Copy the generated usbDevices code and add it to the edge.json manifest file from the steps in Configure the container. See the illustrative sketch that follows.
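For illustration only, here's a sketch of a manifest after pasting in a generated usbDevices entry. The exact shape of the field comes from the Scan for USB devices tool, so always copy the generated code verbatim; the device path below is a hypothetical placeholder:
{
    "name": "hello-python",
    "containerArchive": "hello.tar",
    "usbDevices": ["<device_path_from_scan>"]
}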
Implement your own AI solution
The Splunk Edge Hub comes with a Neural Processing Unit (NPU) that is shared with containers in Splunk Edge Hub OS version 2.1.0 and higher. You can use TensorFlow Lite, an open-source machine learning framework that allows you to run trained models on a variety of devices. Use TensorFlow Lite with Splunk Edge Hub OS to take advantage of the 2 TOPS of processing offered by the NPU and greatly improve response times for applications such as computer vision.
Currently, NPU support is only available for the TensorFlow Lite machine learning framework.
If you're using the TensorFlow Lite library in your container, the interpreter object responsible for performing inference might look like the following example, where you provide the model_file variable to the container using the map files feature:
import tflite_runtime.interpreter as tf

def perform_inference():
    model_file = "<location_specified_in_mappedStorage>/<model_filename>"
    interpreter = tf.Interpreter(model_path=model_file)
    interpreter.allocate_tensors()
    # Note: set input tensors before invoking
    interpreter.invoke()
    return interpreter.get_output_details()
Upload AI model files using the Configure files button. The files will be available in the mappedStorage location provided in the edge.json manifest file for the container. Keeping the code that runs in the container independent from the actual model files is useful because the models can be trained elsewhere, such as in a cloud service, or retrieved from a public model repository.
Include the setup requirements for the tflite-vx-delegate dependency in your container. This allows your container's TensorFlow Lite code to use the delegate as an experimental delegate when instantiating the interpreter object. Here's an example of the code once it has NPU support:
import tflite_runtime.interpreter as tf

def perform_inference():
    model_file = "<location_specified_in_mappedStorage>/<model_filename>"
    delegates = [tf.load_delegate("/usr/lib/libvx_delegate.so")]
    interpreter = tf.Interpreter(model_path=model_file, experimental_delegates=delegates)
    interpreter.allocate_tensors()
    # Note: set input tensors before invoking
    interpreter.invoke()
    return interpreter.get_output_details()
Contact edgesupport@splunk.com for sample containers using NPU if needed.
(Optional) Set up the Splunk Edge Hub SDK
You can set up the Splunk Edge Hub SDK to facilitate client interactions with your Splunk Edge Hub device. See Set up the Splunk Edge Hub SDK.
This documentation applies to the following versions of Splunk® Edge Hub OS: 2.1.0