Splunk® SOAR (Cloud)

Python Playbook API Reference for Splunk SOAR (Cloud)



Understanding containers

Containers are the top-level data structure that playbook APIs operate on. Each container is a structured JSON object that can nest additional arbitrary JSON objects, which represent artifacts. A container is the top-level object against which automation runs.

Assign a label to a container to indicate the kind of content it contains. The label defines how containers are managed within the platform and where they are organized. Assign this label during the ingest phase, in the ingest configuration, when you configure an asset as a data source.

Following this model, you might label containers imported from a SIEM as "Incidents", containers imported from a vulnerability management product as "Vulnerabilities", and containers imported from an IP intelligence source as "Intelligence". For each label that the system ingests, a new top-level menu item appears within the product navigation so you can navigate to the list of containers for that label. In addition, playbooks, the mechanism by which automated actions are run on a container, are label-specific and run only on containers that match their label.
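The label-based routing described above can be sketched in a few lines of Python. This is an illustrative model only: the `PLAYBOOKS_BY_LABEL` mapping and the `playbooks_for` helper are assumptions made up for this example, not a Splunk SOAR API.

```python
# Hypothetical sketch: playbooks run only on containers matching their label.
# The mapping below is illustrative; real playbook-to-label associations are
# configured in the Splunk SOAR platform, not in code like this.
PLAYBOOKS_BY_LABEL = {
    "incident": ["triage_incident", "notify_soc"],
    "vulnerability": ["score_vulnerability"],
    "intelligence": ["enrich_indicators"],
}

def playbooks_for(container: dict) -> list:
    """Return the playbook names eligible to run automatically on this container."""
    return PLAYBOOKS_BY_LABEL.get(container.get("label", ""), [])
```

A container labeled "incident" would match the incident playbooks, while a container with an unknown or missing label matches none.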

This architecture allows for arbitrary data to be imported and run across the security operations domain, beyond the management of security incidents alone.

Data can be imported from various sources and can be structured or unstructured. Even in the case of structured data, information might be categorized, classified, named, and represented in incompatible and disparate formats. In either case, the data has to be normalized to be accessible and actionable by the platform. The platform uses apps that support ingest and interface with these assets. These apps provide the necessary functionality to map the raw data format from the source to a standard Common Event Format (CEF) schema, if applicable.

CEF is an open log management standard that improves the interoperability of security-related information from different network and security devices and applications. After the data is normalized into CEF format, automation can access its attributes without any ambiguity. Splunk SOAR makes both the original data in its native format and the normalized format available, because there is information fidelity and application-specific detail in the raw data that might not be well represented in CEF format.
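The normalization step an ingestion app performs can be sketched as a simple field mapping that keeps the raw record alongside the CEF view. The `FIELD_MAP` below is an illustrative assumption, not the mapping any particular app uses; real apps implement source-specific parsing.

```python
# Hypothetical sketch of ingest-time normalization: map source-specific field
# names onto standard CEF keys, while preserving the raw data because it may
# carry detail that is not well represented in CEF.
FIELD_MAP = {
    "src": "sourceAddress",       # illustrative source field -> CEF key
    "dst": "destinationAddress",
    "user": "sourceUserName",
}

def to_cef(raw: dict) -> dict:
    """Return an artifact-style dict holding both the CEF and raw views."""
    cef = {FIELD_MAP[key]: value for key, value in raw.items() if key in FIELD_MAP}
    return {"cef": cef, "data": raw}  # raw record kept in its native form
```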

Container schema

The contents of the container header and associated container data are exposed to the platform as JSON objects. Playbooks operate on these elements in order to make decisions and apply logic. The following code shows an example of the container schema, and the table immediately after the code defines the fields present in the code:

{
  "id": 107,
  "version": "1",
  "label": "incident",
  "name": "my_test_incident",
  "source_data_identifier": "64c2a9a4-d6ef-4da8-ad6f-982d785f14b2",
  "description": "this is my test incident",
  "status": "open",
  "sensitivity": "amber",
  "severity": "medium",
  "create_time": "2016-01-16 07:18:46.631897+00",
  "start_time": "2016-01-16 07:18:46.636966+00",
  "end_time": "",
  "due_time": "2016-01-16 19:18:00+00",
  "close_time": "",
  "kill_chain": "",
  "owner": "admin",
  "hash": "52d277ed6eba51d86190cd72405df749",
  "tags": [""],
  "asset_name": "",
  "artifact_update_time": "2016-01-16 07:18:46.631875+00",
  "container_update_time": "2016-01-16 07:19:12.359376+00",
  "ingest_app_id": "",
  "data": {},
  "artifact_count": 8
}
Field Description
id A unique identifier for the incident, generated by the platform.
version The version of this schema, for schema migration purposes.
label The label as specified in the ingest asset. When you configure an ingestion asset, you can define a label for the containers ingested from that asset. For example, you might define "Incident" from a Splunk asset or "Email" from an IMAP asset.
name The name of the item as found in the ingest source application, such as the incident name in a SIEM.
source_data_identifier The identifier of the container object as found in the ingestion source. An incident in a SIEM can have an identifier of its own that might be passed on to Splunk SOAR as part of the ingestion. If no source data identifier is provided, a GUID is generated and used instead.
description The description for the container as found in the ingest source.
status The status of this container. For example, you can define a status as new, open, closed, or as a custom status defined by an administrator.
sensitivity The sensitivity of this container such as red, amber, green, or white.
severity The severity of this container. For example, you can define severity as medium, high, or as a custom severity defined by an administrator.
create_time The timestamp of when this container was created in Splunk SOAR.
start_time The timestamp of when activity related to this container was first seen. This is also the time when the first artifact was created for this container. As artifacts are added to the container, the start time might change if an older artifact for that incident is added to the container.
end_time The timestamp of when activity related to this container was last seen. This is also the time when the last artifact was created for this container. As artifacts are added to the container, the end time might change if a later artifact for that incident is added to the container.
due_time The timestamp of when the SLA for this container expires, after which it is considered to be in breach of its SLA. The SLA for a container is either set by the user or determined by platform defaults based on the severity and the Event Settings. You can define default SLAs for each container severity from Main Menu > Administration > Event Settings > Response.
close_time The timestamp of when this container was closed or resolved by the user or playbook.
kill_chain If the ingestion source and app provide kill-chain information about the incident, it's stored in this field.
owner The user who currently owns the incident. Administrators can assign the container to any user in the system.
hash The hash of the container data as ingested. It is used to avoid adding duplicate containers to the system.
tags Tags assigned to the container.

asset The ID of the asset from which the container was ingested. If the user created this incident, this field does not contain a value.
asset_name The name of the asset from which the container was ingested. If the user created this incident, this field does not contain a value.
artifact_update_time The time when an artifact was last added to the container.
container_update_time The time when the container was last updated. This includes adding an artifact or changing any state or field of the container.
data This is a dictionary of the raw container data as seen in the ingestion asset or application. This is the data that is parsed to populate the artifacts and its CEF fields.
artifact_count This is the count of total artifacts that belong to this container.
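Playbook logic typically reads these header fields directly from the container dictionary. The sketch below shows one way to check the due_time field against the current time, as an SLA-breach test. The helpers are assumptions made for illustration; the only thing taken from the schema above is the timestamp style with the "+00" UTC suffix.

```python
from datetime import datetime

def parse_soar_time(ts: str) -> datetime:
    """Parse the "YYYY-MM-DD HH:MM:SS(.ffffff)+00" timestamps shown in the schema."""
    # Expand the bare "+00" UTC suffix into "+0000" so strptime's %z accepts it.
    if ts.endswith("+00"):
        ts = ts + "00"
    fmt = "%Y-%m-%d %H:%M:%S.%f%z" if "." in ts else "%Y-%m-%d %H:%M:%S%z"
    return datetime.strptime(ts, fmt)

def is_past_due(container: dict, now: datetime) -> bool:
    """True if an open container's due_time has passed, suggesting an SLA breach."""
    due = container.get("due_time", "")
    if not due or container.get("status") == "closed":
        return False
    return parse_soar_time(due) < now
```

A playbook might use such a check to escalate severity or notify an owner when a container approaches or passes its SLA.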

All containers imported into the system start in the open state and can eventually be closed by a playbook or a user. A closed state means the platform no longer runs playbooks automatically, but the container continues to be updated with any new artifacts that are found, and users can still take actions or run playbooks manually.

You can always re-open the container through the user interface.
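The open/closed behavior described above can be modeled in a short sketch: closed containers still accept new artifacts, but automatic playbook runs are skipped. This is an illustrative model of the described behavior, not Splunk SOAR internals.

```python
# Hypothetical model of container status semantics: closing a container stops
# automatic playbook runs but does not stop artifact updates.
def should_auto_run(container: dict) -> bool:
    """Automatic playbook execution applies only to containers that are not closed."""
    return container.get("status") != "closed"

def add_artifact(container: dict, artifact: dict) -> None:
    """Artifacts may be added regardless of container status."""
    container.setdefault("artifacts", []).append(artifact)
    container["artifact_count"] = container.get("artifact_count", 0) + 1
```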

There are some variations in fields returned when the container is queried or retrieved from a REST API versus the container object passed in playbook APIs.

Last modified on 03 September, 2021

This documentation applies to the following versions of Splunk® SOAR (Cloud): current
