Splunk® SOAR (Cloud)

REST API Reference for Splunk SOAR (Cloud)


Using the REST API reference for Splunk SOAR (Cloud)

The platform supports RESTful APIs to create, update, and selectively remove objects from the system.


REST API requests must be performed over HTTPS, and only authorized users and devices are allowed. Both user-based and token-based authentication methods exist.

Some REST API calls, such as deleting records, require user-based authentication. HTTP Basic authentication for user-based requests can be performed with the Python requests module (the URL below is a placeholder for your instance):

requests.get('https://<soar-instance>/rest/container', auth=('admin', 'password'))
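For reference, HTTP Basic authentication simply base64-encodes the credentials into an Authorization header; the requests module builds this header for you from the auth tuple. A minimal sketch (the admin/password credentials are placeholders, not real account values):

```python
import base64

# HTTP Basic auth sends "Authorization: Basic base64(username:password)".
# requests constructs this automatically when you pass auth=(user, password);
# the credentials below are placeholders.
username, password = "admin", "password"
credentials = base64.b64encode("{}:{}".format(username, password).encode()).decode("ascii")
auth_header = "Basic " + credentials
print(auth_header)  # Basic YWRtaW46cGFzc3dvcmQ=
```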

Authentication can also be provided in the URL:

curl -u "username:password" "https://<soar-instance>/rest/container"

For token-based authentication, the token can be provided in the URL, or ph-auth-token must be present in the HTTP headers. Using the token in the password field of the request with no username allows REST access without requiring a valid user. See the sample Python script in the "Provisioning an authorization token" section for an example using requests with ph-auth-token in the HTTP headers.

curl -u ":authToken" "https://<soar-instance>/rest/container"

Provisioning an authorization token

Use the automation user provided by default to acquire an authorization token. This user, and any other automation-type users, are service accounts that provide access to the REST API with customizable restrictions.

  1. Log in to Splunk SOAR (Cloud) as an administrative user.
  2. From the Main Menu, select Administration.
  3. Select User Management > Users.
  4. Click + User to add a new user.
  5. Select Automation as the user type.
  6. Provide a user name and fill in the Allowed IPs field. Use any for unrestricted access, or specify a single IP address or a single netmask.
  7. Choose one or more roles for the new user. The default Automation role is provided for this purpose and has a broad set of permissions that allows most activities that a service account might need. If you wish to have a more restricted set of permissions for a certain playbook or activity, create a role with the desired permissions and assign that instead.
  8. Click Create.

You can view the token and other configuration information in the Authentication Configuration for REST API panel by clicking the user name you just created. Copy this JSON-formatted data and provide it to the script or application that will send the REST requests. Below is some example JSON data (the server value is a placeholder for your instance URL):

{
  "ph-auth-token": "cs76HmsNcWjkd6kWmGzUa18LcbtQx95vMW1bsdeP7gU=",
  "server": "https://<soar-instance>"
}

The configuration contains:

Parameter      Datatype    Description
ph-auth-token  String      Contains the generated authorization token. This token is only valid from the associated IP addresses provided in the USER EDIT panel.
server         String/URL  Contains the URL that can be used to POST to this instance. Provided for convenience.
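The JSON copied from the panel can be loaded directly by a script to build the request headers and base URL. A small sketch, assuming a hypothetical server value:

```python
import json

# JSON as copied from the Authentication Configuration for REST API panel.
# The token matches the example above; the server URL is a placeholder.
config_blob = '''
{
  "ph-auth-token": "cs76HmsNcWjkd6kWmGzUa18LcbtQx95vMW1bsdeP7gU=",
  "server": "https://soar.example.com"
}
'''
config = json.loads(config_blob)
headers = {"ph-auth-token": config["ph-auth-token"]}
base_url = config["server"]
print(base_url + "/rest/container")  # https://soar.example.com/rest/container
```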

If the token is compromised or needs to be re-provisioned at any point, click the RE-GENERATE AUTH TOKEN button. A new token is provided and the old token is no longer accepted.

Example Python script

This is a simple script that shows how to take a CSV file that represents some activity and use it to generate containers, which will be called "incidents", and artifacts, which will be called "events", in Splunk SOAR (Cloud).

Data used for this example:

"IP ADDRESS",PORT,SHA256,SCORE,COMMENT,"INDICATOR ID"
,22,dc4c065cd7618b508857246f8243922253aad50a9943c6e206027db11919bcf4,99,"hack attempt",12387
,80,3e4ebded4ee802790e465893f17fbc2c456426804381a44b343ae1152e13ebbe,5,"normal operation",87492
,443,4a6b9635eae00157b7b38a5a92c23df01df90b656bb61450497086cf074d3f89,20,"bad certificate",19
,80,e8d8711bf846af1580ae390da7e6722633ff187d976246b547df9eac25ac5a43,10,"http vpn",7373642

The made-up data above contains some standard fields that are expected from various technologies, such as IP address, port, and a SHA256 hash, as well as some fields that are custom and specific to the product producing them, such as "score" and "comment". The above snippet of data can be put into a file called "example.csv". The following Python program will read the file and create one incident with an artifact per line.
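Before the full script, a quick sketch of how Python's csv module splits one of these lines into the six fields the script indexes; the quoted comment field is handled automatically (the IP address field is blank in the sample data):

```python
import csv
import io

# First data row from example.csv; the quoted comment contains a space.
line = ',22,dc4c065cd7618b508857246f8243922253aad50a9943c6e206027db11919bcf4,99,"hack attempt",12387'
row = next(csv.reader(io.StringIO(line)))

# row[0]=IP address, row[1]=port, row[2]=SHA256, row[3]=score,
# row[4]=comment, row[5]=indicator id
print(row[4], row[5])  # hack attempt 12387
```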

# This example uses the 3rd party "requests" module. This can be installed
# with "pip install requests" from your client machine. If requests is not
# available, any library which supports HTTPS POSTs and basic authentication
# can be used.
import sys
import csv
import json
import requests

AUTH_TOKEN = "tuI6TaoiBv3fjtFcuQLKciCY+niZ87C2l4FLWcWQf7I="
PHANTOM_SERVER = "<soar-instance>"  # placeholder: host name or IP of your instance
ARTIFACT_LABEL = "event"

headers = {
  "ph-auth-token": AUTH_TOKEN
}

container_common = {
  "description": "Test container added via REST API call",
}

# disable certificate warnings for self signed certificates
requests.packages.urllib3.disable_warnings()

def add_container(name, sid):
  url = 'https://{}/rest/container'.format(PHANTOM_SERVER)

  post_data = container_common.copy()
  post_data['name'] = '{} ({})'.format(name, sid)
  post_data['source_data_identifier'] = sid
  json_blob = json.dumps(post_data)

  # set verify to False when running with a self signed certificate
  r = requests.post(url, data=json_blob, headers=headers, verify=False)
  if r is None or r.status_code != 200:
    if r is None:
      print('error adding container')
    else:
      print('error {} {}'.format(r.status_code, json.loads(r.text)['message']))
    return False

  return r.json().get('id')

def add_artifact(row, container_id):
  url = 'https://{}/rest/artifact'.format(PHANTOM_SERVER)

  post_data = {}
  post_data['name'] = 'artifact for {}'.format(row[4])
  post_data['label'] = ARTIFACT_LABEL
  post_data['container_id'] = container_id
  post_data['source_data_identifier'] = "source data primary key for artifact or other identifier"

  # The cef key is intended for structured data that can be used when
  # dealing with product agnostic apps or playbooks. Place any standard
  # CEF fields here.
  cef = {
    'sourceAddress': row[0],
    'sourcePort': row[1],
    'hash': row[2],
  }

  # The "data" key can contain arbitrary json data. This is useful for
  # keeping data that does not comfortably fit into CEF fields or is highly
  # product specific
  data = cef.copy()
  data['score'] = row[3]
  data['comment'] = row[4]

  post_data['cef'] = cef
  post_data['data'] = data

  json_blob = json.dumps(post_data)

  # set verify to False when running with a self signed certificate
  r = requests.post(url, data=json_blob, headers=headers, verify=False)

  if r is None or r.status_code != 200:
    if r is None:
      print('error adding artifact')
    else:
      error = json.loads(r.text)
      print('error {} {}'.format(r.status_code, error['message']))
    return False

  resp_data = r.json()
  return resp_data.get('id')

def load_data(filename):
  with open(filename, 'r', newline='') as csvfile:
    reader = csv.reader(csvfile)
    first_row = True
    for row in reader:
      if first_row:
        # skip the header
        first_row = False
        continue
      if not row:
        continue

      container_id = add_container(row[4], row[5])
      if not container_id:
        continue
      print('added container {}'.format(container_id))
      artifact_id = add_artifact(row, container_id)
      if artifact_id:
        print('added artifact {}'.format(artifact_id))

if __name__ == '__main__':
  if len(sys.argv) < 2:
    print('Filename is required')
    sys.exit(1)
  load_data(sys.argv[1])


Checking for existing records using REST

The above code is simple and does not cover all of the available options when working with the REST API. For example, the CSV file above contains an "INDICATOR ID", which is the ID of the high-level structure in the source product. It is the closest thing to an incident in that product, and more than one row in the CSV file may be associated with a specific ID. The same relationship can be preserved when adding data with the REST API. To do this, query to see whether a particular record already exists in your system. The following code snippet shows how.

indicator_id = row[5]
query_url = "https://{}/rest/container".format(PHANTOM_SERVER)
query_url += "?_filter_source_data_identifier=\"{}\"".format(indicator_id)
query_url += "&page_size=1"
response = requests.get(query_url, headers=headers, verify=False)
if response is None:
  print("Query failed.")
elif response.status_code != 200:
  print('Error: code {}, content: {}'.format(response.status_code, response.text))
else:
  resp_data = response.json()
  num_records = resp_data.get('count')
  if num_records:
    # document already exists, no need to insert
    incident_id = resp_data['data'][0]['id']
  else:
    # document does not exist, it must be added before continuing.
    incident_id = add_container(row[4], indicator_id)

Now the example.csv might contain some duplicate INDICATOR IDs.

"IP ADDRESS",PORT,SHA256,SCORE,COMMENT,"INDICATOR ID"
,22,dc4c065cd7618b508857246f8243922253aad50a9943c6e206027db11919bcf4,99,"hack attempt",12387
,80,3e4ebded4ee802790e465893f17fbc2c456426804381a44b343ae1152e13ebbe,5,"normal operation",87492
,443,4a6b9635eae00157b7b38a5a92c23df01df90b656bb61450497086cf074d3f89,20,"bad certificate",19
,80,e8d8711bf846af1580ae390da7e6722633ff187d976246b547df9eac25ac5a43,10,"http vpn",7373642
,80,c8885d33c29e9b7f2278b39afb3ebddb8de526143ca32f7f1c618dc8841a5983,90,"data exfiltration",12387
,22,f4adf8a8f6eaff961887f0076ce5080902599e25ba14421e3e2e47f5c990d876,5,"authorized access",7373642

The above code uses the filter feature of the REST API, which allows simple queries to be run. The HTTP GET request asks for any one container record that has a "source_data_identifier" field with a particular value. If the count returned is 0, no such record exists and a new one should be created before adding the artifact data. If a record does exist, its ID has been returned and can be used when creating the artifact record. This ensures that the one-to-many relationship between containers and artifacts is preserved.
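Filter values that contain quotes or other special characters should be URL-encoded before being placed in the query string. A sketch of building the same query with urllib.parse (the host name is a placeholder):

```python
from urllib.parse import quote

PHANTOM_SERVER = "soar.example.com"  # placeholder host
indicator_id = 12387

# _filter_<field> takes a JSON-encoded value; string comparisons need
# surrounding double quotes, which quote() percent-encodes as %22.
filter_expr = quote('"{}"'.format(indicator_id))
query_url = "https://{}/rest/container?_filter_source_data_identifier={}&page_size=1".format(
    PHANTOM_SERVER, filter_expr)
print(query_url)
# https://soar.example.com/rest/container?_filter_source_data_identifier=%2212387%22&page_size=1
```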

Setting severity and sensitivity of data

Another common requirement is to set the severity and sensitivity of containers and artifacts. Both containers and artifacts have a severity which can be set to "low", "medium" or "high". Containers may also have their sensitivity set to "white", "green", "amber" or "red" as specified in the Traffic Light Protocol. The following Python snippet shows some possible scenarios.

# This applies to containers and artifacts. For this example company policy
# states that a score of 75 or higher is a high severity event and 50 or
# higher is a medium severity event.
score = int(row[3])
if score >= 75:
  post_data['severity'] = 'high'
elif score >= 50:
  post_data['severity'] = 'medium'
else:
  post_data['severity'] = 'low'

# This applies only to containers. Containers for VPN events are to have
# limited distribution, while access to data exfiltration events requires
# a higher level of privilege.

comment = row[4]
if comment == 'data exfiltration':
  post_data['sensitivity'] = 'red'
elif 'vpn' in comment:
  post_data['sensitivity'] = 'amber'
else:
  post_data['sensitivity'] = 'green'

Posting over REST and automation

By default, POSTing a new artifact will trigger automation on the associated container. However, this can be controlled by setting the "run_automation" parameter in your POST JSON. Setting this to false will prevent automation from running after the artifact is added. This happens on a per-container basis. Typically a script ingesting multiple artifacts per container in a batch would turn off automation on all artifacts except the last one which would then cause automation to be run on the fully populated container. A pseudo-code example follows:

  # Acquire some container/artifact data to post
  container = get_container()
  artifacts = get_artifact_list_for_container(container)

  # post your container
  container_id = post_container(container)

  # Set run_automation to false on all but the last artifact, then post
  # them all. Automation runs once, when the final artifact is added.
  for artifact in artifacts[:-1]:
    artifact["run_automation"] = False
  for artifact in artifacts:
    post_artifact(artifact, container_id)

The "run_automation" flag is also available when POSTing new containers. However, it defaults to false there, and it is typically not necessary to modify it, since running automation on a container usually requires artifact data.
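For example, a batch loader might build each artifact POST body like this, enabling automation only on the final artifact (the artifact names and container ID are hypothetical):

```python
import json

def artifact_body(name, container_id, is_last):
    # run_automation stays False for every artifact except the last one,
    # so automation fires once, on the fully populated container.
    return json.dumps({
        "name": name,
        "container_id": container_id,  # hypothetical container id
        "run_automation": is_last,
    })

names = ["artifact 1", "artifact 2", "artifact 3"]
bodies = [artifact_body(n, 42, i == len(names) - 1) for i, n in enumerate(names)]
print(json.loads(bodies[-1])["run_automation"])  # True
```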

Last modified on 22 February, 2023

This documentation applies to the following versions of Splunk® SOAR (Cloud): current
