Splunk® SOAR (On-premises)

Build Playbooks with the Playbook Editor


Add custom code to your playbook with a custom function

Use custom functions to expand the functionality of your playbooks. With custom functions and your Python skills, you can add to the kinds of processing performed in a playbook, such as applying string transformations, parsing a raw data input, or calling a third party Python module. You can also customize the way custom functions can interact with the REST API. You can share custom functions across your team and across multiple playbooks to increase collaboration and efficiency.

To create custom functions, you must have Edit Code permissions, which an administrator can configure in Administration > User Management > Roles and Permissions. For more information on the Edit Code permission, see Add a role in the Administer Splunk SOAR (On-premises) manual.

Integrate custom functions into playbooks with Utility blocks

Use Utility blocks in the Visual Playbook Editor to run your custom function code. For more information on Utility blocks, see Add functionality to your playbook using the Utility block in this manual.

Create a custom function

To create a custom function, follow these steps:

  1. Navigate to your Splunk SOAR (On-premises) instance.
  2. From the Home menu, select Custom Functions.
  3. Select + Custom Function. Basic custom function code appears in the editor.
  4. Create a name for your custom function. Playbooks reference custom functions by name. After saving your custom function, you can only change its name by using the Save as button.
  5. (Optional) Add a description. Descriptions explain the custom function's use. Descriptions appear in both the custom function listing page and in the Visual Playbook Editor.
  6. Add your desired custom Python code in the editor, in the location indicated in the code. You can optionally include input parameters and output variables, as described in the following sections; see the skeleton sketch after these steps for the overall structure.
  7. After adding input parameters and output variables, validate and save your custom function, as described in Validate and save your custom function later in this article.

Names of custom functions, input parameters, and output variables must use valid Python identifiers: A-Z, a-z, 0-9, and underscores.
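
Here is a minimal sketch of a custom function, modeled on the generated skeleton and the examples later in this topic. The function name, the input parameter sample_input, and the output variable sample_output are illustrative only; the editor generates the banner comment and docstring for you.

def my_custom_function(sample_input=None, **kwargs):
    """
    Args:
        sample_input
    
    Returns a JSON-serializable object that implements the configured data paths:
        sample_output
    """
    ############################ Custom Code Goes Below This Line #################################
    import json
    import phantom.rules as phantom

    outputs = {}

    # Write your custom code here...
    outputs['sample_output'] = sample_input

    # Return a JSON-serializable object
    assert json.dumps(outputs)  # Will raise an exception if the :outputs: object is not JSON-serializable
    return outputs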

Add an input parameter to a custom function

Adding input parameters to your custom function is optional. You can populate each input parameter with a data path or a literal string when calling the custom function from a playbook. You can set a data path from any valid upstream block, artifact data, or container data.

To add an input to a custom function, follow these steps:

  1. Within the New Custom Functions page, select Add Input.
  2. Enter a Variable* name that is a valid Python identifier.
  3. Select an input type: List or Item
    Select List to provide an entire collection. Selecting List causes the Visual Playbook Editor to generate code that passes the entire collection to the custom function without looping. For example, if there is a container with multiple artifacts, each with a destination address Common Event Format (CEF) field, and you configure a list type input, the custom function is called once and passes in a list of all destination addresses as shown:
     container_data_0 = phantom.collect2(container=container, datapath=['artifact:*.cef.destinationAddress', 'artifact:*.id'])
     parameters = []
     container_data_0_0 = [item[0] for item in container_data_0]
     parameters.append({
         'list_type_input': container_data_0_0,
     })
    

    Select Item to provide a single item from a collection. Selecting Item causes the Visual Playbook Editor to generate code that loops over items in the collection, calling the custom function for each item. For example, if there is a container with multiple artifacts, each with a destination address CEF field, and you configure an item type input with a destination address of those artifacts, the custom function is called once per destination address as shown:

    container_data_0 = phantom.collect2(container=container, datapath=['artifact:*.cef.destinationAddress', 'artifact:*.id'])
    parameters = []
    for item0 in container_data_0:
        parameters.append({
           'item_type_input': item0[0],     
        })
    
  4. (Optional) Select a CEF data type. When you are populating an input, this CEF data type is the default filter limiting the values that appear.
  5. (Optional) Add a custom placeholder. The custom placeholder you create appears as the input placeholder in the Visual Playbook Editor configuration panel.
  6. (Optional) Add help text. Help text is a longer description that appears as a tooltip on the input in the Visual Playbook Editor.
  7. (Optional) Select Add Input to add another input. You can add a maximum of 10 inputs to a custom function.

A single custom function can have a mix of item and list type inputs. When the input types are mixed, the custom function loops over all the item type inputs and passes the list type inputs to every custom function call.
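
For example, with an item type input named item_type_input and a list type input named list_type_input, the generated code might look like the following sketch, modeled on the two examples above. The second data path, artifact:*.cef.sourceAddress, is illustrative only. The custom function is called once per destination address, and the full list of source addresses is passed to every call:

container_data_0 = phantom.collect2(container=container, datapath=['artifact:*.cef.destinationAddress', 'artifact:*.id'])
container_data_1 = phantom.collect2(container=container, datapath=['artifact:*.cef.sourceAddress', 'artifact:*.id'])

parameters = []

# The list type input is built once and reused in every parameter set
container_data_1_0 = [item[0] for item in container_data_1]

# The item type input produces one custom function call per destination address
for item0 in container_data_0:
    parameters.append({
        'item_type_input': item0[0],
        'list_type_input': container_data_1_0,
    })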

Add an output variable to a custom function

Adding output variables to your custom function is optional. Output variables are usable as inputs in other downstream blocks, such as Action, API, Filter, Decision, Format, Prompt, or other Utility blocks. A custom function output variable gets published as a data path in this format: <block_name>:custom_function_result.data.<output_variable>. Make sure to give your output variables clear and meaningful names in your custom code.

To add an output to a custom function, follow these steps:

  1. Select Add Output
  2. Select an output type: List or Item (default)
    • Select List to create a list that can optionally be used by playbook blocks downstream, as shown in this code example:
      def regex_extract_ipv4(input_string=None):
          import json
          import phantom.rules as phantom
          import re

          outputs = []
          ip_list = []

          # Collect every IPv4 address found in each input string
          for value in input_string:
              if value:
                  matches = re.findall(r'(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)', value)
                  for ip in set(matches):
                      ip_list.append(ip)

          # Deduplicate across all inputs and build the list output
          for ip in set(ip_list):
              outputs.append({"ipv4": ip})
      
          assert json.dumps(outputs)  # Will raise an exception if the :outputs: object is not JSON-serializable
          return outputs
      
    • Select Item to provide a single item that can optionally be used by playbook blocks downstream. Refer to the code examples that use the Item output type later in this article.
  3. Here is an example of a data path for the List output example, which combines the custom function block name, regex_extract_ipv4_1, with the output variable, ipv4:
    regex_extract_ipv4_1:custom_function_result.data.ipv4
  4. (Optional) Select a CEF data type. The data type is the Common Event Format (CEF) field that you want to use.
  5. (Optional) Select Add Output to add another output. You can add up to 10 outputs to a custom function.

Splunk SOAR does not validate that the custom function results match the declared output data paths. Any data in the custom function result can be accessed via a data path.
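
For example, in the following sketch, only items.*.lower_case_string is declared as an output, but the undeclared original_string key is still reachable through a data path such as <block_name>:custom_function_result.data.items.*.original_string. The names are illustrative only:

def lower_with_extra(values=None, **kwargs):
    """
    Args:
        values
    
    Returns a JSON-serializable object that implements the configured data paths:
        items.*.lower_case_string
    """
    ############################ Custom Code Goes Below This Line #################################
    import json

    items = []
    for value in (values or []):
        items.append({
            'lower_case_string': str(value).lower(),
            'original_string': value,  # Not declared as an output, but still accessible via a data path
        })

    outputs = {'items': items}

    assert json.dumps(outputs)  # Will raise an exception if the :outputs: object is not JSON-serializable
    return outputs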

Validate and save your custom function

After adding input parameters and output variables, validate and save your custom function.

  1. Select Validate to check that your Python code is valid.
  2. Select Save and select a repository location to save your custom function.
    Saving your custom function automatically validates your Python code. Optionally add a commit message about your function.
  3. (Optional) After saving your custom function, select the More icon to view the documentation, export the function, view the audit log, or view the history of the function.

Convert output type for existing custom functions

Custom functions created with version 5.5 and after have two output types, item and list. Legacy custom functions only have the item output type. Existing playbooks and custom code using these custom functions are not affected by this change.
To update existing custom functions that use the item output type to use the list output type, follow these steps:

  1. Open the custom function.
  2. Select Edit.
  3. Change the output type to List.
  4. If needed, update your custom code.
  5. Update the data paths that use this output type. The existing datapath looks like this example, including an asterisk (*) character:
    regex_extract_ipv4_1:custom_function_result.data.*.ipv4

    The asterisk (*) character is no longer needed, because the output is returned and handled downstream as a list. Update the datapath to remove the asterisk, as shown in this example:

    regex_extract_ipv4_1:custom_function_result.data.ipv4
  6. Select Save.

If needed, change downstream data paths that use this output type.

Use draft mode with custom functions

Consider using draft mode to save custom functions that you are in the process of creating. You can only have one draft of an original custom function at a time. Use draft mode custom functions in playbooks to help test or debug the custom function. If you have created a function that isn't yet valid, it will automatically be saved in draft mode. The Draft Mode toggle is located at the top of the Custom Functions panel.

You can use draft mode in the following ways:

  • Toggle draft mode on and off when creating or editing a custom function.
  • When you make an error in the Python code and try to save it, an error message displays letting you know that you can only save it in draft mode. To save it as a draft, follow these steps:
  1. If it is your first time saving the custom function, select the repository where you want to save it.
  2. Enter a commit message.
  3. Select Save as draft. In draft mode, you can toggle between the original version and the draft version by selecting the View original or the View draft button.

After you have finished making your edits, you can take the function out of draft mode by following these steps:

  1. Toggle Draft Mode to Off.
  2. Select Save. If you have any associated playbooks, a message appears alerting you about each of the playbooks using the draft custom function, or the custom function that you are overriding.
  3. Select Next.
  4. Enter a commit message.
  5. Select Save.

Switching Draft Mode to Off deletes the draft copy of the custom function and removes all playbook associations. When you take a custom function out of Draft Mode, you must update any playbooks using the draft version, because the draft custom function they refer to no longer exists.

Save a copy of your custom function

To save a copy of your custom function, follow these steps:

  1. Select Edit, then select the arrow next to the Save button.
  2. Select Save As and create a name, select the repository you want your custom function saved to, and write a commit message.
  3. Select Save again to save it as a copy.

Delete a custom function

To delete a custom function, follow these steps:

  1. Navigate to the Custom Functions listing page.
  2. Check the box next to the custom function you want to delete.
  3. Select Delete. If a custom function is being used in any playbooks, a warning displays listing all the associated playbooks for that custom function. You can switch active playbooks to inactive before confirming the deletion of a custom function.
  4. Select Delete.

You cannot bulk delete custom functions.


Write useful custom functions

Structure custom function outputs like action results whenever possible. This means the preferred data structure should be a list of dictionaries, with a meaningful name for each field in the dictionary. For example, if you are converting strings from upper case to lower case in a custom function block named lower_1, your custom function might output something like the following:

{
    'items': [{"lower_case_string": "abcd"}, {"lower_case_string": "efgh"}],
    'total_lowered_count': 2
}

Based on this output, you can access the data through the following data paths. Each entry in the Custom function datapaths column is prefixed with lower_1:custom_function_result.data to form the full data path. The Collect2 results column shows what the results of passing the full data paths into a downstream phantom.collect2 call would look like.

Custom function datapaths Collect2 results
items [[{"lower_case_string": "abcd"}, {"lower_case_string": "efgh"}]]
items.*.lower_case_string ["abcd", "efgh"]
items.0.lower_case_string ["abcd"]
total_lowered_count [2]
items.*.lower_case_string, total_lowered_count [["abcd", 2], ["efgh", 2]]

Here is the custom function that produced the output above:

def lower(values=None, **kwargs):
    """
    Args:
        values
    
    Returns a JSON-serializable object that implements the configured data paths:
        items.*.lower_case_string (CEF type: *)
    """
    ############################ Custom Code Goes Below This Line #################################
    import json
    from phantom.rules import debug

    if values is None:
        values = []
    
    # Write your custom code here...
    items = []
    for value in values:
        item = None
        try:
            item = value.lower()
        except Exception:
            pass  # Leave item as None for values that cannot be lowercased
        finally:
            items.append({
                'lower_case_string': item
            })

    outputs = {
        'items': items,
        'total_lowered_count': len(items),
    }

    # Return a JSON-serializable object
    assert json.dumps(outputs)  # Will raise an exception if the :outputs: object is not JSON-serializable
    return outputs
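
To consume these values downstream, a code or custom block can pass the full data paths from the table to phantom.collect2. The following is a minimal sketch, assuming it runs in a downstream block where container is in scope:

results = phantom.collect2(
    container=container,
    datapath=[
        'lower_1:custom_function_result.data.items.*.lower_case_string',
        'lower_1:custom_function_result.data.total_lowered_count',
    ]
)

# Per the table above, results looks like [["abcd", 2], ["efgh", 2]]
lowered_strings = [row[0] for row in results]
phantom.debug('lowered strings: {}'.format(lowered_strings))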

Use datetime objects with custom functions

JSON is the primary language playbooks use to communicate. As such, values that aren't JSON-encodable must be converted into JSON-encodable representations before they can be serialized to JSON. The playbook API uses Python's built-in json module for encoding and decoding JSON. However, this module does not support datetime.datetime objects without a custom encoder or decoder.
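
For example, the following sketch shows that json.dumps() raises a TypeError for a raw datetime.datetime value, but succeeds once the value is encoded as an ISO string:

import json
from datetime import datetime, timezone

now = datetime.now(timezone.utc)

try:
    json.dumps({'timestamp': now})
except TypeError as err:
    print('Cannot serialize a raw datetime: {}'.format(err))

# Encode the datetime as an ISO string before passing it between blocks
print(json.dumps({'timestamp': now.isoformat()}))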

To use datetime.datetime objects, always pass encoded data between blocks. These objects can be encoded as ISO strings. The following custom function shows how to decode a datetime.datetime object, modify the object, and re-encode the object.

def add_one_day(iso_formatted_datetime=None, **kwargs):
    """
    Adds one day to an iso formatted datetime. 
    ...
    
    Args:
        iso_formatted_datetime: An ISO-formatted datetime, e.g. '2020-05-18T20:11:39.198140+00:00'

    
    Returns a JSON-serializable object that implements the configured data paths:
        iso_formatted_datetime (CEF type: <output_contains>): An ISO formatted datetime which shall be one day later than the input ISO-formatted datetime.

    """
    ############################ Custom Code Goes Below This Line #################################
    import json
    from phantom.rules import debug

    from datetime import datetime, timedelta
    from pytz import timezone
    import dateutil.parser

    outputs = {}

    # Write your custom code here...
    utc = timezone('UTC')

    # Decode the ISO-formatted string into a native python datetime
    dt = dateutil.parser.parse(iso_formatted_datetime).replace(tzinfo=utc)

    # Add one day
    dt += timedelta(days=1)
    
    # Encode data before returning it
    augmented_iso_formatted_datetime = dt.isoformat()

    # Set the output
    outputs['iso_formatted_datetime'] = augmented_iso_formatted_datetime

    # Return a JSON-serializable object
    # This assertion would not pass without encoding the dt variable.
    assert json.dumps(outputs)  # Will raise an exception if the :outputs: object is not JSON-serializable
    return outputs
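
As a quick check of the round trip, calling the function with a sample ISO string returns a string exactly one day later. This usage sketch runs anywhere the pytz and dateutil packages are installed:

result = add_one_day(iso_formatted_datetime='2020-05-18T20:11:39.198140+00:00')
print(result['iso_formatted_datetime'])  # 2020-05-19T20:11:39.198140+00:00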

Playbook APIs supported from within a custom function

If you choose, you can add one or more playbook API calls to the Python code for your custom function.

The following categories of playbook APIs are supported from within a custom function. For more information on playbook APIs, see Playbook automation API in the Playbook API Reference manual.

  • Playbook automation APIs
  • Container automation APIs
  • Data management automation APIs
  • Data access automation APIs
  • Session automation APIs
  • Vault automation APIs
  • Network automation APIs

Use the REST API from within a custom function

In addition to using playbook APIs, you can also use the REST API from within a custom function. This is helpful for reading and writing objects in Splunk SOAR that are not accessible through the playbook APIs. For example, suppose you have multiple playbooks that always start with the same three blocks: set the status to In Progress, set the severity to Needs Triage, and add a comment to the container with the text "automation started." You can write a custom function named "setup" that performs those tasks, replacing the three blocks with a single custom function block. This gives you the same playbook functionality with fewer blocks and lets you modify the status, severity, or comment value in a single place that then updates all of your playbooks. See the REST API Reference for Splunk SOAR (On-premises) to learn more about the capabilities of the REST API and the specific syntax needed for each type of request.

The Playbook API provides the following methods to call the REST API and to avoid storing credentials in the custom function. For more information on the Playbook API, see the Playbook API Reference.

Playbook API Description
phantom.build_phantom_rest_url() Combine the base URL and the specific resource path, such as /rest/artifact.
phantom.requests Use methods such as get and post from the Python 'requests' package to perform HTTP requests. If you use phantom.requests, you don't need to set up the session or authentication headers by hand.

The following example combines these methods to query the tags of an indicator with a given ID.

def list_indicator_tags(indicator_id=None, **kwargs):
    """
    List the tags on the indicator with the given ID
    
    Args:
        indicator_id: The ID of the indicator to list the tags for
    
    Returns a JSON-serializable object that implements the configured data paths:
        tags: The tags associated with the given indicator
    """
    ############################ Custom Code Goes Below This Line #################################
    import json
    import phantom.rules as phantom
    outputs = {}

    # Validate the input
    if indicator_id is None:
        raise ValueError('indicator_id is a required parameter')

    # phantom.build_phantom_rest_url will join positional arguments like you'd expect (with URL encoding)
    indicator_tag_url = phantom.build_phantom_rest_url('indicator', indicator_id, 'tags')

    # Using phantom.requests ensures the correct headers for authentication
    response = phantom.requests.get(
        indicator_tag_url,
        verify=False,
    )
    phantom.debug("phantom returned status code {} with message {}".format(response.status_code, response.text))

    # Get the tags from the HTTP response
    indicator_tag_list = response.json()['tags']
    phantom.debug("the following tags were found on the indicator: {}".format(indicator_tag_list))
    outputs['tags'] = indicator_tag_list

    # Return a JSON-serializable object
    assert json.dumps(outputs)  # Will raise an exception if the :outputs: object is not JSON-serializable
    return outputs
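
As another example, here is one way the "setup" custom function described earlier might be written. This is a sketch only: the container and container_comment endpoints, and the status and severity values, are assumptions to verify against the REST API Reference before use.

def setup(container_id=None, **kwargs):
    """
    Set the container status and severity, then add a comment, using the REST API.
    
    Args:
        container_id: The ID of the container to update
    
    Returns a JSON-serializable object that implements the configured data paths:
        success: True if every request returned a 2xx status code
    """
    ############################ Custom Code Goes Below This Line #################################
    import json
    import phantom.rules as phantom

    outputs = {}

    if container_id is None:
        raise ValueError('container_id is a required parameter')

    # Update the container status and severity in a single request.
    # The field names below are assumptions; check the REST API Reference for the exact schema.
    container_url = phantom.build_phantom_rest_url('container', container_id)
    update_response = phantom.requests.post(
        container_url,
        json={'status': 'in progress', 'severity': 'needs triage'},
        verify=False,
    )
    phantom.debug('container update returned {}: {}'.format(update_response.status_code, update_response.text))

    # Add a comment to the container. The endpoint name is an assumption; confirm it in the REST API Reference.
    comment_url = phantom.build_phantom_rest_url('container_comment')
    comment_response = phantom.requests.post(
        comment_url,
        json={'container_id': container_id, 'comment': 'automation started'},
        verify=False,
    )
    phantom.debug('comment creation returned {}: {}'.format(comment_response.status_code, comment_response.text))

    outputs['success'] = update_response.ok and comment_response.ok

    # Return a JSON-serializable object
    assert json.dumps(outputs)  # Will raise an exception if the :outputs: object is not JSON-serializable
    return outputs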

Use the REST API carefully if you are running queries across large data sets, such as containers, artifacts, indicators, and action results, or when you are working with a long-running instance with a lot of accrued data.

To improve the performance of these queries, you can apply strict filters, limit the page size of the query, restrict the query to objects created within a certain date range, or query only for a specific object detail instead of the entire object. To see the full set of available query modifiers, see Query for Data in the REST API Reference for Splunk SOAR (On-premises).
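
For example, a narrowly scoped query might look like the following sketch. The page_size and _filter_* parameter names follow the Query for Data conventions; treat them as assumptions and confirm the exact syntax in the REST API Reference.

import phantom.rules as phantom

# Fetch at most 10 containers created after a given date
container_url = phantom.build_phantom_rest_url('container')
response = phantom.requests.get(
    container_url,
    params={
        'page_size': 10,
        '_filter_create_time__gt': '"2023-01-01T00:00:00Z"',  # Filter values are quoted strings; confirm in Query for Data
    },
    verify=False,
)

# List endpoints typically return results under the 'data' key
phantom.debug('query returned a page of {} container(s)'.format(len(response.json().get('data', []))))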

Use Python packages with custom functions

To write a custom function that uses a Python package that's not installed by default on your system, perform the following steps using the pip executable appropriate to the Python version of your custom function. These steps use pip3.

pip3 is shorthand for the pip executable that matches the Python 3 version your Splunk SOAR system is currently running, such as python3.9. If your custom function requires a specific Python 3 version, such as python3.#, use the matching pip executable, such as pip3.#.

  1. Log in to Splunk SOAR (On-premises) using SSH as a user with sudo access. For more information, see Log in to Splunk SOAR (On-premises) using SSH.
  2. (Optional) Determine if the needed Python packages are already installed on your system using phenv pip3 freeze | grep -i <package_name>.
  3. Custom functions use the same Python version interpreter as playbooks, so set the user's permissions and add the package to the appropriate Python path by calling the following commands:
    1. umask 0022
    2. sudo phenv pip3 install <package_name>
  4. Import the package into the custom function using import <package_name>.
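
Inside the custom function, you might guard the import so that a missing package produces a clear error message instead of a bare ImportError. A sketch, using a hypothetical package name example_package:

def use_example_package(value=None, **kwargs):
    """
    Args:
        value
    
    Returns a JSON-serializable object that implements the configured data paths:
        result
    """
    ############################ Custom Code Goes Below This Line #################################
    import json

    try:
        import example_package  # Hypothetical package, installed with: sudo phenv pip3 install example_package
    except ImportError:
        raise RuntimeError('example_package is not installed. Run: sudo phenv pip3 install example_package')

    # process() is illustrative only; replace it with the package call you actually need
    outputs = {'result': example_package.process(value)}

    assert json.dumps(outputs)  # Will raise an exception if the :outputs: object is not JSON-serializable
    return outputs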

Troubleshoot custom function result failures

Your custom function result is marked as failed if it raises an uncaught exception, as shown in the following custom function by the statement raise RuntimeError('Try again next time.').

def fail_fast():
    raise RuntimeError('Try again next time.')

Custom functions also produce a failed result when the input is not specified or the input denominator is 0, which causes a ZeroDivisionError as shown in the following function:

def divide(numerator=None, denominator=None):
    if numerator is None:
        raise ValueError('"numerator" is a required parameter')

    if denominator is None:
        raise ValueError('"denominator" is a required parameter')

    return {
        'result': numerator / denominator  # Raises a ZeroDivisionError if denominator is 0
    }

You can test if your custom function was successful or not by using a decision block in the playbook editor.

Custom function run failures

A custom function run contains a collection of intermediate custom function results. If some of the custom function results in the run fail, the overall custom function run still succeeds as long as at least one result succeeds. The following call to phantom.custom_function shows a successful custom function run where the first three custom function results succeed and the last custom function result fails:

phantom.custom_function('local/divide', parameters=[
    {'numerator': 8.0, 'denominator': 4},
    {'numerator': 8.0, 'denominator': 2},
    {'numerator': 8.0, 'denominator': 1},
    {'numerator': 8.0, 'denominator': 0},
])

The following example shows a custom function run that failed because each of the custom function results failed.

phantom.custom_function('local/divide', parameters=[
    {'numerator': 8.0, 'denominator': 0},
    {'numerator': 2.0, 'denominator': 0},
])
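
If you prefer each result to succeed even when an individual division fails, one option is to catch the exception inside the custom function and return a null result instead, as in this sketch:

def safe_divide(numerator=None, denominator=None, **kwargs):
    if numerator is None:
        raise ValueError('"numerator" is a required parameter')

    if denominator is None:
        raise ValueError('"denominator" is a required parameter')

    try:
        result = numerator / denominator
    except ZeroDivisionError:
        result = None  # Record a null result instead of failing the custom function result

    return {
        'result': result
    }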

Keyboard shortcuts in the custom function editor

The following keyboard shortcuts can be used in the custom function editor.

Description Shortcut
Save Cmd+S
Edit mode Cmd+E

If you are using Windows, replace Cmd with Ctrl in the keyboard shortcuts.

Last modified on 19 December, 2023

This documentation applies to the following versions of Splunk® SOAR (On-premises): 6.0.0, 6.0.1, 6.0.2, 6.1.0, 6.1.1, 6.2.0

