Splunk® Phantom (Legacy)

Build Playbooks with the Visual Editor


Add custom code to your Splunk Phantom playbook with the custom function block

Use custom functions to expand the functionality of your playbooks in Splunk Phantom. Custom functions enable you to use your Python skills to expand the kinds of processing performed in a playbook, such as applying string transformations, parsing raw data input, or calling a third-party Python module. Custom functions can also interact with the REST API in a customizable way. You can share custom functions across your team and across multiple playbooks to increase collaboration and efficiency.

To create custom functions, you must have Edit Code permissions, which an administrator can configure in Administration > User Management > Roles and Permissions. For more information on the Edit Code permission, see Add a role to Splunk Phantom in the Administer Splunk Phantom manual.

Create a custom function

To create a custom function, follow these steps:

  1. Navigate to your Splunk Phantom instance.
  2. From the main menu, select Playbooks.
  3. Click the Custom Functions tab.
  4. Click the + Custom Function button.
  5. Create a name for your custom function. Once you save your custom function, the name can only be changed using the Save As button, because playbooks reference custom functions by name.
  6. (Optional) Add a description. Descriptions explain what the function is used for. Descriptions appear in both the custom function listing page and in the Visual Playbook Editor (VPE).
  7. Add any desired custom Python code in the editor.
  8. Click Validate to check if your Python code is valid.
  9. Click Save and select a repository to save your custom function to.
    Clicking the Save button automatically validates your Python code. Add a commit message in the pop-up window about your function.
  10. (Optional) After saving your custom function, you can make a copy of it.
    1. To make a copy, first click Edit and then click the arrow next to the Save button.
    2. Click Save As and create a name, select the repository you want your custom function saved to, and write a commit message.
    3. Press Save again to save it as a copy.
  11. (Optional) After saving your custom function, you can click the more icon to view the documentation, export the function, view the audit log, or view the history of the function.

Functions, input parameters, and output variables must be named using valid Python identifiers: A-Z, a-z, 0-9, and underscores.

Add an input parameter to a custom function

Adding input parameters to your custom function is optional. Each input parameter can be populated with a data path or a literal string when calling the custom function from a playbook. You can set a data path from any valid upstream block, artifact data, or container data.

To add an input to a custom function, follow these steps:

  1. From the New Custom Functions page, click Add Input.
  2. Enter a Variable* name that is a valid Python identifier.
  3. Select an input type. Select List to provide an entire collection. Selecting List causes the Visual Playbook Editor (VPE) to generate code that passes the entire collection to the custom function without looping. For example, if there is a container with multiple artifacts, each with a destination address Common Event Format (CEF) field, and you configure a list type input, the custom function is called once and passes in a list of all destination addresses as shown:
     container_data_0 = phantom.collect2(container=container, datapath=['artifact:*.cef.destinationAddress', 'artifact:*.id'])
     parameters = []
     container_data_0_0 = [item[0] for item in container_data_0]
     parameters.append({
         'list_type_input': container_data_0_0,
     })
    

    Select Item to provide a single item from a collection. Selecting Item causes the VPE to generate code that loops over items in the collection, calling the custom function for each item. For example, if there is a container with multiple artifacts, each with a destination address CEF field, and you configure an item type input with a destination address of those artifacts, the custom function is called once per destination address as shown:

    container_data_0 = phantom.collect2(container=container, datapath=['artifact:*.cef.destinationAddress', 'artifact:*.id'])
    parameters = []
    for item0 in container_data_0:
        parameters.append({
           'item_type_input': item0[0],     
        })
    
  4. (Optional) Select a CEF data type. When you populate this input in a playbook, the CEF data type acts as the default filter that limits which values are shown.
  5. (Optional) Add a custom placeholder. The custom placeholder you create appears as the input placeholder in the VPE configuration panel.
  6. (Optional) Add help text. Help text is a longer description that appears as a tooltip on the input in the VPE.
  7. (Optional) Click Add Input to add another input. You can add a maximum of 10 inputs to a custom function.

A single custom function can have a mix of item and list type inputs. When the input types are mixed, the custom function loops over all the item type inputs and passes the list type inputs to every custom function call.
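For example, with one item type input and one list type input configured, the VPE might generate code along the following lines. This is a sketch based on the list and item examples above; the input names item_type_input and list_type_input and the sourceAddress field are illustrative, not taken from actual generated code.

container_data_0 = phantom.collect2(container=container, datapath=['artifact:*.cef.destinationAddress', 'artifact:*.id'])
container_data_1 = phantom.collect2(container=container, datapath=['artifact:*.cef.sourceAddress', 'artifact:*.id'])
parameters = []

# The list type input is collected once and passed whole to every call
container_data_1_0 = [item[0] for item in container_data_1]

# The item type input is looped over, so the custom function is called once per item
for item0 in container_data_0:
    parameters.append({
        'item_type_input': item0[0],
        'list_type_input': container_data_1_0,
    })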

Add an output variable to a custom function

Adding output variables to your custom function is optional. Output variables are usable as inputs in other downstream blocks, such as Action, API, Filter, Decision, Format, Prompt, or other Custom Function blocks. A custom function output variable gets published as a data path in this format: <block_name>:custom_function_result.data.<output_variable>. Make sure to give your output variables clear and meaningful names in your custom code.
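For example, if a custom function block named my_function_1 (an illustrative name) returns the following outputs object:

{
    'count': 3
}

then downstream blocks can reference the value through the data path my_function_1:custom_function_result.data.count.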

To add an output to a custom function, follow these steps:

  1. Click Add Output.
  2. Enter a Data Path. Write the full path to your output.
  3. In your custom function code, add the output to your output object.
  4. Select a data type. The data type is the Common Event Format (CEF) field that you want to use.
  5. Add a description. The description appears as help text in the VPE.
  6. Click Add Output to add another output. You can add up to 10 outputs to a custom function.

Splunk Phantom doesn't validate output data paths, so make sure the data paths you enter are valid. It is also possible for your code to publish valid data paths that are not configured in the UI.

Using draft mode with custom functions

Draft mode is a good way to save custom functions that you are in the process of creating. If you have created a function that isn't yet valid, it will automatically be saved in draft mode. You can only have one draft of an original custom function at a time. Use draft mode custom functions in playbooks to help test or debug the custom function.

You can use draft mode in the following ways:

  1. Toggle draft mode on and off in the custom function editor.
  2. When there is an error in your Python code and you try to save it, an error message displays letting you know that the function can only be saved in draft mode. To save it as a draft, follow these steps:
    1. If it is your first time saving the custom function, select the repository where you want to save it.
    2. Enter a commit message.
    3. Click Save as draft. In draft mode, you can toggle between the original version and the draft version by clicking the View original or the View draft button.

Once you have finished making your edits, you can take the function out of draft mode by following these steps:

  1. Toggle Draft Mode to Off.
  2. Click Save. If any playbooks are associated with the function, a message appears listing each playbook that uses the draft custom function or the custom function that you are overriding.
  3. Click Next.
  4. Enter a commit message.
  5. Click Save.

Switching Draft Mode to Off deletes the draft copy of the custom function and removes all playbook associations. When you take a custom function out of Draft Mode, any playbooks using the draft version must be updated because the custom function they refer to no longer exists.

Delete a custom function

To delete a custom function, follow these steps:

  1. Navigate to the Custom Functions listing page.
  2. Check the box next to the custom function you want to delete.
  3. Click Delete. If a custom function is being used in any playbooks, a warning displays listing all the associated playbooks for that custom function. You can switch active playbooks to inactive before confirming the deletion of a custom function.
  4. Click Delete again to confirm.

You can't bulk delete custom functions.

Add a custom function to a playbook

Once you have created a custom function, add it to a playbook by following these steps:

  1. From the main menu, click Playbooks.
  2. Click the + Playbook button.
  3. To add a new block to a playbook, drag the half-circle icon attached to any block on the canvas. Release your mouse to create a new empty block connected to the originating block with an arrow.
  4. Click the Custom Function button to add a custom function block.
  5. Click in the search bar to display all of your repositories. Then, click the repository your custom function is saved to and either search for your custom function, or select it from the list.
  6. (Optional) Test the draft version of your custom function by selecting it from the list and adding it to the playbook.
  7. (Optional) Once you have selected a custom function, you can configure the values of the input parameters. Keyword argument datapaths are available to add as input parameters by clicking keyword arguments and then selecting an argument from the drop-down list. Keyword argument datapaths are available only for custom functions because they aren't valid datapaths and can't be passed to Splunk Phantom APIs. For more information, see Understanding datapaths and Playbook automation API in the Splunk Phantom Playbook API Reference.
  8. (Optional) Click the + New button in this panel to navigate to the custom function editor and create a new custom function. Once you save a new custom function, you can select it from this menu.
  9. (Optional) Click the Edit button to navigate to the custom function editor where you can make any needed edits. Once you save your edits, you need to reselect the custom function to use your changes.
  10. (Optional) Click Refresh List to update the list of available custom functions.
  11. When you are finished editing your playbook, click Save to enter your desired settings and playbook name.
  12. Click Save again, then choose a source control repository to save your playbook to and enter a comment about your playbook.

To learn more about editing Python code in the visual playbook editor, see View or edit the Python code in Splunk Phantom playbooks.

Once you have added a custom function to a playbook, you can see the playbooks associated with that custom function in the custom function configuration panel.

Settings

Follow these steps to configure the settings for a Custom Function block:

  1. Click Settings.
  2. Select Info or Advanced.
Info settings

Configure general settings for this Custom Function block:
  • Custom Name: The name for this custom function block. This name is visible in the playbook editor and also in Splunk Phantom wherever details about this block are visible.
  • Description: The Description field shows up as a code comment above the block definition.
  • Notes: The Notes field contents appear when you hover over the Note icon on the block.

Advanced settings

  • Join Settings: You can configure Join settings when two blocks with callbacks both call the same downstream block. Block types with callbacks are Action and Prompt. Configure Join settings from the downstream block. Click the required checkbox if the action in the upstream block must be completed before this downstream block is run.
  • Artifact Scope: Select a value from the drop-down menu. This setting determines which artifacts are processed when the playbook block runs.
    • Default matches the scope of the playbook.
    • New Artifacts processes only the artifacts added since the block was last run.
    • All Artifacts includes all artifacts when the playbook block runs.

Write useful custom functions

Structure custom function outputs like action results whenever possible. This means the preferred data structure should be a list of dictionaries, with a meaningful name for each field in the dictionary. For example, if you are converting strings from uppercase to lowercase in a custom function block named lower_1, your custom function might output something like the following:

{
    'items': [{"lower_case_string": "abcd"}, {"lower_case_string": "efgh"}],
    'total_lowered_count': 2
}

Based on this output, you can access the data through the following datapaths. To get the full path for each entry in the Custom function datapaths column, prefix it with lower_1:custom_function_result.data. The Collect2 results column shows the results of passing the full datapaths into a downstream phantom.collect2 call.

Custom function datapaths Collect2 results
items [[{"lower_case_string": "abcd"}, {"lower_case_string": "efgh"}]]
items.*.lower_case_string ["abcd", "efgh"]
items.0.lower_case_string ["abcd"]
total_lowered_count [2]
items.*.lower_case_string, total_lowered_count [["abcd", 2], ["efgh", 2]]

Here's the custom function that produced the output above:

def lower(values=None, **kwargs):
    """
    Args:
        values
    
    Returns a JSON-serializable object that implements the configured data paths:
        items.*.lower_case_string (CEF type: *)
    """
    ############################ Custom Code Goes Below This Line #################################
    import json
    from phantom.rules import debug

    if values is None:
        values = []
    
    # Write your custom code here...
    items = []
    for value in values:
        item = None
        try:
            item = value.lower()
        except:
            pass
        finally:
            items.append({
                'lower_case_string': item
            })
            
    outputs = {
        'items': items,
        'total_lowered_count': len(items),
    }

    # Return a JSON-serializable object
    assert json.dumps(outputs)  # Will raise an exception if the :outputs: object is not JSON-serializable
    return outputs
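A downstream block can gather these outputs by passing the full data path into phantom.collect2, for example. This is a sketch; the variable name is illustrative, and the expected result comes from the table above.

lower_1_result = phantom.collect2(container=container, datapath=['lower_1:custom_function_result.data.items.*.lower_case_string'])
phantom.debug(lower_1_result)  # ["abcd", "efgh"], as shown in the table above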

Use local variables instead of global variables

When creating a custom function or editing playbook source code, it is better to use local variables than global ones. Values stored in a global variable may be modified by another instance of the playbook or another process that uses the same variable, resulting in unexpected or incorrect results.

If you need to pass data between functions or playbooks, it is better to use function outputs, or to persist data using the save_object and the related get_object and clear_object APIs.
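For example, in playbook code you might persist a small value with these APIs instead of a global variable. This is a minimal sketch; the keyword arguments shown are assumptions, so see Playbook automation API in the Splunk Phantom Playbook API Reference for the exact signatures.

import phantom.rules as phantom

# Save a small JSON-serializable value under a named key.
# The container variable is available in playbook block functions.
phantom.save_object(key='triage_state', value={'step': 'enrichment_complete'}, container_id=container['id'], auto_delete=True)

# Read the value back later, in another block or playbook run
saved = phantom.get_object(key='triage_state', container_id=container['id'])
phantom.debug(saved)

# Remove the value when it is no longer needed
phantom.clear_object(key='triage_state', container_id=container['id'])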

Use datetime objects with custom functions

JSON is the primary format playbooks use to communicate. Values that aren't JSON encodable must be converted into JSON-encodable representations before they can be serialized to JSON. The playbook API uses Python's built-in json module for encoding and decoding JSON, and this module does not support datetime.datetime objects without a custom encoder or decoder.

To use datetime.datetime objects, always pass encoded data between blocks. These objects can be encoded as ISO strings. The following custom function shows how to decode a datetime.datetime object, modify the object, and re-encode the object.

def add_one_day(iso_formatted_datetime=None, **kwargs):
    """
    Adds one day to an ISO-formatted datetime.
    ...
    
    Args:
        iso_formatted_datetime: An ISO-formatted datetime, e.g. '2020-05-18T20:11:39.198140+00:00'

    
    Returns a JSON-serializable object that implements the configured data paths:
        iso_formatted_datetime (CEF type: <output_contains>): An ISO-formatted datetime that is one day later than the input ISO-formatted datetime.

    """
    ############################ Custom Code Goes Below This Line #################################
    import json
    from phantom.rules import debug

    from datetime import datetime, timedelta
    from pytz import timezone
    import dateutil.parser

    outputs = {}

    # Write your custom code here...
    utc = timezone('UTC')

    # Decode the ISO-formatted string into a native python datetime
    dt = dateutil.parser.parse(iso_formatted_datetime).replace(tzinfo=utc)

    # Add one day
    dt += timedelta(days=1)
    
    # Encode data before returning it
    augmented_iso_formatted_datetime = dt.isoformat()

    # Set the output
    outputs['iso_formatted_datetime'] = augmented_iso_formatted_datetime

    # Return a JSON-serializable object
    # This assertion would not pass without encoding the dt variable.
    assert json.dumps(outputs)  # Will raise an exception if the :outputs: object is not JSON-serializable
    return outputs

Playbook APIs supported from within a custom function

The following categories of playbook APIs are supported from within a custom function. For more information on playbook APIs, see Playbook automation API in the Splunk Phantom Playbook API Reference manual.

  • Playbook automation APIs
  • Container automation APIs
  • Data management automation APIs
  • Data access automation APIs
  • Session automation APIs
  • Vault automation APIs
  • Network automation APIs

Use the Splunk Phantom REST API from within a custom function

In addition to using playbook APIs, you can also use the Splunk Phantom REST API from within a custom function. This is helpful for reading and writing objects in Splunk Phantom that are not accessible through the playbook APIs. For example, suppose you have multiple playbooks that always start with the same three blocks: set the status to In Progress, set the severity to Needs Triage, and add a comment to the container with the text "automation started." You can write a custom function named "setup" that performs those tasks, replacing the three blocks with a single custom function block. This gives you the same playbook functionality with fewer blocks and lets you modify the status, severity, or comment value in a single place that then updates all of your playbooks. See the REST API Reference for Splunk Phantom to learn more about the capabilities of the Splunk Phantom REST API and the specific syntax needed for each type of request.

The Playbook API provides the following methods to call the REST API and to avoid storing credentials in the custom function. For more information on the Playbook API, see the Splunk Phantom Playbook API Reference.

Playbook API Description
phantom.build_phantom_rest_url() Combine the Splunk Phantom base URL and the specific resource path, such as /rest/artifact.
phantom.requests Use methods such as get and post from the Python 'requests' package to perform HTTP requests. If you use phantom.requests, you don't need to set up the session by hand.
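For example, the "setup" custom function described earlier might update the container along these lines. This is a sketch; the field names, field values, and the way the comment is added are assumptions, so check the REST API Reference for Splunk Phantom for the exact syntax.

def setup(container_id=None, **kwargs):
    import json
    import phantom.rules as phantom

    outputs = {}

    # Update the container status and severity with a single REST call
    container_url = phantom.build_phantom_rest_url('container', container_id)
    response = phantom.requests.post(
        container_url,
        json={'status': 'in progress', 'severity': 'needs triage'},
        verify=False,
    )
    phantom.debug('container update returned status code {}'.format(response.status_code))

    # A comment such as "automation started" could be added with a similar request;
    # see the REST API Reference for the resource to use.
    outputs['status_code'] = response.status_code

    # Return a JSON-serializable object
    assert json.dumps(outputs)
    return outputs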

The following example combines these methods to query the tags of an indicator with a given ID.

def list_indicator_tags(indicator_id=None, **kwargs):
    """
    List the tags on the indicator with the given ID
    
    Args:
        indicator_id: The ID of the indicator to list the tags for
    
    Returns a JSON-serializable object that implements the configured data paths:
        tags: The tags associated with the given indicator
    """
    ############################ Custom Code Goes Below This Line #################################
    import json
    import phantom.rules as phantom
    outputs = {}

    # Validate the input
    if indicator_id is None:
        raise ValueError('indicator_id is a required parameter')

    # phantom.build_phantom_rest_url will join positional arguments like you'd expect (with URL encoding)
    indicator_tag_url = phantom.build_phantom_rest_url('indicator', indicator_id, 'tags')

    # Using phantom.requests ensures the correct headers for authentication
    response = phantom.requests.get(
        indicator_tag_url,
        verify=False,
    )
    phantom.debug("phantom returned status code {} with message {}".format(response.status_code, response.text))

    # Get the tags from the HTTP response
    indicator_tag_list = response.json()['tags']
    phantom.debug("the following tags were found on the indicator: {}".format(indicator_tag_list))
    outputs['tags'] = indicator_tag_list

    # Return a JSON-serializable object
    assert json.dumps(outputs)  # Will raise an exception if the :outputs: object is not JSON-serializable
    return outputs

Use the REST API carefully if you are running queries across large data sets, such as containers, artifacts, indicators, and action results, or when you are working with a long-running Splunk Phantom instance with a lot of accrued data.

To improve the performance of these queries, you can apply strict filters, limit the page size of the query, restrict the query to objects created within a certain date range, or only query for a specific object detail instead of the entire object. To see the full set of available query modifiers, see Query for Data in the REST API Reference for Splunk Phantom Manual.
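For example, a query that applies a page size limit and a date-range filter might look like the following sketch in custom function code. The parameter names follow the REST API filtering syntax described in Query for Data, but treat the exact values shown here as illustrative.

import phantom.rules as phantom

artifact_url = phantom.build_phantom_rest_url('artifact')
response = phantom.requests.get(
    artifact_url,
    params={
        'page_size': 100,                                      # limit the page size
        '_filter_create_time__gt': '"2021-08-01T00:00:00Z"',   # restrict to a date range
    },
    verify=False,
)
phantom.debug('query returned {} artifacts'.format(len(response.json().get('data', []))))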

Use Python packages with custom functions

If you want to write a custom function that uses a Python 2.7 or a Python 3.6 package that's not installed by default on your system, perform the following steps:

  1. Log in to Splunk Phantom using SSH as a user with sudo access. For more information, see Log in to Splunk Phantom using SSH.
  2. (Optional) Determine whether the needed Python package is already installed on your system.
    For Python 2.7:
    phenv pip2.7 freeze | grep -i <package_name>
    For Python 3.6:
    phenv pip3.6 freeze | grep -i <package_name>
  3. Set the file permissions and install the package into the Splunk Phantom Python environment. Custom functions use the same Python interpreters as playbooks.
    For Python 2.7:
    umask 0022
    sudo phenv pip2.7 install <package_name>
    For Python 3.6:
    umask 0022
    sudo phenv pip3.6 install <package_name>
  4. Import the package into the custom function using import <package_name>.

For clustered Splunk Phantom deployments, run these commands on each Splunk Phantom node.

For more information on using pip to install Python modules, see Installing Python Modules on Python.org.

Legacy custom functions versus new custom functions

Legacy custom functions are the custom functions that were introduced in Splunk Phantom version 4.2. See the following section for information on how to update your custom functions to the new version. In Splunk Phantom 4.9, legacy custom functions and new custom functions are both available in the visual playbook editor to make migration to the new custom functions as painless as possible. The main difference between the two versions is that new custom functions support a library of source-controlled custom functions that can each be reused and called from multiple playbooks, while legacy custom functions are not reusable and must be written once per custom function block in a playbook.

Convert legacy custom functions to new custom functions

Splunk Phantom version 4.9 and later supports a library of custom functions that can each be called from multiple playbooks. When converting a legacy custom function to a new custom function, note that certain Splunk Phantom APIs can't be called from within a new custom function. If your legacy code uses one of those APIs, make those API calls from the custom code section of the custom function block code in the VPE. You can also use this section to override any of the parameters before the phantom.custom_function call is made. To see a complete list of APIs supported from within a custom function, see Playbook APIs supported from within a custom function.

To convert any of your legacy custom functions to the current version of custom functions, follow these steps:

  1. Copy the legacy custom function's inputs and outputs using the custom function editor. If your legacy code uses an API that isn't supported from within a custom function, you can't call it from within the custom function, but you can call it from the custom code section available in the custom function block code in the VPE.
  2. Select your input type. If you want your inputs to be the same as the legacy custom function inputs, configure them as list-type inputs. For more information on input types, see Add an input parameter to a custom function.
  3. Copy your code from the playbook editor into the custom function editor, and change the input names to the names of your newly configured custom function inputs.
  4. Structure your outputs into the output object that the custom function publishes.
  5. Change any downstream blocks that use a legacy custom function output to take the new custom function data path.

Troubleshoot custom function result failures

Your custom function result is marked as failed if it raises an uncaught exception, as shown in the following custom function, which raises RuntimeError('Try again next time.'):

def fail_fast():
    raise RuntimeError('Try again next time.')

Custom functions also produce a failed result when a required input is not specified, or when the denominator input is 0, which causes a ZeroDivisionError, as shown in the following function:

def divide(numerator=None, denominator=None):
    if numerator is None:
        raise ValueError('"numerator" is a required parameter')

    if denominator is None:
        raise ValueError('"denominator" is a required parameter')

    return {
        'result': numerator / denominator  # Raises a ZeroDivisionError if denominator is 0
    }
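If a particular error is expected and you don't want it to fail the result, one option is to catch the exception inside the function and return a default value instead. This is a sketch, not part of the original example:

def divide_safe(numerator=None, denominator=None, **kwargs):
    if numerator is None or denominator is None:
        raise ValueError('"numerator" and "denominator" are required parameters')

    try:
        result = numerator / denominator
    except ZeroDivisionError:
        # Return a recognizable default instead of raising, so the result is not marked as failed
        result = None

    return {'result': result}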

You can test if your custom function was successful or not by using a decision block in the playbook editor.

Custom function run failures

A custom function run contains a collection of intermediate custom function results. If some of the custom function results in the run fail, the overall custom function run still succeeds as long as at least one result in the run succeeds. The following call to phantom.custom_function shows a successful custom function run where the first three custom function results succeed and the last custom function result fails:

phantom.custom_function('local/divide', parameters=[
    {'numerator': 8.0, 'denominator': 4},
    {'numerator': 8.0, 'denominator': 2},
    {'numerator': 8.0, 'denominator': 1},
    {'numerator': 8.0, 'denominator': 0},
])

The following example shows a custom function run that fails because every custom function result in the run fails:

phantom.custom_function('local/divide', parameters=[
    {'numerator': 8.0, 'denominator': 0},
    {'numerator': 2.0, 'denominator': 0},
])

Keyboard shortcuts in the custom function editor

The following keyboard shortcuts can be used in the custom function editor.

Description Shortcut
Save Cmd+S
Edit mode Cmd+E

If you are using a Windows machine, replace Cmd with Ctrl in the keyboard shortcuts.



