Splunk® Enterprise Security

Data Source Integration Manual




Generic example

An add-on maps data and extracts fields from a data feed for use with Splunk. The data needs to be expressed in security-relevant terms to be visible within Splunk for Enterprise Security.

This example shows how to create an add-on that maps a data feed for use with Splunk for Enterprise Security. Mapping the data feed tells Splunk which extractions this source type needs to provide security context. To create this knowledge, you will input and source type sample data for the new feed, then create field extractions, tags, and actions.

Before creating the add-on, become familiar with Splunk for Enterprise Security and the data that you will be mapping (location, elements). Identify which portions of Splunk for Enterprise Security the data will populate (views, searches, dashboards).

For more information about the tags and fields needed for different views and dashboards, see the "Dashboard Requirements Matrix" in the Installation and Configuration Manual.

The process for creating the add-on is:

  • Get the data into Splunk
  • Set the source type
  • Create field extractions
  • Create eventtypes (if necessary)
  • Create tags (if necessary)
  • Consolidate the configurations
  • Prepare and populate the folder
  • Test and package the add-on
  • Upload to Splunkbase

Get the sample data into Splunk

With Splunk running and logged in as admin, upload a sample file of the data for which you want to create an add-on. For this example, we will work with a hypothetical sample data source called "moof".

1. In Splunk Web, choose Settings > Data Inputs. Click Add New and upload a sample file containing a representative number of records, or connect to the streaming data source. Be sure to set the source type manually and select a name that represents the data. The source type name is an important value that enables Splunk to recognize that a data feed can be mapped with the knowledge in a given add-on. Source typing tells Splunk what extractions have been mapped for this data. The source type of data is set at index time and cannot be changed after ingestion.

For introductory information about source typing, see "Why sourcetypes matter (a lot)" in the Splunk documentation and Michael Wilde's article "Sourcetypes gone wild" on the Splunk blogs.

Splunk cannot index data without setting a source type, so automatic source types are set for data feeds during indexing. These automatic source type decisions are permanent for a data input. For instance, if sample.moof is specified as a data input file and the source type is set to moof, all future updates from that file are set to the same source type. See "Bypass automatic source type assignment" in the Splunk documentation to learn more about this process. The list of source types that have already been mapped by Splunk ("List of pre-trained source types") can be found in the Splunk documentation.

Adding a new data input through the user interface will establish a local/inputs.conf file under the current application context (which is likely to be launcher or search). For instance, if the Welcome > Add data button were used, the inputs configuration file would be found at $SPLUNK_HOME/etc/apps/launcher/local/inputs.conf.
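For reference, the generated local/inputs.conf for a monitored file might look something like the following sketch (the path is hypothetical):

```
[monitor:///var/log/sample.moof]
sourcetype = moof
disabled = false
```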

Important: For a production add-on, the sample data to be mapped should be fed into Splunk in the same manner as live data, to reduce differences in format and configuration. For example, if it will be arriving as a file, upload the sample data as a file; if it will be coming from TCP, UDP, or a script, then the sample data should be brought into Splunk by that method.

Note: Processing complex data feeds is beyond the scope of this chapter. For more information about establishing a data feed to map with this process, see "What Splunk can monitor" in the Splunk documentation.

Create field extractions

Once the data is in Splunk and identified by a source type, view the data in the Search app to test that the source type is working correctly:

sourcetype="moof"

1. In the data sample, click the context menu that appears to the left of each data row and choose Extract Fields to launch the Interactive Field Extractor (IFX). The Restrict field extraction to field will contain the proper source type by default and should not be altered. Review "Configure custom fields" and "Manage search-time field extractions" in the core Splunk documentation before proceeding.

2. Refer to the data models needed for the views, dashboards, and searches in Splunk for Enterprise Security, listed in the "Dashboard Requirements Matrix" in the Installation and Configuration Manual.

  • List the fields needed for this new add-on. For example, if moof is a type of firewall, its logs will be useful in Network > Traffic Center, which requires these fields:
  action
  dvc
  transport
  src
  dest
  src_port
  dest_port

For each field required, use the IFX to construct a field extraction.

1. Select several samples from the data to paste into the "Example values for a field" field; if there are multiple instances, add each one. Click Generate. A generated pattern (regular expression, or regex) appears beneath the example value field, and extracted results appear to the left of the sample data. Each extracted result is highlighted with an "X" next to it. Click the "X" to delete unwanted matches from the search and refine the regular expression.

Note: The data source may require parsing which is beyond the capabilities of IFX to complete, in which case the data and the partially correct regular expression can be copied into an external tool for further editing.

2. When the completed regular expression correctly matches the samples for this field, click Save and enter the name of the field.

Repeat this process for each of the fields needed to use the new data. These extractions are saved in $SPLUNK_HOME/etc/users/$USERNAME$/$APPNAME$/local/props.conf. For instance, if this work is done as admin in the Search app, the path will be $SPLUNK_HOME/etc/users/admin/search/local/props.conf.
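As an illustration, a saved extraction for the hypothetical src field might appear in that props.conf as follows (the pattern is invented for this example):

```
[moof]
EXTRACT-src = src=(?<src>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})
```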

Create eventtypes (if necessary)

The Splunk App for Enterprise Security is not particularly sensitive to event types, but an event type is needed for tagging. Event types differentiate the kinds of actions performed by the device that is logging data into Splunk. For instance, if moof is a firewall, it may have an event type of "moof_authentication" because it can authenticate a network connection.

Creating event types and tags is only necessary for Centers and Searches in Splunk for Enterprise Security. Correlation searches can be created with data that is not tagged or event typed, so this step is not necessary for all add-ons.

To create the eventtype:

1. In the Search app, look at the data using the source type you created. For example:

sourcetype="moof"

2. Select the proper event, click the context menu, and choose Create eventtype from the drop-down menu. In the Build Event Type editor, base the eventtype on the source type. For instance, if moof logs authentication_de_la_moof=true when it authenticates a connection, such an event should be used.

3. Name the eventtype something descriptive (such as moof_authentication) and click Save.

The eventtype is stored in $SPLUNK_HOME/etc/users/$USERNAME$/$APPNAME$/local/eventtypes.conf. For instance, if this work is done as admin in the Search app, the path will be $SPLUNK_HOME/etc/users/admin/search/local/eventtypes.conf.

Create tags (if necessary)

Tags differentiate the results of events performed by the device that is logging into Splunk. For instance, if moof is a firewall, it may have a tag of "authentication" that normalizes "moof_authentication". Splunk uses tags to categorize information and to determine the dashboards in which the information appears. For instance, a moof log of authentication_result=42 will now produce an eventtype of moof_authentication, which in turn produces a tag of authentication. Using eventtypes and tags allows unified searches across multiple platforms with similar purposes, such as

tag::authentication="true"

To determine which tags are needed in the add-on, refer to the "Dashboard Requirements Matrix" in the Installation and Configuration Manual. Make a list of the needed tags.

To create the tags:

1. Refer to the Dashboard Requirements Matrix to make a list of which tags are needed. If we are looking for firewall traffic, only two tags are needed: network and communicate.

2. View the eventtype using the Search app:

sourcetype="moof" eventtype="moof_authentication"

The new eventtype will be displayed and highlighted beneath the event; click the context menu to the right and select "tag eventtype=moof_authentication".

3. Enter the name of the tag and click Save.

Repeat this process for each of the tags needed to use the new data. Tag modifications are saved in the local context of the last open app: $SPLUNK_HOME/etc/users/$USERNAME$/$APPNAME$/local/tags.conf. For example, if this work is done as admin in the Search app, the path will be $SPLUNK_HOME/etc/users/admin/search/local/tags.conf.

Consolidate the configurations

Once the various configurations are working as intended, all the changes to the configuration files should be gathered into one package, an app, for easy identification and distribution.

Prepare and populate the folder

Use the sample template files in $SPLUNK_HOME/etc/apps/SplunkEnterpriseSecurityInstaller/default/src/etc/apps/TA-template.zip to prepare the add-on folder:

1. Extract the template:

cd $SPLUNK_HOME/etc/apps/
unzip TA-template.zip

2. Rename the extracted folder $SPLUNK_HOME/etc/apps/TA-template to a name that reflects its new purpose, such as $SPLUNK_HOME/etc/apps/Splunk_TA-moof.

mv TA-template Splunk_TA-moof 

3. Go to the Splunk_TA-moof directory.

cd Splunk_TA-moof

This will be the folder for the new add-on.

Edit inputs.conf if necessary

If this add-on will be responsible for feeding the data in, edit the default/inputs.conf file to specify the input mechanism and set a sourcetype. For instance, this method would be used if moof logs were in binary format and needed to be translated with an external script before use in Splunk. If the data will be fed into Splunk directly (for example, through a file or data stream input), editing inputs.conf is not necessary. Review the $SPLUNK_HOME/etc/apps/$APPNAME$/local/inputs.conf file produced in "Get the sample data into Splunk" for an example configuration.
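For instance, a scripted input that translates binary moof logs might be sketched in default/inputs.conf like this (the script name and interval are assumptions for illustration):

```
[script://./bin/moof_decode.py]
interval = 60
sourcetype = moof
disabled = false
```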

Edit props.conf to set the sourcetype name

The source type is specified in the props.conf file. Multiple source type rules can be set in this file to support different kinds of data feeds. In this case, a rule matching the source of the sample file is used to set the source type to moof.

To specify the feed and source type of our sample data, edit the default/props.conf file. List the source of the data in brackets, then set its sourcetype:

[source::/path/to/sample.moof]
sourcetype = moof
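Alternatively, a file-extension rule could assign the source type to any file ending in .moof; the ... wildcard in a source:: stanza matches any number of characters:

```
[source::....moof]
sourcetype = moof
```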

Reference material may be found in the "props.conf" topic in the core Splunk documentation.

Review the $SPLUNK_HOME/etc/apps/$APPNAME$/local/inputs.conf file produced in "Get the sample data into Splunk" for an example configuration; the source type can be set in either inputs.conf or props.conf.

Edit transforms.conf and props.conf to prepare field extractions

The field extractions produced with IFX are saved in $SPLUNK_HOME/etc/users/$USERNAME$/$APPNAME$/local/props.conf. For instance, if this work is done as admin in the Search app, the path will be $SPLUNK_HOME/etc/users/admin/search/local/props.conf. Each extraction is saved in the following format:

EXTRACT-$FIELD_NAME$ = $REGULAR_EXPRESSION$

This is strictly functional, but to provide greater flexibility and maintainability, the add-on should split this form into a transforms.conf stanza.

1. Copy each regular expression into default/transforms.conf in the following format:

[get_$FIELD_NAME$]
REGEX = $REGULAR_EXPRESSION$
FORMAT = $FIELD_NAME$::$1

2. Reference each expression in default/props.conf in the following format:

REPORT-$FIELD_NAME$ = get_$FIELD_NAME$
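As a concrete (hypothetical) illustration, an EXTRACT-src line for the moof data could be split across the two files like this:

```
# default/transforms.conf
[get_src]
REGEX = src=(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})
FORMAT = src::$1

# default/props.conf
[moof]
REPORT-src = get_src
```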

Save both files. Now the add-on is prepared to do source typing of the data and extract the proper fields from it. This is sufficient for some basic correlation searches, but to fully utilize the data source eventtypes and tags should be used as well.

Edit eventtypes.conf and tags.conf to prepare tags

The eventtypes produced in the web console are saved in $SPLUNK_HOME/etc/users/$USERNAME$/$APPNAME$/local/eventtypes.conf. For instance, if this work is done as admin in the Search app, the path will be $SPLUNK_HOME/etc/users/admin/search/local/eventtypes.conf.

Each eventtype is saved in the following format:

[moof_authentication]
search = sourcetype="moof" authentication_result="42"

1. Copy these eventtypes into default/eventtypes.conf.

2. Create a new file default/tags.conf and enable tags for each eventtype:

[eventtype=moof_authentication]
network = enabled
communicate = enabled

Test and package the add-on

Restart Splunk, then open the Search app and verify that:

  1. Source typing is working correctly - Go to the Summary screen; the source type is listed under Sources.
  2. Event types are showing up - Click the source type and scroll down in the Field Discovery panel to eventtype. The event type is listed.
  3. Tags are displayed - Click the eventtype and scroll down to tag::eventtype; the new tags are listed.

Next, go into the Splunk App for Enterprise Security and check that the dashboards are populating correctly. Go to each dashboard you expect to be populated and verify that the data is displayed.

Review the add-on for complete coverage of the fields, event types, and tags required for the use cases and data sources it needs to support.

Document and package the add-on

Once you have created your add-on and verified that it is working correctly, document the add-on for future reference and package it so it is easy to deploy and share.

1. Edit the README file under the root directory of the add-on and add the information necessary to remember what you did and to help others who may use the add-on. Note that the sample README from TA-template.zip contains a suggested format.

2. Ensure that the archive does not include any files in the local directory. The local directory is reserved for files that are specific to an individual installation or to the system where the add-on is installed.

3. Add a .default extension to any files that may need to be changed on individual instances of Splunk running the add-on. This includes dynamically-generated files (such as lookup files generated by saved searches) as well as lookup files that users must configure on a per install basis. If you include a lookup file in the archive and do not add a .default extension, regular usage and/or upgrades will overwrite the corresponding file. Adding the .default extension makes it clear to the administrator that the file is a default version of the file, and should be used only if a more applicable file does not exist already.

Upload the new add-on to Splunkbase

Compress the add-on into a single-file archive (such as a zip or tar.gz archive). To share the add-on, go to Splunkbase, log in to an account, click upload an app, and follow the instructions for the upload.
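The packaging step can be sketched from the command line. The sketch below builds a throwaway directory standing in for $SPLUNK_HOME/etc/apps (so it can run anywhere), then archives the add-on while excluding the local/ directory, which holds per-installation files that should not ship in the package:

```shell
#!/bin/sh
# Stand-in for $SPLUNK_HOME/etc/apps so the sketch is self-contained.
APPS_DIR=$(mktemp -d)
mkdir -p "$APPS_DIR/Splunk_TA-moof/default" "$APPS_DIR/Splunk_TA-moof/local"
touch "$APPS_DIR/Splunk_TA-moof/default/props.conf"
touch "$APPS_DIR/Splunk_TA-moof/local/inputs.conf"

cd "$APPS_DIR"
# Exclude local/ so installation-specific files stay out of the archive.
tar --exclude='Splunk_TA-moof/local' -czf Splunk_TA-moof.tar.gz Splunk_TA-moof

# The archive now contains default/ but not local/.
tar -tzf Splunk_TA-moof.tar.gz
```

On a real installation, the same tar command would be run in $SPLUNK_HOME/etc/apps against the actual add-on folder.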

Last modified on 04 January, 2017

This documentation applies to the following versions of Splunk® Enterprise Security: 3.2.1, 3.2.2, 3.3.0, 3.3.1, 3.3.2, 3.3.3

