Splunk® Enterprise

Developing Views and Apps for Splunk Web

Modular inputs configuration

This topic describes several ways to define configuration for modular inputs. It includes the following:

  • Creating and editing the inputs.conf.spec file for modular inputs
  • Configuration layering for modular inputs
  • Specifying permissions to access modular input apps

Create a modular input spec file

Spec files must be placed in specific locations. For modular inputs, the spec file is located in the README directory of the app that implements the modular input:

$SPLUNK_HOME/etc/apps/<myapp>/README/inputs.conf.spec

The script referenced in the spec file is located here:

$SPLUNK_HOME/etc/apps/<myapp>/bin/<myscript>
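
For example, a hypothetical app named myapp that implements a modular input script named myscript.py might be laid out as follows:

$SPLUNK_HOME/etc/apps/myapp/
    README/
        inputs.conf.spec
    bin/
        myscript.py
    default/
        inputs.conf

The README directory holds the spec file, bin holds the script, and default can optionally hold scheme default settings in inputs.conf.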

Structure of a spec file

Splunk Enterprise provides numerous spec files that it uses to configure and access a Splunk Enterprise server. These default spec files are heavily commented and include examples of how to configure Splunk Enterprise.

However, the structure of a spec file is quite basic; it requires only the following elements:

  • stanza header (one or more)
  • param values (one or more for each stanza)

The following shows a minimal inputs.conf.spec file. In this file, the parameter values are not present. Values are not required, and if present, Splunk Enterprise ignores them. Additionally, the <name> element in the stanza header is ignored.

Sample inputs.conf.spec file

[myscript://<name>]
param1 =
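
With this spec in place, inputs.conf stanzas such as the following would be valid. The input names aaa and bbb and the parameter values are hypothetical:

[myscript://aaa]
param1 = some_value

[myscript://bbb]
param1 = another_value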

Writing valid spec files

Here are some things to keep in mind when writing spec files (an example spec file that follows these rules appears after this list):

  • The inputs.conf.spec spec file must be at the following location:
$SPLUNK_HOME/etc/apps/<app_name>/README/
  • The following regex defines valid identifiers for the scheme name (the name before the ://) and for parameters:
[0-9a-zA-Z][0-9a-zA-Z_-]*
  • Avoid name collision with built-in scheme names. Do not use any of the following as scheme names for your modular inputs:
batch
fifo
monitor
script
splunktcp
tcp
udp
  • Some parameters are always implicitly defined. Specifying any of the following parameters for your modular inputs has no effect. However, you can specify them to help clarify usage:
source
sourcetype
host
index
disabled
interval
persistentQueue
persistentQueueSize
queueSize
  • A modular input scheme can be defined only once. Subsequent definitions (a new stanza for the same scheme) and their parameters are ignored.
  • A scheme must define at least one parameter. Duplicate parameters are ignored.
  • Stanza definitions and their parameters must start at the beginning of a line.
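
For example, the following hypothetical spec follows these rules: the scheme name weather_feed and the parameter names match the identifier regex, the scheme name avoids the built-in scheme names, and the implicitly defined interval parameter is listed only to clarify usage.

[weather_feed://<name>]
station_id = <value>
* Identifier of the weather station to poll.

interval = <value>
* Implicitly defined by Splunk Enterprise. Listed here only to clarify usage.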

Spec file example

Here is the spec file for the Amazon S3 example.

S3 inputs.conf.spec file

[s3://<name>]
key_id = <value>
* This is the Amazon key ID.

secret_key = <value>
* This is the secret key.
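
A corresponding inputs.conf stanza might look like the following, with a hypothetical input name and placeholder credentials:

[s3://my_bucket]
key_id = <your_AWS_key_ID>
secret_key = <your_AWS_secret_key>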

Configuration layering for modular inputs

As described in Configuration file precedence in the Admin manual, Splunk Enterprise uses configuration layering across the inputs.conf files in your system. Configuration layering for modular inputs differs from how layering generally works. Typically, a configuration stanza inherits only from the global default configuration.

For modular inputs configuration, each modular input scheme gets a separate default stanza in inputs.conf. After Splunk Enterprise layers the configurations, the configuration stanza for a modular input (myScheme://aaa) inherits values from the scheme default configuration. A modular input can inherit the values for index and host from the default stanza, but the scheme default configuration can override these values.

For example, consider the following inputs.conf files in a system:

Global default
.../etc/system/local/inputs.conf

[default]
. . .
index = default
host = myHost

Scheme default
.../etc/apps/myApp/default/inputs.conf

[myScheme]
host = myOtherHost
param1 = p1

Configuration stanza
.../etc/apps/search/local/inputs.conf

[myScheme://aaa]
param2 = p2

Here is how layered configuration is built:

  1. Apply the values for index and host from the global default.
    In a typical installation, the values for index and host from the global default configuration apply to all inputs. Other values in the global default configuration do not apply to modular inputs.
  2. Apply values from scheme default, overriding any values previously set.
  3. Apply values from configuration stanza, overriding any values previously set.

The layered outcome of the above configuration example is:

Layered configuration example

[myScheme://aaa]
index = default       #from Global default
host = myOtherHost    #from Scheme default, overrides Global default
param1 = p1           #from Scheme default
param2 = p2           #from Configuration stanza

Interval parameter

Use the interval parameter to schedule and monitor scripts. The interval parameter specifies how long a script waits before it restarts.

The interval parameter is useful for a script that performs a task periodically: the script performs a specific task and then exits, and the interval parameter determines when the script restarts to perform the task again.

The interval parameter is also useful to ensure that a script restarts, even if a previous instance of the script exits unexpectedly.

If you enter an empty value for interval, the script runs only at startup and on endpoint reload (on edit).

Single script instance per input stanza mode

For single script instance per input stanza mode, each stanza can specify its own interval parameter.
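
For example, with a hypothetical scheme named foobar, each input stanza can run on its own schedule. The interval values here are in seconds:

[foobar://aaa]
param1 = 1234
interval = 60

[foobar://bbb]
param1 = 5678
interval = 300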

Single script instance mode

For single script instance mode, Splunk Enterprise reads the interval setting from the scheme default stanza only. If interval is set under a specific input stanza, that value is ignored.

In single script instance mode, interval cannot be used as an endpoint argument, even if it is specified in inputs.conf.spec. You cannot modify the interval value through the endpoint.
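
For example, for a hypothetical scheme foobar running in single script instance mode, set interval under the scheme default stanza. A value set under an individual input stanza is ignored:

[foobar]
interval = 60

[foobar://aaa]
param1 = 1234
# An interval set here would be ignored in single script instance mode.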

Persistent queues

You can configure persistent queues with modular inputs. You can use persistent queues with modular inputs much as you do with TCP, UDP, FIFO, and scripted inputs, as described in Use persistent queues to help prevent data loss.

You configure persistent queues for modular inputs much as you do for other inputs. There are differences depending on the mode of the modular input script.

Single script instance per input stanza mode

In this mode, a script is spawned for each inputs stanza. Because each script produces its own stream, it can have its own persistent queue. The correct way to configure a persistent queue is to put the persistent queue parameters under each inputs stanza:

[foobar://aaa]
param1 = 1234
param2 = qwerty
queueSize = 50KB
persistentQueueSize = 100MB

Another way to configure a persistent queue is to put queueSize and persistentQueueSize under the scheme default stanza (in this example, [foobar]). All input stanzas inherit these parameters, which results in the creation of a separate persistent queue for each input stanza, as shown in the sketch that follows.
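
Here is a sketch of that alternative, using the hypothetical foobar scheme from the previous example:

[foobar]
queueSize = 50KB
persistentQueueSize = 100MB

[foobar://aaa]
param1 = 1234

[foobar://bbb]
param1 = 5678

Each of the aaa and bbb stanzas gets its own persistent queue.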

Single script instance mode

In this mode, there is only one stream of data that services all inputs stanzas for that modular input. The only valid way to configure the persistent queue is to put the settings under the scheme default stanza. Placing the settings under a specific input stanza has no effect.

[foobar]
queueSize = 50KB
persistentQueueSize = 100MB

Persistent queue location

Persistent queue files are in the same directory location as scripted inputs:

$SPLUNK_HOME/var/run/splunk/exec/<encoded path>

<encoded path> derives from the inputs stanza (for single script instance per input stanza mode) or the scheme name (for single script instance mode).

Specify permissions for modular input scripts

Read permission for modular input scripts is controlled by the list_inputs capability. This capability also controls reading of other input endpoints.
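
For example, to allow a hypothetical custom role to read input configurations, you might enable the capability in authorize.conf:

[role_inputs_reader]
list_inputs = enabled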

By default, the admin_all_objects capability controls create and edit permissions for modular inputs. However, you can create a capability that customizes edit and create permissions for a specific modular input scheme. If a custom capability for a modular input is present, it is applied rather than the default admin_all_objects capability.

The custom capability for modular inputs takes the following form:

edit_modinput_myscheme

After creating the capability for a modular input, enable it for one or more user roles.

Caution: Make sure you assign one or more roles to the capability edit_modinput_myscheme; otherwise, no one can create or edit modular inputs for that scheme.

To create a custom capability and assign roles, edit the authorize.conf configuration file. For example, to create a custom create and edit capability for the MyScheme modular input, and then enable it for the admin and power roles, do the following:

$SPLUNK_HOME/etc/apps/<app_name>/default/authorize.conf

[capability::edit_modinput_MyScheme]

[role_admin]
edit_modinput_MyScheme = enabled

[role_power]
edit_modinput_MyScheme = enabled

For more information on roles and capabilities, see the Splunk Enterprise documentation about users and roles.
