To add an input to Splunk Enterprise, add a stanza to inputs.conf in $SPLUNK_HOME/etc/system/local/, or your own custom application directory in $SPLUNK_HOME/etc/apps/. If you have not worked with Splunk's configuration files before, read "About configuration files" before you begin.
You can set multiple attributes in an input stanza. If you do not specify a value for an attribute, Splunk Enterprise uses the default, as defined in $SPLUNK_HOME/etc/system/default/inputs.conf.
Note: To ensure that new events are indexed when you copy over an existing file with new contents, set the CHECK_METHOD = modtime attribute in props.conf for the source. This checks the modification time of the file and re-indexes it when it changes. Be aware that the entire file will be re-indexed, which can result in duplicate events.
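As a sketch, a minimal props.conf entry for this might look like the following (the source path is illustrative, not a required value):

```ini
# props.conf -- re-index the file whenever its modification time changes.
# The source path below is an example; match it to your own input.
[source::/var/log/myapp/status.log]
CHECK_METHOD = modtime
```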
The following are attributes that you can use in both monitor and batch input stanzas. See the sections that follow for attributes that are specific to each type of input.
host = <string>
Sets the host key's initial value for events from this input. Splunk Enterprise uses the key during parsing and indexing, in particular to set the host field, and it is also the host field used at search time.
The <string> is prepended with 'host::'.
Default: the IP address or fully qualified domain name of the host where the data originated.
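For example, a monitor stanza that sets a static host value might look like this sketch (the path and host name are illustrative):

```ini
# inputs.conf -- all events from this input get a fixed host value
[monitor:///var/log/webapp]
host = webserver01.example.com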
index = <string>
Sets the index where events from this input are stored.
The <string> is prepended with 'index::'.
For more information about the index field, see "How indexing works" in the Managing Indexers and Clusters manual.
Default: main, or whatever you set the default index to.
sourcetype = <string>
Sets the sourcetype key/field for events from this input.
Explicitly declares the source type for this data, as opposed to allowing Splunk Enterprise to determine it automatically. This is important both for searchability and for applying the relevant formatting for this type of data during parsing and indexing.
Sets the sourcetype key's initial value. Splunk Enterprise uses the key during parsing/indexing, in particular to set the source type field during indexing. It is also the source type field used at search time.
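Putting the index and sourcetype attributes together, a stanza might look like this sketch (the path, index name, and source type are illustrative):

```ini
# inputs.conf -- route events to a custom index with an explicit source type
[monitor:///var/log/myapp/app.log]
index = myapp_index
sourcetype = myapp_log
```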
_TCP_ROUTING = <tcpout_group_name>,<tcpout_group_name>,...
Specifies a comma-separated list of tcpout group names.
Use this attribute to selectively forward your data to specific indexers by specifying the tcpout group(s) that the forwarder should use when forwarding the data.
You define the tcpout group names in outputs.conf, in [tcpout:<tcpout_group_name>] stanzas.
Default: the groups present in 'defaultGroup' in the [tcpout] stanza of outputs.conf.
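In inputs.conf, this routing is set with the _TCP_ROUTING attribute. A sketch, with an illustrative path and group name:

```ini
# inputs.conf -- forward this input's data only to the named tcpout group.
# The group itself is defined in outputs.conf in a
# [tcpout:indexers_east] stanza (the name here is illustrative).
[monitor:///var/log/secure]
_TCP_ROUTING = indexers_east
```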
host_regex = <regular expression>
If specified, the regex extracts host from the filename of each input.
Specifically, the first group of the regex is used as the host.
Default: the "host =" attribute, if the regex fails to match.
host_segment = <integer>
If specified, a segment of the path is set as host, using <integer> to determine which segment. For example, if host_segment = 2, host is set to the second segment of the path. Path segments are separated by the '/' character.
Default: the "host =" attribute, if the value is not an integer or is less than 1.
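As a sketch of both approaches, suppose each host writes its logs under a hypothetical /var/log/hosts/<hostname>/ directory. Either attribute can recover the host name from the path:

```ini
# Extract the host with a regex; the first capture group becomes the host
[monitor:///var/log/hosts]
host_regex = /var/log/hosts/([^/]+)/

# Or take the Nth path segment: /var(1)/log(2)/hosts(3)/<hostname>(4)
# [monitor:///var/log/hosts]
# host_segment = 4
```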
Monitor syntax and examples
Monitor input stanzas direct Splunk Enterprise to watch all files in <path> (or <path> itself, if it represents a single file). The stanza syntax is the input type followed by the path, so a path that starts at the root directory has three slashes: two from the monitor:// prefix and one beginning the path.
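A minimal sketch of the syntax, with an illustrative path:

```ini
# Two slashes from "monitor://" plus one starting the absolute path
[monitor:///var/log]
```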
The following are additional attributes you can use when defining monitor input stanzas:
source = <string>
Sets the source key/field for events from this input.
Note: Overriding the source key is generally not recommended. Typically, the input layer provides a more accurate string to aid in problem analysis and investigation, accurately recording the file from which the data was retrieved. Consider using source types, tagging, and search wildcards before overriding this value.
The <string> is prepended with 'source::'.
Default: the input file path.
crcSalt = <string>
Use this setting to force Splunk Enterprise to consume files that have matching CRCs (cyclic redundancy checks). Splunk Enterprise performs CRC checks against only the first few lines of a file. This behavior prevents it from indexing the same file twice even if you have renamed it, as happens, for example, with rolling log files. However, because the CRC is based on only the first few lines of the file, it is possible for legitimately different files to have matching CRCs, particularly if they have identical headers.
If set, string is added to the CRC.
If set to <SOURCE>, the full source path is added to the CRC. This ensures that each file being monitored has a unique CRC.
Be cautious about using this attribute with rolling log files. It could lead to the log file being re-indexed after it has rolled.
Note: This setting is case sensitive.
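A common pattern is to set crcSalt to the literal string <SOURCE>. A sketch, with an illustrative path:

```ini
# inputs.conf -- files with identical headers get distinct CRCs,
# because each file's full source path is mixed into the CRC
[monitor:///var/log/appfarm]
crcSalt = <SOURCE>
```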
ignoreOlderThan = <time_window>
Causes the monitored input to stop checking files for updates once their modification time (modtime) has passed the <time_window> threshold. This improves the speed of file tracking operations when monitoring directory hierarchies that contain large numbers of historical files (for example, when active log files are co-located with old files that are no longer being written to).
Note: A file whose modtime falls outside <time_window> when monitored for the first time will not get indexed.
The value must take the form <number><unit>. For example, "7d" indicates one week. Valid units are "d" (days), "h" (hours), "m" (minutes), and "s" (seconds).
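In inputs.conf, this threshold is set with the ignoreOlderThan attribute. A sketch, with an illustrative path:

```ini
# Stop checking files that have not been modified in the last 7 days
[monitor:///var/log/archive]
ignoreOlderThan = 7d
```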
followTail = 0|1
If set to 1, monitoring begins at the end of the file (like *nix tail -f).
This only applies to files the first time they are picked up.
After that, Splunk Enterprise keeps track of the file using its internal file position records.
whitelist = <regular expression>
If set, Splunk Enterprise only monitors files whose names match the specified regex.
blacklist = <regular expression>
If set, Splunk Enterprise does NOT monitor files whose names match the specified regex.
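The two attributes can be combined, as in this sketch (the path and patterns are illustrative):

```ini
# Monitor only files ending in .log, but skip any whose name
# contains "debug"
[monitor:///var/log/myapp]
whitelist = \.log$
blacklist = debug
```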
alwaysOpenFile = 0 | 1
If set to 1, Splunk Enterprise opens a file to check if it has already been indexed.
Only useful for files that don't update modtime.
Should only be used for monitoring files on Windows, and mostly for IIS logs.
Important: This flag should only be used as a last resort, as it increases load and slows down indexing.
recursive = true|false
If set to false, Splunk Enterprise will not go into subdirectories found within a monitored directory.
time_before_close = <integer>
Modtime delta required before Splunk Enterprise can close a file on EOF.
Tells the system not to close files that have been updated in past <integer> seconds.
followSymlink = true|false
If false, Splunk Enterprise will ignore symbolic links found within a monitored directory.
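The directory-traversal attributes above can be tuned together, as in this sketch (the path is illustrative):

```ini
# Descend into subdirectories, ignore symbolic links, and wait 5 seconds
# after the last modification before closing a file on EOF
[monitor:///var/log/services]
recursive = true
followSymlink = false
time_before_close = 5
```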
Example 1. To load anything in /apache/foo/logs or /apache/bar/logs, etc.
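The stanza for this example is not shown here; one sketch, using Splunk's '...' wildcard (which matches any number of path segments), might be:

```ini
# Matches /apache/foo/logs, /apache/bar/logs, /apache/foo/bar/logs, etc.
[monitor:///apache/.../logs]
```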
Example 2. To load anything in /apache/ that ends in .log.
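A sketch of this example, using the '*' wildcard (which matches within a single path segment):

```ini
# Picks up anything directly under /apache/ whose name ends in .log
[monitor:///apache/*.log]
```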
Example 3. To monitor the Windows DNS server log.
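A sketch of this example; the path shown is the default Windows DNS debug log location, so adjust it to match your system:

```ini
[monitor://C:\Windows\system32\DNS\dns.log]
```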
MonitorNoHandle syntax and examples
On Windows systems only, use the MonitorNoHandle stanza to monitor files without using Windows file handles. This lets you read special log files, such as the Windows DNS server log.
You must specify a valid path to a file when you use MonitorNoHandle. You cannot specify a directory. If you specify a file that already exists, Splunk Enterprise does not index the existing data in the file. It only indexes new data that the system writes to the file.
You can only configure MonitorNoHandle by using inputs.conf or the CLI. You cannot configure it in Splunk Web.
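As a sketch, assuming the default Windows DNS debug log location (the sourcetype value is illustrative):

```ini
[MonitorNoHandle://C:\Windows\system32\DNS\dns.log]
sourcetype = dns_log
```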
Batch syntax and examples
Important: When you define batch inputs, you must include the attribute move_policy = sinkhole. This setting loads the file destructively. Do not use the batch input type for files that you do not want to delete after indexing.
Example: This example batch loads all files from the directory system/flight815/, but does not recurse through any subdirectories under it:
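A sketch of such a stanza:

```ini
# Load everything directly under system/flight815/, then delete the files;
# recursive = false keeps Splunk out of any subdirectories
[batch://system/flight815/*]
move_policy = sinkhole
recursive = false
```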