Command line tools for use with Support
This topic contains information on CLI tools to help with troubleshooting Splunk Enterprise. Most of these tools are invoked using the Splunk CLI command "cmd".
Do not use these tools without first consulting with Splunk Support.
For general information about using the CLI in Splunk software, see Get help with the CLI in the Admin Manual.
cmd
Runs the specified utility in $SPLUNK_HOME/bin with the required environment variables preset.
To see which environment variables will be set, run "splunk envvars".
Examples:
./splunk cmd btool inputs list
./splunk cmd /bin/ls
Syntax: cmd <command> [parameters...]
Objects: None
Required Parameters: None
Optional Parameters: None
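For instance, to confirm that the wrapper really does preset the environment, you can inspect the variables and then run an ordinary system utility under them. A minimal sketch for a Unix-like host (printenv is a standard operating-system tool, not part of Splunk, and its path can vary by platform):
./splunk envvars
./splunk cmd /usr/bin/printenv SPLUNK_HOME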
btool
View or validate Splunk software configuration files, taking into account configuration file layering and user/app context.
Syntax:
btool <CONF_FILE> list [options]
btool check [options]
Objects: None
Required Parameters: None
Optional Parameters:
--user=SPLUNK_USER View the configuration data visible to the given user
--app=SPLUNK_APP View the configuration data visible from the given app
--dir=DIR Read configuration data from the given absolute path instead of $SPLUNK_HOME/etc
--debug Print and log extra debugging information
Examples:
List: ./splunk cmd btool [--app=app_name] conf_file_prefix list [stanza_prefix]
Add: ./splunk cmd btool [--app=app_name] conf_file_prefix add
Delete: ./splunk cmd btool --app=app_name --user=user_name conf_file_prefix delete stanza_name [attribute_name]
Check for typos: ./splunk cmd btool check
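For example, to list the effective monitor input stanzas along with the file each setting comes from, or to see the props configuration visible from a particular app (the search app here is only an illustration), you might run:
./splunk cmd btool inputs list monitor --debug
./splunk cmd btool --app=search props list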
For more information, read Use btool to troubleshoot configurations.
btprobe
Queries the fishbucket for checkpoints stored by monitor inputs. Any changes made to the fishbucket using btprobe take effect only after a restart. Shut down your Splunk software before using btprobe. For up-to-date usage, run btprobe --help.
You must specify either -d <dir> or --compute-crc <file>.
There are two ways to invoke this tool.
1. Query a specified BTree for a given key or file.
From the Splunk software installation directory, type:
./btprobe [-h or --help] -d <btree directory> [-k <hex key OR ALL> | --file <filename>] [--salt <salt>] [--validate] [--reset] [--bytes <bytes>] [-r]
The options are as follows:
-d Directory that contains the btree index. (Required.)
-k Hex crc key or ALL to get all the keys.
--file File to compute the crc from.
-r Rebuild the btree .dat files (i.e., var/lib/splunk/fishbucket/splunk_private_db/).
--validate Validate the btree to look for errors.
--salt Salt the crc if --file param is specified.
--reset Reset the fishbucket for the given key or file in the btree. Resetting the checkpoint for an active monitor input reindexes data, resulting in increased license use.
--bytes Number of bytes to read when calculating CRC (default 256).
--sourcetype Sourcetype to load configurations and check Indexed Extraction and compute CRC accordingly.
One of -k and --file must be specified.
2. Compute a crc from a specified file, using a given salt if any.
From the Splunk software installation directory, type:
./btprobe [-h or --help] --compute-crc <filename> [--salt <salt>] [--bytes <bytes>]
- Example: Reset a specific file in the fishbucket:
./splunk cmd btprobe -d /opt/splunkforwarder/var/lib/splunk/fishbucket/splunk_private_db --file /var/log/messages --reset
- Example:
./btprobe -d /opt/splunk/var/lib/splunk/fishbucket/splunk_private_db -k 0xe8d117ddba85e714 --validate
- Example:
./btprobe -d /opt/splunk/var/lib/splunk/fishbucket/splunk_private_db --file /var/log/inputfile --salt SOME_SALT
- Example:
./btprobe --compute-crc /var/log/inputfile --salt SOME_SALT
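You can also combine --compute-crc with --bytes to read more than the default 256 bytes when computing the checksum; a sketch (the byte count below is arbitrary):
./btprobe --compute-crc /var/log/inputfile --bytes 512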
classify
The "splunk train sourcetype" CLI command calls classify. To call it directly use:
$SPLUNK_HOME/bin/splunk cmd classify <path/to/myfile> <mysourcetypename>
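For instance, to train a source type from a sample file (both the path and the source type name below are placeholders):
$SPLUNK_HOME/bin/splunk cmd classify /var/log/myapp/sample.log my_custom_sourcetype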
check-rawdata-format
Unpacks and verifies the 'rawdata' component of one or more buckets. 'rawdata' is the record of truth from which Splunk software can rebuild the other components of a bucket. This tool can be useful if you suspect data integrity problems in a set of buckets or in an index. You can also use it to check journal integrity before issuing a rebuild, if you want to know whether the rebuild can complete successfully before running it.
This tool is complementary to, but does not overlap with, the splunk fsck command.
splunk check-rawdata-format -bucketPath <bucket>
splunk check-rawdata-format -index <index>
splunk check-rawdata-format -allindexes
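For example, to verify the journals of every bucket in a single index, or of one specific bucket (the index name and the bucket path below are placeholders):
splunk check-rawdata-format -index main
splunk check-rawdata-format -bucketPath /opt/splunk/var/lib/splunk/defaultdb/db/db_1549983229_1549812221_11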
fsck
Diagnoses the health of your buckets and can rebuild search data as necessary. It can take a long time to run across many buckets, and you must stop Splunk software before running it. See Nonclustered bucket issues in Managing Indexers and Clusters of Indexers for help repairing buckets.
The output of splunk fsck --help is as follows:
USAGE
Supported modes are: scan, repair, clear-bloomfilter, check-integrity, generate-hash-files
<bucketSelector> := --one-bucket|--all-buckets-one-index|--all-buckets-all-indexes [--index-name=<name>] [--bucket-name=<name>] [--bucket-path=<path>] [--include-hots] [--local-id=<id>] [--origin-guid=<guid>] [--min-ET=<epochSecs>] [--max-LT=<epochSecs>]
<otherFlags> := [--try-warm-then-cold] [--log-to--splunkd-log] [--debug] [--v]
fsck repair <bucketSelector> <otherFlags> [--bloomfilter-only] [--backfill-always|--backfill-never] [--bloomfilter-output-path=<path>] [--raw-size-only] [--metadata] [--ignore-read-error]
fsck scan <bucketSelector> <otherFlags> [--metadata] [--check-bloomfilter-presence-always] [--include-rawdata]
fsck clear-bloomfilter <bucketSelector> <otherFlags>
fsck check-integrity <bucketSelector>
fsck generate-hash-files <bucketSelector>
fsck check-rawdata-format <bucketSelector>
fsck minify-tsidx --one-bucket --bucket-path=<path> --dont-update-manifest|--home-path=<dir>
Notes:
The mode verb 'make-searchable' is synonym for 'repair'.
The mode 'check-integrity' will verify data integrity for buckets created with the integrity-check feature enabled.
The mode 'generate-hash-files' will create or update bucket-level hashes for buckets which were generated with the integrity-check feature enabled.
The mode 'check-rawdata-format' verifies that the journal format is intact for the selected index buckets (the journal is stored in a valid gzip container and has valid journal structure).
Flag --log-to--splunkd-log is intended for calls from within splunkd.
If neither --backfill-always nor --backfill-never are given, backfill decisions will be made per indexes.conf 'maxBloomBackfillBucketAge' and 'createBloomfilter' parameters.
Values of 'homePath' and 'coldPath' will always be read from config; if config is not available, use --one-bucket and --bucket-path but not --index-name.
All <bucketSelector> constraints supplied are implicitly ANDed.
Flag --metadata is only applicable when migrating from 4.2 release.
If giving --include-hots, please recall that hot buckets have no bloomfilters.
Not all argument combinations are valid.
If --help found in any argument position, prints this message & quits.
./splunk fsck repair works only with buckets created by Splunk Enterprise version 4.2 or later.
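As an illustration of the syntax above, a read-only scan of all buckets followed by a bloomfilter-only repair of a single bucket might look like the following (the bucket path is a placeholder, and remember to stop Splunk software first):
./splunk fsck scan --all-buckets-all-indexes
./splunk fsck repair --one-bucket --bucket-path=/opt/splunk/var/lib/splunk/defaultdb/db/db_1549983229_1549812221_11 --bloomfilter-only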
For more information about buckets, read How Splunk stores indexes in Managing Indexers and Clusters of Indexers.
locktest
./splunk cmd locktest
If you run Splunk Enterprise on a file system that is not listed in the system requirements, the software might run a startup utility named locktest to test the viability of the file system during the startup process. If locktest fails, the file system is not suitable for running Splunk Enterprise. See System Requirements for details.
locktool
./splunk cmd locktool
Usage:
lock : [-l | --lock ] [dirToLock] <timeOutSecs>
unlock [-u | --unlock ] [dirToUnlock] <timeOutSecs>
Acquires and releases locks in the same manner as splunkd. If you write an external script to copy db buckets in and out of indexes, you should acquire locks on the db, colddb, and thaweddb directories as you modify them, and release the locks when you are done.
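For example, such a script might take and later release a lock on a cold database directory like this (the directory and the 10-second timeout below are placeholders):
./splunk cmd locktool --lock /opt/splunk/var/lib/splunk/defaultdb/colddb 10
./splunk cmd locktool --unlock /opt/splunk/var/lib/splunk/defaultdb/colddb 10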
parsetest
./splunk cmd parsetest
Usage:
parsetest "<string>" ["<sourcetype>|source::<filename>|host::<hostname>"]
parsetest file <filename> ["<sourcetype>|host::<hostname>"]
Example:
parsetest "10/11/2009 12:11:13" "syslog"
parsetest file "foo.log" "syslog"
pcregextest
Simple utility tool for testing modular regular expressions.
./splunk cmd pcregextest mregex=<regex>
Usage: pcregextest mregex="query_regex" (name="subregex_value")* (test_str="string to test regex")?
Example: pcregextest mregex="[[ip:src_]] [[ip:dst_]]" ip="(?<ip>\d+[[dotnum]]{3})" dotnum="\.\d+" test_str="1.1.1.1 2.2.2.2"
That is, define the modular regex in the 'mregex' parameter, then define all the subregexes referenced in 'mregex'. Finally, you can provide a sample string to test the resulting regex against in 'test_str'.
searchtest
./splunk cmd searchtest search
signtool
Sign
./splunk cmd signtool [-s | --sign] [<dir to sign>]
Verify
./splunk cmd signtool [-v | --verify] [<dir to verify>]
Using logging configuration at /Applications/splunk/etc/log-cmdline.cfg.
Allows signing and verification of Splunk index buckets. If you have signing set up in a cold-to-frozen script, signtool allows you to verify the signatures of your archives.
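For instance, after a cold-to-frozen script has signed an archived bucket, you could check its signature like this (the archive path below is a placeholder):
./splunk cmd signtool --verify /archive/frozen/db_1549983229_1549812221_11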
tsidxprobe
This tool examines your time-series index files (or "tsidx" files; they have the .tsidx file extension) and verifies that they meet the necessary format requirements. It should also identify any files that are potentially causing a problem.
Go to the $SPLUNK_HOME/bin directory and run "source setSplunkEnv".
Then use tsidxprobe to look at each of your index files with this little script you can run from your shell (this works with bash):
- for i in `find $SPLUNK_DB -name '*.tsidx'`; do tsidxprobe $i >> tsidxprobeout.txt; done
(If you've changed the default datastore path, then this should be in the new location.)
The file tsidxprobeout.txt will contain the results from your index files. You should be able to gzip this and attach it to an email and send it to Splunk Support.
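An equivalent form of the script above, assuming "source setSplunkEnv" has already put tsidxprobe on your PATH, also handles file names that contain spaces:
- find "$SPLUNK_DB" -name '*.tsidx' -exec tsidxprobe {} \; >> tsidxprobeout.txt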
tsidx_scan.py
For Splunk Enterprise versions 4.2.2 or later, this utility script searches for tsidx files at a specified starting location, runs tsidxprobe for each one, and outputs the results to a file.
From $SPLUNK_HOME/bin, call it like this:
splunk cmd python tsidx_scan.py [path]
Example:
splunk cmd python tsidx_scan.py /opt/splunk/var/lib/splunk
If you omit the optional path, the scan starts at $SPLUNK_DB.
The output is written to the file tsidxprobe.YYYY-MM-DD.txt in the current directory.
walklex
This tool "walks the lexicon" to tell you which terms exist in a given index. For example, with some search commands (like tstat
), the field is in the index; for other terms it is not. Walklex can be useful for debugging.
Walklex outputs a line with three pieces of information:
- term ID (a unique identifier)
- number of occurrences of the term
- term
Usage:
From $SPLUNK_HOME/bin, type:
./splunk cmd walklex </path/to/tsidx_file.tsidx> "<key>::<value>"
It recognizes wildcards:
./splunk cmd walklex </path/to/tsidx_file.tsidx> ""
./splunk cmd walklex </path/to/tsidx_file.tsidx> "*::*"
Empty quotes return all results, and asterisks return all keys or all values (or both, as in the example above).
Example:
./splunk cmd walklex </path/to/tsidx_file.tsidx> "token"