Splunk® Enterprise

Troubleshooting Manual

Splunk Enterprise version 9.0 will no longer be supported as of June 14, 2024. See the Splunk Software Support Policy for details. For information about upgrading to a supported version, see How to upgrade Splunk Enterprise.

Command line tools for use with Support

This topic contains information about CLI tools that can help with troubleshooting Splunk Enterprise. Most of these tools are invoked using the Splunk CLI command cmd.

Do not use these tools without first consulting with Splunk Support.

For general information about using the CLI in Splunk software, see Get help with the CLI in the Admin Manual.

cmd

Runs the specified utility in $SPLUNK_HOME/bin with the environment variables preset.

To see which environment variables will be set, run splunk envvars.

Examples:

  ./splunk cmd /bin/ls
  ./splunk cmd locktest
Syntax
cmd <command> [parameters...]
Objects
None
Required parameters
None
Optional parameters
None

btool

View or validate Splunk software configuration files, taking into account configuration file layering and user/app context.

Syntax
btool <CONF_FILE> list [options]
btool check [options]
Objects
None
Required parameters
None
Optional parameters
--user=SPLUNK_USER  View the configuration data visible to the given user

--app=SPLUNK_APP    View the configuration data visible from the given app

--dir=DIR   Read configuration data from the given absolute path instead of $SPLUNK_HOME/etc

--debug     Print and log extra debugging information
Examples

List:
./splunk btool [--app=app_name] conf_file_prefix list [stanza_prefix]

Add:
./splunk btool [--app=app_name] conf_file_prefix add

Delete:
./splunk btool --app=app_name --user=user_name conf_file_prefix delete stanza_name [attribute_name]

Check for typos:
./splunk btool check

For more examples, see Use btool to troubleshoot configurations.
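For instance, to list every monitor stanza currently in effect and trace each setting to the configuration file it comes from, you might run something like this (the conf file prefix and stanza prefix are illustrative):

./splunk btool inputs list monitor --debug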

btprobe

Queries the fishbucket for checkpoints stored by monitor inputs. Any changes made to the fishbucket using btprobe take effect only after a restart. Shut down your Splunk software before using btprobe. For up-to-date usage, run btprobe --help.

You must specify either -d <dir> or --compute-crc <file>.

There are two ways to invoke this tool.

1. Query a specified BTree for a given key or file.

From the Splunk software installation directory, type:

./btprobe [-h or --help] -d <btree directory> [-k <hex key OR ALL> | --file <filename>] [--salt <salt>] [--validate] [--reset] [--bytes <bytes>] [-r]

The options are as follows:

    -d          	 Directory that contains the btree index. (Required.)
    -k          	 Hex CRC key, or ALL to get all the keys.
    --file      	 File to compute the CRC from.
    -r          	 Rebuild the btree .dat files (for example, var/lib/splunk/fishbucket/splunk_private_db/).
                	 One of -k or --file must be specified.

    --validate  	 Validate the btree to look for errors.
    --salt      	 Salt the CRC if the --file parameter is specified.
    --reset     	 Reset the fishbucket for the given key or file in the btree.
                	 Resetting the checkpoint for an active monitor input reindexes data, resulting in increased license use.

    --bytes     	 Number of bytes to read when calculating the CRC (default 256).
    --sourcetype	 Sourcetype to load configurations, check Indexed Extractions,
                	 and compute the CRC accordingly.


2. Compute a CRC from a specified file, using the given salt, if any.

From the Splunk software installation directory, type:

./btprobe [-h or --help] --compute-crc <filename> [--salt <salt>] [--bytes <bytes>]


  • Example: Reset a specific file in the fishbucket:
./splunk cmd btprobe -d /opt/splunkforwarder/var/lib/splunk/fishbucket/splunk_private_db --file /var/log/messages --reset
  • Example: Validate the btree entry for a specific hex key:
./btprobe -d /opt/splunk/var/lib/splunk/fishbucket/splunk_private_db -k 0xe8d117ddba85e714 --validate
  • Example: Query the checkpoint for a file, salting the CRC:
./btprobe -d /opt/splunk/var/lib/splunk/fishbucket/splunk_private_db --file /var/log/inputfile --salt SOME_SALT
  • Example: Compute a salted CRC directly from a file:
./btprobe --compute-crc /var/log/inputfile --salt SOME_SALT

classify

The "splunk train sourcetype" CLI command calls classify. To call it directly use:

$SPLUNK_HOME/bin/splunk cmd classify <path/to/myfile> <mysourcetypename> 
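For instance, a hypothetical invocation that trains a sourcetype named my_app_logs from a sample file (both the path and the sourcetype name are illustrative):

$SPLUNK_HOME/bin/splunk cmd classify /var/log/my_app/app.log my_app_logs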

check-rawdata-format

Unpacks and verifies the 'rawdata' component of one or more buckets. 'rawdata' is the record of truth from which Splunk software can rebuild the other components of a bucket. This tool can be useful if you suspect data integrity problems in a set of buckets or an index. You can also use it to check journal integrity before issuing a rebuild, to determine whether the rebuild can complete successfully.

This tool is complementary to, but does not overlap with, the splunk fsck command.

splunk check-rawdata-format -bucketPath <bucket>
splunk check-rawdata-format -index <index>
splunk check-rawdata-format -allindexes
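For example, to check the journal of a single bucket before attempting a rebuild (the bucket path is illustrative):

splunk check-rawdata-format -bucketPath /opt/splunk/var/lib/splunk/main/db/db_1608929358_1608842958_44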

cluster-merge-buckets

Use this command to select and merge a group of buckets in a specific index, based on a time range and size limits. After the group of buckets is merged, the newly created bucket is marked as searchable and the old buckets are removed. If an error occurs during the cluster bucket merge process, the merge stops and the pre-merge buckets are marked searchable again. The cluster-merge-buckets command supports clustered buckets, including buckets using SmartStore. You must run the command from the cluster manager node.

The cluster-merge-buckets command has an extensive list of optional parameters. Always use the -dryrun switch to preview your changes on the console before running a bucket merge in production. The parameters that set limits, such as -max-count, -max-total-size, and -max-total-runtime, are defined per cluster peer. To verify that an index contains merged buckets, use the cluster-list-buckets command.

Prerequisites
  • The cluster-merge-buckets command is available in Splunk Enterprise for Windows and Linux operating systems.
  • The cluster-merge-buckets command must be run from the cluster manager node.
  • The cluster manager and all cluster peers must be running Splunk Enterprise 9.0.0 or higher.
  • The indexes.conf file must have bucketMerging=true set globally, or in an individual index stanza, as shown in the sketch after this list.
  • The user running the cluster-merge-buckets command must be a member of the admin role, or have the merge_buckets capability.
  • The bucket selection is limited to Warm buckets in a single index.
  • The index settings determine the compression type used for the new bucket.
  • Data model accelerations (DMA) for merged buckets are generated by the same process that manages DMA for new hot buckets. The DMA associated with the old buckets are not included when using the -backup-to parameter.
  • In addition to console messages, the procedure is logged in the splunkd.log file. When using the -dryrun parameter, the output is on the console only.
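A minimal sketch of the bucketMerging prerequisite in indexes.conf, assuming an index named main; the commented lines show the bucketMerge.* settings referenced in the parameter tables below, at their default values:

[main]
bucketMerging = true
# bucketMerge.minMergeSizeMB = 750
# bucketMerge.maxMergeSizeMB = 1000
# bucketMerge.maxMergeTimeSpanSecs = 7776000

In an indexer cluster, deploy this change to the peers through the cluster manager, as with any other peer indexes.conf change.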


Upon successful creation of a new merged bucket, the old buckets are deleted. To save the old buckets, use the -backup-to parameter.


Syntax
cluster-merge-buckets [parameters...]
Examples
  • You want to select any warm buckets created in the main index in early January, and merge them to create one or more 1GB buckets:
  1. On the cluster manager node, open a command line and run:
    ./splunk cluster-merge-buckets -index-name main -startdate 2020/01/01 -enddate 2020/01/10 -dryrun
    1. The sum of all buckets to be merged must meet the -min-size default (750MB) value.
    2. The -max-count default of 24 limits the maximum total buckets on each cluster peer that can be merged at one time.
    3. Using the -dryrun parameter allows the cluster-merge-buckets process to get the bucket counts, sizes, and other limits, and displays a report on the console for review.
  2. Review the process summary and details on the console.
  3. Once you're satisfied that the correct buckets are selected for merging, you can remove the -dryrun switch from the command and run the bucket merge again.
    1. A new bucket will use the next available hot bucket number.
    2. The merge process uses 3 threads to merge buckets in parallel.
    3. Upon successful creation of a new merged bucket, the old buckets are deleted. Use the -backup-to parameter to keep a copy of the old buckets on the local cluster peers.
  4. You can check the progress of the merge task using ./splunk cluster-merge-buckets -show-progress on the console to generate a status report.
  5. Use the cluster-list-buckets command to verify the merged bucket information:
    ./splunk cluster-list-buckets -index-name main -startdate 2020/01/01 -enddate 2020/01/10
    The output is placed into the $SPLUNK_HOME/var/log/splunk/mergebuckets.log file, and is also available in the internal index.


  • You want to select any warm buckets in the main index created in the last quarter of the year. You'll expand the maximum number of small buckets that can be selected for merge to 1000, but limit the storage space used to create the merged buckets to 5GB, and make a backup archive of the old buckets in the /tmp folder. This results in five 1GB merged buckets, and a copy of the old buckets in the /tmp folder on the cluster peers.
./splunk cluster-merge-buckets -index-name main -max-count 1000 -max-total-size 5000 -startdate 2020/10/01 -enddate 2020/12/31 -backup-to /tmp -dryrun
  • You want to verify the creation date and bucket count information for any merged buckets created in the main index over the last year:
./splunk cluster-list-buckets -index-name main -startdate 2020/01/01 -enddate 2020/12/31 -verbose
Required parameters
Parameter Description
-index-name <index_name> The index that contains the buckets you want to merge.
Optional parameters
Parameter Description
-min-size <min size (MB)> Minimum size of the buckets to be created. The default value is 750; use bucketMerge.minMergeSizeMB in indexes.conf to change the default. Bucket merging does not start if the buckets selected for merging do not meet the minimum size.
-max-size <max size (MB)> Maximum size of the buckets to be created. The default value is 1000; use bucketMerge.maxMergeSizeMB in indexes.conf to change the default.
-max-timespan <max timespan (seconds)> The maximum timespan allowed for buckets to be merged into a single bucket. The default value is 7776000 (90 days); use bucketMerge.maxMergeTimeSpanSecs to change the default.
-max-count <max count of source buckets> The maximum number of buckets to merge. Default: 24.
-dryrun Use 'dryrun' to preview the behavior of your cluster-merge-bucket settings and filters without performing any actions. The results are sent to the console.
-startdate <date (yyyy/mm/dd)> Use 'startdate' to merge buckets created between now and the time chosen. Use with 'enddate' to set an exact time range.
-enddate <date (yyyy/mm/dd)> Use 'enddate' to merge buckets created prior to the time chosen. Use with 'startdate' to set an exact time range.
-backup-to <path to destination folder> Use 'backup-to' to make an archive of the original source buckets on each peer, and place that archive into the path on the peer after creating the merged bucket. Examples: -backup-to d:\temp, -backup-to /tmp
-max-total-size <max size (MB)> Used to limit the total disk space utilized for creating merged buckets. Divide by '-max-size' to estimate merged bucket count. Default value is 0 for no limit.
-max-total-runtime <max total runtime (seconds)> Used to limit the total run time of a bucket merge process. The 'max-total-runtime' is measured after each merged bucket is created. The bucket merge run time is influenced by available machine resources.
-show-progress Use the cluster-merge-buckets -show-progress parameter to display the status of a running cluster bucket merge.

Use the cluster-list-buckets command to verify the merged bucket information:

Parameter Description
-index-name <index_name> The index that contains merged buckets.
-verbose <true/false> Enable debug mode to display a list of buckets that contributed to the merged bucket.
-startdate <date (yyyy/mm/dd)> Use 'startdate' to report on merged buckets created between now and the time chosen.
-enddate <date (yyyy/mm/dd)> Use 'enddate' to report on merged buckets created prior to the time chosen. To list merged buckets in a specific time span, use both 'startdate' and 'enddate' to define the time span.

fsck

Diagnoses the health of your buckets and can rebuild search data as necessary. It can take a long time to run on several buckets, and you must stop Splunk software before running it. See Nonclustered bucket issues in Managing Indexers and Clusters of Indexers for help repairing buckets.

The output of splunk fsck --help is as follows:

USAGE

Supported modes are: scan, repair, clear-bloomfilter, check-integrity, generate-hash-files

<bucketSelector> := --one-bucket|--all-buckets-one-index|--all-buckets-all-indexes
	[--index-name=<name>] [--bucket-name=<name>] [--bucket-path=<path>]
	[--include-hots]
	[--local-id=<id>] [--origin-guid=<guid>]
	[--min-ET=<epochSecs>] [--max-LT=<epochSecs>]

<otherFlags> := [--try-warm-then-cold] [--log-to--splunkd-log] [--debug] [--v]

fsck repair <bucketSelector> <otherFlags> [--bloomfilter-only]
	[--backfill-always|--backfill-never] [--bloomfilter-output-path=<path>]
	[--raw-size-only] [--metadata] [--ignore-read-error]

fsck scan <bucketSelector> <otherFlags> [--metadata] [--check-bloomfilter-presence-always] [--include-rawdata]

fsck clear-bloomfilter <bucketSelector> <otherFlags>

fsck check-integrity <bucketSelector>
fsck generate-hash-files <bucketSelector>

fsck check-rawdata-format <bucketSelector>

fsck minify-tsidx --one-bucket --bucket-path=<path> --dont-update-manifest|--home-path=<dir>
Notes:
	The mode verb 'make-searchable' is a synonym for 'repair'.
	The mode 'check-integrity' will verify data integrity for buckets created with the integrity-check feature enabled.
	The mode 'generate-hash-files' will create or update bucket-level hashes for buckets which were generated with the integrity-check feature enabled.
	The mode 'check-rawdata-format' verifies that the journal format is intact for the selected index buckets (the journal is stored in a valid gzip container and has valid journal structure).
	Flag --log-to--splunkd-log is intended for calls from within splunkd.
	If neither --backfill-always nor --backfill-never are given, backfill decisions will be made per indexes.conf 'maxBloomBackfillBucketAge' and 'createBloomfilter' parameters.
	Values of 'homePath' and 'coldPath' will always be read from config; if config is not available, use --one-bucket and --bucket-path but not --index-name.
	All <bucketSelector> constraints supplied are implicitly ANDed.
	Flag --metadata is only applicable when migrating from 4.2 release.
	If giving --include-hots, please recall that hot buckets have no bloomfilters.
	Not all argument combinations are valid.
	If --help found in any argument position, prints this message & quits.

./splunk fsck repair works only with buckets created by Splunk Enterprise version 4.2 or later.
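For example, a scan of every bucket in every index, followed by a targeted repair of one bucket (stop Splunk software first; the bucket path is illustrative):

./splunk fsck scan --all-buckets-all-indexes
./splunk fsck repair --one-bucket --bucket-path=/opt/splunk/var/lib/splunk/main/db/db_1608929358_1608842958_44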

For more information about buckets, read How Splunk stores indexes in Managing Indexers and Clusters of Indexers.

locktest

./splunk cmd locktest

If you run Splunk Enterprise on a file system that is not listed in the system requirements, the software might run a startup utility named locktest to test the viability of the file system. If locktest fails, then the file system is not suitable for running Splunk Enterprise. See System Requirements for details.

locktool

./splunk cmd locktool

Usage:

lock : [-l | --lock ] [dirToLock] <timeOutSecs>

unlock [-u | --unlock ] [dirToUnlock] <timeOutSecs>

Acquires and releases locks in the same manner as splunkd. If you write an external script to copy db buckets in and out of indexes, acquire locks on the db, colddb, and thaweddb directories as you modify them, and release the locks when you are done.
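For instance, a hypothetical sequence that locks an index's colddb directory before copying buckets into it, and releases the lock afterward (the path and timeout are illustrative):

./splunk cmd locktool --lock /opt/splunk/var/lib/splunk/defaultdb/colddb 60
./splunk cmd locktool --unlock /opt/splunk/var/lib/splunk/defaultdb/colddb 60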

merge-buckets

Use this command to select and merge a group of buckets in a specific index, based on a time range and size limits. After the group of buckets is merged, the newly created bucket is marked as searchable and the old buckets are removed. If an error occurs during the bucket merge process, the merge stops and the old buckets are marked searchable again. The merge-buckets command is supported on standalone instances and distributed indexers.

If you're looking for guidance on how to identify small buckets, see What does this message mean regarding the health status of Splunkd? on Splunk Answers.

The merge-buckets command has an extensive list of optional parameters. Always use the --dryrun switch to preview your changes on the console before running a bucket merge in production. To verify that an index contains merged buckets, use the merge-buckets --listbuckets parameter subset.

Prerequisites
  • The Splunk Enterprise services cannot be running when merge-buckets is used.
  • The merge-buckets command is available in Splunk Enterprise for Windows and Linux operating systems.
  • The indexes.conf file must have bucketMerging=true set globally, or in an individual index stanza (see the example stanza under cluster-merge-buckets above).
  • The merge-buckets command does not support clustered buckets or buckets stored using SmartStore. The command can be used on the buckets in single instance and distributed Splunk Enterprise environments only.
  • The user running merge-buckets must have full access to $SPLUNK_HOME and to the storage mounts where the Warm buckets are stored.
  • The bucket selection is limited to Warm buckets in a single index.
  • The index settings determine the compression type used for the new bucket.
  • Data model accelerations (DMA) for merged buckets are generated by the same process that manages DMA for new hot buckets. The DMA associated with the old buckets are not included when using the --backup-to parameter.
  • In addition to console messages, the procedure is logged in the $SPLUNK_HOME/var/log/splunk/splunkd-utility.log file. When using the --dryrun parameter, the output is on the console only.


Upon successful creation of a new merged bucket, the old buckets are deleted. To save the old buckets, use the --backup-to parameter.


Syntax
merge-buckets [parameters...]
Examples
  • You want to select any warm buckets created in the main index in early January, and merge them to create one or more 1GB buckets:
  1. Stop Splunk Enterprise services.
  2. On the command line, run:
    ./splunk merge-buckets --index-name=main --startdate=2020/01/01 --enddate=2020/01/10 --dryrun
    1. The sum of all buckets to be merged must meet the --min-size default (750MB) value.
    2. The --max-count default of 24 limits the maximum total buckets that can be merged at one time.
    3. Using the --dryrun parameter allows the merge-buckets process to get the bucket counts, sizes, and other limits, and displays a report on the console for review.
  3. Review the process summary and details.
  4. Once you're satisfied that the correct buckets are selected for merging, you can remove the --dryrun switch from the command and run the bucket merge again.
    1. A new bucket will use the next available hot bucket number.
    2. The merge process uses 3 threads to merge buckets in parallel.
    3. Upon successful creation of a new merged bucket, the old buckets are deleted. Use the --backup-to parameter to keep a copy of the old buckets.
  5. Use the --listbuckets parameter to verify the merged bucket information:
    ./splunk merge-buckets --index-name=main --listbuckets --startdate=2020/01/01 --enddate=2020/01/10
  6. Start Splunk Enterprise services.


  • You want to select any warm buckets in the main index created in the last quarter of the year. You'll expand the maximum number of small buckets that can be selected for merge to 1000, but limit the storage space used to create the merged buckets to 5GB, and make a backup archive of the old buckets in the /tmp folder. This results in five 1GB merged buckets, and a copy of the old buckets in the /tmp folder.
./splunk merge-buckets --index-name=main --max-count=1000 --max-total-size=5000 --startdate=2020/10/01 --enddate=2020/12/31 --backup-to=/tmp --dryrun
  • You want to verify the creation date and bucket count information for a specific merged bucket:
splunk.exe merge-buckets --index-name=main --listbuckets=0 --buckets=d:\opt\splunk\var\lib\splunk\main\db\db_1608929358_1608842958_44
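For reference, a compact sketch of the offline workflow from the first example above, assuming a standalone instance (the dates are illustrative):

./splunk stop
./splunk merge-buckets --index-name=main --startdate=2020/01/01 --enddate=2020/01/10 --dryrun
./splunk merge-buckets --index-name=main --startdate=2020/01/01 --enddate=2020/01/10
./splunk merge-buckets --index-name=main --listbuckets --startdate=2020/01/01 --enddate=2020/01/10
./splunk start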
Required parameters
Parameter Description
--index-name=<index_name> The index that contains the buckets you want to merge.
Optional parameters
Parameter Description
--buckets=<comma_separated_bucket_paths> The list of buckets you want to merge, separated by a comma.
--filter When specified, this option takes the list of buckets provided with --buckets, and applies additional filters. The filter will consider the following options: --min-size, --max-size, --max-timespan, --max-count, --startdate, --enddate.
--json-out Format the console stdout as JSON.
--debug Enable debug mode.
--min-size=<min size (MB)> Minimum size of the buckets to be created. The default value is 750; use bucketMerge.minMergeSizeMB in indexes.conf to change the default. Bucket merging does not start if the buckets selected for merging do not meet the minimum size.
--max-size=<max size (MB)> Maximum size of the buckets to be created. The default value is 1000; use bucketMerge.maxMergeSizeMB in indexes.conf to change the default.
--max-timespan=<max timespan (seconds)> The maximum timespan allowed for buckets to be merged into a single bucket. The default value is 7776000 (90 days); use bucketMerge.maxMergeTimeSpanSecs to change the default.
--max-count=<max count of source buckets> The maximum number of buckets to merge. Default: 24.
--dryrun/-D Use 'dryrun' to preview the behavior of your merge-bucket settings and bucket selections without performing any actions. The results are sent to the console.
--startdate=<date (yyyy/mm/dd)> Use 'startdate' to merge buckets created between now and the time chosen.
--enddate=<date (yyyy/mm/dd)> Use 'enddate' to merge buckets created prior to the time chosen.
--backup-to=<path to destination folder> Use 'backup-to' to make an archive of the original source buckets, and place the archive into the path after creating the merged bucket. Examples: --backup-to=d:\temp, --backup-to=/tmp
--max-total-size=<max size (MB)> Used to limit the total disk space utilized for creating merged buckets. Divide by '--max-size' to estimate the merged bucket count. Default value is 0 for no limit.
--max-total-runtime=<max total runtime (seconds)> Used to limit the total run time of a bucket merge process. The 'max-total-runtime' is measured after each merged bucket is created. The bucket merge run time is influenced by available machine resources.

Use the merge-buckets --listbuckets parameter to verify the merged bucket information:

Parameter Description
merge-buckets --listbuckets=<number> --index-name=<index_name> Lists the most recently merged <number> of buckets in the index homePath. Use '0' to display all merged buckets found.
--debug Enable debug mode to display a list of buckets that contributed to the merged bucket.
--buckets=<comma_separated_bucket_paths> Use this switch to report on a specific merged bucket, or a comma-separated list of merged buckets. You must provide the full path and bucket name. When 'buckets' is set, it overrides all other filter parameters except 'debug'.
--startdate=<date (yyyy/mm/dd)> Use 'startdate' to report on merged buckets created between now and the time chosen.
--enddate=<date (yyyy/mm/dd)> Use 'enddate' to report on merged buckets created prior to the time chosen. To list merged buckets in a specific time span, use both 'startdate' and 'enddate' to define the time span.

parsetest

./splunk cmd parsetest

Usage: 
	parsetest "<string>" ["<sourcetype>|source::<filename>|host::<hostname>"]
	parsetest file <filename> ["<sourcetype>|host::<hostname>"]
Example:
	parsetest "10/11/2009 12:11:13" "syslog"
	parsetest file "foo.log" "syslog"

pcregextest

Simple utility tool for testing modular regular expressions.

./splunk cmd pcregextest mregex=<regex>

Usage: pcregextest mregex="query_regex" (name="subregex_value")* (test_str="string to test regex")?

Example: pcregextest mregex="[[ip:src_]] [[ip:dst_]]" ip="(?<ip>\d+[[dotnum]]{3})" dotnum="\.\d+" test_str="1.1.1.1 2.2.2.2"

That is, define the modular regex in the 'mregex' parameter, then define all the subregexes referenced in 'mregex'. Finally, you can provide a sample string to test the resulting regex against in 'test_str'.

searchtest

./splunk cmd searchtest search

signtool

Sign

./splunk cmd signtool [-s | --sign] [< dir to sign >]

Verify

./splunk cmd signtool [-v | --verify] [< dir to verify >]


Allows signing and verification of Splunk index buckets. If you have signing set up in a cold-to-frozen script, signtool lets you verify the signatures of your archives.
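For instance, to verify the signatures of an archived bucket directory (the path is illustrative):

./splunk cmd signtool --verify /archive/frozen/db_1608929358_1608842958_44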

toCsv

Use the toCsv tool to convert a binary serialization SRS (Splunk search results) file to the CSV format for a specific search. The SRS file is the compressed search results file in the search dispatch directory, such as $SPLUNK_HOME/var/run/splunk/dispatch/<sid>/results.srs.gz.

If an output path is not specified, the output is streamed to STDOUT. Do not specify the search dispatch directory as the output path.

Use this tool only for debugging search results. Do not attempt to replace the results files that are created by Splunk software. Replacing the search results file can interfere with the operation of other searches that use the loadjob command or other internal mechanisms to load the result set.

Syntax
splunkd toCsv <input path> [output path]
Required parameters
<input path>
Optional parameters
[output path]
Example
Navigate to the dispatch directory for the search ID (sid) 1534946862.1 and run the toCsv tool.
$ cd  $SPLUNK_HOME/var/run/splunk/dispatch/1534946862.1
$ splunk cmd splunkd toCsv ./results.srs.gz

Changing the results format for all searches

If you experience issues with the SRS format, you can change the default format for all searches to the CSV format. This requires changing a setting in the limits.conf file.

Prerequisites

  • Only users with file system access, such as system administrators, can change the default search results format.
  • Review the steps in How to edit a configuration file in the Admin Manual.

Never change or copy the configuration files in the default directory. The files in the default directory must remain intact and in their original location. Make the changes in the local directory.

Steps

  1. Open the local limits.conf file for the app. For example, $SPLUNK_HOME/etc/apps/<app_name>/local.
  2. Under the [search] stanza, in the Misc section, set results_serial_format to csv, as shown in the sketch below.
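A minimal sketch of the resulting local file, assuming an app named search:

# $SPLUNK_HOME/etc/apps/search/local/limits.conf
[search]
results_serial_format = csv

A restart of splunkd is typically required for limits.conf changes to take effect.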


If you are using Splunk Cloud and want to change the default format, open a Support ticket.

toSrs

Use the toSrs tool to convert a CSV search results file to the SRS (Splunk search results) format. The CSV file is the compressed search results file in the search dispatch directory, such as $SPLUNK_HOME/var/run/splunk/dispatch/<sid>/results.csv.gz.

The SRS format is a binary serialization format and is not directly readable in a text editor. You must specify an output path to use this utility.

Use this tool only for debugging search results. Do not attempt to replace the results files that are created by Splunk software. Replacing the search results file can interfere with the operation of other searches that use the loadjob command or other internal mechanisms to load the result set.

Syntax
splunkd toSrs < input path > < output path >
Required parameters
< input path >
< output path >
Optional parameters
None
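For example, a hypothetical invocation that mirrors the toCsv example above, converting a dispatched CSV result set back to the SRS format (the output path is illustrative):

$ cd $SPLUNK_HOME/var/run/splunk/dispatch/1534946862.1
$ splunk cmd splunkd toSrs ./results.csv.gz /tmp/results.srs.gz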

tsidxprobe

This tool examines your time-series index files (or "tsidx files"; they have the .tsidx extension) and verifies that they meet the necessary format requirements. It also identifies any files that are potentially causing a problem.

Go to the $SPLUNK_HOME/bin directory and run "source setSplunkEnv".

Then use tsidxprobe to look at each of your index files with this little script you can run from your shell (this works with bash):

  • for i in `find $SPLUNK_DB -name '*.tsidx'`; do tsidxprobe $i >> tsidxprobeout.txt; done

(If you've changed the default datastore path, then this should be in the new location.)

The file tsidxprobeout.txt will contain the results from your index files. You can gzip this file and send it to Splunk Support.

tsidx_scan.py

For Splunk Enterprise versions 4.2.2 or later, this utility script searches for tsidx files at a specified starting location, runs tsidxprobe for each one, and outputs the results to a file.

From $SPLUNK_HOME/bin, call it like this:

splunk cmd python tsidx_scan.py [path]

Example:

splunk cmd python tsidx_scan.py /opt/splunk/var/lib/splunk

If you omit the optional path, the scan starts at $SPLUNK_DB.

The output is written to the file tsidxprobe.YYYY-MM-DD.txt in the current directory.

walklex

This tool "walks the lexicon" to tell you which terms exist in a given index. For example, with some search commands (like tstat), the field is in the index; for other terms it is not. Walklex can be useful for debugging.

Walklex outputs a line with three pieces of information:

  • term ID (a unique identifier)
  • number of occurrences of the term
  • term

Usage:

From $SPLUNK_HOME/bin, type

./splunk cmd walklex </path/to/tsidx_file.tsidx> "<key>::<value>"

It recognizes wildcards:

./splunk cmd walklex </path/to/tsidx_file.tsidx> ""

./splunk cmd walklex </path/to/tsidx_file.tsidx> "*::*"

Empty quotes return all results, and asterisks return all keys or all values (or both, as in the example above).

Example:

./splunk cmd walklex </path/to/tsidx_file.tsidx> "token"
