I can't find my data!
Are you searching for events and not finding them, or looking at a dashboard and seeing "No result data"? Here are a few common mistakes to check.
Are you running Splunk Free?
Splunk Free does not support multiple user accounts or scheduled searches. Saved searches that were previously scheduled by other users are still available, and you can run them manually as required. You can also view, move, or modify them in Splunk Web or in savedsearches.conf.
Was the data added to a different index?
Some apps, like the *nix and Windows apps, write input data to a specific index (in the case of *nix and Windows, that is the "os" index). If you're not finding data that you're certain is in Splunk, be sure that you're looking at the right index. You may want to add the "os" index to the list of default indexes for the role you're using. For more information about roles, refer to the topic about roles in the Admin Manual.
Do your permissions allow you to see the data?
What you can see can vary depending on your role's index privileges and search filters. Read more about adding and editing roles in the Admin Manual.
Are you searching the correct time range?
Double check the time range that you're searching. Are you sure the events exist in that time window? Try increasing the time window for your search.
You might also want to try a real-time search over all time for some part of your data, like a source type or string.
The indexer might be timestamping events incorrectly for some reason. Read about timestamping in the Getting Data In Manual.
Are you using forwarders?
Check that your data is in fact being forwarded. Here are some searches to get you started. You can run all of these searches, except the last one, from the Splunk default Search app. Run the last search from the CLI on the forwarder itself, because a forwarder does not have a user interface:
- Are my forwarders connecting to my receiver? Which IP addresses are connecting to Splunk as inputs, and how many times is each IP logged in metrics.log?
index=_internal source=*metrics.log* tcpin_connections | stats count by sourceIp
- What output queues are set up?
index=_internal source=*metrics.log* group=queue tcpout | stats count by name
- What hosts (not forwarder/TCP inputs) have logged an event to Splunk in the last 10 minutes? (This search also demonstrates rangemap.)
| metadata type=hosts index=netops | eval diff=now()-recentTime | where diff < 600 | convert ctime(*Time) | stats count | rangemap field=count low=800-2000 elevated=100-799 high=50-99 severe=0-49
- Where is Splunk trying to forward data to? From the Splunk CLI issue the following command:
$SPLUNK_HOME/bin/splunk search 'index=_internal source=*metrics.log* destHost | dedup destHost'
Read up on forwarding in the Distributed Deployment Manual.
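The rangemap step in the host-count search above simply buckets a numeric field into labeled ranges. This Python sketch mirrors that logic with the same illustrative thresholds (the thresholds and labels come from the example search, not from any fixed Splunk default):

```python
def rangemap(count):
    """Sketch of the rangemap step from the host-count search above:
    rangemap field=count low=800-2000 elevated=100-799 high=50-99 severe=0-49
    Thresholds are illustrative, taken from the example search."""
    ranges = {
        "low": (800, 2000),
        "elevated": (100, 799),
        "high": (50, 99),
        "severe": (0, 49),
    }
    for label, (lo, hi) in ranges.items():
        # Each range is inclusive on both ends, as in rangemap's N-M syntax.
        if lo <= count <= hi:
            return label
    return "none"  # fallback when no range matches (rangemap uses its default value)

print(rangemap(950))  # → low (plenty of hosts reporting)
print(rangemap(30))   # → severe (very few hosts reporting)
```

If the count of recently reporting hosts falls into the "severe" bucket, that is a strong hint your forwarders are not connecting.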
Are you using search heads?
Check that your search heads are searching the indexers that contain the data you're looking for. Read about distributed search in the Distributed Deployment Manual.
Are you still logged in and under your license usage?
If you accumulate too many license violations (3 for Splunk Free or 5 for Enterprise) within a rolling 30-day window, Splunk will prevent you from searching your data.
Note, however, that Splunk will continue to index your data, and no data will be lost. You will also still be able to search the _internal index to troubleshoot your problem. Read about license violations in the Admin Manual.
Are you using a scheduled search?
Are you SURE your time range is correct? (You wouldn't be the first!)
Are you sure the incoming data is indexed when you expect it, and not lagging? To determine whether there is a lag between an event's timestamp and its index time, manually run the scheduled search with the following syntax appended:
| eval time=_time | eval itime=_indextime | eval lag=(itime - time)/60 | stats avg(lag), min(lag), max(lag) by index host sourcetype
For example, if there is an indexing lag of up to 90 minutes and you run a scheduled search every 20 minutes, you might not see the most recent data yet (but if you run the same search 90 minutes later, the data will be there).
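The lag calculation above is plain arithmetic on two timestamps per event. This Python sketch, with made-up Unix timestamps, mirrors the eval and stats steps:

```python
# Mirrors: | eval lag=(itime - time)/60 | stats avg(lag), min(lag), max(lag)
# The events below use illustrative Unix timestamps, not real data.
events = [
    {"_time": 1_700_000_000, "_indextime": 1_700_000_300},   # indexed 5 minutes late
    {"_time": 1_700_000_600, "_indextime": 1_700_003_600},   # indexed 50 minutes late
]

# Lag in minutes between when each event happened and when it was indexed.
lags = [(e["_indextime"] - e["_time"]) / 60 for e in events]

print(f"avg={sum(lags) / len(lags)} min={min(lags)} max={max(lags)}")
```

A large max lag relative to your search schedule explains "missing" recent events: they simply have not been indexed yet when the scheduled search runs.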
It could also be a scheduler problem. The Knowledge Manager Manual has a topic on configuring priority of scheduled searches.
Other common problems with scheduled searches are searches getting rewritten, saved, run incorrectly, or run not as expected. Investigate scheduled searches in audit.log and the search's dispatch directory: read about these tools in "What Splunk logs about itself" in this manual.
Check your search query
- Are you using NOT, AND, or OR? Check your logic.
- How about double quotes? Read more about Search language syntax in the Search Reference Manual.
- Are you using views and drilldowns? Splunk Web might be rewriting the search incorrectly via the intentions functionality.
- Double check that you're using the correct index, source, sourcetype, and host.
- Are you correctly using escape characters when needed?
- Are your subsearches ordered correctly?
- Are your subsearches being passed the correct fields?
Are you extracting fields?
- Check your regex. One way to test regexes interactively in Splunk is with the rex command.
- Do you have privileges for extracting and sharing fields? Read about sharing fields in the Knowledge Manager Manual.
- Are your extractions applied for the correct source, sourcetype, and host?
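You can also sanity-check an extraction regex outside Splunk before wiring it into a field extraction. This Python sketch uses a hypothetical log line and pattern, for illustration only; in Splunk you would test the equivalent regex interactively with the rex command:

```python
import re

# Hypothetical log line and extraction pattern, for illustration only.
# The Splunk equivalent would be something like: ... | rex "user=(?<user>\w+)"
line = "2023-01-01 12:00:00 action=login user=alice status=ok"

match = re.search(r"user=(?P<user>\w+)", line)
if match:
    print(match.group("user"))  # → alice
else:
    print("no match: the regex would extract nothing in Splunk either")
```

If the pattern fails to match your sample events here, the corresponding field extraction will silently produce no field values in Splunk.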