Splunk® Enterprise

Release Notes


Splunk Enterprise version 6.x is no longer supported as of October 23, 2019. See the Splunk Software Support Policy for details. For information about upgrading to a supported version, see How to upgrade Splunk Enterprise.

Timestamp recognition of dates with two-digit years fails beginning January 1, 2020

Review this Known Issue topic for the latest updates on this issue.
The last significant update was at 22:10 -0800 Wed 1 Jan 2020:

- Updated validation instructions to include year-2021 timestamps


Beginning on January 1, 2020, un-patched Splunk platform instances will be unable to recognize timestamps from events where the date contains a two-digit year. Data that meets this criterion will be indexed with incorrect timestamps.

Beginning on September 13, 2020 at 12:26:39 PM Coordinated Universal Time (UTC), un-patched Splunk platform instances will be unable to recognize timestamps from events with dates that are based on Unix time, due to incorrect parsing of timestamp data.
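The Unix-time cutoff can be illustrated in isolation. The following Python sketch uses the epoch-seconds fragment quoted in a reader comment below; the real datetime.xml expressions embed this fragment in longer patterns, so this is an illustration, not the full rule:

```python
import re

# Epoch-seconds fragment from the un-patched datetime.xml, as quoted
# in a comment below: it stops matching at 1599999999, which is
# 2020-09-13 12:26:39 UTC.
OLD_EPOCH = re.compile(r"^(?:1[012345]|9)\d{8}$")
# The patched fragment extends the range to 1699999999 (November 2023).
NEW_EPOCH = re.compile(r"^(?:1[0123456]|9)\d{8}$")

assert OLD_EPOCH.match("1599999999")      # last epoch second recognized un-patched
assert not OLD_EPOCH.match("1600000000")  # first epoch second that fails
assert NEW_EPOCH.match("1600000000")      # recognized after the patch
```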


This issue affects all un-patched Splunk platform instance types, on any operating system:

  • Splunk Cloud
  • Splunk Light
  • Splunk Enterprise
    • Indexers, clustered or not
    • Heavy forwarders
    • Search heads, clustered or not
    • Search head deployers
    • Deployment servers
    • Cluster masters
    • License masters
  • Splunk universal forwarders, under the following known conditions:
    • When they have been configured to process structured data, such as CSV, XML, and JSON files, using the INDEXED_EXTRACTIONS setting in props.conf
    • When they have been configured to process data locally, using the force_local_processing setting in props.conf

The issue appears when you have configured the input source to automatically determine timestamps, and can result in one or more of the following problems:

  • Incorrect timestamping of incoming data
  • Incorrect rollover of data buckets due to the incorrect timestamping
  • Incorrect retention of data with incorrect timestamps
  • Incorrect search results due to data ingested with incorrect timestamps

There is no way to correct timestamps after the Splunk platform has ingested the data. If you ingest data with an un-patched Splunk platform instance on or after January 1, 2020, you must patch the instance and re-ingest that data for its timestamps to be correct.


The Splunk platform input processor uses a file called datetime.xml to help the processor correctly determine timestamps based on incoming data. The file uses regular expressions to extract many different types of dates and timestamps from incoming data.

On un-patched Splunk platform instances, the file supports extraction of two-digit years only through "19", that is, up to December 31, 2019. Beginning on January 1, 2020, these un-patched instances will mistakenly treat incoming data as having an invalid timestamp year, and could either add timestamps using the current year, or misinterpret the date and add a timestamp based on the misinterpreted date.
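The effect of the year-range character class can be seen with a minimal Python sketch. The character classes below are taken from the search/replace fragments shown later in this topic, but the real datetime.xml patterns embed them in much longer expressions:

```python
import re

# Year-capture fragments simplified from datetime.xml's file-name date
# patterns (not the full expressions). The un-patched class [901]\d
# covers two-digit years 90-99 and 00-19 only.
old_year = re.compile(r"^([901]\d)$")
new_year = re.compile(r"^([9012]\d)$")

for yy in ("99", "05", "19"):
    assert old_year.match(yy)        # recognized before the patch
for yy in ("20", "21", "29"):
    assert not old_year.match(yy)    # fails starting January 1, 2020
    assert new_year.match(yy)        # recognized with the patched file
```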


Splunk Cloud customers will receive the fix on their Splunk Cloud instances automatically. If you are a Splunk Cloud customer, your support representative will advise you when the upgrade will take place on your Splunk Cloud instance. As this is a critical update, there is no option to defer it.

While Splunk Cloud instances will receive automatic updates, you must perform one of these solutions on any self-deployed instances, such as heavy and universal forwarders that send data to your Splunk Cloud instance. You can perform one of these solutions at any time before or after your Splunk Cloud instance receives the fix. Contact your support representative if you have any questions.

There are four solutions to this problem for on-premises customers. You only need to perform one of the solutions to fix the problem:

  • Download and deploy an app to temporarily replace the defective datetime.xml with the fixed file
  • Download an updated version of datetime.xml and apply it to each of your Splunk platform instances
  • Upgrade Splunk platform instances to a version with an updated version of datetime.xml
  • Make modifications to the existing datetime.xml file on your Splunk platform instances

Download and deploy an app that temporarily replaces the defective datetime.xml file with the fixed file

Splunk has released a Splunk app that temporarily replaces the defective datetime.xml file with the fixed file. This option is the preferred path for customers who have large deployments that include many universal forwarders, indexer clusters, and search head clusters.

You do not need to stop the Splunk platform before you deploy the apps, but you must restart each instance that receives the apps.

To ensure that you have no problems with universal forwarders and timestamp recognition, deploy this fix on all universal forwarders, even those that do not specifically suffer from this problem.

This solution is temporary only. Do not run it long-term. Carefully read the instructions that come with the app download and make sure to deploy the correct app to the correct Splunk platform instance type. Do not deploy both apps to a single instance. The preferred method to address this problem is to upgrade to a version that has the fixed datetime.xml file. After you complete upgrades, remove the apps immediately.

  1. Download the datetime.xml fix app archive (MD5 hash: 2e6b7520fa1379d72ac80ca21a54d45a) from splunk.com.
  2. Unarchive the .tgz file to a location that is accessible from all of your Splunk platform instances.
  3. Open the README file and follow the instructions to deploy one of the apps to each Splunk platform instance.

Download an updated version of datetime.xml and apply it to each of your Splunk platform instances

Splunk is providing an updated version of the datetime.xml file for download. This option is the preferred path for customers who cannot upgrade right away to a version of the Splunk platform with the fixed file, or who run an unsupported version that is lower than 6.6.x.

After you download the file, you must apply it directly over the existing datetime.xml file using the following procedure. You must apply the updated file to all affected on-premises Splunk platform instances prior to January 1, 2020, or you will experience timestamp recognition problems from that point forward.

  1. Download the datetime.zip timestamp recognition ZIP file (MD5 hash: 00dfc319e89001fa16d6725dbf042234) from splunk.com.
  2. Unarchive the ZIP file to a location that is accessible from all of your Splunk platform instances.
  3. On each Splunk platform instance, do the following:
    1. Using your operating system file management utilities, copy the updated datetime.xml from the location where you downloaded it to the $SPLUNK_HOME/etc directory on the Splunk platform instance. Ensure that the updated file overwrites the existing file.
    2. Confirm that the new datetime.xml has been written to the $SPLUNK_HOME/etc directory.
    3. Restart the Splunk platform. Your Splunk platform instance is now patched.
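For scripted rollouts across many instances, the copy-and-confirm steps above can be sketched as follows. The function name and paths are illustrative, not a Splunk-provided tool, and the backup step is an extra precaution that is not part of the documented procedure:

```python
import os
import shutil

# Sketch of the copy-and-confirm steps above: overwrite
# $SPLUNK_HOME/etc/datetime.xml with the downloaded copy and confirm
# it was written. Restarting the Splunk platform is still required.
def install_datetime_xml(downloaded_path, splunk_home):
    dst = os.path.join(splunk_home, "etc", "datetime.xml")
    shutil.copy(dst, dst + ".bak")     # preserve the shipped file (precaution)
    shutil.copy(downloaded_path, dst)  # overwrite the existing file
    return os.path.isfile(dst)         # confirm before restarting
```

As in the procedure above, restart the Splunk platform after the copy for the change to take effect.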

Upgrade Splunk platform instances to a version with an updated version of datetime.xml

Splunk has released updated versions of the Splunk platform that contain an updated datetime.xml.

To fix the problem, download and install a version of the Splunk platform that contains the fixed datetime.xml. The following table lists the minor versions that include the fixed file for each major version that Splunk currently supports, and provides links on how to upgrade Splunk Enterprise and Splunk Light. Apply the fixed version directly over the existing Splunk platform instance to patch it.

Major version | Minor version with patched file | Released? | Splunk platform upgrade instructions
6.6 | | Yes | Splunk Enterprise
7.0 | (versions 7.0.12 and 7.0.13 are Splunk Cloud-only releases) | Yes | Splunk Enterprise
7.1 | 7.1.10 | Yes | Splunk Enterprise
7.2 | | Yes | Splunk Enterprise
7.3 | 7.3.3 | Yes | Splunk Enterprise
8.0 | 8.0.1 | Yes | Splunk Enterprise

Make modifications to the existing datetime.xml file on your Splunk platform instances

If you feel comfortable with making changes to the datetime.xml file directly, you can do so without upgrading Splunk Enterprise or Splunk Light. If you are a Splunk Cloud customer that uses forwarders to send data to your Splunk Cloud instance, you can use this procedure to update those instances.

Use caution when making changes to the $SPLUNK_HOME/etc/datetime.xml file. Outside of fixing this specific issue, never make changes to this file. Typos and incompatible characters in the file can cause direct, lasting negative effects on timestamp recognition and data ingestion. If you are not comfortable with making these changes, consider one of the previous options for fixing this problem, or contact Splunk Support for assistance.

You must complete these changes on all affected Splunk platform instances by January 1, 2020, or you will experience timestamp recognition problems from that point forward.

  1. Using either a shell prompt or your operating system file management utilities, go to the $SPLUNK_HOME/etc directory in your Splunk Enterprise installation.
  2. Open datetime.xml for editing with a text editor.
  3. Search for and replace the following strings, according to this table:
    Search for this string:
    <text><![CDATA[(?:^|source::).*?(?<!\d|\d\.|-)(?:20)?([901]\d)(0\d|1[012])([012]\d|3[01])(?!\d|-| {2,})]]></text>
    Replace with this string:
    <text><![CDATA[(?:^|source::).*?(?<!\d|\d\.|-)(?:20)?([9012]\d)(0\d|1[012])([012]\d|3[01])(?!\d|-| {2,})]]></text>
    Search for this string:
    <text><![CDATA[(?:^|source::).*?(?<!\d|\d\.)(0\d|1[012])([012]\d|3[01])(?:20)?([901]\d)(?!\d| {2,})]]></text>
    Replace with this string:
    <text><![CDATA[(?:^|source::).*?(?<!\d|\d\.)(0\d|1[012])([012]\d|3[01])(?:20)?([9012]\d)(?!\d| {2,})]]></text>
  4. Save the file and close it. If the editor asks if you want to overwrite the file, answer yes.
  5. Confirm that the datetime.xml file has been updated.
  6. Restart the Splunk platform. Your Splunk platform instance is now patched.
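If you have many instances, the same two substitutions can be scripted. This is a hedged sketch, not a Splunk-provided tool: the replaced fragment happens to be identical in both search/replace rows of the table above, so a single string replacement covers both, but always verify the result against the table and keep a backup:

```python
from pathlib import Path

# Sketch only: widen the two-digit-year class [901]\d to [9012]\d.
# This performs both search/replace rows from the table above, since
# the changed fragment "(?:20)?([901]\d)" is identical in each.
def patch_datetime_xml(path):
    p = Path(path)
    text = p.read_text()
    fixed = text.replace(r"(?:20)?([901]\d)", r"(?:20)?([9012]\d)")
    Path(str(p) + ".bak").write_text(text)  # backup before overwriting
    p.write_text(fixed)
    return text != fixed  # True if the file needed patching
```

After patching, restart the Splunk platform as in step 6 of the manual procedure.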

Solutions for customers with large installations

If you have a large on-premises deployment of many Splunk platform instances, contact Professional Services for assistance. If you feel comfortable with deploying Splunk apps, you can also use the app deployment solution, as described in the "Solutions" section.

Validate timestamp extraction after an update

After you have applied one of the solutions described in this topic, you can use the following procedure to validate that timestamp extraction works as expected.

As part of this process, you must temporarily configure the input processor to accept timestamps that occur in the future. This procedure includes instructions for configuring the MAX_DAYS_HENCE setting in the default stanza of the props.conf configuration file. After you have successfully validated timestamp extraction, you can remove the MAX_DAYS_HENCE setting from props.conf to restore default timestamp extraction behavior.

  1. Paste the following text into a text editor:
    19-12-31 23:58:44,Test Message  - datetime.xml testing - override - puppet managed forced restart
    20-01-02 23:58:54,Test Message  - datetime.xml testing - override - puppet managed forced restart
    21-01-01 23:59:04, Test Message - datetime.xml testing - override - puppet managed forced restart
  2. Save the text as a text file, for example, test_file.csv, to a place that is accessible from all of your Splunk platform instances.
  3. On the Splunk platform instance that you want to validate, adjust the MAX_DAYS_HENCE setting for the [default] stanza in the $SPLUNK_HOME/etc/system/local/props.conf configuration file.
    MAX_DAYS_HENCE = 400
  4. Restart the Splunk platform.
  5. Using the Splunk CLI, add the text file you saved earlier as a oneshot monitor to the Splunk platform instance that you want to validate.
    $SPLUNK_HOME/bin/splunk add oneshot -source test_file.csv -sourcetype csv -index main
  6. Perform a search on the text in Step 1. In the search results, the "Time" column should show the January 2, 2020 event with its two-digit year resolved to 2020, and the January 1, 2021 event with its two-digit year resolved to 2021.
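The year resolution you should see can be illustrated with Python's strptime, whose %y directive resolves two-digit years the same way for these sample events. This is an analogy only, not Splunk's parser:

```python
from datetime import datetime

# Illustration only: Python's %y resolves 00-68 to 20xx, matching the
# year resolution expected for the test events above.
for line, year in [("19-12-31 23:58:44", 2019),
                   ("20-01-02 23:58:54", 2020),
                   ("21-01-01 23:59:04", 2021)]:
    parsed = datetime.strptime(line, "%y-%m-%d %H:%M:%S")
    assert parsed.year == year
```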

File integrity errors after an update to the file

If you perform either the file update or the direct file modification solution, Splunk Web displays a message like the following upon restart:

File Integrity checks found 1 files that did not match the system-provided manifest. Review the list of problems reported by the InstalledFileHashChecker in splunkd.log File Integrity Check View ; potentially restore files from installation media, change practices to avoid changing files, or work with support to identify the problem.

You can safely disregard this message in these two scenarios (file update and direct file modification) only. After you upgrade to a version that has the fixed datetime.xml, this message should no longer appear.

More information

Transparent huge memory pages and Splunk performance

This documentation applies to the following versions of Splunk® Enterprise: 6.0, 6.0.1, 6.0.2, 6.0.3, 6.0.4, 6.0.5, 6.0.6, 6.0.7, 6.0.8, 6.0.9, 6.0.10, 6.0.11, 6.0.12, 6.0.13, 6.0.14, 6.0.15, 6.1, 6.1.1, 6.1.2, 6.1.3, 6.1.4, 6.1.5, 6.1.6, 6.1.7, 6.1.8, 6.1.9, 6.1.10, 6.1.11, 6.1.12, 6.1.13, 6.1.14, 6.2.0, 6.2.1, 6.2.2, 6.2.3, 6.2.4, 6.2.5, 6.2.6, 6.2.7, 6.2.8, 6.2.9, 6.2.10, 6.2.11, 6.2.12, 6.2.13, 6.2.14, 6.2.15, 6.3.0, 6.3.1, 6.3.2, 6.3.3, 6.3.4, 6.3.5, 6.3.6, 6.3.7, 6.3.8, 6.3.9, 6.3.10, 6.3.11, 6.3.12, 6.3.13, 6.3.14, 6.4.0, 6.4.1, 6.4.2, 6.4.3, 6.4.4, 6.4.5, 6.4.6, 6.4.7, 6.4.8, 6.4.9, 6.4.10, 6.4.11, 6.5.0, 6.5.1, 6.5.1612 (Splunk Cloud only), 6.5.2, 6.5.3, 6.5.4, 6.5.5, 6.5.6, 6.5.7, 6.5.8, 6.5.9, 6.5.10, 6.6.0, 6.6.1, 6.6.2, 6.6.3, 6.6.4, 6.6.5, 6.6.6, 6.6.7, 6.6.8, 6.6.9, 6.6.10, 6.6.11, 6.6.12, 7.0.0, 7.0.1, 7.0.2, 7.0.3, 7.0.4, 7.0.5, 7.0.6, 7.0.7, 7.0.8, 7.0.9, 7.0.10, 7.0.11, 7.0.13, 7.1.0, 7.1.1, 7.1.2, 7.1.3, 7.1.4, 7.1.5, 7.1.6, 7.1.7, 7.1.8, 7.1.9, 7.1.10, 7.2.0, 7.2.1, 7.2.2, 7.2.3, 7.2.4, 7.2.5, 7.2.6, 7.2.7, 7.2.8, 7.2.9, 7.3.0, 7.3.1, 7.3.2, 7.3.3, 7.3.4, 8.0.0, 8.0.1, 8.0.2


Comments

Hi Jezzaaaa,

Thanks for your comment. Yes, the current fix does extend the Unix epoch range to November of 2023. While I can't speculate or provide an estimate on future fixes for this problem, I can say that we are aware of the problem and are evaluating the best ways to solve it permanently. You can speak with your customer service representative for further details if you have a support entitlement with us.

Malmoore, Splunker
January 13, 2020

I notice the updated file contains an adjustment that seems unrelated to the 2-digit year. Instead, this seems to be a fix for "utcepoch". The regexp component is "(?:1[012345]|9)\d{8}" being changed to "(?:1[0123456]|9)\d{8}". I think this is an attempt to extend the epoch seconds beyond 1599999999 (Sep 13 2020). However it appears to only extend it to 1699999999 (Nov 15 2023) which doesn't seem to be quite far enough into the future. Is this likely to be addressed in a future upgrade?

January 6, 2020

Hi Mxg142,

I'm sorry that you were having trouble leaving comments on this page. There was a problem with the page loading and I've submitted a ticket to have it addressed. Unfortunately, the way that our site is set up, only administrators can delete comments. I've deleted the 5 extra you posted.

I'm glad that you were able to successfully troubleshoot your problem in a test environment. The reason why the documentation might appear vague with regards to MAX_DAYS_HENCE is because we want you to confirm that every instance is updated. Depending on what role the instance plays in your environment, you might need to perform the procedure on any of the instances there. We can't just say "perform it only on the HF" or "perform it on the search head" because we can't possibly know how those instances operate in your environment.

Thank you for your understanding.

Malmoore, Splunker
January 1, 2020

I was able to confirm in a distributed test environment (one HF, two non-clustered indexers, one SH) that adding the max_days_hence setting in the props.conf and "splunk add oneshot" command works as expected when both were done on only the Heavy Forwarder v7.2.6.

December 31, 2019

Also maybe let people edit/delete their comments in this web app. I entered my previous comment and the browser sat and spun doing nothing, so I tried entering again with nothing happening a few more times. Meanwhile, unknown to me, that same comment has been added 6 times and I cannot delete them.

December 31, 2019

Can anyone from Splunk please confirm on what component the MAX_DAYS_HENCE setting in props.conf needs to be set, and Splunk restarted? If we have a distributed environment, would we need to put MAX_DAYS_HENCE on just the search head? Indexer and search head? HF, indexer, and search head? The documentation is extremely vague and does not specify which distributed component actually requires the MAX_DAYS_HENCE setting; it just says "the platform instance you want to validate". I saw Rongshengfang mention it's just the search head in the comments, but this should really be confirmed by Splunk themselves and added to this DOCUMENTATION!

However, the props.conf documentation implies it's honored by the heavy forwarder (input layer):
* The maximum number of days in the future, from the current date as
provided by the input layer (for example, forwarder current time, or
modtime for the file)

December 31, 2019

Hi Mxg142,

Look in the “Time” column in the search results. You should see the correct two-digit year for 2020, e.g. “1/2/20”. In 2020, this will appear as correct but you would need to submit a year-2021 or later event and increase MAX_DAYS_HENCE to something between 365 and 730 to test that future timestamp ingestion is working properly.

Hi Akhilsunkari,

The best move is to upgrade or update them all. That way, you know that you are not affected by the problem. If your forwarders do not do local processing of data or process structured data, then the situation is not as critical, but I would be remiss in telling you that not updating/upgrading is okay.

Malmoore, Splunker
December 27, 2019

This has still not been clarified and it's 4 days before the New Year.... What "timestamp" is the one we need to look at, and what is the expected behavior for the validation steps? There are multiple fields that could be considered "timestamps" in that log event: the "_time" field, the "Time" column, the "date" field, and the timestamp text within the event itself. Also, your directions say "The text with the two digit "20" should have a timestamp with the correct two-digit year of 2020." Which should we see, 20 or 2020? And for which timestamp?

December 27, 2019

Hello, we have Splunk Enterprise 8.0.0 and I have upgraded to 8.0.1 (the third solution). Do I also need to deploy datetime.zip to all the UFs? We have UFs installed on 200+ servers. Please advise.


December 26, 2019

Hi Mohammadgafoor,

I will always be a proponent of upgrading if you can do so. If you can't, then you can either deploy the app, replace the file, or edit the existing file directly, and timestamping will work properly again. This is, of course, an unsupported solution, but it will work.

Have a happy holiday!

Malmoore, Splunker
December 23, 2019

Hi Malmoore,

Right now, our Splunk Enterprise server is on version 6.2.3, build 264376. If we apply the suggested workaround on the datetime.xml file of our instance instead of upgrading to 7.x/8.x, will it work?


December 22, 2019

Hi all,

Hopefully the holidays are going well even if you're having to deal with this inconvenience.

Hi Rongshengfang! We actually do ask you to temporarily adjust the timestamp validity window with MAX_DAYS_HENCE in the verification instructions, that's Step 3.

Hi Esalesapns2! We are constantly reviewing changes to the regex and determining not only what is most effective but will also work for all platforms involved. Thank you for your suggestion, we'll review it and adjust the regex accordingly.

Hi MichaelRye! As long as the DATETIME_CONFIG = CURRENT configurations are within specific sourcetype stanzas, e.g. “[mysourcetype]”, they will override the [default] stanza in the app we provide for the datetime.xml temporary fix, regardless of app-name precedence.

Hi Cvinsonjr! There is no support for Windows 7, but there is for Windows Server 2012 and 2012 R2. You can patch any version by replacing or editing the file directly, or using the app.

Thanks for your patience.

Malmoore, Splunker
December 20, 2019

MAX_DAYS_HENCE also needs to be added to $SPLUNK_HOME/etc/system/local/props.conf on the search head where the validation is performed in addition to the indexers. Otherwise the validation would fail. Can you add that to the validation procedure? Thanks!

December 20, 2019

I followed the procedure to update the datetime.xml file on the servers. However when I followed the verification steps, I found no log assigned with date_year 2020 as claimed by the verification step #6 "Perform a search on the text in Step 1. The text with the two digit "20" should have a timestamp with the correct two-digit year of 2020.". Splunk extracted hour/time/second correctly for both logs, but assigned date 2019-12-19 to both. Can you clarify if this is expected behavior? Thanks!

December 20, 2019

Would it be better if the hour extract on line 38 was:
instead of:

December 19, 2019

The temporary app install version of the fix for this issue provides these two apps:

  • all_date_patch_props
  • idxc_date_patch_props

We have other apps installed which reference the "DATETIME_CONFIG" config stanza. They are likely not in the "[default]" props group, but we will need to look further to be sure. Some that are showing up in btool mainly specify it in the format of DATETIME_CONFIG = CURRENT. One example of this is the SplunkBase app Splunk_TA_nix (https://splunkbase.splunk.com/app/833/).

The potential issue with this lies with lexicographical order of config precedence for the app directory. The capital "S" of the Splunk_TA_nix app comes before the lowercase "i" of the idxc_date_patch_props app in the timestamp recognition fix app lexicographically. This would cause the timestamp fix app config to not be in effect for the Splunk_TA_nix app.

Is this a concern?

December 19, 2019

Is there a patched version of the UF that supports Windows 7, Server 2012, or Server 2012 R2?

December 19, 2019

Hi Team,

A couple of questions from our client.

• When and how did we originally discover this Splunk issue and fix?
• What is the usual turnaround time from the discovery of the issue up to the release of comms?


December 17, 2019

Useful search to show any raw data that might be impacted by this issue:
index=* |regex _raw="[/ -]19[, -]" |stats first(_raw) count by index, sourcetype, host

I ran this for 24 hours and can see that my Windows DHCP logs definitely use a two-digit year:

index sourcetype host first(_raw) count
windows DhcpSrvLog SERVER02 11,12/16/19,11:12:25,Renew,,ABC00286.SERVER.MyDomain.com,,,,,,,,,,,,,0
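The filter regex from the search above can be sanity-checked outside Splunk. A small Python illustration, using a shortened copy of the DHCP line as sample data:

```python
import re

# The filter from the search above: "19" bounded by /, space, or -
# on the left and by comma, space, or - on the right.
pattern = re.compile(r"[/ -]19[, -]")

dhcp_raw = "11,12/16/19,11:12:25,Renew,,ABC00286.SERVER.MyDomain.com"
iso_raw = "2019-12-16T11:12:25Z Renew"

assert pattern.search(dhcp_raw)      # two-digit year is flagged
assert not pattern.search(iso_raw)   # four-digit year is not
```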

December 16, 2019

And we just released version 8.0.1. With this release, all supported versions of Splunk Enterprise and the Universal Forwarder have now been patched. Thank you again for your continued patience!

Malmoore, Splunker
December 12, 2019

Hi all,

Some updates:

- Splunk released version 7.1.10 today which contains the patched datetime.xml file. You can upgrade to this version on all of your Splunk 7.1 infrastructure.
- Splunk has determined that the UF situation where file monitoring encounters an unknown source type is a non-issue. Therefore it has been removed from the release notes as a condition.

To address some questions:
- You should see Splunk tag the event with the correct two-digit year for 2020 when you perform a search for events with two-digit years after the fix has been applied. Timestamps will still be two digits.
- UFs for version 7.3.3 do contain the patched file. If any version has been patched, it's patched for every Splunk type.
- All Splunk types except indexer cluster peers receive the "all_date_patch_props" app. This is in the README file that comes with the apps.
- We still can't promise ship dates.

Hopefully this helps. Thanks for your continued patience and understanding.

Malmoore, Splunker
December 12, 2019

No update for 7 days already... When can we expect 8.0.1?

December 12, 2019

On standalone indexer, which app should be installed? Is it idxc… (updating path in props.conf) or all…?

December 11, 2019

Are the UFs for version 7.3.3 updated with the current date file fix? Or is the list for 7.3.3 ONLY for Splunk instances?

December 11, 2019

Can someone clarify/confirm in the documentation where it says "Perform a search on the text in Step 1. The text with the two digit "20" should have a timestamp with the correct year of 2020." what timestamp/field we should expect to see updated?

There are multiple timestamps in the search results. I implemented overwriting the existing datetime.xml with the new one and went through the steps to "Validate timestamp extraction after an update" as described in the document; however, the "Time" field and the timestamp within the event still show two-digit years. ONLY the "_time" field is showing the correct full four-digit year. Is this the expected result, or should each and every instance of a timestamp be 4 digits?

December 10, 2019

Reduces both false negatives and false positives. Adjust as needed.

| dedup index, sourcetype
| fields index, sourcetype, timeendpos,timestartpos, _raw
| eval timestamp=substr(_raw, timestartpos+1, timeendpos-timestartpos)
| regex timestamp="(((?:^|\D)\d{1,2}[-\/]\d{1,2}[-\/]19[^\d])|((?:^|\D)19[-\/]\d{1,2}[-\/]\d{1,2}[^\d])|((?:^|\D)\d{1,2}\s[-\/]\s\d{1,2}\s[-\/]\s19[^\d])|((?:^|\D)19\s[-\/]\s\d{1,2}\s[-\/]\s\d{1,2}[^\d])|((?:^|\D)([a-zA-Z]{3}[- \/]+\d{1,2}[- \/]+19[^:\d]))|((?:^|\D)19[- \/][a-zA-Z]{3}[- \/]\d{1,2}[^:\d])|((?:^|\D)\d{1,2}[- \/]+[a-zA-Z]{3}[- \/]+19[^:\d]))"
| table index, sourcetype, timestamp

Regex from Michael Uschmann.

December 9, 2019

Where can we review a log of changes made to this page and to the datetime.xml fix app? Asking because it looks like there are now at least three versions of the app which have been released here, and because the "last significant update" date and time at the top of the page keeps changing.

Thank you.

December 6, 2019

Hi Mxg142 (and everyone),

A few updates in the last hour: We have released patches for versions 6.6, 7.0, and 7.2 today. Currently, that leaves versions 7.1 and 8.0 un-patched. We are working to provide patches for those versions but I can't state their availability time. Please continue watching this space.

On the effort to determine which instances need patching, there is a query for that in the app download. That said, we want to make sure what we deliver works for all of our customers ideally. That takes time.

On whether or not we're inducing panic - I understand the concern, yet respectfully disagree. We would rather over-report a problem and have you determine that it's not as big a deal for you, than under-report it and end up with some of our customers losing data. Obviously, we want to get it right on the nose.

We thank you again for your continued patience and understanding. We'll continue to post updates as they arrive. We will get through this!

Malmoore, Splunker
December 5, 2019

Thank you to Pratt daniel, Bjcross, Ssiat479 for helping out and providing a search query to validate risk for two digit year/Jan 1 deadline. Personally I think Splunk should have provided a query to begin with in the Release Notes and save all of us the hassle and panic since I bet most of us are not ingesting two digit years anyway. The September 2020 deadline is definitely a critical must-do, but I think the knee jerk of everyone reading this initially was "we must get this in by January!!" when in reality the risk for that deadline is quite minimal for most users.

December 5, 2019

Hi everyone,

Wanted to thank you for your continued patience and understanding as well as answer some additional questions with regard to this problem. While I can't provide future statements on releases, I can say without hesitation that we're working to get releases out as quickly as possible.

Version was released this morning.

On Windows compatibility - to be safe, substitute '/' with '\' in configuration files. Otherwise, the apps should work as delivered.

On the issue with universal forwarders and file monitor input. I'm working on getting specific clarification. In my understanding the bug triggers if you don't specify a source type, and the monitor encounters a file for which it has no pre-trained source type. We'll update the release note when we have that specific information.

Again, thank you all for your continued patience and understanding on this.

Malmoore, Splunker
December 5, 2019

Building off of the searches provided by Bjcross and others, we have found the following search to be even more helpful in identifying specific hosts reporting two digit years in their timestamps:

| eval timestamp_length = timeendpos-timestartpos+1,
estimated_timestamp=ifnull(substr(_raw, timestartpos+1, timeendpos-timestartpos),"unknown"),
combined=timestamp_length." | ".estimated_timestamp
| where !(match(estimated_timestamp,"2019")) AND !(match(estimated_timestamp,"\d{10}"))
| rex field=estimated_timestamp "^(?<estimated_timestamp>.*)\.\d+$"
| rex field=estimated_timestamp "(?<bad>(?<!20|:)19(?!:))"
| where isnotnull(bad)
| stats latest(combined) by host
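The substr arithmetic these searches share can be checked outside Splunk. A Python equivalent for illustration: timestartpos/timeendpos are the 0-based offsets Splunk records for the extracted timestamp, while SPL's substr is 1-based, hence the +1 in the eval:

```python
# Python equivalent of the SPL eval used in these searches:
#   estimated_timestamp = substr(_raw, timestartpos+1, timeendpos-timestartpos)
# substr() is 1-based in SPL; Python slicing is 0-based, so the two
# expressions select the same characters.
def estimated_timestamp(raw, timestartpos, timeendpos):
    return raw[timestartpos:timeendpos]

raw = "12/16/19 11:12:25 Renew"
assert estimated_timestamp(raw, 0, 17) == "12/16/19 11:12:25"
```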

December 5, 2019

Does the provided app all_date_patch_props work with Windows as well? Does it need "/" changed to "\" for Windows paths?

December 5, 2019

Building off Pratt Daniel's search, this will reduce the output so that the list is more manageable.

| dedup sourcetype
| fields sourcetype, timeendpos,timestartpos, _raw
| eval timestamp_length = timeendpos-timestartpos+1
| dedup sourcetype, timestamp_length
| eval estimated_timestamp=ifnull(substr(_raw, timestartpos+1, timeendpos-timestartpos),"unknown")
| rex field=estimated_timestamp "(?<bad>(?<!20)19)"
| search bad=*
| table sourcetype, timestamp_length, estimated_timestamp

December 3, 2019

Hello, I also don't quite understand the wording for the UF:
When they have been configured with a monitor input, and that input subsequently encounters an unknown file type

What does this mean?

We have config like this:
disabled = false

Are we affected?

December 3, 2019

When can we expect release of Splunk 8.0.1? Please release this update ASAP. Updating is the easiest solution for this issue in my company.

December 3, 2019

I read the following part, but couldn't understand what "unknown file type" means.
When they have been configured with a monitor input, and that input subsequently encounters an unknown file type

Is "unknown file type" about source types that aren't included in the pretrained source types shown in the following document?

-List of pretrained source types

December 2, 2019

Will the app be posted to Splunkbase?

December 2, 2019

Hi everyone,

Wanted to provide an update as to where we stand with this issue. We understand the inconvenience and apologize for that. Again, we can't provide forward looking statements owing to legal reasons, which means we can't tell you when a specific updated version will be released, no matter how many times you ask, nor how nicely.

If you run a version of on-premises Splunk software that is lower than 6.6, your options are:
- Upgrade to at least 7.3, and 8.0 if you can
- Download our app solution and use deployment server/cluster master to distribute it amongst affected instances.

If you use the TIME_FORMAT setting to configure timestamp recognition, this problem should not affect you. But, to ensure you have no problems with recognition, it is a good idea to update any instance that ingests data anyway.

This page will continue to be updated as warranted in the near term. Thank you for your continued patience and understanding.

Malmoore, Splunker
December 2, 2019

@Mxg142 Here is a query I used that should help you with this bug by identifying the sourcetypes that have two-digit-year raw timestamps.

| dedup sourcetype
| fields sourcetype, timeendpos,timestartpos, _raw
| eval timestamp_length = timeendpos-timestartpos+1
| dedup sourcetype, timestamp_length
| eval estimated_timestamp=coalesce(substr(_raw, timestartpos+1, timeendpos-timestartpos),"unknown")
| table sourcetype, timestamp_length, estimated_timestamp

Pratt daniel
December 2, 2019

Any word on when the rest of the patched datetime.xml versions of Splunk will become available for download? As of this moment, I only see 7.3.3 as released. Mostly interested in

December 2, 2019

To which version do I need to upgrade? I have Splunk 6.5.4.
As a permanent solution to the problem, only versions above 6.6 are specified above.

December 2, 2019

When can we expect release of Splunk 8.0.1?

November 28, 2019

Download worked. Thanks

November 28, 2019

The workaround with the app:
"Download and deploy an app that temporarily replaces the defective datetime.xml file with the fixed file"
is not downloadable from Google Drive (https://drive.google.com/file/d/17WR7mBqyhDLEOwQAipfp-MW8aYWVT5wZ/view?usp=sharing).
Where can we download it?

November 27, 2019

If `TIME_FORMAT` is set on the parsing node (indexer, HF, and UF in some cases), I think this problem has no effect.

That's because I believe `datetime.xml` is the file used to automatically extract timestamps when `TIME_FORMAT` is not set, and `TIME_FORMAT` follows the Unix strptime format, NOT `datetime.xml`.

If this is correct, I think this workaround should be written in this doc.
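
For reference, a minimal props.conf sketch of what I mean. The stanza name, prefix, and format string below are hypothetical and must be adjusted to match your actual events:

```ini
# Hypothetical stanza in props.conf on the parsing node (indexer/HF, or a UF
# using INDEXED_EXTRACTIONS or force_local_processing).
# With TIME_FORMAT set, the timestamp is parsed via strptime() and
# datetime.xml is not consulted for this sourcetype.
[my_sourcetype]
TIME_PREFIX = ^\[
TIME_FORMAT = %d/%b/%Y:%H:%M:%S %z
MAX_TIMESTAMP_LOOKAHEAD = 32
```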

November 27, 2019

Hi everyone,

First off, thank you in advance for your continued patience. We appreciate that this is very inconvenient to say the least. I will try to answer as many questions as I can.

On future patched versions: When those versions come out, this page will be updated.

On the support for versions lower than 6.6: Currently, you can download and apply the patched file to your systems, or modify it directly. While there is no official support, this should resolve the problem until you can upgrade to a fixed version.

On whether or not you can ignore warnings in splunkd.log, Monitoring Console, or the system bar: Yes, for this situation only.

We have updated our guidance on when this problem affects universal forwarders.

The best advice I can give everyone right now is to keep monitoring this page. I can't provide forward looking statements, owing to legal rules, and I kinda like this job, so I'm gonna try and keep it. :)

Thank you for your understanding.

Malmoore, Splunker
November 27, 2019

Our instances have "version 4.0", which is the same as in the link provided above.
Do I still need to update/replace the file?

November 26, 2019

Another note: when replacing datetime.xml, be sure to match the file permissions of the original file so that it can be read.
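
One way to do that from a shell is to record the original file's mode before copying and reapply it afterward. This is only a sketch with placeholder paths in /tmp, not the real $SPLUNK_HOME locations:

```shell
#!/bin/sh
# Placeholder paths; in practice orig would be $SPLUNK_HOME/etc/datetime.xml.
orig=/tmp/demo_datetime.xml
patched=/tmp/demo_datetime_patched.xml

# Set up demo files standing in for the original and the patched copy.
printf 'original\n' > "$orig"
chmod 644 "$orig"
printf 'patched\n' > "$patched"

# Record the original mode (GNU stat; the fallback handles BSD/macOS stat).
mode=$(stat -c '%a' "$orig" 2>/dev/null || stat -f '%Lp' "$orig")

# Replace the file, then explicitly reapply the recorded permissions
# (covers the case where the destination was removed and recreated).
cp "$patched" "$orig"
chmod "$mode" "$orig"
```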

Rchristian splunk, Splunker
November 26, 2019

This is the sort of thing that -should- be trivially handled by the deployment server. I suspect that isn't the case, which is another sign that the DS, overall, needs some attention.

November 26, 2019

What happens with version 6.0.15? Will this work for Splunk versions earlier than 6.6? We have this deployed worldwide and need guidance on what to do.

November 26, 2019

After updating the datetime.xml file, we found the error "InstalledFilesHashChecker" in splunkd.log and our Splunk web UI.
Can we suppress the messages, or should they be accepted as expected with this patch?

November 26, 2019

Do we need to apply it to UFs as well? Can it be done through the deployment server?

November 26, 2019

Can someone provide a search query to see what percentage of our ingested data uses a two-digit-year timestamp?

November 26, 2019

What Akram said: do UFs need this file, and if so, what is the procedure to deploy it through the deployment server?

November 26, 2019

V.8.0.1 is mentioned in the upgrade path - when will that version be released?


November 26, 2019

It looks like the patch postpones the issue to January 1, 2030, and November 14, 2023. When will a more robust fix be released?

November 25, 2019

Does this patch only apply to servers? What about UFs? Do we need to apply it to UFs as well? Can it be done through the deployment server?

November 25, 2019

Hi Davidstuffle,

At this time our guidance is as written: Stop, update the file, restart. If there is a change to that guidance, we will update this page. We thank you for your patience in this developing situation and apologize for what we understand is significant inconvenience.

Malmoore, Splunker
November 25, 2019

Is it necessary to stop, update the file, then start, or can the file be updated in advance and Splunk restarted afterward?

November 25, 2019
