Known and resolved issues
Version 6.4.10
Hunk version 6.4.10 was released on March 14, 2018. This topic lists known issues for Hunk functionality. For a full list of issues, see the Splunk Enterprise 6.4.10 known issues and fixed issues.
Known Issues
Date filed | Issue number | Description |
---|---|---|
2015-09-09 | ERP-1650 | The timestamp data type is not properly deserialized. |
2015-08-05 | ERP-1619 | Searching a newly created archive index before the bucket copy saved search has run causes a file-not-found exception. Workaround: Re-enable the bucket copy saved search and let it run, or force archiving with `\| archivebuckets force=1`, then rerun the search. |
2015-07-07 | ERP-1598 | Generating splits takes too long during the minsplits ramp-up. Workaround: Set `minsplits` equal to `maxsplits`. |
2015-06-16 | ERP-1576 | Report acceleration does not work with smart search index. |
2015-05-12 | ERP-1502 | A non-accelerated pivot search on the Pivot UI page takes a long time to return results. |
2015-01-08 | ERP-1343, SPL-95174 | Splunk Analytics for Hadoop searches fail on corrupted journal.gz files, although Splunk searches run without error. Workaround: Add the journal.gz file to the input path's blacklist (`vix.input.1.ignore = ....`). |
2014-10-27 | ERP-1216 | Data Explorer preview does not honor existing sourcetypes for Big5 or Shift-JIS files. |
2014-10-22 | ERP-1201, ERP-978 | Required-field optimization causes problems with time extraction. With structured data sets (such as CSV, Avro, and Parquet), the product tries to honor the list of required fields passed down by the search, which can cause issues with `_time`. Workaround: Always use index-time `_time` extraction, or add a configuration option at the virtual index level to force the product to always output a set of fields. |
2014-10-03 | ERP-1164 | The report acceleration summary gets deleted when two Splunk Analytics for Hadoop instances point to the same Splunk working directory. Workaround: Make sure that `vix.splunk.home.hdfs` (Working directory in the UI) is unique on each search head that is not in a pool. To keep both instances in the same working directory, instead configure `vix.splunk.search.cache.path` to be unique on each search head. |
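For the ERP-1619 workaround above, the forced archiving can be triggered directly from the Splunk search bar before rerunning the failing search:

```
| archivebuckets force=1
```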
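Several of the workarounds above are applied per virtual index in `indexes.conf`. The following is a hedged sketch, not a definitive configuration: the stanza name, HDFS path, ignore pattern, and split values are placeholders, and the `vix.splunk.search.mr.minsplits`/`maxsplits` setting names are assumptions to verify against your indexes.conf reference.

```ini
# Hypothetical virtual index stanza -- adapt names and values to your deployment
[hadoop_vix]

# ERP-1343: blacklist corrupted journal.gz files on the input path
# (the regex here is illustrative only)
vix.input.1.ignore = .*journal\.gz

# ERP-1598: avoid the slow minsplits ramp-up by setting min equal to max
# (setting names assumed; confirm in your indexes.conf reference)
vix.splunk.search.mr.minsplits = 100
vix.splunk.search.mr.maxsplits = 100

# ERP-1164: give each non-pooled search head a unique working directory
# (placeholder path)
vix.splunk.home.hdfs = /user/splunk/searchhead1
```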
Resolved Issues
No new Hunk-specific issues are resolved in this version.
This documentation applies to the following versions of Hunk®(Legacy): 6.4.10