Manage data integrity
Splunk Enterprise data integrity control helps you verify the integrity of data that it indexes.
When you enable data integrity control for an index, Splunk Enterprise computes hashes on every slice of data using the SHA-256 algorithm. It then stores those hashes so that you can verify the integrity of your data later.
Splunk Enterprise supports data integrity control on local indexes only. There is no support for SmartStore indexes.
Data integrity control works only on Splunk Enterprise. Splunk Cloud Platform does not use data integrity control.
How data verification works
When you enable data integrity control, Splunk Enterprise computes hashes on every slice of newly indexed raw data and writes them to an l1Hashes file. When the bucket rolls from hot to warm, Splunk Enterprise computes a hash on the contents of the l1Hashes file and stores the computed hash in an l2Hash file. Splunk Enterprise stores both hash files in the rawdata directory for that bucket.
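As a concrete illustration, you can list the hash files that data integrity control produces for each bucket. The following shell sketch assumes a default installation path and the default index; the paths are illustrative, and the exact hash file names can carry a suffix, so a pattern match is used.
# Illustrative only: list the per-bucket hash files under the default index path.
# The l1Hashes and l2Hash names come from the description above.
ls /opt/splunk/var/lib/splunk/defaultdb/db/db_*/rawdata/ | grep -Ei 'l1hashes|l2hash'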
Data integrity control generates hashes on newly indexed data. To ensure that data that comes from a forwarder is secure, encrypt that data using SSL. For more information, see About securing Splunk with SSL.
Check data verification hashes to validate data
To verify the integrity of an index or bucket, run one of the following CLI commands:
./splunk check-integrity -bucketPath [ bucket path ] [ -verbose ]
./splunk check-integrity -index [ index name ] [ -verbose ]
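For example, to verify every bucket in a hypothetical index named web_logs (the index name is illustrative, not from the original documentation):
./splunk check-integrity -index web_logs -verbose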
Configure data integrity control
To configure data integrity control, edit the indexes.conf configuration file. For each index where you want to enable data integrity control, set the enableDataIntegrityControl setting to true. The default value for this setting on all indexes is false (off).
enableDataIntegrityControl = true
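For example, a minimal indexes.conf stanza that turns on data integrity control for a hypothetical index named web_logs might look like the following. The index name and paths are illustrative, not part of the original documentation.
# Illustrative stanza for a hypothetical index named web_logs
[web_logs]
homePath   = $SPLUNK_DB/web_logs/db
coldPath   = $SPLUNK_DB/web_logs/colddb
thawedPath = $SPLUNK_DB/web_logs/thaweddb
enableDataIntegrityControl = true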
Data integrity in clustered environments
In a clustered environment, the cluster manager and all the peers must run Splunk Enterprise 6.3 or higher to enable accurate index replication.
Optionally modify the size of your data slice
By default, data slice sizes are set to 128KB, which means that a data slice is created and hashed every 128KB. You can optionally edit the indexes.conf configuration file to specify the size of each slice.
rawChunkSizeBytes = 131072
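For example, to create and hash a slice every 256KB instead of the default 128KB, you could set the value to 262144 (256 x 1024) in the stanza for the index. The index name below is illustrative.
# Illustrative only: 256KB slices for a hypothetical index named web_logs
[web_logs]
enableDataIntegrityControl = true
rawChunkSizeBytes = 262144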
Store and secure data hashes
For optimal security, you can optionally store data integrity hashes outside the instance that hosts your data, such as on a different server. To avoid naming conflicts, store the hashes from different buckets in separate directories.
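The following shell sketch shows one way to do this, assuming a hypothetical bucket path, destination host, and per-bucket target directory. The host name, paths, and use of rsync are illustrative and not part of the product.
# Illustrative only: copy the hash files for one bucket to a per-bucket
# directory on a separate server. Exact hash file names can carry a suffix,
# so wildcards are used.
BUCKET=/opt/splunk/var/lib/splunk/defaultdb/db/db_1577836800_1577750400_10
rsync -av "$BUCKET"/rawdata/l1Hashes* "$BUCKET"/rawdata/l2Hash* \
    backup-host:/srv/splunk-hashes/defaultdb/db_1577836800_1577750400_10/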
Regenerate hashes
If you lose the data hashes for a bucket, use one of the following CLI commands to regenerate the hash files for a bucket or index. These commands extract the hashes that exist in the journal:
./splunk generate-hash-files -bucketPath [ bucket path ] [ -verbose ]
./splunk generate-hash-files -index [ index name ] [ -verbose ]
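For example, to rebuild the hash files for every bucket in a hypothetical index named web_logs (the index name is illustrative):
./splunk generate-hash-files -index web_logs -verbose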