Splunk® User Behavior Analytics

Administer Splunk User Behavior Analytics


Collect diagnostic data from your Splunk UBA deployment

Collect diagnostic data from your Splunk UBA deployment to share with Splunk Support. You can download diagnostic data from all services (or specific services) in Splunk UBA for a specific time range.

In distributed deployments, Splunk UBA collects logs in the following manner:

  • If a service is running on a different node from where you are requesting the diagnostic data, Splunk UBA obtains the log file from the appropriate node.
  • For services that are running on multiple nodes, Splunk UBA collects the logs from all nodes where the service is running.

Download Splunk UBA diagnostic data from the web interface

Perform the following tasks to download Splunk UBA diagnostic data from the web interface:

  1. In Splunk UBA, select System > Download Diagnostics.
  2. In the Past Days field, type the number of days in the past for which you want to collect and download diagnostic data. For example, type 1 to download data from the previous 24 hours.
  3. Select whether to download data from all services or only selected services.
  4. (Optional) Select the check boxes for the services that you want to download data from.
  5. Click OK to collect the diagnostic files as a tar.gz file.
  6. After a few minutes, a message appears prompting you to download the Splunk UBA diagnostics file.

Depending on the time period for which you want to download diagnostics, the process might take up to thirty minutes to return results. If your session ends before the diagnostic file becomes available, download diagnostic data from the command line instead. See Download Splunk UBA diagnostic data from the command line.

You cannot download Apache Spark events from the Splunk UBA web interface because collecting the logs takes more than ten minutes. Download diagnostic data from the command line instead.

Download Splunk UBA diagnostic data from the command line

Perform the following tasks to download Splunk UBA diagnostic data from the command line:

  1. Log in to the Splunk UBA management node as an admin user or a user who has permissions to access log files.
  2. Create a configuration file defining the following parameters:
    • outputFolder (Required) - The location where the extracted tarball is written.
    • retentionPeriodInDays (Optional) - The number of days' worth of log files to extract. The default is 5, meaning that log files older than 5 days are not extracted.
    • diskUsageThreshold (Optional) - The disk usage limit as a percentage. Specify a value from 0 to 100. The default is 80, meaning that the log extraction is not performed if the following value exceeds 80%:
      (<size of all logs you want to extract, before compression> + <current used disk space>) / <total disk space>
      See the worked example after this list.
    • modules (Optional) - The modules whose logs you want to extract. If no modules are defined, then logs for all modules are extracted.
      • The /opt/caspida/conf/caspida-log-extractor-module-log-folder.json file defines all the Splunk UBA modules and where their logs are located.
      • The /opt/caspida/conf/caspida-log-extractor-service-to-modules.json file defines the mapping from the Splunk UBA services listed in /opt/caspida/conf/deployment/caspida-deployment.conf to Splunk UBA modules.
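
    As a worked example of the diskUsageThreshold calculation, using hypothetical values: if the logs selected for extraction total 10 GB before compression, 70 GB of disk space is already in use, and the total disk space is 100 GB, then (10 + 70) / 100 = 80%. With the default threshold of 80, the extraction is not performed.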

    The following example configuration file extracts CaspidaGeneral, CaspidaJobManager, and Spark logs from the last 5 days, with the disk usage threshold set to 90%:

    {
        "outputFolder": "/home/caspida/log_extraction_test",
        "retentionPeriodInDays": 5,
        "diskUsageThreshold": 90,
        "modules": [
            {
                "name": "CaspidaGeneral"
            },
            {
                "name": "CaspidaJobManager"
            },
            {
                "name": "Spark"
            }
        ]
    }
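
    Only the outputFolder parameter is required. A minimal configuration that accepts the defaults for every other parameter might look like the following sketch, where the output path is a hypothetical example:

    {
        "outputFolder": "/home/caspida/log_extraction_minimal"
    }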
    
  3. Run the following command:
    python /opt/caspida/bin/log_extractor/CaspidaLogExtractor.py --config <path_to_configuration_file>

    To extract the log files only on the host where CaspidaLogExtractor.py is run, use the --local parameter:

    python /opt/caspida/bin/log_extractor/CaspidaLogExtractor.py --local --config <path_to_configuration_file>

    The --local parameter causes the script to ignore the outputFolder field in the configuration file. The generated archive of diagnostic data is located in the /tmp/<ip_of_current_machine>_<YYMMDDHHMMSS>.tar.gz file.
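
    When the script finishes, you can inspect or unpack the generated archive with standard tar commands before sending it to Splunk Support. The archive name in this sketch is a hypothetical example; use the actual file name that the script reports:

    # List the contents of the archive (hypothetical file name)
    tar -tzf /tmp/192.0.2.10_231215083000.tar.gz

    # Extract the archive into a working directory for review
    mkdir -p /tmp/uba_diagnostics_review
    tar -xzf /tmp/192.0.2.10_231215083000.tar.gz -C /tmp/uba_diagnostics_review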
Last modified on 15 December, 2023

This documentation applies to the following versions of Splunk® User Behavior Analytics: 5.0.0, 5.0.1, 5.0.2, 5.0.3, 5.0.4, 5.0.4.1, 5.0.5, 5.0.5.1, 5.1.0, 5.1.0.1, 5.2.0, 5.2.1, 5.3.0

