Tune performance by managing features
An administrator can tune performance of their deployment by toggling the Indicators feature or removing audit logs from the deployment after they have been downloaded.
Enable or disable the indicators feature
Prior to Splunk Phantom 4.8, retrieval of indicator records did not scale for large deployments with hundreds of thousands of indicator records. Improvements were made to enhance performance, but some administrators may still wish to disable the feature entirely.
An administrator can toggle the Indicators feature on or off by running a command from the *nix shell command line.
Disabling the Indicators feature removes it from the Main Menu, from the events page, and from context menus in the investigations page.
When indicators are disabled, the indicator REST APIs return an HTTP 400 response with the following message body:
{ "failed": true, "message": "The indicators feature is not enabled." }
Affected APIs
- /rest/indicator
- /rest/indicator_by_value
- /rest/indicator_artifact
- /rest/indicator_artifact_timeline
- /rest/indicator_stats_indicator_count
- /rest/indicator_stats_top_labels
- /rest/indicator_stats_top_types
- /rest/indicator_stats_top_values
- /rest/ioc
- /rest/indicator_common_container
See REST Indicators.
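A quick way to check whether the feature is currently enabled is to query one of the affected endpoints. The following sketch assumes basic authentication and uses placeholder values for the username, password, and hostname:
curl -k -u <username>:<password> https://<soar_hostname>/rest/indicator
If the Indicators feature is disabled, the request returns the 400 response and message body shown above.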
Toggle the Indicators feature
To disable Indicators:
- SSH to your instance.
  ssh <username>@<phantom_hostname>
- Run the set_preference command.
  phenv set_preference --indicators no
To enable Indicators:
- SSH to your instance.
  ssh <username>@<phantom_hostname>
- Run the set_preference command.
  phenv set_preference --indicators yes
It can take as much as five minutes for the Indicators feature to be hidden from or shown in the Splunk SOAR (On-premises) UI after the set_preference command has been run.
Delete indicators
Indicators can provide valuable insights by cross-correlating cases, reports, or emails that contain data about an event on a network or device. When indicators are not tuned correctly, they can generate an excessive number of records that impact the overall performance of the platform.
An administrator can improve system performance by using the delete_indicators command to delete indicators from the platform.
# phenv delete_indicators -h
usage: phmanage.py delete_indicators [-h] [-v {0,1,2,3}] [--no-color] [--skip-checks]
                                     {truncate,delete} ...

subcommands:
  {truncate,delete}
    truncate   Truncate all indicator tables
    delete     Delete records from indicator tables
This command will permanently delete indicators and indicator_artifact_records from Splunk SOAR (On-premises). The records can't be recovered without restoring Splunk SOAR (On-premises) from a backup. Exercise caution when using this command.
Prior to running this command you should create a backup of Splunk SOAR (On-premises), and pause ingestion so that race conditions aren't created. See Back up a Splunk SOAR (On-premises) deployment.
delete_indicators delete arguments
Use these arguments with the delete subcommand of the delete_indicators command.
# phenv delete_indicators delete -h
usage: phmanage.pyc delete_indicators delete [-h] [--dry-run] [--no-prompt]
                                             [--preserve-cef-fields [PRESERVE_CEF_FIELDS ...]]
                                             [--before BEFORE_TIMESTAMP] [--after AFTER_TIMESTAMP]
                                             [-c CHUNK_SIZE] [--transactional]
Argument | Description |
---|---|
-h, --help | Show this help message and exit. |
--dry-run | Find records, but do not delete them. |
--no-prompt | Do not block on user input. This flag is suitable for running as part of an unsupervised script. |
--preserve-cef-fields <PRESERVE_CEF_FIELDS ...> | Preserve indicators associated with only these CEF fields. Indicators associated with all other CEF fields will be deleted. CEF field names are case-sensitive, and arguments you pass on the command line must not end in commas unless the CEF fields themselves end in commas. For example, --preserve-cef-fields foo bar preserves two CEF fields, foo and bar, but --preserve-cef-fields foo, bar preserves foo, and bar, which is probably not what you want. |
--before <BEFORE_TIMESTAMP> | Records created before this timestamp will be deleted. Records created after this timestamp will not be deleted. The timestamp value can be in various formats, including <yyyy-mm-dd>T<hh:mm:ss>Z or <yyyy-mm-dd>T<hh:mm>Z. |
--after <AFTER_TIMESTAMP> | Records created after this timestamp will be deleted. Records created before this timestamp will not be deleted. The timestamp value can be in various formats, including <yyyy-mm-dd>T<hh:mm:ss>Z or <yyyy-mm-dd>T<hh:mm>Z. |
-c <number of indicators to delete>, --chunk-size <number of indicators to delete> | Maximum number of indicators to delete at one time. |
--transactional | When set, the entire delete operation is done atomically, as a single transaction. Depending on how many indicators your system has, this operation could take a long time. Don't run the operation transactionally if you want to pause and restart the deletion process. |
Examples
Test command parameters by using the --dry-run option first. In the following examples, the --dry-run option is only supported for the second example.
Delete indicator_artifact_records created between July 1 and December 1, 2022, except for those with CEF fields "foo" or "bar", and the related indicators that aren't referenced elsewhere:
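A sketch of such a command, built from the delete arguments described above, might look like the following. The CEF field names foo and bar and the timestamp values are placeholders, and --dry-run is included so that nothing is deleted until you confirm the results.
phenv delete_indicators delete --dry-run --after 2022-07-01T00:00Z --before 2022-12-01T00:00Z --preserve-cef-fields foo bar
When the dry run reports the records you expect, run the same command again without --dry-run to perform the deletion.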
delete_indicators truncate arguments
Use these arguments with the truncate subcommand of the delete_indicators command.
The truncate subcommand truncates all of the indicator tables in Splunk SOAR (On-premises): indicators, artifact_indicators, and indicator_artifact_records. Truncation permanently removes all records from these tables. Exercise caution when using this command.
Prior to running this command you should create a backup of Splunk SOAR (On-premises), and pause ingestion so that race conditions aren't created. See Back up a Splunk SOAR (On-premises) deployment.
# phenv delete_indicators truncate -h
usage: phmanage.py delete_indicators truncate [-h] [--dry-run] [--no-prompt]
Argument | Description |
---|---|
-h, --help | Show this help message and exit. |
--dry-run | Find records, but do not delete them. |
--no-prompt | Do not block on user input. This flag is suitable for running as part of an unsupervised script. |
Examples
Test command parameters by using the --dry-run option first. In the following examples, the --dry-run option is only supported for the second example.
Delete all indicators, artifact_records, and indicator_artifact_records:
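A sketch of such a command, assuming the truncate subcommand and --dry-run flag described above, might be the following. Run the dry run first to confirm the scope, then the second line to perform the truncation.
phenv delete_indicators truncate --dry-run
phenv delete_indicators truncate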
Delete audit logs
Downloading audit logs used to take a long time because all of the records were loaded into memory before being written to a file. In Splunk Phantom release 4.8, audit log downloads were changed to stream records to a file.
An administrator can remove audit logs after they have been manually downloaded and archived by using the delete_audit_logs command.
This command will permanently delete audit records from Splunk SOAR (On-premises). You should create a backup before running this command. The records cannot be recovered without restoring Splunk SOAR (On-premises) from a backup. Exercise caution when using this command.
delete_audit_logs arguments
Use these arguments with the delete_audit_logs command.
# phenv delete_audit_logs -h
usage: delete_audit_logs.py [-h] [--before BEFORE_TIMESTAMP] [--after AFTER_TIMESTAMP]
                            [--categories [CATEGORIES [CATEGORIES ...]]] [--no-prompt]
                            [--dry-run] [-v {0,1,2,3}] [--no-color] [--skip-checks]
Argument | Description |
---|---|
-h, --help | Show this help message and exit. |
--before <BEFORE_TIMESTAMP> | Records created before this timestamp will be deleted. Records created after this timestamp will not be deleted. The timestamp value can be in various formats, including <yyyy-mm-dd>T<hh:mm:ss>Z or <yyyy-mm-dd>T<hh:mm>Z. |
--after <AFTER_TIMESTAMP> | Records created after this timestamp will be deleted. Records created before this timestamp will not be deleted. The timestamp value can be in various formats, including <yyyy-mm-dd>T<hh:mm:ss>Z or <yyyy-mm-dd>T<hh:mm>Z. |
--categories <CATEGORIES <CATEGORIES ...>> | Only delete records with the given categories. Examples of categories: ph_user, container, playbook, system_settings, artifact, decided_list. |
--dry-run | Do not run the DELETE queries. Use this argument to test your parameters before running the command for real. |
--no-prompt | Do not block on user input. This flag is suitable for running as part of an unsupervised script. |
-v <LEVEL>, --verbosity <LEVEL> | Verbosity level: 0 for minimal output, 1 for normal output, 2 for verbose output, and 3 for very verbose output. |
--no-color | Don't colorize the command output. |
Examples
Test command parameters by using the --dry-run option first.
Delete all audit logs from before July 2019:
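A sketch of such a command, using the --before argument described above with an assumed ISO-style timestamp, might be:
phenv delete_audit_logs --before 2019-07-01T00:00Z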
Delete audit logs between July 1 and December 1 2019:
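Similarly, a sketch combining the --after and --before arguments, again with assumed timestamps, might be:
phenv delete_audit_logs --after 2019-07-01T00:00Z --before 2019-12-01T00:00Z
In both cases, adding --dry-run lets you confirm which records match before any deletion occurs.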
Adjust the number of uWSGI workers
You can adjust performance for your deployment by changing the number of workers used by uWSGI. For a complete reference on uWSGI, see the uWSGI project documentation.
What is a "worker"?
Workers are the part of the uWSGI system that processes requests. For deployments that process large volumes of work, increasing the number of uWSGI workers can improve performance.
In release 6.2.0 and higher, uWSGI dynamically spawns and removes workers. Based on settings in uwsgi.ini, when all available uWSGI workers have been busy for one and a half seconds, uWSGI spawns new workers. Once the overall load is reduced, uWSGI stops workers one at a time until the number of workers is restored to the defined minimum.
The default settings for uWSGI workers are stored in <$PHANTOM_HOME>/etc/uwsgi.ini. If you have customized your uWSGI worker settings, the overrides for the default settings are stored in <$PHANTOM_HOME>/etc/uwsgi_local.ini.
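As a sketch only, a uWSGI ini file containing the settings listed in the following table at their documented default values might look like this; the actual contents and layout of uwsgi.ini and uwsgi_local.ini on your deployment may differ:
[uwsgi]
# minimum number of workers kept active (cheaper)
cheaper = 10
# maximum number of workers uWSGI can spawn
workers = 20
# reload a worker whose resident set size exceeds this many megabytes
reload-on-rss = 1024
# reload a worker whose address space exceeds this many megabytes
reload-on-as = 2048
Use the set_preference commands shown in the table to change these values rather than editing the defaults in uwsgi.ini directly.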
Settings
Using any of these commands will restart the uWSGI process in order to apply the new settings. Restarting uWSGI will interrupt any requests being processed.
uWSGI Setting | Command option | Default value | Description |
---|---|---|---|
cheaper | --uwsgi-min-worker-count | 10 | The minimum number of uWSGI workers to keep active at all times. Example command: phenv python -m manage set_preference --uwsgi-min-worker-count <minimum number of workers> |
workers | --uwsgi-max-worker-count | 20 | The maximum number of workers that uWSGI can spawn. Example command: phenv python -m manage set_preference --uwsgi-max-worker-count <maximum number of workers> |
reload-on-rss | --uwsgi-reload-rss-limit | 1024 | Reload a uWSGI worker if the memory used by it, called the resident set size (RSS), exceeds this size in megabytes. Example command: phenv python -m manage set_preference --uwsgi-reload-rss-limit <RSS memory in megabytes> |
reload-on-as | --uwsgi-reload-as-limit | 2048 | Reload a uWSGI worker if the address space used by it exceeds this size in megabytes. Example command: phenv python -m manage set_preference --uwsgi-reload-as-limit <address space in megabytes> |
Workers require CPU time and available RAM. You should carefully consider how many resources are available before increasing the number of uWSGI worker processes.
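As a sketch, raising the minimum and maximum worker counts on a deployment with spare CPU and RAM could look like the following; the values 16 and 32 are illustrative, not recommendations:
phenv python -m manage set_preference --uwsgi-min-worker-count 16
phenv python -m manage set_preference --uwsgi-max-worker-count 32
Remember that each command restarts uWSGI and interrupts any requests being processed.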
This documentation applies to the following versions of Splunk® SOAR (On-premises): 6.2.0, 6.2.1, 6.2.2, 6.3.0, 6.3.1