Splunk Add-on for CrowdStrike FDR


Configure inputs for the Splunk Add-on for CrowdStrike FDR

The Splunk Add-on for CrowdStrike FDR lets you configure the following types of inputs:

  • Crowdstrike FDR host information sync (not required): This input lets you synchronize host resolution information with a local collection so that you can resolve CrowdStrike agent hostnames in events at index time. By default, host resolution takes place at search time.
  • Crowdstrike FDR S3 bucket monitor (not required): This is a diagnostic modinput. It monitors the CrowdStrike dedicated S3 buckets and logs information about FDR event batches. The add-on dashboard monitors this information by comparing the list of discovered batches with the batches received in SQS messages. Discrepancies between batches can indicate a batch ingest backlog or an unknown consumer "stealing" SQS notifications. If the modinput is not running, the corresponding dashboard does not show any data. Create a single instance of this input, as creating more than one instance can negatively affect the dashboard's search results. You can enable this modular input only when you notice an issue, but running it constantly may help you notice issues earlier.
  • CrowdStrike FDR SQS based S3 consumer (required): This input type consumes events from the CrowdStrike AWS feed. Use this input if you work with a relatively small amount of CrowdStrike data, for example, if the event data batch folders in the S3 bucket contain fewer than 10 event files, or the average batch ingest time is well below 7 minutes. Before you create a new input of this type, complete the following configuration steps:
    1. Configure an FDR AWS collection.
    2. Configure a CrowdStrike event filter.
  • Inputs "Crowdstrike FDR SQS based manager" and "Crowdstrike FDR managed S3 consumer" (required) are new in version 1.3.0 and implement another architecture to ingest CrowdStrike events and work in conjunction with each other. This architecture allows parallel batch file ingestion by distributing files among several ingesting modinputs. This approach is recommended for configurations with more than one terabyte of CrowdStrike data per day (as counted in Splunk license usage) that may have hundreds of event files per single batch and billions of sensor events in 24 hours. Since inputs use the KVStore journal to exchange information, this input is optimized when installed in Splunk Cloud Victoria. For best results in on-premise environments, use with a single dedicated heavy forwarder capable of running multiple Splunk ingest pipelines, or with several heavy forwarders sharing the same KVStore cluster.
    • Crowdstrike FDR SQS based manager: This modular input does not itself send any events to an indexer. It is responsible for receiving SQS notifications about newly uploaded event batches, validating batch content, and distributing ingestion of S3-located event files among managed inputs (Crowdstrike FDR managed S3 consumers). Use only one input per CrowdStrike feed in your Splunk environment. This input has all the same configuration settings as the CrowdStrike FDR SQS based S3 consumer input. Before you create a new input of this type, complete the following configuration steps:
      1. Configure an FDR AWS collection.
      2. Configure a CrowdStrike event filter.
    • CrowdStrike FDR managed S3 consumer: This input type ingests a single file from the S3 bucket at a time. It requires minimal configuration: only the manager input it works with and the interval value. This input receives all additional configuration required for AWS connectivity from the assigned manager. The input waits for the manager to assign a task with an event file URL, then immediately starts downloading the file and sending its content to the Splunk index. You should have several CrowdStrike FDR managed S3 consumers working with the same manager. When transferring from CrowdStrike FDR SQS based S3 consumers to this new architecture, start with the same number of CrowdStrike FDR managed S3 consumers as the number of CrowdStrike FDR SQS based S3 consumers you had.
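
The following inputs.conf sketch illustrates how one manager and several managed consumers relate to each other. The stanza scheme names shown here are hypothetical placeholders for illustration only; the add-on generates the actual stanzas when you create the inputs in the UI, so treat this as a sketch of the shape of the configuration rather than literal values.

    # local/inputs.conf (illustrative sketch; hypothetical scheme names)
    [crowdstrike_fdr_sqs_based_manager://fdr_manager]
    # AWS collection, SQS queue, and filter settings are configured as described below
    interval = 200

    # Several managed consumers share the batch files assigned by the manager
    [crowdstrike_fdr_managed_s3_consumer://fdr_consumer_1]
    manager = fdr_manager
    interval = 30

    [crowdstrike_fdr_managed_s3_consumer://fdr_consumer_2]
    manager = fdr_manager
    interval = 30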


Configure your FDR Amazon Web Services collection

Specify your CrowdStrike FDR AWS feed connection information. In most cases only one collection is needed. All inputs that you create to consume events can reuse this information to connect to the FDR AWS feed.

  1. Open the Splunk Add-on for Crowdstrike FDR Configuration page on your heavy forwarder or IDM. You must repeat the following task for each heavy forwarder or IDM. If you are on the Splunk Cloud Platform, perform this task in Splunk Web.
  2. Select the FDR AWS Collection tab and click Add.
  3. Specify an FDR AWS collection name.
  4. Select the AWS region where your CrowdStrike feed is located. To find this information, as well as the AWS access key id and AWS secret access key id, refer to your CrowdStrike Falcon Dashboard.
  5. Enter your AWS access key id. You can find this key ID in your CrowdStrike Falcon Dashboard.
  6. Enter your AWS secret access key id. You can find this key ID in your CrowdStrike Falcon Dashboard.
  7. Click Add.

Configure a CrowdStrike event filter

Specify a filter to define which CrowdStrike agent events should be consumed or dropped. By default, new inputs use a predefined filter that drops all heartbeat events.

  1. Open the Splunk Add-on for Crowdstrike FDR Configuration page on your heavy forwarder or IDM. You must repeat the following task for each heavy forwarder or IDM. If you are on the Splunk Cloud Platform, perform this task in Splunk Web.
  2. Select the CrowdStrike event filter tab. You can create a new filter, or clone or edit an existing predefined filter.
  3. Click the Add button at the top right of the page to create a new filter.
  4. Provide a CrowdStrike event filter name.
  5. Select a filter type. If you select Drop matching events, the Splunk Add-on for CrowdStrike FDR ingests all events except those that match the provided Filter value. If you select Ingest only matching events, the Splunk Add-on for CrowdStrike FDR ingests only events matching the specified Filter value.
  6. Specify a Filter value. Provide a space-separated list of CrowdStrike FDR events' event_simpleName property values (see the example after these steps). You can create this list in an editor of your choice and then copy it into the Filter value field. You can find a full list of names in the "Event Data Dictionary" in your CrowdStrike Falcon support documentation.
  7. Click Add.
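
For example, an "Ingest only matching events" filter that keeps only process, DNS request, and network connection telemetry might use a Filter value like the following. The event names are common event_simpleName values used here for illustration; confirm the exact names against the Event Data Dictionary for your CrowdStrike environment.

    ProcessRollup2 DnsRequest NetworkConnectIP4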

Configure a CrowdStrike device field filter

Specify a filter to define which CrowdStrike device fields are used to enrich events and which are skipped. The default filter includes the following fields: agent_version, connection_ip, connection_mac_address, default_gateway_ip, external_ip, hostname, last_seen, local_ip, mac_address, os_version, provision_status, serial_number, status.

  1. Open the Splunk Add-on for Crowdstrike FDR Configuration page on your heavy forwarder or IDM. You must repeat the following task for each heavy forwarder or IDM. If you are on the Splunk Cloud Platform, perform this task in Splunk Web.
  2. Select the CrowdStrike device field filter tab. You can create a new filter, or clone or edit an existing predefined filter.
  3. Click the Add button at the top right of the page to create a new filter.
  4. Provide a CrowdStrike device field filter name.
  5. Select a filter type. If you select Enrich events with matching fields, the Splunk Add-on for CrowdStrike FDR enriches events with the fields provided in Filter value. If you select Skip matching field when enriching events, the Splunk Add-on for CrowdStrike FDR enriches events with all device fields except those specified in Filter value.
  6. Specify a Filter value. From the drop-down list, select the values that you want to include in the filter.
  7. Click Add.

Configure the Crowdstrike FDR SQS based S3 consumer input

  1. Open the Splunk Add-on for CrowdStrike FDR Configuration page on your heavy forwarder, IDM, or search head on Splunk Cloud Victoria. Repeat the following task for each heavy forwarder or IDM. However, on Splunk Cloud Victoria search heads, configuration is replicated on clusters automatically, so there is no need to configure each search head separately.
  2. Click Create New Input.
  3. In the dropdown menu, select Crowdstrike FDR SQS based S3 consumer.
  4. Specify an Input name.
  5. Select your FDR AWS Collection.
  6. Type the AWS SQS queue URL that is specific to your CrowdStrike FDR AWS feed. You can find this information in API Clients and Keys on your CrowdStrike Falcon Dashboard.
  7. Optionally, type a date time value for the "Ignore SQS messages older than" field. This field tells the add-on to ignore SQS messages created earlier than the specified date time, which skips the corresponding event batches. It expects a UTC time in the following format: YYYY-MM-DD HH:MM. The Splunk Add-on for CrowdStrike FDR returns excluded messages to the SQS queue after the visibility timeout expires, and they can be read from the SQS queue again. AWS deletes these messages, together with all other unconsumed SQS messages, after the retention period defined by CrowdStrike.
  8. Select a SQS Message Visibility Timeout. Select this value based on the event load generated by your CrowdStrike environment. The default value is six hours, which is based on environments with 10 TB of consumed events per day and takes into account the best throughput achieved during performance tests.
  9. For Sensor event filter, select the CrowdStrike FDR event filter that you configured previously. Keep in mind that this filter only applies to events coming from sensors, which are events with sourcetype=crowdstrike:events:sensor. If no filter is needed, click "X" next to the filter name to remove the default filter.
  10. Optionally, select an Ingest time host resolution type. By default, ingest time host resolution is not used. Be aware that the "CrowdStrike inventory events" type is not supported by Splunk Cloud. If you select the "CrowdStrike API" type, make sure that the "CrowdStrike Device API Inventory Sync Service" modular input is configured as well.
  11. Optionally, select a Device field filter that you configured before, or use the default. Keep in mind that the Device field filter works only in combination with the "CrowdStrike API" ingest time host resolution configured in the previous step.
  12. Provide your destination Splunk Index. This is where collected events are sent once collection begins. For events other than crowdstrike:events:sensor, you can specify separate destination indexes.
  13. Optionally check External security events to index external FDR events. These events are triggered when a user logs into your CrowdStrike FDR Dashboard or as a result of API calls.
  14. Specify a value for Index for external events if you want to store external events in a separate dedicated index.
  15. Check ZTA events if you want "zero trust host assessment" events to be indexed.
  16. Specify a value for Index for ZTA events if you want to store ZTA events in a separate dedicated index.
  17. Optionally check Inventory AIDMaster Events if you want to index AIDMaster events. These events are selected by default and are used for agent host name resolution.

    If you choose not to collect AIDMaster events, host resolution does not work because there is no data to resolve agent host information. Also avoid stopping AIDMaster event collection after a period of collection, because host resolution will keep working based on outdated AIDMaster information.

  18. Specify a value for Index for aidmaster events if you want to store aidmaster events in a separate dedicated index.
  19. Optionally check Inventory managedassets events if you want to index FDR managedassets events.
  20. Specify a value for Index for managedassets events if you want to store managedassets events in a separate dedicated index.
  21. Optionally check Inventory notmanaged events to index FDR notmanaged events.
  22. Specify a value for Index for notmanaged events if you want to store notmanaged events in a separate dedicated index.
  23. Optionally check Inventory appinfo events to index FDR appinfo events.
  24. Specify a value for Index for appinfo events if you want to store appinfo events in a separate dedicated index.
  25. Optionally check Inventory userinfo events to index FDR userinfo events.
  26. Specify a value for Index for userinfo events if you want to store userinfo events in a separate dedicated index.
  27. Provide an Interval, in seconds, to tell Splunk how often to check that the input is running. Splunk will start the input if it is not running. The default value is 200 seconds.
  28. Click Add.

Configure CrowdStrike FDR SQS based manager input

Repeat this task on each heavy forwarder or IDM. On Splunk Cloud Victoria search heads, configuration is replicated on clusters automatically, so there is no need to configure each search head separately.

  1. Open the Splunk Add-on for CrowdStrike FDR Configuration page on your heavy forwarder, IDM, or search head on Splunk Cloud Victoria.
  2. Click Create New Input.
  3. In the dropdown menu, select Crowdstrike FDR SQS based manager.
  4. Specify an Input name.
  5. Select your FDR AWS Collection.
  6. Type the AWS SQS queue URL that is specific to your CrowdStrike FDR AWS feed. You can find this information in API Clients and Keys on your CrowdStrike Falcon Dashboard.
  7. Optionally, type a date time value for the "Ignore SQS messages older than" field. This tells the add-on to ignore SQS messages created earlier than the specified date time, which skips the corresponding event batches. It expects a UTC time in the following format: YYYY-MM-DD HH:MM. The Splunk Add-on for CrowdStrike FDR returns excluded messages to the SQS queue after the visibility timeout expires, and they can be read from the SQS queue again. AWS deletes these messages, together with all other unconsumed SQS messages, after the retention period defined by CrowdStrike.
  8. Select a SQS Message Visibility Timeout. Select this value based on the event load generated by your CrowdStrike environment. The default value is six hours, which is based on environments with 10 TB of consumed events per day and takes into account the best throughput achieved during performance tests.
  9. For Sensor event filter, select the CrowdStrike FDR event filter that you configured previously. Keep in mind that this filter only applies to events coming from sensors, which are events with sourcetype=crowdstrike:events:sensor. If no filter is needed, click "X" next to the filter name field to remove the default filter.
  10. Optionally, select an Ingest time host resolution type. By default, ingest time host resolution is not used. Be aware that the "CrowdStrike inventory events" type is not supported by Splunk Cloud. If you select the "CrowdStrike API" type, make sure that the "CrowdStrike Device API Inventory Sync Service" modular input is configured as well.
  11. Optionally, select a Device field filter that you configured before, or use the default. Keep in mind that the Device field filter works only in combination with the "CrowdStrike API" ingest time host resolution configured in the previous step.
  12. Provide your destination Splunk Index. This is where collected events are sent once collection begins. For events other than crowdstrike:events:sensor, you can specify separate destination indexes if needed (see details below).
  13. Optionally check External security events if you want to index external FDR events. These events are triggered when a user logs into your CrowdStrike FDR Dashboard or as a result of API calls.
  14. Specify a value for Index for external events if you want to store external events in a separate dedicated index.
  15. Check ZTA events if you want "zero trust host assessment" events to be indexed.
  16. Specify a value for Index for ZTA events if you want to store ZTA events in a separate dedicated index.
  17. Optionally check Inventory AIDMaster Events if you want to index AIDMaster events. These events are selected by default and are used for agent host name resolution.

    If you choose not to collect AIDMaster events, host resolution will not work because there will be no data to resolve agent host information. Also avoid stopping AIDMaster event collection after a period of collection, because host resolution will keep working based on outdated AIDMaster information.

  18. Specify a value for Index for aidmaster events if you want to store aidmaster events in a separate dedicated index.
  19. Optionally check Inventory managedassets events if you want to index FDR managedassets events.
  20. Specify a value for Index for managedassets events if you want to store managedassets events in a separate dedicated index.
  21. Optionally check Inventory notmanaged events to index FDR notmanaged events.
  22. Specify a value for Index for notmanaged events if you want to store notmanaged events in a separate dedicated index.
  23. Optionally check Inventory appinfo events to index FDR appinfo events.
  24. Specify a value for Index for appinfo events if you want to store appinfo events in a separate dedicated index.
  25. Optionally check Inventory userinfo events to index FDR userinfo events.
  26. Specify a value for Index for userinfo events if you want to store userinfo events in a separate dedicated index.
  27. Select your checkpoint type. This option controls how the add-on treats event files that failed to download or ingest, and what happens to the SQS messages received. In all cases, upon receiving an SQS message the add-on populates all batch resource URLs into an internal journal, and during the ingestion process marks which of them were ingested successfully and which failed. An event batch is not considered ingested until the last batch file is ingested or the ingestion failures are not recoverable (for example, the failed files no longer exist in the S3 bucket). There are two checkpoint options available:
    • SQS message (per batch) - the SQS message is deleted only after a batch is fully ingested. If some event files fail to ingest but the failures are recoverable, the add-on waits for the SQS message to return after the visibility timeout. When it receives the same SQS message again, the add-on compares the list of event files with the journal records, skips files that have been ingested successfully, and restarts the failed ones. This process repeats until all batch event files are ingested or disappear from the S3 bucket.
    • Internal Splunk (per file) - the SQS message is deleted as soon as it is consumed by the manager. If some event files fail to ingest but the failures are recoverable, the add-on waits for some time and gathers information about the failed files from the KV store. It then compares the list of event files with the journal records, skips files that have been ingested successfully, and restarts the failed ones. The process repeats as long as the failed files are still available in the S3 bucket.
  28. Provide an Interval, in seconds, to tell Splunk how often to check that the input is running. Splunk will start the input if it is not running. The default value is 200 seconds.
  29. Click Add.

Configure CrowdStrike FDR managed S3 consumer input

  1. Open the Splunk Add-on for CrowdStrike FDR Inputs page on your heavy forwarder or IDM.
  2. Click Create New Input.
  3. In the menu select CrowdStrike FDR managed S3 consumer.
  4. Specify an input name.
  5. Select a previously created CrowdStrike FDR SQS based manager input as the manager for this input.
  6. Provide an Interval, in seconds, to tell Splunk how often to check that the input is running. Splunk will start the input if it is not running. The default value is 30 seconds.
  7. Click Add.

Configure Crowdstrike FDR Device API Inventory Sync Service

Repeat this task on each heavy forwarder or IDM. On Splunk Cloud Victoria search heads, configuration is replicated on clusters automatically, so there is no need to configure each search head separately.

  1. Open the Splunk Add-on for CrowdStrike FDR Configuration page on your heavy forwarder, IDM, or search head on Splunk Cloud Victoria.
  2. Click Create New Input.
  3. In the dropdown menu, select Crowdstrike FDR Device API Inventory Sync Service.
  4. Specify an input name.
  5. Specify an OAuth2 API Client ID from your CrowdStrike account. You can create a new pair or use an existing one on the API Clients and Keys tab of your CrowdStrike Falcon Dashboard.
  6. Specify an OAuth2 API Client Secret from your CrowdStrike account. You can create a new pair or use an existing one on the API Clients and Keys tab of your CrowdStrike Falcon Dashboard.
  7. The Crowdstrike API Base URL is populated by default and points to the CrowdStrike API address.
  8. The Bucket check interval is populated by default and set to 600 seconds, which means that API calls to CrowdStrike are made every 600 seconds. You can change it to a higher or lower value depending on how often your host information is updated.

Configure Crowdstrike FDR host information sync input

Repeat this task on each heavy forwarder or IDM. On Splunk Cloud Victoria search heads, configuration is replicated on clusters automatically, so there is no need to configure each search head separately.

  1. Open the Splunk Add-on for CrowdStrike FDR Configuration page on your heavy forwarder, IDM, or search head on Splunk Cloud Victoria.
  2. Click Create New Input.
  3. In the dropdown menu select CrowdStrike FDR host information sync.
  4. Specify an input name.
  5. In Search head host, provide the IP address or FQDN of any search head in the environment's search head cluster. This is used to access the collection that stores agent host resolution information. On the Splunk Cloud Platform this can be localhost.
  6. If your environment is configured with a custom port, provide the Splunk REST API port in Search head port. The default value is the standard Splunk REST API port, 8089.
  7. In the Search head user field, provide the Splunk user name created for the search head host. Do not use a personal Splunk user account.
  8. For Search head password, provide the password for the user account specified in the previous step.
  9. Check Use failover Search head if you plan to use another search head as a failover in the event that the primary search head is not accessible.
  10. If you checked Use failover Search head, specify values for the failover search head:
    • Failover Search head host
    • Failover Search head port
    • Failover Search head user
    • Failover Search head password
  11. For Inventory sync interval, specify the number of seconds to wait between sync iterations.
  12. Click Add.

Starting from version 1.5.0, a new type of index-time host resolution is available. It works in Splunk Cloud Platform (SCP) stacks and in Splunk Enterprise. See the "Index time host resolution" page for more information.

Starting from version 1.5.0, users should select the desired type of host resolution in the SQS based S3 consumer input or in the SQS based manager input.

Sensor events enrichment with host, user and app/file information

Sensor events provided by CrowdStrike FDR in the AWS S3 bucket do not contain robust information about the host, user, and file connected to an event. However, sensor events do have key properties that point to additional information provided in CrowdStrike FDR inventory events. For example:

  • The aid property represents the agent ID and points to general host information, such as host name and geographical location, in aidmaster events.
  • The aid property can also be mapped to a host's MAC and IP addresses in managedassets events.
  • Some UserSid values can be resolved to user information in userinfo events.
  • The SHA256HashData property identifies file and application information stored in appinfo events.


Crowdstrike FDR file/app resolution

As mentioned earlier, sensor events provided by Crowdstrike FDR in the AWS S3 bucket do not contain information about files and applications associated with the events. However, some events do contain the associated file hash in the SHA256HashData property, which can be mapped to inventory AppInfo events containing the associated file and application details.

Properties added by AppInfo based file/app resolution

Added property             AppInfo event property
appinfo_company_name       CompanyName
appinfo_file_description   FileDescription
appinfo_file_name          FileName
appinfo_file_version       FileVersion
appinfo_product_name       ProductName
appinfo_product_version    ProductVersion
appinfo_detection_count    detectionCount

File/app resolution flow

  1. File/app resolution information is collected from CrowdStrike appinfo events by a scheduled savedsearch named crowdstrike_ta_build_appinfo_resolution_table.
  2. This savedsearch stores the collected data in the crowdstrike_ta_build_appinfo_resolution_lookup lookup, which is based on the crowdstrike_ta_build_appinfo_resolution_collection KV store collection located on a search head (or search head cluster).
  3. Every time a user runs a search, the Splunk Add-on for CrowdStrike FDR attempts to add associated application information found in the lookup table to each event based on its SHA256HashData property value (see the search sketch after this list).
  4. If the Splunk Add-on for CrowdStrike FDR input has never been configured to collect appinfo events, in other words if the index does not have any appinfo events collected, then search results are not enriched with file/app information at all. If the index has outdated appinfo events collected, for example, if the input was initially configured to ingest appinfo events but was later reconfigured to stop ingesting them, then the add-on enriches events with outdated file/app information. To mitigate this, you must clean up crowdstrike_ta_build_appinfo_resolution_collection. Take this into consideration when deciding whether to stop ingesting appinfo events.
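
The following search sketch shows how this search-time enrichment can be reproduced manually, for example when checking the contents of the resolution lookup. It assumes the lookup's key field has the same name as the event property, SHA256HashData; adjust the field names if they differ in your environment.

    index=* sourcetype=crowdstrike:events:sensor SHA256HashData=*
    | lookup crowdstrike_ta_build_appinfo_resolution_lookup SHA256HashData OUTPUT appinfo_file_name appinfo_product_name appinfo_file_version
    | table _time aid SHA256HashData appinfo_file_name appinfo_product_name appinfo_file_version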

Configure file/app information resolution search interval

By default, the saved search that builds the file/app resolution collection runs every eleventh minute of an hour. Ideally, the schedule for this search should be aligned with the frequency at which CrowdStrike FDR uploads appinfo updates to the feed, with some time reserved for downloading and indexing appinfo events, which depends on your CrowdStrike environment configuration. If you do not know this schedule but you see that the savedsearch schedule does not have a noticeable impact on your environment, you can schedule the search to check inventory updates more often. Conversely, you can make it run less often if you see that the search takes too much time or has an unnecessarily high impact on environment resources. You can make this change on a search head using Splunk Web:

  1. Go to Setting > KNOWLEDGE > Searches, reports, and alerts.
  2. Set the App filter to "Splunk Add-on for CrowdStrike FDR (Splunk_TA_CrowdStrike_FDR)". Set the Owner filter to "All" or "Nobody".
  3. Find crowdstrike_ta_build_appinfo_resolution_table and, in the Action column, click "Edit".
  4. Select "Edit Schedule" to change only the schedule.
  5. Select "Advanced Edit" to change all the parameters, find the "cron_schedule" parameter and type a new cron expression. See splunk documentation for more details about cron expressions
  6. Click Save.
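
For example, to run the file/app resolution search at the top of every hour instead of the default schedule, the equivalent override in local/savedsearches.conf of the add-on on the search head might look like the following. The stanza name comes from the saved search above; the cron expression is only an example, so choose one that matches how often your feed receives appinfo updates.

    # local/savedsearches.conf in Splunk_TA_CrowdStrike_FDR
    [crowdstrike_ta_build_appinfo_resolution_table]
    cron_schedule = 0 * * * *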

Set a retention period for file/app information resolution searches

The file/app resolution search parameter dispatch.earliest_time defines how far back to search when building the list of file hashes. This parameter is set to -60d by default, which tells the system to search only within the last 60 days. However, for large environments 60 days can still mean a lot of appinfo data to look through, which can make the savedsearch slow. To mitigate this, you can decrease the value of the "dispatch.earliest_time" parameter to fewer days or even hours. As a general rule, use the retention period adopted by your organization. To make an adjustment, perform the following task on a search head:

  1. Go to Setting > KNOWLEDGE > Searches, reports, and alerts.
  2. Set the App filter to "Splunk Add-on for CrowdStrike FDR (Splunk_TA_CrowdStrike_FDR)".
  3. Set the Owner filter to "All" or "Nobody".
  4. Find crowdstrike_ta_build_appinfo_resolution_table in the list.
  5. In the "Action" column, click Edit and select "Advanced Edit" to change all the parameters.
  6. Find the "dispatch.earliest_time" parameter and type a new value. For details and examples of the dispatch.earliest_time parameter, see the savedsearches.conf topic in the Splunk documentation, and see the example after these steps.
  7. Click Save.
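
For example, to limit the file/app resolution search to the last 7 days, the equivalent override in local/savedsearches.conf of the add-on on the search head might look like the following. The value shown is only an example; pick a window that matches your organization's retention policy.

    # local/savedsearches.conf in Splunk_TA_CrowdStrike_FDR
    [crowdstrike_ta_build_appinfo_resolution_table]
    dispatch.earliest_time = -7d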

Index for file/app information resolution

To determine the index from which the file/app information resolution saved search should collect resolution information, the add-on uses the crowdstrike_ta_appinfo_index macro. As a rule, appinfo events are collected in an index other than main. Since it cannot be determined in advance which index will be used, the macro is set to * (asterisk), which avoids limiting searches to the main index only (the default behavior when no index is specified in a search). This is acceptable when only one index is used for CrowdStrike appinfo data. However, it can consume more resources and increase search time. In cases where CrowdStrike appinfo data is collected from different feeds into different indexes, the saved search may not be accurate. To mitigate this, update the macro by specifying the index actually used to collect appinfo data.

Use the following steps to configure the search index:

  1. Go to Menu > Settings > Advanced Search.
  2. Click Search macros.
  3. Set the App filter to "Splunk Add-on for CrowdStrike FDR (Splunk_TA_CrowdStrike_FDR)".
  4. Set the Owner filter to "Any" or "No owner".
  5. Click the crowdstrike_ta_appinfo_index link in the Name column.
  6. In the Definition field expression, replace the asterisk (*) with the required index (see the example after these steps).
  7. Click Save.
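
For example, if appinfo events are collected into an index named crowdstrike_inventory (a hypothetical index name used here for illustration), and assuming the shipped definition restricts the search with an expression such as index=*, the change amounts to replacing the asterisk so that the expression reads index=crowdstrike_inventory. The same change can also be made in local/macros.conf of the add-on on the search head:

    # local/macros.conf in Splunk_TA_CrowdStrike_FDR (illustrative sketch)
    # Keep any other terms from the shipped definition and replace only the asterisk.
    [crowdstrike_ta_appinfo_index]
    definition = index=crowdstrike_inventory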


Crowdstrike FDR user resolution

As mentioned earlier, sensor events provided by Crowdstrike FDR in the AWS S3 bucket do not contain information about users associated with events. However, some events do contain a UserSid property that can be mapped to inventory UserInfo events containing more user details.

Not all UserSid values can be resolved using userinfo events. For a more accurate approach to user information resolution, consider using the Splunk Enterprise Security Assets and Identities framework.

Properties added by UserInfo based user resolution

Added property          UserInfo event property
userinfo_account_type   AccountType
userinfo_user           User
userinfo_user_name      UserName

UserInfo based user resolution flow

  1. User resolution information is collected from CrowdStrike userinfo events by a scheduled savedsearch named crowdstrike_ta_build_userinfo_resolution_table.
  2. This savedsearch stores the collected data in the crowdstrike_ta_build_userinfo_resolution_lookup lookup, which is based on the crowdstrike_ta_build_userinfo_resolution_collection KV store collection located on a search head (or search head cluster).
  3. Every time a user runs a search, the Splunk Add-on for CrowdStrike FDR tries to add user information from the lookup table to each event based on its UserSid property value.
  4. If the Splunk Add-on for CrowdStrike FDR input has never been configured to collect userinfo events, in other words if the index does not have any userinfo events collected, then search results are not enriched with user information. If the index has outdated userinfo events collected, for example, if the input was initially configured to ingest userinfo events but was later reconfigured to stop ingesting them, then the add-on enriches events with outdated user information. To mitigate this, you must clean up crowdstrike_ta_build_userinfo_resolution_collection. Take this into consideration when deciding whether to stop ingesting userinfo events.

Configure user information resolution search interval

By default, the saved search that builds the user information resolution collection runs every eleventh minute of an hour. Ideally, the schedule for this search should be aligned with the frequency at which CrowdStrike FDR uploads user updates to the feed. Some time, depending on your CrowdStrike environment configuration, should be reserved for downloading and indexing user events. If you do not know this schedule but you see that the savedsearch schedule does not have a noticeable impact on your environment, you can schedule the search to check inventory updates more often. Conversely, you can make it run less often if you see that the search takes too much time or has an unnecessarily high impact on environment resources. You can make this change on a search head using Splunk Web:

  1. Go to Setting > KNOWLEDGE > Searches, reports, and alerts.
  2. Set the App filter to "Splunk Add-on for CrowdStrike FDR (Splunk_TA_CrowdStrike_FDR)". Set the Owner filter to "All" or "Nobody".
  3. Find crowdstrike_ta_build_userinfo_resolution_table and, in the Action column, click "Edit".
  4. Select "Edit Schedule" to change only the schedule.
  5. Select "Advanced Edit" to change all the parameters, find the "cron_schedule" parameter and type a new cron expression. See splunk documentation for more details about cron expressions
  6. Click Save.

Set a retention period for user information resolution searches

The user information resolution search parameter dispatch.earliest_time defines how far back to search when building the list of unique UserSid values. This parameter is set to -60d by default, which tells the system to search only within the last 60 days. However, for large environments 60 days can still mean a lot of userinfo data to look through, which can make the savedsearch slow. To mitigate this, you can decrease the value of the "dispatch.earliest_time" parameter to fewer days or even a few hours. As a general rule, use the retention period adopted by your organization. To make an adjustment, perform the following task on a search head:

  1. Go to Setting > KNOWLEDGE > Searches, reports, and alerts.
  2. Set the App filter to "Splunk Add-on for CrowdStrike FDR (Splunk_TA_CrowdStrike_FDR)".
  3. Set the Owner filter to "All" or "Nobody".
  4. Find crowdstrike_ta_build_userinfo_resolution_table in the list.
  5. In the "Action" column, click Edit and select "Advanced Edit" to change all the parameters.
  6. Find the "dispatch.earliest_time" parameter and type a new value. For details and examples of the dispatch.earliest_time parameter, see the savedsearches.conf topic in the Splunk documentation.
  7. Click Save.

Index for user information resolution

To determine the index from which the user information resolution saved search should collect resolution information, the add-on uses the crowdstrike_ta_userinfo_index macro. As a rule, userinfo events are collected in an index other than main. Since it cannot be determined in advance which index will be used, the macro is set to * (asterisk), which avoids limiting searches to the main index only (the default behavior when no index is specified in a search). This is acceptable when only one index is used for CrowdStrike userinfo data. However, it can consume more resources and increase search time. In cases where CrowdStrike userinfo data is collected from different feeds into different indexes, the saved search may not be accurate. To mitigate this, update the macro by specifying the index actually used to collect userinfo data.

Use the following steps to configure the search index:

  1. Go to Menu > Settings > Advanced Search.
  2. Click Search macros.
  3. Set the App filter to "Splunk Add-on for CrowdStrike FDR (Splunk_TA_CrowdStrike_FDR)".
  4. Set the Owner filter to "Any" or "No owner".
  5. Click the crowdstrike_ta_userinfo_index link in the Name column.
  6. In the Definition field expression, replace the asterisk (*) with the required index.
  7. Click Save.

Using Splunk Enterprise Security Asset and Identity (SES A&I) for user resolution

Splunk Enterprise Security uses an Asset and Identity system to correlate asset and identity information with events to enrich and provide context to your data. This system takes information from external data sources to populate lookups, which Enterprise Security correlates with events at search time. The Splunk Add-on for CrowdStrike FDR extracts the user property at search time, which the SES A&I framework requires to automatically resolve user information based on static or dynamic lookups configured by customers. Enabling user resolution with the SES A&I framework therefore means defining the lookups within the SES A&I application and the way to keep them up to date. Refer to the A&I administration documentation for details.

The add-on extracts the user property only for events that are covered by CIM normalization (that is, events that have a CIM data model assigned). For the rest of the events that require user resolution, you may need to add user property extraction rules yourself.

Crowdstrike FDR host IP and MAC resolution

Sensor events provided by Crowdstrike FDR in the AWS S3 bucket do not contain information about the host IP and MAC addresses they originate from. However, these events do contain an identifier of the agent (sensor) installed on a host. Host IP and MAC resolution enriches CrowdStrike sensor events with sensor and agent host IP and MAC by mapping agent identifiers in an event to the same identifier in inventory managedassets events.

Properties added by host IP and MAC resolution

Added property           managedassets event property
aid_gateway_IP           GatewayIP
aid_gateway_mac          GatewayMAC
aid_local_address_ip4    LocalAddressIP4
aid_mac                  MAC


Host IP and MAC resolution flow

  1. Host IP and MAC resolution information is collected from CrowdStrike managedassets events by a scheduled savedsearch named crowdstrike_ta_build_mac_ip_resolution_table.
  2. This savedsearch stores the collected data in the crowdstrike_ta_build_mac_ip_resolution_lookup lookup table, which is based on the crowdstrike_ta_build_mac_ip_resolution_collection KV store collection located on a search head (or search head cluster).
  3. Every time a user runs a search, the Splunk Add-on for CrowdStrike FDR attempts to add the host IP and MAC to each event based on its agent identifier value.
  4. If the Splunk Add-on for CrowdStrike FDR input has never been configured to collect managedassets events, in other words if the index does not have any managedassets events collected, then search results are not enriched with agents' host IP and MAC values at all. If the index has outdated managedassets events collected, for example, if the input was initially configured to ingest managedassets events but was later reconfigured to stop ingesting them, then host IP and MAC resolution will be based on outdated agent host information. To mitigate this, you must clean up crowdstrike_ta_build_mac_ip_resolution_collection. Take this into consideration when deciding whether to stop ingesting managedassets events.

Configure a host IP and MAC resolution search interval

By default, the saved search that builds the host IP and MAC resolution collection runs every eleven minutes starting at 11 minutes after the hour, for example: 13:11, 13:22, 13:33, 13:44, 13:55, 14:11, 14:22. Ideally, the schedule for this search should be aligned with the frequency at which CrowdStrike FDR uploads managedassets updates to the feed, with some time reserved for downloading and indexing managedassets events, which depends on your CrowdStrike environment configuration. If you do not know this schedule but you see that the savedsearch schedule does not have a noticeable impact on your environment, you can schedule the search to check inventory updates more often. Or you can make it run less often if you see that the search takes too much time or has an unnecessarily high impact on environment resources. You can make this change on a search head:

  1. Go to Setting > KNOWLEDGE > Searches, reports, and alerts.
  2. Set the App filter to "Splunk Add-on for CrowdStrike FDR (Splunk_TA_CrowdStrike_FDR)". Set the Owner filter to "All" or "Nobody".
  3. Find "crowdstrike_ta_build_mac_ip_resolution_table" and, in the Action column, click "Edit".
  4. Select "Edit Schedule" to change only the schedule.
  5. Select "Advanced Edit" to change all the parameters, find the "cron_schedule" parameter, and type a new cron expression. See the Splunk documentation for more details about cron expressions.
  6. Click Save.


Set a retention period for host IP and MAC resolution searches

The host IP and MAC resolution search parameter "dispatch.earliest_time" defines how far back to search when building the list of agent identifiers. This parameter is set to -60d by default, which tells the system to search only within the last 60 days. However, for large environments 60 days can still mean a lot of managedassets data to look through, which can make the savedsearch slow. To mitigate this, you can decrease the value of the "dispatch.earliest_time" parameter to fewer days or even just a few hours. As a general rule, use the retention period adopted by your organization. To make an adjustment, perform the following task on a search head:

  1. Go to Setting > KNOWLEDGE > Searches, reports, and alerts.
  2. Set the App filter to "Splunk Add-on for CrowdStrike FDR (Splunk_TA_CrowdStrike_FDR)".
  3. Set the Owner filter to "All" or "Nobody".
  4. Find "crowdstrike_ta_build_mac_ip_resolution_table", in the "Action" column.
  5. Click Edit and select "Advanced Edit" to change all the parameters.
  6. Find the "dispatch.earliest_time" parameter and type a new value. for details and examples on dispatch.earliest_time parameter in savedsearch.conf file splunk documentation
  7. Click Save.

Index for host IP and MAC resolution

To determine the index from which the host IP and MAC resolution saved search should collect resolution information, the add-on uses the "crowdstrike_ta_managedassets_index" macro. As a rule, these events are collected in an index other than main. Since it cannot be determined in advance which index will be used, the macro is set to * (asterisk), which avoids limiting searches to the main index only (the default behavior when no index is specified in a search). This is acceptable when only one index is used for CrowdStrike managedassets data. However, it can consume more resources and increase search time. In cases where CrowdStrike managedassets data is collected from different feeds into different indexes, the saved search may not be accurate. To mitigate this, update the macro by specifying the index actually used to collect managedassets data.

Use the following steps to configure the search index:

  1. Go to Menu > Settings > Advanced Search.
  2. Click Search macros.
  3. Set the App filter to "Splunk Add-on for CrowdStrike FDR (Splunk_TA_CrowdStrike_FDR)".
  4. Set the Owner filter to "Any" or "No owner".
  5. Click the crowdstrike_ta_managedassets_index link in the Name column.
  6. In the Definition field expression, replace the asterisk (*) with the required index.
  7. Click Save.


Crowdstrike FDR generic host information resolution

As mentioned earlier, sensor events provided by Crowdstrike FDR in the AWS S3 bucket do not contain information about the host they originate from. However, these events do contain an identifier of the agent (sensor) installed on a host. Host resolution enriches CrowdStrike sensor events with sensor and agent host information by mapping agent identifiers in an event to the same identifier in inventory events.

Properties added by generic host information resolution

Added property             aidmaster event property
aid_computer_name          ComputerName
aid_machine_domain         MachineDomain
aid_ou                     OU
aid_site_name              SiteName
aid_system_product_name    SystemProductName
aid_os_version             Version
aid_continent              Continent
aid_country                Country
aid_city                   City

Generic host information resolution flow

  1. Generic host information resolution data is collected from CrowdStrike aidmaster events by a scheduled savedsearch named crowdstrike_ta_build_host_resolution_table.
  2. This savedsearch stores the collected data in the crowdstrike_ta_build_host_resolution_collection KV store collection on the search head (or search head cluster).
  3. Every time a user runs a search, the Splunk Add-on for CrowdStrike FDR attempts to add host information for the agent identifier. This happens only for sensor events for which host information has not been resolved at index time.
  4. If the Splunk Add-on for CrowdStrike FDR input has never been configured to collect aidmaster events, in other words if the index does not have any aidmaster events collected, then search results are not enriched with agents' host information. If the index has aidmaster events collected but the input was at some point reconfigured to stop ingesting them, then host resolution will be based on outdated agent host information. Take this into consideration when deciding whether to stop ingesting aidmaster events.


Configure a host resolution search interval

By default, the saved search that builds the host resolution collection runs every eleven minutes. Ideally, the schedule for this search should be aligned with the frequency at which CrowdStrike FDR uploads aidmaster updates to the feed, with some time reserved for downloading and indexing aidmaster events, which depends on your CrowdStrike environment configuration. If you do not know this schedule but you see that the savedsearch schedule does not have a noticeable impact on your environment, you can schedule the search to check inventory updates more often. Conversely, you can make it run less often if you see that the search takes too much time or has an unnecessarily high impact on environment resources. You can make this change on a search head:

  1. Go to Setting > KNOWLEDGE > Searches, reports, and alerts.
  2. Set the App filter to "Splunk Add-on for CrowdStrike FDR (Splunk_TA_CrowdStrike_FDR)". Set the Owner filter to "All" or "Nobody".
  3. Find "crowdstrike_ta_build_host_resolution_table" and, in the Action column, click "Edit".
  4. Select "Edit Schedule" to change only the schedule.
  5. Select "Advanced Edit" to change all the parameters, find the "cron_schedule" parameter, and type a new cron expression. See the Splunk documentation for more details about cron expressions.
  6. Click Save.


Set a retention period for Host resolution searches

The host resolution search parameter "dispatch.earliest_time" defines how far back to search when building the list of agent identifiers. This parameter is set to 0 by default, which tells the system to search all data. However, aidmaster data can accumulate and eventually make searches slower and consume more resources. To mitigate this, you can add a limit using the "dispatch.earliest_time" parameter to set a new retention period, for example, the retention period adopted by the organization. Perform the following task on a search head:

  1. Go to Setting > KNOWLEDGE > Searches, reports, and alerts.
  2. Set the App filter to "Splunk Add-on for CrowdStrike FDR (Splunk_TA_CrowdStrike_FDR)".
  3. Set the Owner filter to "All" or "Nobody".
  4. Find "crowdstrike_ta_build_host_resolution_table", in the "Action" column.
  5. Click Edit and select "Advanced Edit" to change all the parameters.
  6. Find the "dispatch.earliest_time" parameter and type a new value. for details and examples on dispatch.earliest_time parameter in savedsearch.conf file splunk documentation
  7. Click Save.

Index for host resolution

To determine the index from which the host resolution saved search should collect resolution information, the add-on uses the "crowdstrike_ta_aidmaster_index" macro. As a rule, CrowdStrike events are collected in an index other than main. Since it cannot be determined in advance which index will be used, by default the index in the macro is set to * (asterisk), which avoids limiting searches to the main index only (the default behavior when no index is specified in a search). This works best when only one index is used for CrowdStrike aidmaster data, but it can create higher resource consumption and increase search time. When CrowdStrike aidmaster data is collected from different feeds into different indexes, the saved search may not be accurate. To mitigate this, update the macro by specifying the index actually used to collect aidmaster data.

  1. Go to Menu > Settings > Advanced Search.
  2. Click Search macros.
  3. Set the App filter to "Splunk Add-on for CrowdStrike FDR (Splunk_TA_CrowdStrike_FDR)".
  4. Set the Owner filter to "Any" or "No owner".
  5. Click the crowdstrike_ta_aidmaster_index link in the Name column.
  6. In the Definition field expression, replace the asterisk (*) with the required index.
  7. Click Save.

Crowdstrike FDR combined host information resolution collection

Because generic host information resolution and resolution of local host IP and MAC addresses are both based on the agent identifier (aid) as a key, it is worth combining them into a single resolution collection. Instead of looking up the same agent identifiers twice to extract these two types of data separately, a combined collection requires only one lookup. To combine the two lookup collections, another scheduled savedsearch, crowdstrike_ta_merge_host_mac_ip_resolution_tables, has been added to the add-on. It fills the crowdstrike_ta_combined_host_resolution_lookup KV store based lookup with combined host resolution information. This savedsearch is scheduled to run every 3 minutes during each hour.

Last modified on 14 December, 2023