
Set up a recurring import of entities in ITSI
After the bulk import process is complete, ITSI gives you the option of creating a modular input that repeats the import function on a recurring basis. This is convenient if you want to add or update entities or services without repeating the entire import from CSV or search workflow.
You cannot set up a recurring import from the UI in a search head cluster environment. Follow the steps in Set up a recurring import on a search head cluster below.
Prerequisite
Before setting up recurring import, you must import entities from a CSV file or a Splunk search.
Set up a recurring import on a single instance
- After the import from CSV or search process is complete, click Set up Recurring Import.
- Provide a name for the recurring import.
- If you're importing a CSV file, enter the full path to the file on the server. The CSV file must be on the same server as your ITSI installation.
- Set the scheduled time and frequency to run the import.
- Click Submit. ITSI creates the new modular input in $SPLUNK_HOME/etc/apps/itsi/local/inputs.conf. (See the example stanza after these steps.)
The recurring import search runs as splunk-system-user, so it can return entities from indexes that the user who created the import might not have access to.
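For illustration only, a recurring import named "Recurring entity import" that runs daily at 3:00 AM might appear in inputs.conf as a stanza along these lines. The exact settings depend on how you configure the import; see the full search-based example later in this topic.
[itsi_csv_import://Recurring entity import]
interval = 0 3 * * *
update_type = upsert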
Set up a recurring import on a search head cluster
You cannot set up a recurring import from the UI in a search head cluster environment. You must configure the modular input manually in inputs.conf and use the deployer to push it to the search head cluster members.
- Follow the steps above to create the modular input on a single instance. ITSI adds the input as a new stanza in $SPLUNK_HOME/etc/apps/itsi/local/inputs.conf. The stanza is not replicated across cluster members. Alternatively, if you're familiar with the format of modular inputs, you can create the stanza yourself.
- Copy the input stanza from the local version of inputs.conf and add it to shcluster/apps/itsi/local/inputs.conf on the deployer.
- Let the deployer push the file to the cluster members. The file is deployed to the default inputs.conf on each member. (See the example command after these steps.)
- Remove the modular input stanza from $SPLUNK_HOME/etc/apps/itsi/local/inputs.conf on the search head that created it. Otherwise, it takes precedence over the version pushed by the deployer.
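For example, a typical bundle push from the deployer uses the standard apply shcluster-bundle command. The target host, port, and credentials below are placeholders for your environment.
cd $SPLUNK_HOME/bin
./splunk apply shcluster-bundle -target https://<any_cluster_member>:8089 -auth admin:<password>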
Modify or delete a recurring import on a search head cluster
You can't manage recurring imports on a search head cluster through the UI. If you need to change the search, pause or stop the import, or delete the import, you must do it through inputs.conf.
- Navigate to shcluster/apps/itsi/local/inputs.conf on the deployer.
- Locate the stanza for the recurring import you want to modify or remove. For example:
[itsi_csv_import://Recurring entity import]
entity_field_mapping = title=entity_title
entity_informational_fields = hostname
entity_merge_field = title
entity_title_field = title
import_from_search = true
index_earliest = -60m
index_latest = now
interval = 0 3 * * *
search_string = index=main sourcetype=* | dedup hostname | eval title=hostname | table title hostname
service_security_group = default_itsi_security_group
update_type = upsert
- Modify, add, or delete any settings.
- For descriptions of all possible settings, see the [itsi_csv_import://<string>] stanza of inputs.conf.
- To pause or stop the import, add the setting disabled = 1 to the stanza, as shown in the example after these steps.
- To delete the import, remove the stanza completely.
- Save the file and let the deployer push the changes to the cluster members.
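For example, assuming the stanza shown above, pausing the recurring import might look like the following. The other settings in the stanza stay as they are.
[itsi_csv_import://Recurring entity import]
disabled = 1
# ...the remaining settings from the original stanza are unchanged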