Install and configure the Content Pack for Monitoring Unix and Linux
Perform the following high-level steps to configure the Content Pack for Monitoring Unix and Linux:
- Install the content pack on your search head.
- (Optional) Change the module macro definition for indexes.
- Update the index search macro that includes all indexes you're using for event data collection.
- Update the KPI Base Search macros if you are not using the recommended method (metrics data) of data ingestion.
- Enable entity discovery to automatically discover entities for which relevant data has been collected.
- Tune KPI base searches.
- Tune KPI threshold levels for your environment.
Before you install, complete the following prerequisites:
- Collect data using the Splunk Add-on for Unix and Linux. This content pack requires that data. See Data Requirements for the Content Pack for Monitoring Unix and Linux.
- Create a full backup of your ITSI environment in case you need to uninstall the content pack later. See Create a full backup of ITSI.
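To confirm that data from the Splunk Add-on for Unix and Linux is arriving before you install the content pack, you can run a quick check from the search bar. This is a sketch that assumes the add-on's default `os` event index; substitute the index names your deployment uses.

```spl
index=os earliest=-24h
| stats count by sourcetype
```

If the search returns add-on sourcetypes such as cpu, vmstat, or df with nonzero counts, event data collection is working.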
Install the content pack
The Content Pack for Monitoring Unix and Linux is automatically available for installation once you have installed the Splunk App for Content Packs on the search head with ITSI. For steps to install the Splunk App for Content Packs, see Install the Splunk App for Content Packs.
After you install the Splunk App for Content Packs, you can follow these steps to install this content pack:
- From the ITSI main menu, click Configuration > Data Integrations.
- Select Add content packs or Add structure to your data depending on your version of ITSI.
- Select the content pack.
- Review what's included in the content pack and then click Proceed.
- Configure the following settings:
|Setting|Description|
|---|---|
|Choose which objects to install|For a first-time installation, select the items you want to install and deselect any you're not interested in. For an upgrade, the installer identifies which objects from the content pack are new and which ones already exist in your environment from a previous installation. You can selectively choose which objects to install from the new version, or install them all.|
|Choose a conflict resolution rule for the objects you install|For upgrades or subsequent installs, decide what happens to duplicate objects introduced from the content pack. Choose from the following options: Install as new - Objects are installed and any existing identical objects in your environment remain intact. Replace existing - Existing identical objects are replaced with those from the new installation. Any changes you previously made to these objects are overwritten.|
|Import as enabled|Select whether to install objects as enabled or to leave them in their original state. It's recommended that you import objects as disabled to ensure your environment doesn't break from the addition of new content. This setting only applies to services, correlation searches, and aggregation policies. All other objects, such as KPI base searches and saved searches, are installed in their original state regardless of which option you choose.|
|Add a prefix to your new objects|Optionally, add a custom prefix to each object installed from the content pack. For example, you might prefix your objects with CP- to indicate they came from a content pack. This option can help you locate and manage the objects after installation.|
|Backfill service KPIs|Optionally backfill your ITSI environment with the previous seven days of KPI data. Consider enabling backfill if you want to configure adaptive thresholding and Predictive Analytics for the new services. This setting only applies to KPIs, not service health scores.|
- When you're satisfied with your selections, click Install selected.
- Click Install to confirm the installation. When the installation completes you can view all objects that were successfully installed in your environment. A green check mark in the main Content Library list indicates which content packs you've already installed.
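If you added a prefix such as CP- during installation, one way to spot-check the result is to list saved-search-backed objects (for example, correlation searches) that carry that prefix. A sketch using the rest command; the CP- prefix is only an example:

```spl
| rest /services/saved/searches splunk_server=local
| search title="CP-*"
| table title, eai:acl.app
```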
(Optional) Change the module macro definition for indexes
The ITSI Operating System Module includes dashboards displaying OS metrics and other data. You can edit the module's search macro to populate the module's dashboards with data collected using the approaches in this content pack. Add the default indexes that you're using for data collection. For more information, see About the Operating System Module in the ITSI Modules Manual.
- From Splunk Web, click Settings > Advanced Search > Search macros.
- In the filter bar, search for itsi_os_module_indexes.
- Select the itsi_os_module_indexes macro.
- In the Definition field, add all of the indexes that you're using for data collection from add-ons combined with OR operators.
```
(index=windows OR index=perfmon OR index=os OR index=<index-name>)
```
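After updating the definition, you can sanity-check the macro by expanding it in a search and confirming that every index in the definition returns data. This is a sketch; it assumes the itsi_os_module_indexes macro name shown above.

```spl
`itsi_os_module_indexes` earliest=-24h
| stats count by index, sourcetype
```

Each index you added to the definition should appear in the results with a nonzero count.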
Enable automatic entity discovery for events
Perform the following steps to ensure that ITSI automatically detects your Unix and Linux hosts. For best results, perform these steps after you configure at least one host to send data to Splunk Enterprise using the Splunk Add-on for Unix and Linux.
- Navigate to ITSI on the search head.
- Click Configuration > Entities.
- Click Create Entity > Import from Search.
- Select Ad hoc search and enter the following search:
```
`itsi-cp-nix-indexes` (sourcetype="Unix:Version" OR source=hardware) earliest=-24h
| eval role="operating_system_host"
| stats latest(family) as family, latest(version) as version, latest(vendor_product) as vendor_product, latest(role) as itsi_role, latest(cpu_cores) as cpu_cores, latest(mem) as memory, latest(cpu_architecture) as cpu_architecture by host
| fields + host, family, version, vendor_product, itsi_role, cpu_cores, memory, cpu_architecture
```
- Click the search icon to run the search and confirm that one or more hosts are shown with all columns populated.
- Click Next.
- In the Import Column As column, set the host field to Entity Title. Set all other fields to Entity Information Field.
- Set Conflict Resolution to Update Existing Entities and set the Conflict Resolution Field to host.
- Click Import.
- After the import completes, click Set up Recurring Import.
- Name the recurring import ITSI discovery of Unix and Linux servers and set the frequency based on the needs of your deployment. Use Run on cron Schedule for maximum flexibility.
- Click Submit.
ITSI creates a new modular input for the recurring import.
(Optional) Update the index search macro with custom index
You can update the index search macro with a custom index.
- You must have the admin role to update the index search macro.
- You must know the names of the indexes that your organization uses to send data from the Add-on to your Splunk platform deployment.
- From Splunk Web, select Settings > Advanced Search > Search Macros.
- Configure the custom index settings as shown in the following table:
|Macro Name|Index Type|Default Macro Definition|Macro Definition|
|---|---|---|---|
| | | |All of the indexes that you're using for data collection from add-ons combined with OR operators.|
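As an illustration of a definition that combines indexes with OR operators, suppose your organization sends add-on data to custom indexes named unix_events and unix_custom (hypothetical names). The macro definition would look like this:

```spl
(index=unix_events OR index=unix_custom)
```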
Update the KPI base search macro with a new definition based on data ingested
If you're not using the recommended data ingestion method (metrics data), you must update four macros to use this content pack. Refer to the following sections and update the macros based on your data ingestion method.
Update the macros to use events data
If you're ingesting events data, edit the following macros with the new definition to use this content pack:
|Macro Name|Current Definition|Updated Definition|
|---|---|---|
Update the macros to use the mixed mode of data
If you're using different data ingestion methods on different Unix hosts, edit the following macros with the updated definition to use this content pack.
You might get truncated results when using this method.
|Macro Name|Current Definition|Updated Definition|
|---|---|---|
Tune KPI base searches
This content pack ships with the following KPI base searches:
Each search runs every 5 minutes with a 5-minute calculation window and uses only the latest value on a per-entity basis. The 5-minute calculation window ensures that you won't see N/A for less frequent data. Using the latest value means that the KPI status refreshes as quickly as possible for data collected more frequently.
You must review and tune all base searches to run at a frequency that matches your data collection interval.
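To find the interval at which a host actually sends data, you can measure the gap between consecutive events per host and compare it to the 5-minute search schedule. A sketch, assuming the add-on's default os index and the vmstat sourcetype; substitute your own index and sourcetype:

```spl
index=os sourcetype=vmstat earliest=-4h
| streamstats current=f last(_time) as next_time by host
| eval interval_secs = next_time - _time
| stats avg(interval_secs) as avg_interval_secs by host
```

If the average interval is much longer than 5 minutes, lengthen the base search's schedule and calculation window to match.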
Tune KPI thresholds
Aggregate KPI thresholds use Normal, Medium, and Low levels, while per-entity thresholds except for available disk space don't exceed the Medium level. Lower threshold levels for OS-level monitoring allow application-level KPIs to take a more prominent threshold level. For example, a server at 100% CPU isn't a critical issue if the apps running on that server are responding normally.
Aggregate threshold values are calculated for general use only. You must tune these threshold values according to your environment. Use the sample service that's linked to the Unix and Linux server health service template to validate your thresholds. For more information, see Overview of creating KPIs in ITSI in the Service Insights manual.
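One way to choose aggregate threshold values is to examine the distribution of historical KPI values recorded in the ITSI summary index. A sketch; the KPI name shown is a placeholder for one of your installed KPIs:

```spl
index=itsi_summary kpi="CPU Utilization: %" earliest=-7d
| stats avg(alert_value) perc90(alert_value) perc95(alert_value) max(alert_value)
```

Setting the Medium threshold near the 90th or 95th percentile of observed values is a reasonable starting point that you can refine over time.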
This documentation applies to the following versions of Content Pack for Monitoring Unix and Linux: 1.2.0