Splunk® Enterprise

Inherit a Splunk Enterprise Deployment




Investigate knowledge object problems

Knowledge objects are user-defined entities that enrich your existing data. They include the following objects:

  • Saved searches, reports, and alerts
  • Event types and tags
  • Field extractions, field aliases, and calculated fields
  • Lookups
  • Search macros and workflow actions
  • Data models and table datasets

You manage most knowledge objects through their listing pages in the Search and Reporting view, or through the pages listed under Knowledge in the Settings menu.

Organizations with large Splunk Enterprise deployments often have knowledge managers, people whose roles consist of creating, organizing, and maintaining knowledge objects for other Splunk Enterprise users. See the Knowledge Manager Manual.

Survey your knowledge object landscape

Review the knowledge object collections in your Splunk Enterprise deployment. You can use the knowledge object pages in Settings to review each knowledge object category across all of your installed apps. For example, if you want to look at your saved searches, select Settings > Searches, Reports, and Alerts.

As you review your knowledge objects, note their names, app affiliations, owners, and permission status. Identify knowledge objects that have naming or permissions conflicts, are redundant, or are orphaned.

Knowledge object naming conflicts

As you review a category of knowledge objects, look for two types of nomenclature conflicts:

  • Objects that share the same name but have different definitions.
  • Objects that share the same definition but have different names.

Same name, different definitions

All objects within a knowledge object category must have unique names. For example, there can be no duplicate names among the saved searches on the Searches, reports, and alerts listing page in Settings. Most of these knowledge objects are applied to your search results at search time. If you have two or more objects of the same category with the same name, only one of those objects is applied.

Duplicate names can arise when object permissions change. For example, you can have lookups in two separate apps that have the same name. They do not conflict with each other while they are shared at the app level. However, if one of those lookups has its permissions changed so that it is shared globally, it can be applied in place of the other.

See Give knowledge objects the same names in the Knowledge Manager Manual.

Avoid this problem by establishing naming conventions. See Develop naming conventions for knowledge objects in the Knowledge Manager Manual.

Same definition, different names

If you have multiple knowledge objects in a category that have the same or similar definitions but different names, you have a normalization problem. This is especially common with extracted fields. When you index data from multiple source types, you can end up with several fields that have different names but represent the same kind of data. This can lead to misunderstandings of your indexed data, and you might inadvertently build searches that account for only a portion of the information that you want to capture.

If your Splunk Enterprise deployment has data normalization problems, install the Splunk Common Information Model (CIM) Add-on. The CIM Add-on can help you to normalize the data from multiple source types so that you can develop reports, correlation searches, and dashboards that present unified views of your data domains.
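
A common way to address field-name fragmentation is to alias the differing field names to one standard field. The following props.conf sketch is hypothetical: the source type and field names are assumptions, and the CIM Add-on ships with its own aliases and tags for this purpose.

  # props.conf (hypothetical source types and field names)
  # Both source types carry a client address, but under different field names.
  # Aliasing both to a single field, src, lets one search cover both.

  [vendor_a:firewall]
  FIELDALIAS-normalize_src = source_address AS src

  [vendor_b:firewall]
  FIELDALIAS-normalize_src = client_ip AS src

With aliases like these in place, a single search, for example one that ends with | stats count BY src, reports against both source types.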

See the Splunk Common Information Model Add-on Manual.

Understand your object permissions

As you manage the knowledge objects that you have inherited, ensure that you understand how roles, capabilities, and permissions are set up for your Splunk deployment.

When a user creates a knowledge object, its permissions are private to that user by default. Depending on how your Splunk Enterprise deployment is set up, that user may need to rely on someone with an admin or power user role to share that object with other users and roles.

Permissions and knowledge object interdependencies

Dependency issues between knowledge objects are easier to resolve when all of the objects involved have the same permissions. For example, you can have a private scheduled report that uses the outputlookup command to update a widely used lookup that has global permissions. Over time, your users might find that the lookup behaves unpredictably. Because private knowledge objects are invisible to most users, the cause of the problem can be hard to troubleshoot and resolve.
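
As a minimal sketch of that scenario, the scheduled report below rewrites a lookup file on every run. The search and lookup file name are hypothetical.

  index=_internal sourcetype=splunkd log_level=ERROR
  | stats count BY component
  | outputlookup component_error_counts.csv

If component_error_counts.csv is shared globally but the report that rewrites it is private, the users who rely on the lookup have no visibility into what is changing it.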

For more examples, see Object interdependency considerations.

Other uses of permissions

There are more aspects to permissions than expanding or restricting the visibility of knowledge objects. You can use the permissions features for the following tasks:

  • Use role-based capabilities to restrict or expand the ability to create and edit knowledge objects.
  • Enable roles other than Admin and Power User to set permissions and share objects.
  • Set permissions for knowledge object categories. For example, you can restrict the ability of certain roles to use all event types or all lookups.

To learn more about roles and capabilities see About configuring role-based user access in the Securing Splunk Enterprise manual.

To learn more about knowledge object permissions, see Manage knowledge object permissions in the Knowledge Manager Manual.

Object interdependency considerations

There can be significant interdependencies between groups of objects. An object change or deletion can affect other objects that are dependent on that object. For example, you can have a lookup with a definition that references a custom field extraction. If you change how that field is extracted, it can affect the accuracy of the lookup. If that lookup is used to add fields to a data model dataset, the change will cascade down through all of its child data model datasets.

In many cases the only way to uncover knowledge object interdependencies is by studying your object definitions, or by analyzing the downstream object breakages that occur when upstream objects are changed, disabled, or deleted. See Disable or delete knowledge objects in the Knowledge Manager Manual.

The sequence of search time operations

If you have interdependent knowledge objects, it is important to understand the sequence of search-time operations. At search time, the Splunk software applies knowledge objects to the results of a search in a specific order. This means that a knowledge object cannot depend on another object that is applied later in the sequence. If some of your interdependent knowledge objects do not work, this is a possible cause.

For example, the Splunk software applies custom field extractions to search results before it processes lookups. This means that a lookup can have a definition that refers to a field that is extracted at search time. However, a custom field extraction cannot use a lookup-derived field in its definition, because that lookup field does not yet exist. It is derived only after the custom field extraction is processed.
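
The following props.conf sketch illustrates this ordering. The source type, extraction, and lookup names are hypothetical, and the user_info_lookup definition is assumed to exist in transforms.conf.

  # props.conf (hypothetical names)
  [access_combined]
  # Custom field extractions run early in the search-time sequence.
  EXTRACT-user = user=(?<user_id>\w+)
  # Automatic lookups run later, so this lookup can reference the extracted user_id field.
  LOOKUP-user_dept = user_info_lookup user_id OUTPUT department
  # The reverse is not possible: an EXTRACT- setting cannot be written against
  # department, because lookup output fields do not exist when extractions run.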

See The sequence of search-time operations in the Knowledge Manager Manual.

Lookup object interdependencies

Lookups can involve knowledge object interdependency by design.

Following are the three knowledge object categories related to lookups:

  • Lookup definitions
  • Lookup table files
  • Automatic lookups

Any object from these categories can be assigned its own permissions and sharing status.

You can use these knowledge object categories to create the following lookup types:

  • CSV lookups
  • External lookups
  • KV store lookups
  • Geospatial lookups

All lookup types require a lookup definition. Two of the lookup types, CSV and geospatial, require a lookup table file. You can optionally create an automatic lookup for all lookup types.
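
As a hypothetical example of how these categories fit together for a CSV lookup (all names below are assumptions), a lookup definition in transforms.conf points at the lookup table file, and an optional automatic lookup in props.conf applies the definition to a source type:

  # transforms.conf -- the lookup definition, which references the lookup table file
  # (http_status.csv must exist in the app's lookups directory)
  [http_status_lookup]
  filename = http_status.csv

  # props.conf -- an optional automatic lookup that applies the definition
  # to every search against this source type
  [access_combined]
  LOOKUP-http_status = http_status_lookup status OUTPUT status_description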

Use caution when deleting or modifying lookup objects. A lookup table file can be associated with multiple lookup definitions. A lookup definition can also be associated with multiple automatic lookups.

You can run into permissions issues with lookups. If a lookup table file has permissions that are more restrictive than the definitions that it is associated with, the lookup does not work. The same is true for lookup definitions and automatic lookups.

  • The permissions of a lookup table file should be wider than or equal to the permissions of the lookup definitions it is associated with.
  • The permissions of a lookup definition should be wider than or equal to the permissions of the automatic lookups it is associated with.
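
For example, an app's metadata/local.meta file might look like the following sketch, which satisfies both rules. The object names are assumptions: the lookup table file is shared globally while the definition that references it stays at the app level.

  # metadata/local.meta (hypothetical object names)
  # The lookup table file is shared globally (export = system) ...
  [lookups/http_status.csv]
  export = system
  access = read : [ * ], write : [ admin ]

  # ... so the narrower, app-level lookup definition can always reach it.
  [transforms/http_status_lookup]
  export = none
  access = read : [ * ], write : [ admin ]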

See Introduction to lookup configuration in the Knowledge Manager Manual.

Data model dataset hierarchies

Data models can be hierarchically organized collections of data model datasets with parent/child relationships. A change to a parent dataset cascades down through all of the child datasets that are descended from it. You can view these relationships with the Data Model Editor.

See Design data models in the Knowledge Manager Manual.

Dataset extension

All dataset types (lookup, data model, and table) can be extended as table datasets. When a dataset is extended, the original dataset has a parent relationship to the table dataset that is extended from it. A change to the original dataset affects any datasets that are extended from it. You can see which datasets a dataset is extended from by expanding its row on the Datasets listing page.

See Dataset extension in the Knowledge Manager Manual.

Find orphaned objects

When the Splunk account of a knowledge object owner is deactivated, the knowledge objects that they owned remain in the system. These orphaned objects can cause problems, because they can adversely affect other objects that depend on them.

Orphaned scheduled reports are especially problematic. The search scheduler cannot run a scheduled report on behalf of a nonexistent user. This affects dashboard panels and embedded reports that use the scheduled report. If the results of the report are sent by email to stakeholders, those emails cease.

Splunk Enterprise provides several methods of detecting orphaned knowledge objects, especially orphaned scheduled searches. When you find orphaned knowledge objects, you can use the Reassign Knowledge Objects page to reassign one or more of those knowledge objects to a new owner.
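
As a quick supplemental check, you can compare saved search owners against the current user list with a search like the following sketch. It assumes your role can use the rest command, and it treats app-owned objects with the owner nobody as not orphaned.

  | rest /servicesNS/-/-/saved/searches count=0
  | rename "eai:acl.owner" AS owner, "eai:acl.app" AS app
  | where owner!="nobody"
  | search NOT [| rest /services/authentication/users count=0 | rename title AS owner | fields owner]
  | table title app owner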

See Manage orphaned knowledge objects in the Knowledge Manager Manual.

Scheduled searches and search concurrency

If your Splunk Enterprise deployment depends on a large number of scheduled reports and alerts, check whether it is encountering search concurrency issues.

All Splunk Enterprise deployments have limits on the number of scheduled searches that can run concurrently. Once this limit is reached, a background process called the search scheduler prioritizes the excess reports and runs them as other scheduled reports and alerts complete their runs.
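
These limits come from settings in limits.conf. The following sketch shows the relevant settings with typical default values; verify the defaults for your Splunk Enterprise version before changing anything.

  # limits.conf
  [search]
  # Maximum concurrent historical searches =
  #   (max_searches_per_cpu x number of CPU cores) + base_max_searches
  max_searches_per_cpu = 1
  base_max_searches = 6

  [scheduler]
  # Percentage of the overall search limit that the scheduler can use
  # for scheduled reports and alerts.
  max_searches_perc = 50
  # Percentage of the scheduler's share that summarization searches can use.
  auto_summary_perc = 50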

The goal of the search scheduler is to have each scheduled report and alert run at some point within its period, over the time range it was originally scheduled to cover. However, you can encounter situations where certain reports regularly skip their scheduled runs.

Use the Monitoring Console to identify scheduler issues

You can use the Monitoring Console to identify searches that are frequently skipped, or that are causing other searches to be frequently skipped. You can also use the Monitoring Console to see your system-wide concurrent search limits. See Scheduler activity in Monitoring Splunk Enterprise.

Reduce the number of skipped reports

You can reduce the number of skipped scheduled reports by applying either the Schedule Window or Schedule Priority settings to scheduled reports. These settings are mutually exclusive. Apply a schedule window to low-importance reports to enable other reports to run ahead of them. Use schedule priority to improve the run priority of high-value reports. See Prioritize concurrently scheduled reports in Splunk Web in the Knowledge Manager Manual.
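
You can apply these settings in Splunk Web or directly in savedsearches.conf, as in the following hypothetical sketch. The report names are assumptions.

  # savedsearches.conf (hypothetical report names)

  # Low-importance report: let the scheduler delay it by up to 30 minutes.
  [Weekly license usage summary]
  schedule_window = 30

  # High-value alert: run it ahead of other scheduled searches.
  [Critical outage alert]
  schedule_priority = higher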

Report, data model, and dataset acceleration

You might have inherited a Splunk Enterprise deployment that uses acceleration to improve the performance of reports, data models, and table datasets. Your deployment might include apps that are delivered with accelerated data models and reports by default, such as Splunk Enterprise Security, Splunk IT Service Intelligence, and the Common Information Model Add-on.

If your deployment has other objects accelerated beyond the ones provided by your apps and add-ons, verify that they are functioning correctly and determine whether their summaries are needlessly using valuable disk space.

On the listing pages for reports, data models, and table datasets, acceleration is indicated by a yellow lightning bolt symbol.

For an overview of the summary-based acceleration options offered by Splunk, see Overview of summary-based search acceleration in the Knowledge Manager Manual.

Review report acceleration summaries

You can access report acceleration summary statistics by selecting Settings > Report acceleration summaries.

Identify unnecessary summaries
On the Report Acceleration Summaries page in Settings, look for summaries with a high Summarization Load and a low Access Count. Consider removing these summaries. Those statistics indicate that they are using a lot of system resources but are not being accessed often.

You can click through to the Summary Details of a particular summary to see its Size on Disk. Infrequently used summaries that take up a lot of space are also good removal candidates.

Identify dysfunctional summaries
On the Report Acceleration Summaries page in Settings, if the Summary Status of a summary is Suspended or Not enough data to summarize, the summary might have problems that need to be resolved. The Splunk software might not create summaries when it projects that they will be too large.

Resolve summary issues
  1. Select Settings > Report acceleration summaries.
  2. Find the summary that has problems and open its detail page by clicking its Summary ID.
  3. (Optional) Click Verify if you suspect the summary contains inconsistent data.
  4. (Optional) Click Update if the summary has not been updated in some time and you want to make it current.
  5. (Optional) Click Rebuild to rebuild summaries that fail verification or that seem to have data loss issues. Rebuilds of significantly large summaries can be time-consuming.

Delete unnecessary summaries
  1. Select Settings > Report acceleration summaries.
  2. Find the summary that should be removed and open its detail page by clicking its Summary ID.
  3. Click Delete to remove the summary.

See Manage report acceleration in the Knowledge Manager Manual.

Investigate data model and dataset summaries

You can manage acceleration for data models and table datasets through the Data Models page in Settings. Expand the rows of the data models and table datasets to see their stats.

Look for summaries with low Access Count numbers and high Size on Disk numbers. Consider removing these summaries, or reducing their summary windows so that they do not take up as much disk space.
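
Data model acceleration is controlled in datamodels.conf. The following sketch is hypothetical (the data model name and summary range are assumptions) and shows a summary range reduced to seven days to limit disk usage.

  # datamodels.conf (hypothetical data model name)
  [Web_Traffic]
  acceleration = 1
  # Keep the summary range short so the summary uses less disk space.
  acceleration.earliest_time = -7d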

If you have summaries with build processes that are not completing, refer to Accelerate data models in the Knowledge Manager Manual. It discusses advanced configurations that can help you resolve these issues.

Review size-based summary retention rules

Report, data model, and table dataset summaries can use an unbounded amount of disk space over time. Your deployment might have size-based retention rules configured to prevent that.

Review these configurations, identify the summaries that are affected by them, and evaluate whether the configurations need to be updated or removed.
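
Size-based retention for summaries is typically configured through volume settings in indexes.conf. The following is a minimal sketch that assumes data model acceleration summaries are stored in the default _splunk_summaries volume; the size cap is an example value only.

  # indexes.conf
  # Cap the total disk space used by data model acceleration summaries.
  [volume:_splunk_summaries]
  path = $SPLUNK_DB
  maxVolumeDataSizeMB = 100000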

For information about report acceleration summary retention configurations, see Manage report acceleration in the Knowledge Manager Manual.

For information about data model and table dataset summary retention configurations see Accelerate data models in the Knowledge Manager Manual.

Check parallel summarization for data models and table datasets

Parallel summarization is a background process that increases the speed with which the Splunk software builds acceleration summaries for data models and table datasets. It does this by running concurrent searches to build the summaries. It is enabled by default for all Splunk Enterprise deployments.

If you are encountering persistent search concurrency or search performance issues, check whether your predecessor has raised the parallel summarization setting above its default. It might be running more concurrent searches than your system can support.
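
For data models, the degree of parallelism is set per data model in datamodels.conf. The following sketch is hypothetical; the data model name is an assumption, and you should check the datamodels.conf specification for your version to confirm the setting's default value.

  # datamodels.conf (hypothetical data model name)
  [Web_Traffic]
  acceleration = 1
  # Limit how many concurrent searches can build this data model's summaries.
  acceleration.max_concurrent = 2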

See Accelerate data models in the Knowledge Manager Manual.

Evaluate your summary indexes

Summary indexing is an alternative report speed-up method that you can use for reports that do not qualify for report acceleration. If your deployment uses summary indexing, it has indexes that are used specifically for summary indexing. Review these summary indexes and the searches that populate them with data, and evaluate whether they can be replaced by report acceleration summaries.
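
A summary indexing setup typically pairs a scheduled search that uses an si- command with the summary index action, as in the following hypothetical savedsearches.conf sketch. The report name, search, and summary index name are assumptions.

  # savedsearches.conf (hypothetical names)
  [Hourly web status rollup]
  search = index=web sourcetype=access_combined | sistats count BY status
  enableSched = 1
  cron_schedule = 5 * * * *
  dispatch.earliest_time = -1h@h
  dispatch.latest_time = @h
  # Write each run's results to a dedicated summary index.
  action.summary_index = 1
  action.summary_index._name = summary_web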

See Overview of summary-based search acceleration in the Knowledge Manager Manual.
