Troubleshoot the Splunk Add-on for ServiceNow
For troubleshooting tips that you can apply to all add-ons, see Troubleshoot add-ons in Splunk Add-ons. For additional resources, see Support and resource links for add-ons in Splunk Add-ons.
KV Store error when collecting data
If you encounter the following error in your splunk_ta_snow_main.log file:
Error occurred while updating the start_timestamp for the input: xyz:HTTP 503 Service Unavailable -- KV Store is disabled.
You must enable your KV Store service to resume data collection.
Cannot launch add-on
This add-on does not have views and is not intended to be visible in Splunk Web. If you are trying to launch or load views for this add-on and you are experiencing results you do not expect, turn off visibility for the add-on.
For more details about add-on visibility and instructions for turning visibility off, see the Check if the add-on is intended to be visible or not section of the Splunk Add-ons Troubleshooting topic.
Cannot access configuration page
If you are trying to reach the setup page but cannot see a link to it on your instance, confirm that you are signed in with an account that is a member of the admin or sc_admin role.
Find relevant errors
Search for the following event types to find errors relevant to the Splunk Add-on for ServiceNow.
- Search eventtype=snow_ta_collector_error for errors related to data collection from ServiceNow.
- Search eventtype=snow_ticket_error for errors related to creating events or incidents in ServiceNow from the Splunk platform.
- Search eventtype=ta_frwk_error for errors related to low-level functions of the add-on.
- Search eventtype=snow_ta_log_error for errors related to the add-on as well as account and input configuration.
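You can also combine these event types in a single search to review all add-on errors at once. The following search is a sketch that uses only the event types listed above:
eventtype=snow_ta_collector_error OR eventtype=snow_ticket_error OR eventtype=ta_frwk_error OR eventtype=snow_ta_log_error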
Missing data
If you do not receive data from all of your enabled inputs, check that the ServiceNow account that you are using to connect to your ServiceNow instance from the Splunk platform has, at minimum, read-only access to all of the database tables from which you are attempting to collect data. Then, disable and re-enable the inputs for which you are not receiving data.
To validate your permissions:
- Edit the following URL, replacing <myservicenowinstance>.service-now.com with your ServiceNow instance name:
https://<myservicenowinstance>.service-now.com/<service_now_table>.do?JSONv2&sysparm_query=sys_created_on>=2016-01-01+00:00:00^ORDERBYsys_created_on&sysparm_record_count=50
- Change service_now_table to the ServiceNow table you are trying to query.
- Change 2016-01-01 to the actual date you want to query from.
- Paste the URL into a browser.
- When prompted, log in with the same username and password that you use for the integration account in the add-on.
If you receive the historical data you expect and a sys_updated_on field for each event, you have the correct permissions.
Turn off SSL certificate communication
Communication with ServiceNow is performed over HTTPS, and SSL certificate validation is enabled by default. If your ServiceNow data collection uses unencrypted communication (without certificate checks), you must disable the SSL check flag in splunk_ta_snow_account.conf when upgrading the Splunk Add-on for ServiceNow.
- Navigate to $SPLUNK_HOME/etc/apps/Splunk_TA_snow/local and create a splunk_ta_snow_account.conf file if it does not already exist.
- Set disable_ssl_certificate_validation=1, as shown in the example after these steps.
- Save your changes.
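The resulting stanza might look like the following minimal sketch. The stanza name snow_account is an assumption for illustration; use the name of the account you configured in the add-on:
[snow_account]
disable_ssl_certificate_validation = 1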
Custom search commands or alert-triggered scripts fail with no results
Check that you have successfully integrated your ServiceNow instance with your Splunk platform instances. If the configuration is unsuccessful, your searches will return "No results found" and the Splunk software logs a u_splunk_incident does not exist error, which you can find by searching for eventtype=snow_ticket_error.
If your integration is successful, but incident and event creation fails, run the search "eventtype=snow_ticket_error"
to see what errors are reported. If the failure reason is error code 302, review the ServiceNow URL that you entered in the Setup page to make sure it is correct and does not end with any special characters or trailing slashes.
See Configure ServiceNow to integrate with the Splunk platform to learn more.
Errors for data collection for specific database tables
If you are missing data for a specific database table, check your splunk_ta_snow_main.log file.
- "Failure occurred...Not Found" means that the database table might not have any records.
- "Failure occurred...bad request" means that the database table might not exist.
Missing fields after upgrading to Splunk Add-on for ServiceNow 4.0.0
If you have ServiceNow data indexed into your Splunk instance after upgrading to Splunk Add-on for ServiceNow 4.0.0 from an earlier version, the following panels in the Splunk App for ServiceNow do not display the existing data correctly. Any newly indexed data is not impacted.
- Change Ticket Lookup under cmdb
- Incident Ticket Lookup under cmdb
- Incident Count by Location under Incidents > Open Incidents by Geography
If fields are missing or new fields start with "dv" after upgrading, see Upgrade.
Remove deleted configuration items from the configuration management database lookups
The ServiceNow API for the configuration management database (CMDB) does not report which configuration items (CIs) have been deleted from the CMDB. As a result, the Splunk platform does not remove deleted CIs from the CMDB lookups. You can remove the deleted CIs from the lookups manually:
- Enable the data collection for sys_audit_delete:
  - Navigate to the Inputs tab in the Splunk Add-on for ServiceNow.
  - Configure and enable the sys_audit_delete data input.
- Create a saved search:
  - Create a saved search with the name "ServiceNow Sys Delete List" and the following search string:
sourcetype="snow:sys_audit_delete" | stats count by tablename,documentkey | rename documentkey as sys_id
  - Set Earliest as 0 and Latest as now.
  - Check the Accelerate this search check box and select All Time as the Summary Range.
  - Save the search.
  - Set the saved search to Global.
- After creating the saved search, update each existing CMDB saved search. This change matches the lookup IDs against the sys_audit_delete table IDs and removes the deleted IDs from the lookup. In this example, the saved search is named "ServiceNow CMDB CI Server":
eventtype=snow_cmdb_ci_server | dedup sys_id | fields - _bkt, _cd,_indextime,_kv,_raw,_serial,_si,_sourcetype,_subsecond, punct, index, source, sourcetype | inputlookup append=t cmdb_ci_server_lookup | dedup sys_id | outputlookup cmdb_ci_server_lookup
Add the following to each query:
| join max=0 type=left sys_id [ | savedsearch "ServiceNow Sys Delete List" | eval sys_id_delete=sys_id | table sys_id,sys_id_delete ] | where isnull(sys_id_delete)
Modified query:
eventtype=snow_cmdb_ci_server | dedup sys_id | fields - _bkt, _cd,_indextime,_kv,_raw,_serial,_si,_sourcetype,_subsecond, punct, index, source, sourcetype | join max=0 type=left sys_id [ | savedsearch "ServiceNow Sys Delete List" | eval sys_id_delete=sys_id | table sys_id,sys_id_delete ] | where isnull(sys_id_delete) | dedup sys_id | outputlookup cmdb_ci_server_lookup
Repeat this procedure for each of the following saved searches:
- ServiceNow CMDB CI List
- ServiceNow CMDB CI Server
- ServiceNow CMDB CI VM
- ServiceNow CMDB CI Infra Services
- ServiceNow CMDB CI Database Instances
- ServiceNow CMDB CI App Servers
- ServiceNow CMDB CI Relation
- ServiceNow CMDB CI Services
ServiceNow data collection stops after upgrading Splunk Add-on for ServiceNow to 4.0.0 or later
See SSL certificate issues to collect data over encrypted communication.
For an on-premises installation that collects data over unencrypted communication, the message "Data collection over unencrypted communication is unsecured" displays. See Turn off SSL certificate communication.
Make sure you have followed the steps in Upgrade the Splunk Add-on for ServiceNow. To check whether data is indexing, run this search:
index="_internal" sourcetype="ta_snow"
.
If a configuration is missing, one of the following log messages displays:
- No configured inputs found. To collect data from ServiceNow, configure new input(s) or update existing input(s) either from Inputs page of the Add-on or manually from inputs.conf.
  This message indicates that no inputs are enabled. Go to the Inputs page and configure new inputs or update existing inputs.
- No account configurations found for this add-on. To start data collection, configure new account on Configurations page and link it to an input on Inputs page. Exiting TA.
  This message indicates that no account is configured. You must configure an account and link it to an input.
- No ServiceNow account linked to the data input <input_name>. To resume data collection, either configure new account on Configurations page or link an existing account to the input on Inputs page.
  This message indicates that an account is configured, but not linked to your input. You must link the specified input to the account.
- 2018-12-28 17:41:58,703 ERROR pid=2953 tid=MainThread file=snow.py:stream_events:471 | Error Traceback (most recent call last): File "/opt/splunk_snow_test/splunk/etc/apps/Splunk_TA_snow/bin/snow.py", line 352, in stream_events splunk_ta_snow_account_conf = account_cfm.get_conf("splunk_ta_snow_account", refresh=True).get_all() File "/opt/splunk_snow_test/splunk/etc/apps/Splunk_TA_snow/bin/Splunk_TA_snow/solnlib/utils.py", line 154, in wrapper return func(*args, **kwargs) File "/opt/splunk_snow_test/splunk/etc/apps/Splunk_TA_snow/bin/Splunk_TA_snow/solnlib/conf_manager.py", line 241, in get_all key_values = self._decrypt_stanza(name, stanza_mgr.content) File "/opt/splunk_snow_test/splunk/etc/apps/Splunk_TA_snow/bin/Splunk_TA_snow/solnlib/conf_manager.py", line 126, in _decrypt_stanza self._cred_mgr.get_password(stanza_name)) File "/opt/splunk_snow_test/splunk/etc/apps/Splunk_TA_snow/bin/Splunk_TA_snow/solnlib/utils.py", line 154, in wrapper return func(*args, **kwargs) File "/opt/splunk_snow_test/splunk/etc/apps/Splunk_TA_snow/bin/Splunk_TA_snow/solnlib/credentials.py", line 126, in get_password (self._realm, user)) CredentialNotExistException: Failed to get password of realm=__REST_CREDENTIAL__#Splunk_TA_snow#configs/conf-splunk_ta_snow_account, user=<account_name>.
  This message indicates that the account name was changed from the back end, which is not a best practice. In this case, re-enter the password for this account to resume data collection.
Unable to create an incident/event on your ServiceNow instance
If you are unable to create an incident, complete the following steps:
- Perform the following search to check the error message in the internal logs for ServiceNow:
index=_internal sourcetype="ta_snow_ticket" "One of the possible causes of failure is absence of event management plugin or Splunk Integration plugin"
- Check for this error message:
Failed to create ticket. Return code is 400. Reason is Bad Request. One of the possible causes of failure is absence of event management plugin or Splunk Integration plugin on the ServiceNow instance. To fix the issue install the plugin(s) on ServiceNow instance.
- When you see this message, you need to install the Splunk Integration/Event Management plugin on your ServiceNow instance. See Configure ServiceNow to integrate with Splunk Enterprise.
Priority value set in ServiceNow incidents is different from what was passed (alert actions, custom commands, custom streaming commands)
By default, ServiceNow calculates priority from the combination of impact and urgency values. Priority values passed in alert actions, custom commands, or custom streaming commands are set on incidents only if there is no business rule configured on your ServiceNow instance to calculate priority for your Incident table.
Unable to update the account after upgrading to version 6.0.0 of the Splunk Add-on for ServiceNow
If you are unable to update your account after upgrading to version 6.0.0 of the Splunk Add-on for ServiceNow, verify that your account name contains only alphanumeric values. Versions 6.0.0 and later do not allow spaces and special characters. Account names that contain values other than alphanumeric values will not be updated.
UI does not load after upgrading add-on
If the user interface (UI) does not load and you see errors in your splunkd.log similar to the following example:
from framework import state_store as ss File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/framework/state_store.py", line 7, in <module> from . import kv_client as kvc File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/framework/kv_client.py", line 7, in <module> from . import rest File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/framework/rest.py", line 13, in <module> from httplib2 import (socks, ProxyInfo, Http) File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/httplib2/__init__.py", line 897 print "connect: (%s, %s) ************" % (self.host, self.port)
Verify that httplib2 is not present in the $SPLUNK_HOME/etc/apps/Splunk_TA_snow/bin/ directory (it should not be present in versions of the Splunk Add-on for ServiceNow later than 2.7.0). If httplib2 is found in that location on a version later than 2.7.0, remove it and restart Splunk.
Seeing dv_<field> for the fields mentioned in Exclude properties
The Exclude properties setting can only exclude fields that exactly match. For example, mentioning the field contact in the parameter does not exclude dv_contact. If you do not want to see these fields in the data that is being collected, append a comma-separated list of dv_<field> values to the Exclude properties setting.
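For example, to exclude both a field and its display-value counterpart, the Exclude properties value might look like the following (the field name is illustrative):
contact,dv_contact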
Same correlation_id is generated in alert action when using the same alert action name in different Splunk instances
If you have multiple Splunk platform instances pointing to a common ServiceNow instance, and you have created alert actions with the same name on multiple Splunk instances, the same Correlation ID is set, based on the alert action's name. To resolve this, enter a unique Correlation ID for each alert action on each Splunk platform instance.
JSONDecodeError while collecting data and data loss
If you experience the following error in your $SPLUNK_HOME/var/log/splunk/splunk_ta_snow_main.log file:
ERROR pid=9180 tid=Thread-233 file=snow_data_loader.py:_json_to_objects:305 | Obtained an invalid json string while parsing.Got value of type <class 'bytes'>. Traceback : Traceback (most recent call last): File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/snow_data_loader.py", line 302, in _json_to_objects json_object = json.loads(json_str) File "/opt/splunk/lib/python3.7/json/__init__.py", line 348, in loads return _default_decoder.decode(s) File "/opt/splunk/lib/python3.7/json/decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/opt/splunk/lib/python3.7/json/decoder.py", line 353, in raw_decode obj, end = self.scan_once(s, idx) json.decoder.JSONDecodeError: Expecting ':' delimiter: line 1 column 3703855 (char 3703854)
This issue arises only when multiple frequent API calls are being made. Make sure the rate of API calls made on the instance is low enough to prevent semaphore wait times, which can cause an invalid JSON string to be returned in the response. Then attempt to collect the missing data again from the point where data collection stopped for the affected input.
Note that data duplication is possible, so perform the following steps:
- Clone the affected input.
- Set Start date to the time of the last event collected.
- Enable the input.
Set up the scripted REST endpoint in ServiceNow
Ensure the following when you set up the scripted REST endpoint:
- The supported request format and supported response format both contain application/json.
- The response returned is in JSON format.
- To use this feature with ITSI, you must include an incident number and correlation ID in the response. Send the incident number using the number key and the correlation ID using the correlation_id key in the JSON response.
- The HTTP method is POST and the endpoint is active.
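For reference, the following is a minimal sketch of a JSON response body that includes the required keys. The values are illustrative, and whether you wrap the payload (for example, in a result object) depends on how your endpoint script builds the response:
{"number": "INC0010001", "correlation_id": "<your_correlation_id>"}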
How to use the scripted REST endpoint with custom commands or the ServiceNow Incident Integration
You can pass in the value of the scripted REST endpoint using two methods:
- Use this format: /api/<API namespace>/<API ID>/<Relative path>
- Copy the Resource path and paste it into the scripted_endpoint parameter.
Missing data for intermediate updates between data collection cycles
For example, the interval set for collecting data from the incident table is 15 minutes. In these 15 minutes, an incident gets updated 3 times. When the add-on makes the API call, ServiceNow returns only the latest update of that incident, and not all the changes that occurred.
As a workaround, you can decrease the interval set in the ServiceNow add-on to capture updates between data collection cycles.
The "Opened" field post-noon timestamp displays as a pre-noon format on the Incident table
When creating incidents from the ServiceNow add-on, the timestamp of the "Opened" field of the incident is in a 12-hour format. To change it to a 24-hour format, follow these steps:
- From the ServiceNow interface, select Transform Maps under the System Import Sets -> Administration section and search for "Splunk Incident Transformation" under the Name column.
- Click on the Splunk Incident Transformation transform map and in the "Field Maps" section, access the "sys_created_on" selection under the "Source Field" column.
- Click on the "sys_created_on" row.
- Change the Date format field from "yyyy-MM-dd hh:mm:ss" to "yyyy-MM-dd HH:mm:ss" and click Update in the top right corner.
- Click Update again to update the "Splunk Incident Transformation" transform map.
As a result, when you create new incidents from the ServiceNow add-on, the timestamp of the Opened field on the incident table will display as post-noon, as intended.
Maximum Execution Time Exceeded Error
If you experience the following error in your $SPLUNK_HOME/var/log/splunk/splunk_ta_snow_main.log file:
Failure occurred while getting records for the table: cmdb_ci from https://<host>/. The reason for failure= {'message': 'Exception while executing request: Transaction cancelled: maximum execution time exceeded', 'detail': 'Transaction cancelled: maximum execution time exceeded Check logs for error trace or enable glide.rest.debug property to verify REST request processing'}. Contact Splunk administrator for further information.
The issue arises when the transaction timeout limit is exceeded while making API calls on the ServiceNow instance. You can check the transaction timeout limit on the ServiceNow instance by navigating to System Definition > Transaction Quota Rules.
Then select the Transaction Quota rule for which you are seeing the error. To mitigate the issue, perform either of the following steps:
- Increase the Transaction Limit for the Transaction Quota rule for which you are facing the issue on the ServiceNow instance. Refer to the ServiceNow documentation for more information.
- Reduce the record count parameter on the Account configuration page in the add-on configurations. Note that reducing the record count value results in a lower data collection rate.
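To confirm which tables are hitting this limit, you can search the add-on's internal logs. The following search is a sketch that assumes the add-on logs use sourcetype ta_snow, as in the search earlier in this topic:
index=_internal sourcetype="ta_snow" "maximum execution time exceeded"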