
Troubleshoot the Splunk Add-on for NetApp Data ONTAP

See the following troubleshooting tips if you're running into issues with the Splunk Add-on for NetApp Data ONTAP.

Troubleshoot your environment

To troubleshoot your environment, set the worker_log_level field in hydra_node.conf to change the log level for a data collection node. The default log level for a data collection node is INFO; DEBUG is the most verbose logging level.

  1. On the search head that administers the Distributed Collection Scheduler, create a local version of hydra_node.conf.
  2. Edit $SPLUNK_HOME/etc/apps/Splunk_TA_ontap/local/hydra_node.conf to set the log level for all data collection nodes, as in the following example:
[default]
gateway_port = 8008
capabilities = *
log_level = DEBUG
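
If you want a feel for what the change does, the behavior is the same as standard Python log-level filtering: at the default INFO level, DEBUG messages are dropped, and at DEBUG everything is written. The following standalone sketch uses Python's logging module purely as an illustration; the logger name is made up and this is not the add-on's own code.

import logging

# A throwaway logger; "hydra_worker_demo" is a hypothetical name used only for illustration.
logger = logging.getLogger("hydra_worker_demo")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
logger.addHandler(handler)

# INFO is the default level for a data collection node: DEBUG messages are suppressed.
logger.setLevel(logging.INFO)
logger.debug("polling performance counters")   # not emitted
logger.info("worker started")                  # emitted

# log_level = DEBUG is the most verbose setting: everything is emitted.
logger.setLevel(logging.DEBUG)
logger.debug("polling performance counters")   # now emitted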

Distribute API requests across multiple data collection nodes

Distribute API requests across multiple data collection nodes (DCNs) to improve collection processing speed and to reduce collection failures. See the "Distribute API requests across multiple data collection nodes" section of the Configure inputs topic in this manual.

Troubleshoot hydra scheduler and hydra worker error logs: ValueError: unsupported pickle protocol: 3

Problem

You receive the following error in the hydra worker logs:

 [ta_ontap_collection_worker://gamma:1361] Problem with hydra worker ta_ontap_collection_worker://gamma:1361: unsupported pickle protocol: 3
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_worker.py", line 618, in run
    self.establishMetadata()
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_worker.py", line 64, in establishMetadata
    metadata_stanza = HydraMetadataStanza.from_name("metadata", self.app, "nobody")
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/models.py", line 610, in from_name
    host_path=host_path)
  File "/opt/splunk/lib/python2.7/site-packages/splunk/models/base.py", line 557, in get
    return self._from_entity(entity)
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/models.py", line 345, in _from_entity
    obj.from_entity(entity)
  File "/opt/splunk/lib/python2.7/site-packages/splunk/models/base.py", line 926, in from_entity
    super(SplunkAppObjModel, self).from_entity(entity)
  File "/opt/splunk/lib/python2.7/site-packages/splunk/models/base.py", line 684, in from_entity
    return self.set_entity_fields(entity)
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/models.py", line 544, in set_entity_fields
    from_api_val = wildcard_field.field_class.from_apidata(entity, entity_attr)
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/models.py", line 123, in from_apidata
    obj = cPickle.loads(b64decode(val))
ValueError: unsupported pickle protocol: 3

You receive the following error in the hydra scheduler logs:

ERROR [ta_ontap_collection_scheduler://nidhogg] [HydraWorkerNode] node=https://10.0.12.234:8089 is dead, because some weird stuff happened: unsupported pickle protocol: 3
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_scheduler.py", line 1452, in setMetadata
    self.session_key)
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/models.py", line 610, in from_name
    host_path=host_path)
  File "/opt/splunk/lib/python2.7/site-packages/splunk/models/base.py", line 557, in get
    return self._from_entity(entity)
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/models.py", line 345, in _from_entity
    obj.from_entity(entity)
  File "/opt/splunk/lib/python2.7/site-packages/splunk/models/base.py", line 926, in from_entity
    super(SplunkAppObjModel, self).from_entity(entity)
  File "/opt/splunk/lib/python2.7/site-packages/splunk/models/base.py", line 684, in from_entity
    return self.set_entity_fields(entity)
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/models.py", line 544, in set_entity_fields
    from_api_val = wildcard_field.field_class.from_apidata(entity, entity_attr)
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/models.py", line 123, in from_apidata
    obj = cPickle.loads(b64decode(val))
ValueError: unsupported pickle protocol: 3

Cause

The add-on cannot deserialize a Python object that was serialized with a different Python version from the one the add-on is running. For example, the add-on running on Python 2 cannot deserialize an object that was serialized (pickled) by Python 3.
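
The mismatch is visible in the pickle stream itself. Here is a minimal sketch, run under Python 3, that reproduces the condition described above; the dictionary contents are made up, and the point is the protocol byte.

import base64
import pickle

# Serialize an object the way a Python 3 component would. Protocol 3 is the
# default pickle protocol in Python 3.0 through 3.7 and cannot be read by Python 2.
payload = base64.b64encode(pickle.dumps({"last_run": 1661990400}, protocol=3))

# Pickle protocol 2 and later begins with the PROTO opcode (0x80) followed by
# the protocol number, so the decoded stream starts with b'\x80\x03'.
raw = base64.b64decode(payload)
print(raw[:2])

# On a Python 2 node, cPickle supports only protocols 0-2, so cPickle.loads(raw)
# raises exactly the error shown in the tracebacks above:
#   ValueError: unsupported pickle protocol: 3

# Loading the same bytes under Python 3 works, which is why the error appears
# only when the serializing and deserializing interpreters differ.
print(pickle.loads(raw))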

Resolution

  1. From the Collection Configuration page, stop the scheduler.
  2. Stop Splunk on the data collection node (DCN).
  3. On the DCN, go to $SPLUNK_HOME/etc/apps/Splunk_TA_ontap/local and remove the hydra_metadata.conf file.
  4. Start Splunk on the DCN.
  5. Start the scheduler from the Collection Configuration page.
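
If you want to confirm that stale Python 3 metadata is the culprit before deleting the file, the tracebacks show that the values in the metadata stanza are base64-encoded pickles (cPickle.loads(b64decode(val))). The following rough sketch, run with Python 3, decodes each value in hydra_metadata.conf and reports its pickle protocol; anything above protocol 2 is unreadable by a Python 2 node. The stanza and field layout are inferred from the traceback, and the sketch assumes the file parses as plain INI, so treat it as a diagnostic aid rather than a supported tool.

import base64
import configparser

# Path from step 3 above; adjust $SPLUNK_HOME for your installation.
CONF = "/opt/splunk/etc/apps/Splunk_TA_ontap/local/hydra_metadata.conf"

parser = configparser.ConfigParser()
parser.read(CONF)

for stanza in parser.sections():
    for key, value in parser.items(stanza):
        try:
            raw = base64.b64decode(value)
        except Exception:
            continue  # value is not base64-encoded, skip it
        # Pickle protocol 2+ starts with the PROTO opcode (0x80) followed by
        # the protocol number; Python 2's cPickle can read only protocols 0-2.
        if len(raw) >= 2 and raw[:1] == b"\x80" and raw[1] > 2:
            print("%s / %s was written with pickle protocol %d" % (stanza, key, raw[1]))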