Splunk® App for Infrastructure (Legacy)

Install and Upgrade Splunk App for Infrastructure



This documentation does not apply to the most recent version of Splunk® App for Infrastructure (Legacy). For documentation on the most recent version, go to the latest release.

Update a Data Collection Node after migrating Python versions

When you migrate Splunk Enterprise to a different version of Python, Data Collection Nodes (DCNs) stop receiving jobs for worker processes that collect data from a VMware vCenter Server. To continue collecting vCenter Server data with a DCN, you must clear the cache, metadata, and session information on the DCN. To do this, remove the local ta_vmware_cache.conf, hydra_session.conf, and hydra_metadata.conf files in Splunk_TA_vmware.

Until you remove these files, this error appears in the Scheduler and Worker logs for worker processes:

Error: UnicodeDecodeError: 'ascii' codec can't decode byte 0xe3 in position 1: ordinal not in range(128)

The log files for worker processes look like these:

Hydra worker logs

2019-10-10 00:27:48,981 ERROR [ta_vmware_collection_worker://worker_process1:13768] Problem with hydra worker ta_vmware_collection_worker://worker_process1:13768: 'ascii' codec can't decode byte 0xe3 in position 1: ordinal not in range(128)
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_worker.py", line 622, in run
    self.establishMetadata()
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_worker.py", line 64, in establishMetadata
    metadata_stanza = HydraMetadataStanza.from_name("metadata", self.app, "nobody")
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/models.py", line 610, in from_name
    host_path=host_path)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/models/base.py", line 557, in get
    return self._from_entity(entity)
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/models.py", line 345, in _from_entity
    obj.from_entity(entity)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/models/base.py", line 926, in from_entity
    super(SplunkAppObjModel, self).from_entity(entity)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/models/base.py", line 684, in from_entity
    return self.set_entity_fields(entity)
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/models.py", line 544, in set_entity_fields
    from_api_val = wildcard_field.field_class.from_apidata(entity, entity_attr)
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/models.py", line 123, in from_apidata
    obj = cPickle.loads(b64decode(val))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe3 in position 1: ordinal not in range(128)

Hydra scheduler logs

2019-10-10 00:28:39,115 ERROR [ta_vmware_collection_scheduler://Global pool] [HydraWorkerNode] node=https://<worker-ip>:8089 is dead, because some weird stuff happened: 'ascii' codec can't decode byte 0xe3 in position 1: ordinal not in range(128)
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/hydra_scheduler.py", line 1462, in setMetadata
    self.session_key)
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/models.py", line 610, in from_name
    host_path=host_path)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/models/base.py", line 557, in get
    return self._from_entity(entity)
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/models.py", line 345, in _from_entity
    obj.from_entity(entity)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/models/base.py", line 926, in from_entity
    super(SplunkAppObjModel, self).from_entity(entity)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/models/base.py", line 684, in from_entity
    return self.set_entity_fields(entity)
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/models.py", line 544, in set_entity_fields
    from_api_val = wildcard_field.field_class.from_apidata(entity, entity_attr)
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/models.py", line 123, in from_apidata
    obj = cPickle.loads(b64decode(val))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe3 in position 1: ordinal not in range(128)  
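The tracebacks above end in `cPickle.loads(b64decode(val))`, which points at the root cause: the cached .conf stanzas hold base64-wrapped pickle data written under the old Python version, and Python 3's pickle module decodes old-style string opcodes as ASCII by default, so any cached byte above 0x7F raises this UnicodeDecodeError. The following minimal sketch reproduces the failure mode; the hand-built PY2_PICKLE bytes are an assumption standing in for the real cached values.

```python
# Minimal reproduction of the failure mode shown in the tracebacks above.
# PY2_PICKLE is a hand-built example of what Python 2's cPickle.dumps()
# produces for a non-ASCII byte string (protocol 0); it stands in for the
# real cached values stored in the DCN's local .conf files.
import pickle
from base64 import b64decode, b64encode

PY2_PICKLE = b"S'\\xe3x'\np0\n."   # Python 2 pickle of the str '\xe3x'
val = b64encode(PY2_PICKLE)        # cached values are base64-wrapped

try:
    # Python 3 decodes old STRING opcodes as ASCII by default, so byte
    # 0xe3 triggers the same UnicodeDecodeError seen in the worker logs.
    pickle.loads(b64decode(val))
except UnicodeDecodeError as err:
    print(err)
```

Deleting the stale files, as the steps in this topic describe, lets the app regenerate its cache under the new Python version, which avoids the incompatible pickles entirely.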

Steps

Follow these steps to clear the cache, metadata, and session information on a DCN so the DCN continues collecting VMware vCenter Server data after you migrate to a new Python version.

  1. Stop the Data Collection Scheduler (DCS) that you used to integrate the DCN.
    1. Open Splunk Web for the Splunk Enterprise instance that runs the DCS.
    2. Go to the Splunk App for Infrastructure.
    3. Go to the Add Data tab and select VMware vSphere in the integrations panel.
    4. Select the DCN tab and disable data collection.
  2. Stop splunkd on the DCN:
    $SPLUNK_HOME/bin/splunk stop
    
  3. On the DCN, go to $SPLUNK_HOME/etc/apps/Splunk_TA_vmware/local and delete these files:
    • ta_vmware_cache.conf
    • hydra_session.conf
    • hydra_metadata.conf
  4. Start splunkd on the DCN:
    $SPLUNK_HOME/bin/splunk start
    
  5. Start the DCS that you used to integrate the DCN.
    1. Open Splunk Web for the Splunk Enterprise instance that runs the DCS.
    2. Go to the Splunk App for Infrastructure.
    3. Go to the Add Data tab and select VMware vSphere in the integrations panel.
    4. Select the DCN tab and enable data collection.
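
The stop, delete, and restart portion of this procedure (steps 2 through 4) can be sketched as a small script run on the DCN. This is a hedged illustration, not part of the app: the function names and the /opt/splunk fallback path are assumptions, and you should still disable and re-enable data collection on the DCS as described in steps 1 and 5.

```python
# Hypothetical sketch of steps 2-4: stop splunkd on the DCN, delete the
# stale cache/session/metadata files, then start splunkd again.
import os
import subprocess

STALE_FILES = ["ta_vmware_cache.conf", "hydra_session.conf", "hydra_metadata.conf"]

def remove_stale_files(local_dir):
    """Delete the cached DCN files if they exist (step 3)."""
    for name in STALE_FILES:
        path = os.path.join(local_dir, name)
        if os.path.exists(path):
            os.remove(path)

def clear_dcn_cache(splunk_home=os.environ.get("SPLUNK_HOME", "/opt/splunk")):
    splunk = os.path.join(splunk_home, "bin", "splunk")
    subprocess.run([splunk, "stop"], check=True)                      # step 2
    remove_stale_files(os.path.join(splunk_home, "etc", "apps",
                                    "Splunk_TA_vmware", "local"))     # step 3
    subprocess.run([splunk, "start"], check=True)                     # step 4
```

Deleting with a guard (`os.path.exists`) keeps the script safe to re-run if a file was already removed.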
Last modified on 12 June, 2020
This documentation applies to the following versions of Splunk® App for Infrastructure (Legacy): 2.0.0, 2.0.1, 2.0.2, 2.0.3, 2.0.4, 2.1.0, 2.1.1 Cloud only

