Upgrade the Collector for Kubernetes and other updates 🔗
Upgrade the Collector for Kubernetes 🔗
The installer script uses one of the supported package managers to install the Collector. When you upgrade the Collector using the official packages, your configuration files are never overridden. If the new version requires configuration changes, edit the files manually before backward compatibility for the old format is dropped.
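For package-based installations, you can upgrade the Collector directly through the package manager. The following commands are an illustrative sketch for Debian-based and RPM-based hosts; the splunk-otel-collector package name matches the official packages, but verify the exact commands against the upgrade instructions for your platform:
# Debian/Ubuntu
sudo apt-get update && sudo apt-get install --only-upgrade splunk-otel-collector
# RHEL/CentOS/Amazon Linux
sudo yum upgrade splunk-otel-collector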
To upgrade the Collector for Kubernetes, run the following commands:
Use the --reuse-values flag to keep the configuration values you already set while installing or using the Collector:
helm upgrade splunk-otel-collector splunk-otel-collector-chart/splunk-otel-collector --reuse-values
Use --values config.yaml to override your previous configuration while upgrading:
helm upgrade splunk-otel-collector --values config.yaml splunk-otel-collector-chart/splunk-otel-collector --reuse-values
Read more in the official Helm upgrade options documentation.
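Before upgrading, you might want to refresh the chart repository and confirm which release and chart version are currently deployed. This is an optional, illustrative check; the release name splunk-otel-collector is assumed here:
helm repo update
helm list -f splunk-otel-collector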
Upgrade guidelines 🔗
Apply the following changes to the Collector configuration files for specific version upgrades. For more details, refer to the Helm chart upgrade guidelines in GitHub.
From 0.113.0 to 0.116.0 🔗
Custom resource definition (CRD) configuration has been modified.
Before v0.110.0, CRDs were deployed through a crds/ directory (upstream default). From v0.110.0 to v0.113.0, CRDs were deployed using Helm templates (upstream default), which had reported issues.
In v0.116.0 and higher, you must explicitly configure your preferred CRD deployment method or deploy the CRDs manually to avoid potential issues. You can deploy CRDs through a crds/ directory again by enabling a newly added value.
New users 🔗
If you’re a new user, deploy CRDs through the crds/ directory. For a fresh installation, use the following Helm values:
operatorcrds:
  install: true
operator:
  enabled: true
To install the chart, run:
helm install <release-name> splunk-otel-collector-chart/splunk-otel-collector --set operatorcrds.install=true,operator.enabled=true <extra_args>
Current users 🔗
You might need to migrate if you’re using operator.enabled=true.
If you’re using versions 0.110.0 to 0.113.0, CRDs are likely deployed via Helm templates. To migrate to the recommended crds/ directory deployment:
Delete the existing chart by running:
helm delete <release-name>
Verify if the following CRDs are present and delete them if necessary:
kubectl get crds | grep opentelemetry
kubectl delete crd opentelemetrycollectors.opentelemetry.io
kubectl delete crd opampbridges.opentelemetry.io
kubectl delete crd instrumentations.opentelemetry.io
Reinstall the chart with the updated configuration:
helm install <release-name> splunk-otel-collector-chart/splunk-otel-collector --set operatorcrds.install=true,operator.enabled=true <extra_args>
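After reinstalling, you can confirm that the CRDs are back in place and that the Collector pods are healthy. This optional verification sketch assumes you run it in the namespace where you installed the chart:
kubectl get crds | grep opentelemetry
kubectl get pods | grep <release-name>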
Current users maintaining legacy templates 🔗
If you’re using chart versions 0.110.0 to 0.113.0 and prefer to continue deploying CRDs via Helm templates (not recommended), use the following values:
operator:
  enabled: true
  crds:
    create: true
Caution
This method might cause race conditions during installation or upgrades.
From 0.105.5 to 0.108.0 🔗
Note
If you have no customizations under .Values.operator.instrumentation.spec.*, no migration is required.
The Helm chart configuration for operator auto-instrumentation has been simplified, and the values previously under .Values.operator.instrumentation.spec.* have been moved to .Values.instrumentation.*.
The updated path looks like this:
instrumentation:
  endpoint: XXX
  ...
The deprecated path was:
operator:
  instrumentation:
    spec:
      endpoint: XXX
      ...
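If you set these values on the command line instead of in a values file, update the --set path accordingly. A minimal sketch, using a placeholder endpoint value:
helm upgrade <release-name> splunk-otel-collector-chart/splunk-otel-collector --reuse-values --set instrumentation.endpoint=<endpoint>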
General guidelines 🔗
Apply the following changes to the Collector configuration files for specific version upgrades. For more details, refer to the Upgrade guidelines in GitHub.
From 0.96.1 to 0.97.0 🔗
The memory_ballast extension is no longer effective. You can now control garbage collection with a soft memory limit using the SPLUNK_MEMORY_TOTAL_MIB environment variable, which is set to 90% of the total memory by default. For more information, see Environment variables.
Follow these steps to ensure your Collector instances work correctly:
If you haven’t customized memory_ballast, remove it from the configuration.
If you have customized memory_ballast using SPLUNK_BALLAST_SIZE_MIB (or the extensions::memory_ballast::size_mib config option), remove the memory_ballast extension and use the GOMEMLIMIT environment variable to set a custom soft memory limit, as shown in the sketch after this list:
To increase the frequency of garbage collection, set GOMEMLIMIT to a lower value than the default 90% of total memory.
To decrease the frequency of garbage collection, set GOMEMLIMIT to a higher value than the default 90% of total memory.
For more information, see Go environment variables.
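The following sketch shows one way to set a custom GOMEMLIMIT on a Linux host where the Collector runs as a systemd service. The file path and the 1638MiB value are assumptions for illustration; adjust them to your environment:
# /etc/otel/collector/splunk-otel-collector.conf (systemd environment file, path assumed)
# Set a custom soft memory limit for the Go runtime
GOMEMLIMIT=1638MiB
After editing the file, restart the service, for example with sudo systemctl restart splunk-otel-collector.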
From 0.68.0 to 0.69.0 🔗
The gke and gce resource detectors in the resourcedetection processor have been replaced with the gcp resource detector. If you have gke and gce detectors configured in the resourcedetection processor, update your configuration accordingly.
For more information, see Resource detection processor.
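For example, a resourcedetection processor that previously listed the gke and gce detectors now lists gcp instead. A minimal sketch:
# Deprecated (0.68.0 and earlier)
processors:
  resourcedetection:
    detectors: [gke, gce]
# From 0.69.0
processors:
  resourcedetection:
    detectors: [gcp]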
From 0.41.0 to 0.42.0 🔗
The Splunk Distribution of the OpenTelemetry Collector used to evaluate user configuration twice, which required escaping each $ symbol with $$ to prevent unwanted environment variable expansion. This issue was fixed in version 0.42.0. Replace any occurrences of $$ in your configuration with $.
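For example, a configuration value that contains a literal dollar sign no longer needs double escaping. The attributes processor entry below is a hypothetical illustration:
# 0.41.0 and earlier: a literal $ escaped as $$
processors:
  attributes/example:
    actions:
      - key: currency.symbol
        value: "$$"
        action: insert
# 0.42.0 and later: use a single $
processors:
  attributes/example:
    actions:
      - key: currency.symbol
        value: "$"
        action: insert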
From 0.35.0 to 0.36.0 🔗
Move the config parameter exporters -> otlp -> insecure to exporters -> otlp -> tls -> insecure.
The otlp exporter configuration must look like this:
exporters:
  otlp:
    endpoint: "${SPLUNK_GATEWAY_URL}:4317"
    tls:
      insecure: true
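For comparison, the deprecated pre-0.36.0 form placed the setting directly under the exporter. This is an illustrative sketch of the old layout:
exporters:
  otlp:
    endpoint: "${SPLUNK_GATEWAY_URL}:4317"
    insecure: true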
From 0.34.0 to 0.35.0 🔗
Move the ballast_size_mib parameter from the memory_limiter processor to the memory_ballast extension, and rename it to size_mib.
extensions:
  memory_ballast:
    size_mib: ${SPLUNK_BALLAST_SIZE_MIB}
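The extension only takes effect if it’s also listed under service.extensions. A minimal sketch, where health_check stands in as a placeholder for whatever other extensions you already use:
service:
  extensions: [health_check, memory_ballast]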
Update the access token for the Collector for Kubernetes 🔗
Note
Make sure you don’t update your Helm chart or Collector version in the process of updating your access token. See Step 3 for details.
To update the access token for your Collector for Kubernetes instance, follow these steps:
Confirm the Helm release name and chart version. To do so, run:
helm list -f <Release_Name>
Optionally, you can check your current access token:
helm get values <Release_Name>
Deploy your new access token with a Helm upgrade. The following command only updates your access token and maintains your current Helm chart and Collector versions.
helm upgrade --reuse-values --version <Current_Chart_Version> --set splunkObservability.accessToken=<New_Access_Token> <Release_Name> splunk-otel-collector-chart/splunk-otel-collector
If you want to use the latest Helm chart version instead of your current one, remove --version <Current_Chart_Version> from the command.
Verify the value of the updated access token:
helm get values <Release_Name>
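To show only the token-related values instead of the full output, you can filter the result. This grep-based check is illustrative:
helm get values <Release_Name> | grep -i accesstoken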
Restart the Collector’s DaemonSet and deployments:
If agent.enabled=true, restart the Collector’s agent DaemonSet:
kubectl rollout restart DaemonSet <Release_Name>-agent
If clusterReceiver.enabled=true, restart the Collector’s cluster receiver deployment:
kubectl rollout restart deployment <Release_Name>-k8s-cluster-receiver
If gateway.enabled=true, restart the Collector’s gateway deployment:
kubectl rollout restart deployment <Release_Name>
Verify the status of your clusters’ pods:
kubectl get pod -n <Namespace> | grep <Release_Name>