Memory Limiter processor
The Memory Limiter processor prevents out-of-memory situations on the Splunk Distribution of OpenTelemetry Collector. The supported pipeline types are traces, metrics, and logs. See Process your data with pipelines for more information.
Get started
Note
This component is included in the default configuration of the Splunk Distribution of the OpenTelemetry Collector when deploying in host monitoring (agent) mode. See Collector deployment modes for more information.
For details about the default configuration, see Configure the Collector for Kubernetes with Helm, Collector for Linux default configuration, or Collector for Windows default configuration. You can customize your configuration any time as explained in this document.
Follow these steps to configure and activate the component:
1. Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.
2. Configure the memory_limiter processor as described in the next section.
3. Restart the Collector.
Sample configurations
To activate the Memory Limiter processor, add memory_limiter to the processors section of your configuration file.
Define memory_limiter as the first processor in the pipeline, immediately after the receivers, to ensure that backpressure can be sent to the applicable receivers and to minimize the likelihood of dropped data when memory_limiter is triggered.
Along with the memory_limiter processor, it's highly recommended to configure the Ballast extension on every Collector. Configure the ballast to be 1/3 to 1/2 of the memory allocated to the Collector; a ballast configuration sketch follows the memory_limiter example below.
See the following example:
processors:
  memory_limiter:
    check_interval: 1s
    limit_mib: 4000
    spike_limit_mib: 800
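The following is a minimal sketch of the Ballast extension configuration, assuming the Collector is allocated roughly 4 GiB of memory so the ballast is set to about half of that. Adjust size_mib to match your own allocation:

extensions:
  memory_ballast:
    size_mib: 2000

service:
  extensions: [memory_ballast]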
To complete the configuration, include the processor in any pipeline of the service section of your configuration file. For example:
service:
  pipelines:
    metrics:
      processors: [memory_limiter]
    logs:
      processors: [memory_limiter]
    traces:
      processors: [memory_limiter]
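The pipeline entries above show only the processors key for brevity. The following sketch illustrates where memory_limiter typically sits in a complete traces pipeline; the otlp receiver, batch processor, and otlphttp exporter are placeholders only, so substitute the components from your own configuration:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlphttp]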
Control memory usage
The amount and type of data the Collector processes is specific to your environment, and the resources the Collector uses also depend on the configured processors, so it's important to put checks in place regarding memory usage.
Caution
While the processor can help mitigate out-of-memory situations, it doesn't replace proper sizing and configuration of the Collector.
If the soft limit is crossed, the Collector will return errors to all receive operations until enough memory is freed. This might result in dropped data since the receivers might not be able to hold back and retry the data indefinitely.
If the component preceding the Memory Limiter in the pipeline does not correctly retry and send the data, then that data will be permanently lost.
Define soft and hard memory limits
The memory_limiter processor performs periodic checks of memory usage. If usage exceeds the defined limits, the processor begins to refuse data and forces a reduction in memory consumption.
The memory_limiter uses soft and hard memory limits:
The hard limit is always above or equal to the soft limit.
When memory usage exceeds the soft limit, the processor enters memory limited mode and starts refusing data by returning errors to the preceding component in the pipeline that made the ConsumeLogs/Trace/Metrics function call. The preceding component should be a receiver.
When memory usage is above the hard limit, in addition to refusing data, the processor forces garbage collection to try to free memory.
When memory usage drops below the soft limit, normal operation resumes: data is no longer refused and there's no forced garbage collection.
The difference between the soft limit and the hard limit is defined through the spike_limit_mib configuration option. Select a value that ensures memory usage cannot increase by more than this value between memory check intervals; otherwise, memory usage might exceed the hard limit. See Configuration options for more details.
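For example, with the sample configuration shown earlier (limit_mib: 4000 and spike_limit_mib: 800), the hard limit is 4000 MiB and the soft limit is 4000 - 800 = 3200 MiB.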
Configuration options
The processor has the following configuration options:
check_interval: The time between measurements of memory usage. The default value is 0, and the recommended value is 1 second. If the expected traffic to the Collector is very spiky, decrease check_interval or increase spike_limit_mib to avoid memory usage going over the hard limit.
limit_mib: The maximum amount of memory, in MiB, targeted to be allocated by the process heap. This defines the hard limit. The default value is 0. Typically, the total memory usage of the process is about 50 MiB higher than this value.
limit_percentage: The maximum amount of total memory targeted to be allocated by the process heap. The default value is 0. This option is used to calculate memory_limit from the total available memory. For instance, if you set it to 75% with a total memory of 1 GiB, the limit is 750 MiB. The fixed memory setting, limit_mib, takes precedence over the percentage configuration. This option is supported on Linux systems with cgroups and is intended to be used in dynamic platforms like Docker.
spike_limit_mib: The maximum spike expected between measurements of memory usage. It must be less than limit_mib. The recommended and default value for spike_limit_mib is 20% of limit_mib. The soft limit value is equal to (limit_mib - spike_limit_mib).
spike_limit_percentage: The maximum spike expected between measurements of memory usage. The value must be less than limit_percentage. The default value is 0. This option is used to calculate spike_limit_mib from the total available memory. For instance, if you set it to 25% with a total memory of 1 GiB, the limit is 250 MiB.
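As an illustrative sketch, the following configuration uses the percentage-based options instead of fixed values. With 1 GiB of total available memory, this corresponds to a hard limit of about 750 MiB and a spike allowance of about 250 MiB, matching the calculations above:

processors:
  memory_limiter:
    check_interval: 1s
    limit_percentage: 75
    spike_limit_percentage: 25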
Settings
The following table shows the configuration options for the memory_limiter processor:
Troubleshooting
If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.
Available to Splunk Observability Cloud customers
Submit a case in the Splunk Support Portal.
Contact Splunk Support.
Available to prospective customers and free trial users
Ask a question and get answers through community support at Splunk Answers.
Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.