UDP log receiver
The UDP log receiver allows the Splunk Distribution of the OpenTelemetry Collector to collect logs over UDP. The supported pipeline is logs. See Process your data with pipelines for more information.
Get started
Follow these steps to configure and activate the component:
1. Deploy the Splunk Distribution of the OpenTelemetry Collector to your host or container platform.
2. Configure the UDP log receiver as described in the next section.
3. Restart the Collector.
Sample configurations
To activate the UDP receiver, add udplog to the receivers section of your agent_config.yaml file, as in the following example configuration:
receivers:
udplog:
listen_address: "0.0.0.0:54525"
To complete the configuration, include the receiver in the logs pipeline of the service section of your configuration file. For example:
service:
pipelines:
logs:
receivers: [udplog]
See Settings for the full list of configuration options.
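Putting the two fragments together, a minimal configuration looks like the following (processors and exporters are omitted for brevity):

receivers:
  udplog:
    listen_address: "0.0.0.0:54525"

service:
  pipelines:
    logs:
      receivers: [udplog]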
Use operators to format logs
The UDP log receiver uses operators to process logs into a desired format. Each operator fulfills a single responsibility, such as reading lines from a file, or parsing JSON from a field. You need to chain operators together in a pipeline to achieve your desired result.
For instance, you can read lines from a file using the file_input
operator. From there, you can send the results of this operation to a regex_parser
operator that creates fields based on a regex pattern. Next, you can send the results to a file_output
operator to write each line to a file on disk.
All operators either create, modify, or consume entries.
An entry is the base representation of log data as it moves through a pipeline.
A field is used to reference values in an entry.
A common expression syntax is used in several operators. For example, expressions can be used to filter or route entries.
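Within the udplog receiver, operators are chained under the operators key of the receiver configuration. The following sketch, in which the regex pattern and field names are illustrative, first parses each datagram with a regex_parser operator and then parses one of the resulting fields with a json_parser operator:

receivers:
  udplog:
    listen_address: "0.0.0.0:54525"
    operators:
      - type: regex_parser
        regex: '^(?P<time>\S+) (?P<payload>.*)$'
      - type: json_parser
        parse_from: attributes.payload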
Available operators
For a complete list of available operators, see What operators are available? in GitHub.
The following applies to operators:
- Each operator has a type.
- You can give a unique Id to each operator. If you use the same type of operator more than once in a pipeline, you must specify an Id. Otherwise, the Id defaults to the value of type.
- An operator outputs to the next operator in the pipeline. The last operator in the pipeline emits from the receiver. Optionally, you can use the output parameter to specify the Id of another operator to pass logs there directly.
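For instance, the following sketch, in which the field names are illustrative, uses the json_parser operator twice, so each instance needs a unique id. The first operator also sets output explicitly, which here routes to the same operator that would follow anyway:

receivers:
  udplog:
    listen_address: "0.0.0.0:54525"
    operators:
      - id: parse_body
        type: json_parser
        output: parse_nested
      - id: parse_nested
        type: json_parser
        parse_from: attributes.nested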
Parser operators
Use parser operators to isolate values from a string. There are two classes of parsers: simple and complex.
Parsers with embedded operations
You can configure parsing operators to embed certain follow-up operations such as timestamp and severity parsing.
For more information, see the GitHub entry on complex parsers at Parsers.
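For example, a regex_parser operator can embed timestamp and severity parsing so that no separate operators are needed. In the following sketch, the regex pattern, field names, and layout are illustrative:

receivers:
  udplog:
    listen_address: "0.0.0.0:54525"
    operators:
      - type: regex_parser
        regex: '^(?P<time>[^ ]+) (?P<sev>[A-Z]+) (?P<msg>.*)$'
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%d %H:%M:%S'
        severity:
          parse_from: attributes.sev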
Multiline configuration
If set, the multiline configuration block instructs the udplog receiver to split log entries on a pattern other than newlines.
The multiline configuration block must contain exactly one of line_start_pattern or line_end_pattern. These are regex patterns that match either the beginning of a new log entry, or the end of a log entry.
The omit_pattern setting can be used to omit the start/end pattern from each entry.
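For example, the following sketch treats each line that starts with a date as the beginning of a new entry. The pattern shown is illustrative:

receivers:
  udplog:
    listen_address: "0.0.0.0:54525"
    multiline:
      line_start_pattern: '^\d{4}-\d{2}-\d{2}'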
Supported encodings
The following encodings are supported:
Key | Description |
---|---|
nop | No encoding validation. Treats the file as a stream of raw bytes. |
utf-8 | UTF-8 encoding. |
utf-16le | UTF-16 encoding with little-endian byte order. |
utf-16be | UTF-16 encoding with big-endian byte order. |
ascii | ASCII encoding. |
big5 | The Big5 Chinese character encoding. |
Other less common encodings are supported on a best-effort basis. See the list of available encodings in https://www.iana.org/assignments/character-sets/character-sets.xhtml.
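For example, to read datagrams encoded as UTF-16 with little-endian byte order:

receivers:
  udplog:
    listen_address: "0.0.0.0:54525"
    encoding: utf-16le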
Settings
The following table shows the configuration options for the UDP receiver:
Name | Type | Default | Description |
---|---|---|---|
attributes | map | | |
resource | map | | |
id | string | udp_input | |
type | string | udp_input | |
output | slice | | |
listen_address | string | | |
one_log_per_packet | bool | false | |
add_attributes | bool | false | |
encoding | string | utf-8 | |
multiline (see fields) | struct | | Config is the configuration for a split func. |
preserve_leading_whitespaces | bool | false | |
preserve_trailing_whitespaces | bool | false | |
async (see fields) | ptr | | |
operators (see fields) | slice | | Config is the configuration of an operator. |
storage | ptr | | ID represents the identity for a component. It combines two values: the component type and an optional name. |
retry_on_failure (see fields) | struct | | Config defines configuration for retrying batches in case of receiving a retryable error from a downstream consumer. If the retryable error doesn't provide a delay, exponential backoff is applied. |
Fields of multiline
Name | Type | Default | Description |
---|---|---|---|
line_start_pattern | string | | |
line_end_pattern | string | .^ | |
omit_pattern | bool | false | |
Fields of async
Name | Type | Default | Description |
---|---|---|---|
readers | int | | |
processors | int | | |
max_queue_length | int | | |
Fields of operators
Name | Type | Default | Description |
---|---|---|---|
builder | interface | | |
Fields of retry_on_failure
Name | Type | Default | Description |
---|---|---|---|
enabled | bool | false | Enabled indicates whether to retry sending logs if a downstream consumer returns a retryable error. Default is false. |
initial_interval | int64 | | InitialInterval is the time to wait after the first failure before retrying. Default value is 1 second. |
max_interval | int64 | | MaxInterval is the upper bound on backoff interval. Once this value is reached, the delay between consecutive retries will always be max_interval. |
max_elapsed_time | int64 | | MaxElapsedTime is the maximum amount of time (including retries) spent trying to send a logs batch to a downstream consumer. Once this value is reached, the data is discarded. It never stops if MaxElapsedTime == 0. Default value is 5 minutes. |
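For example, the following sketch activates retries; the interval and elapsed-time values shown are illustrative:

receivers:
  udplog:
    listen_address: "0.0.0.0:54525"
    retry_on_failure:
      enabled: true
      initial_interval: 1s
      max_interval: 30s
      max_elapsed_time: 5m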
Troubleshooting
If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.
Available to Splunk Observability Cloud customers
Submit a case in the Splunk Support Portal.
Contact Splunk Support.
Available to prospective customers and free trial users
Ask a question and get answers through community support at Splunk Answers.
Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.