All DSP releases prior to DSP 1.4.0 use Gravity, a Kubernetes orchestrator, which has been announced as end-of-life. We have replaced Gravity with an alternative component in DSP 1.4.0. Therefore, after July 1, 2023, we will no longer provide support for versions of DSP prior to DSP 1.4.0. We advise all customers to upgrade to DSP 1.4.0 to continue receiving full product support from Splunk.
Get data from Microsoft Azure Event Hubs
Use the Microsoft Azure Event Hubs source function to get data from an Azure Event Hubs namespace.
The payload of the ingested data is encoded as bytes. To deserialize and preview your data, see Deserialize and preview data from Microsoft Azure Event Hubs in the Connect to Data Sources and Destinations with the Data Stream Processor manual.
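For example, if the event payload is UTF-8 encoded JSON, one way to deserialize it inline is with the deserialize_json_object scalar function. The following is a minimal sketch using the same placeholder argument values as the examples later on this page; refer to the linked deserialization topic for full guidance.

| from event_hubs("my-connection-id", "my-event-hub-name", "my-consumer-group", "LATEST") | eval body = deserialize_json_object(body) | ...;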
Prerequisites
Before you can use this function, you must create a connection. See Create a connection to Microsoft Azure Event Hubs in the Connect to Data Sources and Destinations with the Data Stream Processor manual. When configuring this source function, set the connection_id argument to the ID of that connection.
Function output schema
This function outputs records with the schema described in the following table.
Key | Description |
---|---|
partitionKey | The partition key of the event as a string. |
body | The payload of the event in bytes. |
partitionId | The ID of the partition in the event hub where the event is stored, given as a string. |
offset | The offset of the event as a string. |
sequenceNumber | The sequence number of the event as a long. |
enqueuedTime | The date and time when the event was enqueued in the event hub, given as a long in milliseconds since the Unix epoch. |
properties | The user-defined properties associated with the event, given as a map of strings. |
The following is an example of a typical record from the event_hubs function:
{ "partitionKey": "1", "body": "aGVsbG8gd29ybGQ=", "partitionId": "1", "offset": "8589944464", "sequenceNumber": 83, "enqueuedTime": 1598479296172, "properties": { "MyProperty": "TestVal" } }
Required arguments
- connection_id
- Syntax: string
- Description: The ID of your Azure Event Hubs connection.
- Example in Canvas View: my-azure-event-hubs-connection
- event_hub_name
- Syntax: string
- Description: The name of the Event Hub entity to subscribe to.
- Example in Canvas View: my-event-hub-name
- consumer_group_name
- Syntax: string
- Description: The name of a consumer group. This must match the consumer group name as defined in Azure Event Hubs; if the consumer group does not exist, the pipeline fails. Azure Event Hubs limits each consumer group to 5 concurrent readers. To avoid reaching this limit, create a new, dedicated consumer group for each pipeline.
- Example in Canvas View: my-consumer-group
- starting_position
- Syntax: LATEST | EARLIEST
- Description: The position in the data stream where you want to start reading data. Set this argument to one of the following values:
- LATEST: Start reading data from the latest position on the data stream.
- EARLIEST: Start reading data from the very beginning of the data stream.
- Example in Canvas View: LATEST
SPL2 example
When working in the SPL View, you can write the function by providing the arguments in this exact order:
| from event_hubs("my-connection-id", "my-event-hub-name", "my-consumer-group", "LATEST") | ...;
Alternatively, you can use named arguments to declare the arguments in any order:
| from event_hubs(starting_position: "LATEST", event_hub_name: "my-event-hub-name", connection_id: "my-connection-id", consumer_group_name: "my-consumer-group") | ...;
If you want to use a mix of unnamed and named arguments in your functions, you must list all unnamed arguments in the correct order before providing the named arguments.
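For example, the following sketch passes the first two arguments unnamed and in order, then declares the remaining two by name. The argument values are placeholders.

| from event_hubs("my-connection-id", "my-event-hub-name", starting_position: "LATEST", consumer_group_name: "my-consumer-group") | ...;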