Splunk Cloud Platform

Getting Data In

Forward data with the logd input

The logd input is a modular input that collects log data. Using the logd modular input, the forwarder pushes Unified Logging data to your Splunk platform deployment. The logd input is supported on macOS 10.15, 11, and 12.

Before you begin

Before you run logd input for the first time, decide how much, if any, historical data you want to ingest on the first run. By default, the input ingests all available historical data stored by logd, which can be days, weeks, or even months of data. To limit this, use the logd-starttime configuration parameter described in this task to specify the earliest time for records to be read.
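For example, to limit the first run to recent data, you can set logd-starttime in your stanza. The following sketch is illustrative: the stanza name and the date are placeholders, and the exact quoting rules are documented in the inputs.conf.spec file noted below.

```ini
# Hypothetical stanza name; use any unique name for your input.
[logd://example]

# Ingest only records created on or after this date and time.
# Without this setting (and without a saved checkpoint), the input
# reads all available historical data from logd storage.
logd-starttime = "2023-04-01 00:00:00"
```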

To read logd files, you must run the Splunk software with admin privileges.

Best practices for configuring logd input

Here are a few best practices to keep in mind when you configure the logd input:

  • Start with a simple configuration before you build something more complex.
  • For more information on configurations, see the spec file splunkforwarder/etc/apps/logd_input/README/inputs.conf.spec.

Define your stanzas

  1. On your forwarder, navigate to splunkforwarder/etc/apps/logd_input/default/.
  2. Copy the inputs.conf file.
  3. Navigate to splunkforwarder/etc/apps/logd_input/local/.
  4. Paste the copy of the inputs.conf file.
  5. Open the inputs.conf file with a text editor.
  6. Define the logd stanza by configuring data retrieval and data formatting parameters. For a full list of parameters, see the Parameters table. The number of stanzas determines the number of input instances that are run. For example, if you define five unique stanzas on a forwarder, the logd input returns five unique reports.
  7. Save your changes.
  8. Restart your forwarder.
  9. (Optional) Use a deployment server to push the changes to your settings to other forwarders in your Splunk platform deployment. For more information, see the Use forwarder management to manage apps topic in the Updating Splunk Enterprise Instances manual.
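Taken together, the steps above might produce a local inputs.conf like the following minimal sketch. The stanza name and parameter values are illustrative assumptions, not required settings:

```ini
# splunkforwarder/etc/apps/logd_input/local/inputs.conf
# Each unique stanza runs as a separate input instance.
[logd://example]

# Query logd every 60 seconds.
logd-interval = 60

# Skip backtrace and debug records.
logd-backtrace = no
logd-debug = no
```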

Reference: parameter definitions

The following table describes each parameter that you can set in your logd input stanza.

Parameter Description
logd-show = <string> Shows contents of the system log datastore.
logd-backtrace = <string> Backtraces the system log datastore.
logd-debug = <string> Debug logs for the system log datastore.
logd-info = <string> Shows information about the system log datastore.
logd-loss = <string> Shows data loss for the system log datastore.
logd-signpost = <string> Shows signpost events for the system log datastore.
logd-predicate = <string> Filters messages using the provided predicate, based on NSPredicate. Only a single predicate is supported.
logd-process = <string> The process on which to operate. You can pass this option more than once to operate on multiple processes. This attribute is supported only on macOS 11; it is not supported on macOS 10.
logd-source = <string> Include symbol names and source line numbers for messages, if available.
logd-include-fields = <string> A comma-separated list of fields to include in a query.
logd-exclude-fields = <string> A comma-separated list of fields to exclude from a query.
logd-interval = <string> Query frequency interval in seconds.
logd-starttime = <string> Date and time from which the first query pulls data, in the format: "YYYY-MM-DD HH:mm:SS".

Configuration examples

Example of two logd inputs on one forwarder

[logd://bigsur]

logd-predicate = (subsystem == "com.apple.locationd.Position") && ((senderImagePath ENDSWITH "locationd") OR (senderImagePath ENDSWITH "IOHDCPFamily"))
logd-backtrace = no
logd-debug = no
logd-info = true
logd-loss = no
logd-signpost = yes
logd-exclude-fields = bootUUID,formatString

[logd://bigsur_2]

logd-backtrace = no
logd-debug = yes
logd-info = no
logd-loss = yes
logd-signpost = false
logd-include-fields = bootUUID,formatString

Example of two universal forwarder instances with two stanzas on each instance

The example shows the same stanza in different universal forwarders.

  • Instance one:
[logd://bigsur]

logd-info = true
logd-source = yes
logd-exclude-fields = bootUUID,formatString

[logd://bigsur_2]

logd-backtrace = no
logd-debug = yes
logd-include-fields = bootUUID,formatString
  • Instance two:
[logd://catalina]

logd-predicate = category IN { "GeneralCLX", "calendarinterval" }
logd-backtrace = no
logd-debug = yes
logd-info = no
logd-loss = yes
logd-signpost = false
logd-source = yes


[logd://bigsur_2]

logd-backtrace = no
logd-debug = yes
logd-include-fields = bootUUID,formatString


Troubleshoot the logd input

Note that the input is subject to all forwarder data transformation and routing rules. For example, if the eventMessage field contains timestamps, by default the pipeline retrieves that timestamp and uses it instead of the timestamp you explicitly specified. To disable this behavior, see Tune timestamp recognition for better indexing performance.
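One way to disable that timestamp extraction is a props.conf setting on the indexer. This is a sketch based on standard props.conf behavior; the sourcetype name logd is a placeholder for whatever sourcetype you assign to this input:

```ini
# props.conf on the indexer (or heavy forwarder)
[logd]
# Use the current index time instead of extracting a timestamp
# from fields such as eventMessage.
DATETIME_CONFIG = CURRENT
```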

I can't see my logd data

If you cannot see your data, try the following steps:

  • Check that the logd input is enabled. By default, the input is disabled. You must define a stanza to enable it.
  • Make sure that the logd reading utility is running. Use the command: ps aux | grep "log show".
  • Verify that your parameters are correctly configured. To do this, run a shell command that runs the mod-input against a specific stanza so that you can see the output to stdout.

$SPLUNK_HOME/bin/splunk cmd splunkd print-modinput-config logd logd://z | $SPLUNK_HOME/bin/splunkd logd-modinput

Timestamps are incorrect

The universal forwarder does not parse events before passing them on to the indexer. If your timestamps are incorrect, make sure that the props.conf and transforms.conf settings are properly configured on your indexer. See the Managing Indexers and Clusters of Indexers manual for more information about configuring indexers.
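As a sketch, timestamp parsing for a logd sourcetype can be tuned on the indexer in props.conf. The sourcetype name, time prefix, and format string below are assumptions for illustration, not values taken from this topic:

```ini
# props.conf on the indexer
[logd]
# Tell the indexer where the timestamp starts and how to parse it.
TIME_PREFIX = "timestamp":"
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30
```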

The ingested data is not what I expected

If you do not see the data you expect, or you see data that you do not expect, check which switches were added for the log show utility.

Run ps aux | grep "log show" and verify that the result is what you configured in the stanza.

Note that you may need to run this command more than once to capture the results while input instances are running, as this command executes only periodically and does not run continuously.

I see data duplicates

If you have multiple stanzas running, make sure the stanza attributes do not overlap.
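For example, two stanzas whose predicates select disjoint sets of records do not overlap and do not produce duplicates. The stanza names and predicate values here are illustrative placeholders:

```ini
# Each stanza filters a different slice of the datastore.
[logd://location]
logd-predicate = subsystem == "com.apple.locationd.Position"

[logd://calendar]
logd-predicate = category IN { "calendarinterval" }
```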

How the logd reader works

The settings you define in a logd stanza create filters for your data.

If you enable the input without configuring any parameters, the logd input ingests the full content of the logd persistent storage, starting with the oldest entry. The logd configuration supports both prescriptive and restrictive declaration of record definitions using the "logd-include-fields" and "logd-exclude-fields" parameters. If one or more FIELD=VALUE match arguments are passed, the output is retrieved and formatted accordingly.

Once the logd input runs, it starts saving (writing to disk) the timestamp of the last record sent to the Splunk platform. This checkpoint ensures data continuity when the forwarder is restarted.

1. When a forwarder starts, it looks for the checkpoint with a previously saved timestamp. The discovered checkpoint is the starting point for resumed data collection.

2. If a checkpoint is not located, the input uses the logd-starttime value instead.

3. If the input finds neither the checkpoint nor the logd-starttime parameter, the input attempts to retrieve all available historical data from the persistent logd storage.

This feature does not support "log stream" ingestion mode.

Last modified on 12 April, 2023

This documentation applies to the following versions of Splunk Cloud Platform: 9.2.2406, 8.2.2201, 8.2.2203, 8.2.2112, 8.2.2202, 9.0.2205, 9.0.2208, 9.0.2209, 9.0.2303, 9.0.2305, 9.1.2308, 9.1.2312, 9.2.2403 (latest FedRAMP release)

