
Forward data with the logd input

The logd input is a modular input that collects log data from the macOS Unified Logging system. Using the logd modular input, the forwarder pushes Unified Logging data to your Splunk platform deployment. The logd input is supported on macOS 10.15, 11, and 12.

Before you begin

Before you run the logd input for the first time, decide how much historical data, if any, you want to ingest on the first run. By default, the input ingests all available historical data stored by logd, which can be days, weeks, or even months of data. To limit this, use the logd-starttime configuration parameter described in this topic to specify the earliest time for records to be read.
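For example, a stanza like the following sketch limits the first run to records written on or after January 1, 2023. The stanza name is illustrative, and the value follows the time format shown in the Parameters table:

[logd://example]

logd-starttime = 2023-01-01 00:00:00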

To read logd files, you must run the Splunk software with admin privileges.

Best practices for configuring logd input

Here are a few best practices to keep in mind when you configure your logd input:

  • Start with a simple configuration before you build something more complex.
  • For more information on configurations, see the spec file splunkforwarder/etc/apps/logd_input/README/inputs.conf.spec.

Define your stanzas

  1. On your forwarder, navigate to splunkforwarder/etc/apps/logd_input/default/.
  2. Copy the inputs.conf file.
  3. Navigate to splunkforwarder/etc/apps/logd_input/local/.
  4. Paste the copy of the inputs.conf file.
  5. Open the inputs.conf file with a text editor.
  6. Define the logd stanza by configuring data retrieval and data formatting parameters. For a full list of parameters, see the Parameters table. The number of stanzas determines the number of input instances that are run. For example, if you define five unique stanzas on a forwarder, the logd input returns five unique reports.
  7. Save your changes.
  8. Restart your forwarder, as shown in the example command after these steps.
  9. (Optional) Use a deployment server to push the changes to your settings to other forwarders in your Splunk platform deployment. For more information, see the Use forwarder management to manage apps topic in the Updating Splunk Enterprise Instances manual.
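If you manage the forwarder from the command line, the restart in step 8 typically looks like the following, assuming that $SPLUNK_HOME points to your forwarder installation directory:

$SPLUNK_HOME/bin/splunk restart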

Reference: parameter definitions

The following table describes each parameter that you can set in your logd input stanza.

Parameter Description
logd-show = <string> Shows the contents of the system log datastore.
logd-backtrace = <string> Includes backtrace information in messages from the system log datastore.
logd-debug = <string> Includes debug-level messages from the system log datastore.
logd-info = <string> Includes info-level messages from the system log datastore.
logd-loss = <string> Shows data loss for the system log datastore.
logd-signpost = <string> Shows signpost events for the system log datastore.
logd-predicate = <string> Filters messages using the provided predicate, based on NSPredicate. Only a single predicate is supported.
logd-process = <string> The process on which to operate. You can pass this option more than once to operate on multiple processes. This parameter is supported only on macOS 11; it is not supported on macOS 10.
logd-source = <string> Includes symbol names and source line numbers for messages, if available.
logd-include-fields = <string> A comma-separated list of fields to include in a query.
logd-exclude-fields = <string> A comma-separated list of fields to exclude from a query.
logd-interval = <string> The query frequency interval, in seconds.
logd-starttime = <string> The date and time from which the first query pulls data, in the format "YYYY-MM-DD HH:mm:SS".

Configuration examples

Example of two logd inputs on one forwarder

[logd://bigsur]

logd-predicate = (subsystem == "com.apple.locationd.Position") && ((senderImagePath ENDSWITH "locationd") OR (senderImagePath ENDSWITH "IOHDCPFamily"))
logd-backtrace = no
logd-debug = no
logd-info = true
logd-loss = no
logd-signpost = yes
logd-exclude-fields = bootUUID,formatString

[logd://bigsur_2]

logd-backtrace = no
logd-debug = yes
logd-info = no
logd-loss = yes
logd-signpost = false
logd-include-fields = bootUUID,formatString

Example of two universal forwarder instances with two stanzas on each instance

This example shows the same stanza name used on two different universal forwarder instances.

  • Instance one:
[logd://bigsur]

logd-info = true
logd-source = yes
logd-exclude-fields = bootUUID,formatString

[logd://bigsur_2]

logd-backtrace = no
logd-debug = yes
logd-include-fields = bootUUID,formatString
  • Instance two:
[logd://catalina]

logd-predicate = category IN { "GeneralCLX", "calendarinterval" }
logd-backtrace = no
logd-debug = yes
logd-info = no
logd-loss = yes
logd-signpost = false
logd-source = yes


[logd://bigsur_2]

logd-backtrace = no
logd-debug = yes
logd-include-fields = bootUUID,formatString


Troubleshoot the logd input

Note that the input is subject to all forwarder data transformation and routing rules. For example, if the eventMessage field contains a timestamp, by default the pipeline extracts that timestamp and uses it instead of the timestamp you explicitly specified. To disable this behavior, see Tune timestamp recognition for better indexing performance.
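For example, to stamp incoming logd events with the current index time rather than any timestamp found in eventMessage, a props.conf entry like the following sketch can disable timestamp extraction on the indexer or heavy forwarder that parses the data. The sourcetype name here is an assumption for illustration:

[my_logd_sourcetype]

DATETIME_CONFIG = CURRENT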

I can't see my logd data

If you cannot see your data, try the following steps:

  • Check that the logd input is enabled. The logd input is disabled by default; you must define a stanza to enable it.
  • Make sure that the logd reading utility is running. Use the command: ps aux | grep "log show".
  • Verify that your parameters are correctly configured. To do this, run a shell command that runs the mod-input against a specific stanza so that you can see the output on stdout. For example:

$SPLUNK_HOME/bin/splunk cmd splunkd print-modinput-config logd logd://z | $SPLUNK_HOME/bin/splunkd logd-modinput

Timestamps are incorrect

The universal forwarder does not parse events before passing them on to the indexer. If your timestamps are incorrect, make sure that the props.conf and transforms.conf settings are properly configured on your indexer. See the Managing Indexers and Clusters of Indexers manual for more information about configuring indexers.
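As a sketch, explicit timestamp settings in props.conf on the indexer might look like the following. The sourcetype name, time prefix, and time format are assumptions that you would adapt to your events:

[my_logd_sourcetype]

TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19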

The ingested data is not what I expected

If you do not see the data you expect, or you see data that you do not expect, check which switches were passed to the log show utility.

Run ps aux | grep "log show" and verify that the result is what you configured in the stanza.

Note that you might need to run this command more than once to capture the results while input instances are running, because the log show utility runs periodically rather than continuously.

I see data duplicates

If you have multiple stanzas running, make sure the stanza attributes do not overlap.
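For example, two stanzas that filter on disjoint predicates do not overlap, while two stanzas with no predicates both ingest the full datastore and duplicate each other. The stanza names and predicate values in this sketch are illustrative, based on the earlier configuration examples:

[logd://position]

logd-predicate = subsystem == "com.apple.locationd.Position"

[logd://calendar]

logd-predicate = category IN { "calendarinterval" }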

How the logd reader works

The settings you define in a logd stanza create filters for your data.

If you enable the input without configuring any parameters, the logd input ingests the full content of the logd persistent storage, starting with the oldest entry. The logd configuration supports both prescriptive and restrictive declaration of record definitions through the logd-include-fields and logd-exclude-fields parameters, as shown in the sketch after this paragraph. If one or more FIELD=VALUE match arguments are passed, the output is retrieved and formatted accordingly.
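The following two stanzas sketch the two styles: the first ingests only the listed fields, and the second ingests every field except the listed ones. The stanza names are illustrative, and the field names come from the configuration examples earlier in this topic:

[logd://prescriptive]

logd-include-fields = bootUUID,formatString

[logd://restrictive]

logd-exclude-fields = bootUUID,formatString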

Once the logd input runs, it saves (writes to disk) the timestamp of the last record sent to the Splunk platform. This checkpoint ensures data continuity when the forwarder is restarted.

1. When a forwarder starts, it looks for the checkpoint with a previously saved timestamp. The discovered checkpoint is the starting point for resumed data collection.

2. If a checkpoint is not located, the input uses the logd-starttime value instead.

3. If the input finds neither a checkpoint nor the logd-starttime parameter, it attempts to retrieve all available historical data from the persistent logd storage.

This feature does not support "log stream" ingestion mode.
