
Use the service view for a complete view of your service health 🔗

As a service owner, you can use the service view in Splunk APM to get a complete view of your service health in a single pane of glass. The service view includes a service-level indicator (SLI) for availability, dependencies, request, error, and duration (RED) metrics, runtime metrics, infrastructure metrics, Tag Spotlight, endpoints, and logs for a selected service. You can also quickly navigate to code profiling and memory profiling for your service from the service view.

The service view is available for instrumented services, pub/sub queues, databases, and inferred services. See Service view support for various service types for details on the information available for each service type.

Access the service view for your service 🔗

You can access the service view for a specific service in several places.

You can search for the service using the search field in the top toolbar.

Animation showing a user searching for the checkoutservice and selecting the service result.

You can also access the service view for a specific service within the service map. Start by selecting Service Map on the APM landing page. Select a service in the service map, then select Service view in the panel.

Screenshot of the service view button within the service map when a service is selected.

Finally, you can also access the service view for a specific service by selecting the service from the APM landing page.

Use the service overview to monitor the health of your service 🔗

When you open the service view, an environment is selected based on your recently viewed environments. Adjust the environment and time range filters if necessary. Use the following sections to monitor the health of your service.

Service metrics 🔗

Use the following metrics in the Service metrics section to monitor the health of your service. Collapse sub-sections that are not relevant to you to customize your service view.

This animation shows the service metrics for a service in the service view. The user selects a chart to view example traces.
  • Success rate SLI - The success service-level indicator (SLI) shows the percentage of time requests for your service were successful in the last 30 days. The chart shows successful and unsuccessful requests. If you configured a success rate service-level objective (SLO), an additional chart displays success rate over the compliance window you specified in your objective. See Measure and track your service health metrics with service level objectives (SLOs). For an illustration of how a success rate is derived from raw request counts, see the sketch after this list.

  • Service map - The service map shows the immediate upstream and downstream dependencies for the service you are viewing. The service map in the service view is limited to 20 services, sorted by the highest number of requests. Hover over the chart and select View full service map to go to the service map.

  • Service requests - The service requests chart shows streaming request data for the service. If you have detectors configured for service requests, triggered alerts display below the chart. Select the chart to view example traces. Select the alert icon to view alert details.

  • Service latency - The service latency chart shows p50, p90, and p99 latency data for the service. If you have detectors configured for service latency, triggered alerts display below the chart. Select the chart to view example traces. Select the alert icon to view alert details.

  • Service error - The service error chart shows streaming error data for the service. If you have detectors configured for the service error rate, triggered alerts display below the chart. Select the chart to view example traces. Select the alert icon to view alert details.

  • Dependency latency by type - The dependency latency by type chart shows the latency for each of the downstream systems. Select the chart to see details about each system category. Systems are categorized as follows:
    • Services - Instrumented services

    • Databases

    • Inferred services - Uninstrumented third-party services

    • Pub/sub queues - Publisher/subscriber queues
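
To make the success rate SLI and the p50/p90/p99 latency values above concrete, the following is a minimal sketch of how such numbers are derived from raw request data. The sample requests and their layout are illustrative assumptions; Splunk APM computes these values for you from the service.request metric, so you do not write this code yourself.

```python
# Illustrative sketch only: derive a success-rate SLI, an error rate, and
# latency percentiles from a hypothetical batch of request records.
from statistics import median, quantiles

# Each tuple is (duration in milliseconds, error flag) for one request.
requests = [
    (120, False), (95, False), (310, True), (88, False), (143, False),
    (2050, True), (101, False), (97, False), (180, False), (75, False),
]

total = len(requests)
successful = sum(1 for _, error in requests if not error)

success_rate_sli = 100.0 * successful / total        # success rate SLI, in percent
error_rate = 100.0 * (total - successful) / total    # what the service error chart tracks

durations = sorted(duration for duration, _ in requests)
cuts = quantiles(durations, n=100)                   # 99 percentile cut points
p50, p90, p99 = median(durations), cuts[89], cuts[98]

print(f"success rate: {success_rate_sli:.1f}%  error rate: {error_rate:.1f}%")
print(f"p50: {p50} ms  p90: {p90:.0f} ms  p99: {p99:.0f} ms")
```

In the service view these values are charted continuously over the selected time range, and over the 30-day window for the SLI, rather than computed over a fixed batch as in this sketch.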

Runtime metrics 🔗

Instrument your back-end applications to send spans to Splunk APM to view runtime metrics. See Instrument back-end applications to send spans to Splunk APM.
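
As an illustration of the export path only, the following is a minimal sketch of sending spans over OTLP to a locally running Splunk Distribution of the OpenTelemetry Collector using the OpenTelemetry Python SDK. The service name, environment value, and endpoint are assumed example values; the runtime metrics in this section come from the language-specific instrumentation agents listed in the Metric reference, not from code like this.

```python
# Illustrative sketch only: export spans over OTLP/gRPC to a local collector.
# "checkoutservice", "production", and the endpoint are assumed example values.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# service.name and deployment.environment drive the service and environment
# filters you see in the service view.
resource = Resource.create({
    "service.name": "checkoutservice",
    "deployment.environment": "production",
})

provider = TracerProvider(resource=resource)
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317"))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-example")
with tracer.start_as_current_span("process-checkout") as span:
    span.set_attribute("order.items", 3)  # example span attribute
```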

The available runtime metrics vary based on language. See Metric reference for more information.

Infrastructure metrics 🔗

If you are using the Splunk Distribution of the OpenTelemetry Collector and the SignalFx exporter, infrastructure metrics display for the environment and service you are viewing. See Get started with the Splunk Distribution of the OpenTelemetry Collector and SignalFx exporter.

The following infrastructure metrics are available:

  • Host CPU usage

  • Host memory usage

  • Host disk usage

  • Host network usage

  • Pod CPU usage

  • Pod memory usage

  • Pod disk usage

  • Pod network utilization

View Tag Spotlight view for your service 🔗

Select Tag Spotlight to view the Tag Spotlight view filtered for your service. See Analyze service performance with Tag Spotlight to learn more about Tag Spotlight.

View endpoints for your service 🔗

Select the Endpoints tab to view endpoints for the service. Use the search field to search for specific endpoints. Use the sort drop-down list to change how endpoints are sorted. Select an endpoint to view endpoint details or go to Tag Spotlight, traces, code profiling, or the dashboard for the endpoint.

View logs for your service 🔗

Select Logs to view logs for the environment and service you are viewing. By default, logs are displayed for all indices that correspond to the first listed Log Observer Connect connection. Logs are filtered by the service you are viewing using the service.name value. If your logs do not have a service.name value, you can create an alias in Splunk Web. See Create field aliases in Splunk Web.

To select a different connection or refine which indices logs are pulled from, select Configure service view.

  1. In the Log Observer Connect Index drop-down list, select the Log Observer Connect connection, then select the corresponding indices you want to pull logs from.

  2. Select Apply.

  3. Select Save changes.

The connection and indices you select are saved for all users in your organization for each unique service and environment combination.

View traces for your service 🔗

Select Traces to view traces for the environment and service you are viewing. The Traces tab includes charts for Service requests and errors and Service latency. Select within the charts to see example traces.

Under the charts are lists of Traces with errors and Long traces. Select the trace ID link to open the trace in the trace waterfall view. Select View more in Trace Analyzer to search additional traces. See Investigate traces using Trace Analyzer in Splunk APM for more information about using Trace Analyzer to search traces.

View top commands or queries for your databases 🔗

If you select a Redis or SQL database from the service drop-down menu, you can select Database Query Performance to view top commands or queries for your database. See Monitor Database Query Performance to learn more.

Go to the code profiling view for your service 🔗

Select Code profiling to go to the code profiling view of AlwaysOn Profiling filtered for your service. See Introduction to AlwaysOn Profiling for Splunk APM to learn more about AlwaysOn Profiling.

Go to the memory profiling view for your service 🔗

Select Memory profiling to go to the memory profiling view of AlwaysOn Profiling filtered for your service. See Introduction to AlwaysOn Profiling for Splunk APM to learn more about AlwaysOn Profiling.

Configure the service view 🔗

Select Configure service view to modify the Log Observer Connect connection and indices for the logs you want to display for your service.

  1. In the Log Observer Connect Index drop-down list, select the Log Observer Connect connection, then select the corresponding indices you want to pull logs from.

  2. Select Apply.

  3. Select Save changes.

The connection and indices you select are saved for all users in your organization for each unique service and environment combination.

Service view support for various service types 🔗

The information available in your service view varies based on the type of service you select. The following list shows which sections are available for each service type.

  • Overview - Available for all service types. For instrumented services, it includes service metrics, runtime metrics, and infrastructure metrics. For databases, pub/sub queues, and inferred services, it includes service metrics only.

  • Tag Spotlight - Available for all service types.

  • Endpoints - Available for instrumented services and inferred services only.

  • Logs - Available for all service types.

  • Traces - Available for all service types.

  • Database Query Performance - Available only for Redis and SQL databases.

  • Code profiling - Available for instrumented services only.

  • Memory profiling - Available for instrumented services only.

Metric reference 🔗

The following metrics are used in the service view.

Service metrics 🔗

  • Service requests - service.request with a count function

  • Service latency - service.request with a median function, service.request with a percentile function and a percentile value of 90, and service.request with a percentile function and a percentile value of 99

  • Service errors - service.request with a count function and an sf_error:True filter

  • SLI/SLO - service.request with a count function
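
If you want to reproduce charts like these in a custom chart or detector, the definitions above translate roughly into SignalFlow programs such as the sketch below, written here as Python string literals. The service name, dimension names, and exact program text are assumptions for illustration and may differ from what the built-in service view charts use.

```python
# Rough SignalFlow equivalents of the service metrics charts (sketch only).
# "checkoutservice" is an assumed example service name; sf_service and sf_error
# are assumed dimension names on the service.request metric.
SERVICE = "checkoutservice"

service_requests = (
    f"data('service.request', filter=filter('sf_service', '{SERVICE}'))"
    ".count().publish(label='Service requests')"
)

service_latency_p90 = (
    f"data('service.request', filter=filter('sf_service', '{SERVICE}'))"
    ".percentile(pct=90).publish(label='p90 latency')"
)

service_errors = (
    f"data('service.request',"
    f" filter=filter('sf_service', '{SERVICE}') and filter('sf_error', 'true'))"
    ".count().publish(label='Service errors')"
)

# A program like these can be pasted into a chart's SignalFlow editor or run
# through the SignalFlow API in Splunk Observability Cloud.
print(service_requests, service_latency_p90, service_errors, sep="\n")
```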

.NET runtime metrics 🔗

  • Heap usage - process.runtime.dotnet.gc.committed_memory.size

  • GC collections - process.runtime.dotnet.gc.collections.count

  • Application activity - process.runtime.dotnet.gc.allocations.size

  • GC heap size - process.runtime.dotnet.gc.heap.size

  • GC pause time - process.runtime.dotnet.gc.pause.time

  • Monitor lock contention - process.runtime.dotnet.monitor.lock_contention.count

  • Threadpool thread - process.runtime.dotnet.thread_pool.threads.count

  • Exceptions - process.runtime.dotnet.exceptions.count

Java runtime metrics 🔗

  • Memory usage - runtime.jvm.gc.live.data.size, runtime.jvm.memory.max, runtime.jvm.memory.used

  • Allocation rate - process.runtime.jvm.memory.allocated

  • Class loading - runtime.jvm.classes.loaded, runtime.jvm.classes.unloaded

  • GC activity - runtime.jvm.gc.pause.totalTime, runtime.jvm.gc.pause.count

  • GC overhead - runtime.jvm.gc.overhead

  • Thread count - runtime.jvm.threads.live, runtime.jvm.threads.peak

  • Thread pools - executor.threads.active, executor.threads.idle, executor.threads.max

Node.js runtime metrics 🔗

  • Heap usage - process.runtime.nodejs.memory.heap.total, process.runtime.nodejs.memory.heap.used

  • Resident set size - process.runtime.nodejs.memory.rss

  • GC activity - process.runtime.nodejs.memory.gc.size, process.runtime.nodejs.memory.gc.pause, process.runtime.nodejs.memory.gc.count

  • Event loop lag - process.runtime.nodejs.event_loop.lag.max, process.runtime.nodejs.event_loop.lag.min

Infrastructure metrics 🔗

  • Host CPU usage - cpu.utilization

  • Host memory usage - memory.utilization

  • Host disk usage - disk.summary_utilization

  • Host network usage - network.total

  • Pod CPU usage - container_cpu_utilization, cpu.num_processors, machine_cpu_cores, k8s.container.ready

  • Pod memory usage - k8s.container.ready, container_memory_usage_bytes, container_spec_memory_limit_bytes

  • Pod disk usage - k8s.container.ready, container_fs_usage_bytes

  • Pod network utilization - k8s.container.ready, pod_network_receive_bytes_total, pod_network_transmit_bytes_total

This page was last updated on Aug 06, 2024.