diff --git a/modules/ROOT/pages/otel-support.adoc b/modules/ROOT/pages/otel-support.adoc index 048a00488..e53bc975b 100644 --- a/modules/ROOT/pages/otel-support.adoc +++ b/modules/ROOT/pages/otel-support.adoc @@ -4,45 +4,46 @@ include::_attributes.adoc[] endif::[] :page-aliases: otel, otel support, OpenTelemetry -OpenTelemetry is an observability standard consisting of specifications, APIs, and SDKs. It helps you instrument, generate, collect, and export telemetry data, such as metrics, logs, and traces, to analyze software behavior. +OpenTelemetry is an observability standard consisting of specifications, APIs, and SDKs. It helps you instrument, generate, collect, and export telemetry data, such as metrics, logs, and traces, to analyze software behavior. -OpenTelemetry enables Mule runtime engine to provide observability into the behavior of Mule apps. With distributed tracing, Mule also brings observability to interactions between Mule apps and non-Mule components that leverage this standard. +OpenTelemetry enables Mule runtime engine to provide observability into the behavior of Mule applications. With tracing context propagation, Mule also brings observability to interactions between Mule apps and non-Mule components that leverage this standard. -Mule supports generating and exporting OpenTelemetry distributed traces. However, metrics and logs aren't yet supported through OpenTelemetry. +Mule supports generating and exporting OpenTelemetry traces and logs. Metrics aren't yet supported through OpenTelemetry. == Before You Begin To use the OpenTelemetry Mule features, you must be familiar with: -* Distributed tracing concepts, including traces and spans. +* Distributed tracing concepts, including traces and spans. + If you're new to distributed tracing and OpenTelemetry, we recommend starting with https://opentelemetry.io/docs/concepts/what-is-opentelemetry/[What is OpenTelemetry] and https://opentelemetry.io/docs/concepts/observability-primer/#understanding-distributed-tracing[Understanding distributed tracing]. * xref:monitoring::telemetry-exporter.adoc[How to export telemetry data to third-party monitoring systems]. Before starting with the observability standard, you must have: -* For flow tracing, Mule runtime 4.6.0 or later. -* For distributed tracing, Anypoint Connector for HTTP (HTTP Connector) 1.8. -* Telemetry exporter configured. +* For OpenTelemetry application tracing, Mule runtime 4.6.0 or later. +* For OpenTelemetry tracing context propagation, Anypoint Connector for HTTP (HTTP Connector) 1.8 or later. +* For OpenTelemetry logs and OpenTelemetry direct export, Mule runtime 4.10.0 or later. +* Telemetry exporter or Mule runtime direct export configured. -== Tracing a Mule App +== Tracing a Mule Application Mule leverages OpenTelemetry to provide: -* App execution observability +* Application execution observability + -Observability includes how each component of the app behaves. The main span involved in a Mule app is the flow span. Each Mule component execution involved in the message processing is represented as a child span of the corresponding flow. +Tracing a Mule runtime application allows you to observe its execution while maintaining the mental model of a Mule engineer. Its main objective is to clearly show how the Mule application processes messages, helping you to develop, monitor, and troubleshoot more effectively. * Distributed tracing + -Tracing enables you to track Mule app interactions. 
When multiple systems and services are involved, distributed tracing tracks app requests as they flow through distributed environments, providing a comprehensive view of the app's execution.
+Tracing also enables you to track Mule application interactions. When multiple systems and services are involved, distributed tracing tracks application requests as they flow through distributed environments, providing a comprehensive view of the overall execution.

-For example, if an HTTP Listener receives https://www.w3.org/TR/trace-context/[W3C Trace Context] headers, the flow span in Mule acquires the client's remote parent span, the span where the HTTP call originated. Mule automatically propagates the current span to the server endpoints when using the HTTP Requester component.
+Distributed tracing is achieved through tracing context propagation. For example, if an HTTP Listener receives https://www.w3.org/TR/trace-context/[W3C Trace Context] headers, the flow span in Mule acquires the client's remote parent span, the span where the HTTP call originated. Mule automatically propagates the current span to the server endpoints when using the HTTP Requester component. Mule receives and propagates the span context in the https://www.w3.org/TR/trace-context/[W3C Trace Context] format.

=== Mule Span Data

-A trace is a sequence of spans representing a complete operation in a distributed system. Spans can be nested and represent a unit of work or operation. Each span includes: 
+A trace is a sequence of spans representing a complete operation in a distributed system. Spans can be nested and represent a unit of work or operation. Each span includes:

[%header%autowidth.spread,cols="a,a,a"]
|===
@@ -50,7 +51,7 @@ A trace is a sequence of spans representing a complete operation in a distribute
| Name | Name of the span | `mule:set-variable`
| Parent Span ID | Span ID of current span's parent | `86838830d494d679`
| Start and End Timestamps | Earliest and latest timestamp for a span | `2021-10-22 16:04:01.209458162 +0000 UTC`
-| Span Context | Data that uniquely identifies the span: trace ID, span ID, trace flags, and trace state | 
+| Span Context | Data that uniquely identifies the span: trace ID, span ID, trace flags, and trace state |
| Attributes | Key-value pairs containing metadata used to annotate a span to carry information about the operation it's tracking. | {
“location”: “flow/processors/2”,
“correlation.id”: “1234abcd”
@@ -60,7 +61,7 @@ A trace is a sequence of spans representing a complete operation in a distribute
"message": "OK",
"timestamp": "2021-10-22 16:04:01.209512872 +0000 UTC"
}
-| Span Links | Links that associate one span with one or more spans, indicating a causal relationship. | 
+| Span Links | Links that associate one span with one or more spans, indicating a causal relationship. |
| Span Status | Status attached to a span. You can set a span status when a known error, such as an exception, occurs in the app code. | `ERROR`, `OK`, `UNSET`
| Span Kind | Information about how the trace is assembled. | `CLIENT`, `INTERNAL`, `PRODUCER`
|===
@@ -69,18 +70,18 @@ A trace is a sequence of spans representing a complete operation in a distribute
When tracing a flow execution, each Mule component execution involved in processing the message is represented as a span. The component spans describe the relevant aspects of the execution. They can range from a single span for a simple component, such as the Set Payload, to more complex structures, such as a Batch Job.
-.Batch Instance +.Batch Instance The Mule runtime traces complex components, such as Batch Jobs. image::otel-batch-instance.png["A batch flow including Listener and Set Payload components, and a more complex structure"] .Spans Generated -The trace shows how the first step of the Batch Job processes each batch record, with each record span containing a child span that represents the unit of work the Logger component performs. -Next, it displays the Batch Aggregator span, along with the For Each component and the processors within it, such as Transform Message and Logger. -Then, the trace displays the other two batch steps and their respective components. +The trace shows how the first step of the Batch Job processes each batch record, with each record span containing a child span that represents the unit of work the Logger component performs. +Next, it displays the Batch Aggregator span, along with the For Each component and the processors within it, such as Transform Message and Logger. +Then, the trace displays the other two batch steps and their respective components. Finally, it shows the Batch On Complete span, along with the Logger component that appears after the Batch Job in the flow. -=== Example of Distributed Tracing Feature +=== Example of Tracing Context Propagation Feature In this example, one Mule app sends a request to another Mule app: @@ -90,7 +91,7 @@ image::otel-mule-app2.png["A flow showing the process flow for an asynchronous a Distributed tracing captures both flows as part of the same trace because OpenTelemetry is instrumented in Mule and HTTP Connector. -=== Errors in Tracing +=== Mule Errors in Tracing If the execution of the unit of work represented by a span produces an error, the span's status is set to `Error`. MuleSoft follows the semantic convention defined by OpenTelemetry that adds one or more `exception` events to the span: @@ -98,7 +99,7 @@ If the execution of the unit of work represented by a span produces an error, th * `exception.message`: Mule error message * `exception.stacktrace`: Mule flow stack string representation -In this example, when an error is produced, the span corresponding to the Raise Error component shows an error status in the Lightstep platform: +In this example, when an error is produced, the span corresponding to the Raise Error component shows an error status in the Lightstep platform: image::otel-errors-in-tracing.png["A flow of a telemetry flow with error handling"] @@ -117,7 +118,7 @@ Additional attributes for an HTTP Listener associated flow span: [%header%autowidth.spread] |=== -| Attribute | Description +| Attribute | Description | `net.host.name` | Host address of the HTTP Listener | `net.host.port` | Port of the HTTP Listener | `http.user_agent` | User agent of the received request @@ -126,7 +127,7 @@ Additional attributes for an HTTP Listener associated flow span: | `http.flavor` | HTTP version |=== -For flows with an HTTP Request: +For flows with an HTTP Request: * Span name: HTTP Method. For example, `GET`. * Span kind: `CLIENT`. 
@@ -135,7 +136,7 @@ Additional attributes for an HTTP Request associated flow span:
[%header%autowidth.spread]
|===
-| Attribute | Description 
+| Attribute | Description
| `http.url` | Target URL
| `net.host.port` | Port of the HTTP Request
| `net.peer.name` | Target IP address
@@ -143,6 +144,265 @@ Additional attributes for an HTTP Request associated flow span:
| `http.flavor` | HTTP version
|===
+
+== OpenTelemetry Logs
+
+The Mule runtime implements its xref:logging-in-mule.adoc[logging] by leveraging Log4j, where you can configure appenders and logging levels to send logs to different locations.
+
+The OpenTelemetry logging export works alongside Log4j logging. You can configure both OpenTelemetry logging export and Log4j logging at the same time. This flexible architecture doesn't force a full transition between OpenTelemetry and Log4j.
+
+You must still configure logging levels as part of the Log4j logging configuration, and these settings also apply to the OpenTelemetry logging exporter.
+
+Exported logs include application, domain, and Mule runtime server logs.
+
+=== Mule Log Record Data
+
+OpenTelemetry logging is a sequence of log records with a unified data model, where each record corresponds to a Java log call.
+Each log record contains:
+
+[%header%autowidth.spread,cols="a,a,a"]
+|===
+| Categories | Description | Example Values
+| Timestamp | Time of the Java log call | `2021-10-22 16:04:01.209458162 +0000 UTC`
+| Observed Timestamp | Time of the OpenTelemetry log event registration | `2021-10-22 16:04:01.209458162 +0000 UTC`
+| Trace ID | Trace where the log event originated. Not present if the log occurred outside an application trace. | `4a7b9e1d5c3f8a0e2b6d1f7c8a9e0d2c`
+| Span ID | Span where the log event originated. Not present if the log occurred outside an application trace. | `4b7f3c798460`
+| Trace Flags | W3C trace flags corresponding to the trace where the log event originated. Not present if the log occurred outside an application trace. | `01`
+| Severity Text | Severity of the log event. Also known as log level. | `INFO`
+| Body | Body of the log record. Also known as log message. | "Stopping flow: test-flow"
+| Instrumentation Scope | Fully qualified name of the Java class where the Java log call originated. | "org.mule.runtime.core.internal.lifecycle.AbstractLifecycleManager"
+| Attributes | Key-value pairs that annotate a log record with context metadata. Includes Log4j MDC entries. | {
+“location”: “flow/processors/2”,
+“correlation.id”: “1234abcd”
+}
+|===
+
+=== Correlation with Traces
+
+OpenTelemetry Log Records define fields for the Trace ID and Span ID. When tracing is enabled, the Mule runtime exports OpenTelemetry logs that include these values, enabling correlation between application traces and logs.
+
+If you aren't exporting OpenTelemetry logs, you can still correlate Log4j logs with traces by configuring MDC logging.
+Any entry added to the Log4j MDC is added to the OpenTelemetry logs as Log Record attributes, so all MDC data is exported as part of OpenTelemetry logs.
+
+== Direct Export from Mule Runtime
+
+Starting with Mule runtime 4.10.0, you can configure OpenTelemetry traces and logs outside Anypoint Monitoring.
+This means Mule runtime supports exporting its OpenTelemetry data to a customer-managed OpenTelemetry collector instead of the Anypoint Monitoring OpenTelemetry ingestion pipeline.
+
+When using this direct export feature, Anypoint Monitoring OpenTelemetry features aren't available.
+
+=== Recommended Architecture for Direct Export
+
+Mule runtime supports exporting all its telemetry data using the OpenTelemetry protocol (OTLP) and the binary/protobuf format. You can configure an endpoint for each signal type (traces and logs), and data is sent there.
+However, we strongly recommend using a collector because it provides the most flexible and feature-rich architecture for a wide range of use cases and deployment scenarios.
+
+See https://opentelemetry.io/docs/collector/[OpenTelemetry Collector] for more information.
+
+Key reasons for using a collector include:
+
+* Compatibility:
+
+Mule runtime sends telemetry signals to one endpoint (the collector) in one format (OTLP binary/protobuf), ensuring vendor neutrality. Then, the collector manages exporting signals to the selected backends.
+This is especially useful for monitoring solutions like Azure, which can't ingest OTLP signals directly and require a specific exporter, or Grafana, which implements authentication and additional metrics via collector components.
+
+Exporting the same signal to multiple third-party solutions is only possible through a collector because Mule runtime allows one endpoint per signal.
+
+Most third-party monitoring solutions support or provide their own collector implementations.
+
+* Security:
+
+Applications don't need external Internet access to reach third-party monitoring solutions.
+
+The collector handles authentication and authorization against third-party monitoring solutions.
+
+You can use it as a single point for dedicated network policies.
+
+* Performance:
+
+You can optimize collectors for specific goals, scale them independently, and distribute load without affecting Mule runtime instances.
+
+Configuration changes don't affect Mule runtime instances.
+
+* Observability:
+
+Collectors include multiple built-in metrics and health checks.
+
+* Centralized data processing:
+
+A collector can centralize data enrichment and transformation for Mule runtime telemetry and additional non-Mule services.
+
+=== Direct Export Configuration
+
+Mule runtime ships with predefined configuration templates that allow you to configure both traces and logs.
+
+Standalone and Hybrid Standalone:
+
+You can access and modify the configuration templates as needed. Configuration templates include Java system property placeholders that you can modify or replace with fixed values if required.
+You don't need to provide values for all the Java system property placeholders. If a system property isn't defined, the default value of the corresponding configuration entry is used instead.
+Tracing and logging export configurations apply to all Mule applications deployed to the runtime; individual application configuration isn't supported.
+
+RTF, CloudHub, and CloudHub 2.0:
+
+The Mule runtime configuration templates are managed internally and can't be accessed or modified. Configure these deployments by using Runtime Manager application properties:
+
+* For Runtime Fabric, configure exports by setting xref:runtime-fabric::secure-application-properties.adoc[secure application properties] only.
+* For CloudHub and CloudHub 2.0, configure exports by setting both secure and non-secure xref:cloudhub-2::ch2-manage-props.adoc#add-application-properties-in-runtime-manager[application properties].
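+
+For example, here is a minimal sketch of what a standalone direct export setup might look like, assuming you define the properties as Java system properties in `$MULE_HOME/conf/wrapper.conf` (the `wrapper.java.additional.<n>` indexes and the collector endpoint are placeholder values):
+
+[source,properties]
+----
+# Illustrative only: enable OTLP export of traces and logs to a
+# customer-managed collector. Unset properties keep their default values.
+wrapper.java.additional.50=-Dmule.openTelemetry.tracer.exporter.enabled=true
+wrapper.java.additional.51=-Dmule.openTelemetry.tracer.exporter.endpoint=http://my-collector:4317
+wrapper.java.additional.52=-Dmule.openTelemetry.logging.exporter.enabled=true
+wrapper.java.additional.53=-Dmule.openTelemetry.logging.exporter.endpoint=http://my-collector:4317
+----
+
+On RTF, CloudHub, and CloudHub 2.0, you set the corresponding values as Runtime Manager application properties instead, as described above.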
+
+=== Direct Export for Traces Configuration
+
+Tracing export setup properties include:
+
+[%header%autowidth.spread,cols="a,a,a"]
+|===
+| Property | Default Value | Description
+| `mule.openTelemetry.tracer.exporter.enabled` | `false` | Set to `true` to enable the OpenTelemetry tracing exporter.
+| `mule.openTelemetry.tracer.exporter.endpoint` | http://localhost:4317 for GRPC, http://localhost:4318/v1/traces for HTTP | Endpoint where the traces are exported.
+| `mule.openTelemetry.tracer.exporter.type` | `GRPC` | Transport type. Available values: `GRPC`, `HTTP`.
+| `mule.openTelemetry.tracer.exporter.headers` | Empty list | Headers included on every export call made to the export endpoint.
+|===
+
+Tracing export tuning properties include:
+
+[%header%autowidth.spread,cols="a,a,a"]
+|===
+| Property | Default Value | Description
+| `mule.openTelemetry.tracer.level` | `MONITORING` | Mule runtime tracing level. Available values: `OVERVIEW`, `MONITORING`, `DEBUG`.
+| `mule.openTelemetry.tracer.exporter.sampler` | `parentbased_traceidratio` | Head sampling strategy used. Available values: `always_on`, `always_off`, `traceidratio`, `parentbased_always_off`, `parentbased_traceidratio`.
+| `mule.openTelemetry.tracer.exporter.sampler.arg` | 0.1 | Argument for the head sampling strategy. For example, the sampling ratio for the `traceidratio` strategy.
+| `mule.openTelemetry.tracer.exporter.timeout` | 10 seconds | Timeout for every export call made to the export endpoint.
+| `mule.openTelemetry.tracer.exporter.tls.enabled` | `false` | Set to `true` to enable the use of TLS for every export call made to the export endpoint.
+| `mule.openTelemetry.tracer.exporter.tls.certFileLocation` | - | TLS certificate file path.
+| `mule.openTelemetry.tracer.exporter.tls.keyFileLocation` | - | TLS certificate public key file path.
+| `mule.openTelemetry.tracer.exporter.tls.caFileLocation` | - | TLS certificate authority file path.
+| `mule.openTelemetry.tracer.exporter.compression` | `none` | Compression algorithm used to compress the tracing data sent on every export call made to the export endpoint. Available values: `none`, `gzip`.
+| `mule.openTelemetry.tracer.exporter.backoff.maxAttempts` | 5 | Maximum retry attempts to export a batch of spans when the export endpoint responds with a retryable error.
+| `mule.openTelemetry.tracer.exporter.backoff.initial` | 1 second | Wait time before the first retry attempt.
+| `mule.openTelemetry.tracer.exporter.backoff.maximum` | 5 seconds | Maximum wait time before a retry attempt.
+| `mule.openTelemetry.tracer.exporter.backoff.multiplier` | 1.5 | Multiplier applied to the last wait time before a new retry attempt.
+| `mule.openTelemetry.tracer.exporter.batch.scheduledDelay` | 5000 milliseconds | Maximum interval between two consecutive exports.
+| `mule.openTelemetry.tracer.exporter.batch.queueSize` | 2048 spans | Export queue size. When full, non-fitting spans are dropped, and a WARNING log appears. For example, "Export queue overflow: 1 spans have been dropped. Total spans dropped since the export started: 2".
+| `mule.openTelemetry.tracer.exporter.batch.maxSize` | 512 spans | Maximum number of spans per exported batch.
+|===
+
+=== Direct Export for Logs Configuration
+
+Logging export setup properties include:
+
+[%header%autowidth.spread,cols="a,a,a"]
+|===
+| Property | Default Value | Description
+| `mule.openTelemetry.logging.exporter.enabled` | `false` | Set to `true` to enable the OpenTelemetry logging exporter.
+| `mule.openTelemetry.logging.exporter.endpoint` | http://localhost:4317 for GRPC, http://localhost:4318/v1/logs for HTTP | Endpoint where the logs are exported.
+| `mule.openTelemetry.logging.exporter.type` | `GRPC` | Transport type. Available values: `GRPC`, `HTTP`.
+| `mule.openTelemetry.logging.exporter.headers` | Empty list | Headers included on every export call made to the export endpoint.
+|===
+
+Logging export tuning properties include:
+
+[%header%autowidth.spread,cols="a,a,a"]
+|===
+| Property | Default Value | Description
+| `mule.openTelemetry.logging.exporter.timeout` | 10 seconds | Timeout for every export call made to the export endpoint.
+| `mule.openTelemetry.logging.exporter.compression` | `none` | Compression algorithm used to compress the logging data sent on every export call made to the export endpoint. Available values: `none`, `gzip`.
+| `mule.openTelemetry.logging.exporter.tls.enabled` | `false` | Set to `true` to enable the use of TLS for every export call made to the export endpoint.
+| `mule.openTelemetry.logging.exporter.tls.certFileLocation` | - | TLS certificate file path.
+| `mule.openTelemetry.logging.exporter.tls.keyFileLocation` | - | TLS certificate public key file path.
+| `mule.openTelemetry.logging.exporter.tls.caFileLocation` | - | TLS certificate authority file path.
+| `mule.openTelemetry.logging.exporter.backoff.maxAttempts` | 5 | Maximum retry attempts to export a batch of log records when the export endpoint responds with a retryable error.
+| `mule.openTelemetry.logging.exporter.backoff.initial` | 1 second | Wait time before the first retry attempt.
+| `mule.openTelemetry.logging.exporter.backoff.maximum` | 5 seconds | Maximum wait time before a retry attempt.
+| `mule.openTelemetry.logging.exporter.backoff.multiplier` | 1.5 | Multiplier applied to the last wait time before a new retry attempt.
+| `mule.openTelemetry.logging.exporter.batch.scheduledDelay` | 5000 milliseconds | Maximum interval between two consecutive exports.
+| `mule.openTelemetry.logging.exporter.batch.queueSize` | 2048 log records | Export queue size. When full, non-fitting log records are handled according to the backpressure strategy. When records are dropped, a WARNING log appears. For example, "Export queue overflow: 1 log records have been dropped. Total dropped log records since the export started: 2".
+| `mule.openTelemetry.logging.exporter.batch.maxSize` | 512 log records | Maximum number of log records per exported batch.
+| `mule.openTelemetry.logging.exporter.batch.backPressure.strategy` | `BLOCK` | Strategy for handling full export queues. Available values: `BLOCK`, `DROP`.
+|===
+
+== Advanced Features
+
+This section describes advanced features that are available for OpenTelemetry support in Mule runtime.
+
+=== Custom Service Name and Namespace
+
+Service attributes identify the source of the telemetry data. Mule runtime supports setting the `service.name` and `service.namespace` properties to custom values:
+
+[%header%autowidth.spread,cols="a,a,a"]
+|===
+| Property | Default Value | Description
+| `mule.openTelemetry.tracer.exporter.resource.service.name` | Artifact ID | Service name used by the exporter for traces.
+| `mule.openTelemetry.tracer.exporter.resource.service.namespace` | `mule` | Service namespace used by the exporter for traces.
+| `mule.openTelemetry.logging.exporter.resource.service.name` | Artifact ID | Service name used by the exporter for logs.
+| `mule.openTelemetry.logging.exporter.resource.service.namespace` | `mule` | Service namespace used by the exporter for logs.
+|===
+
+[NOTE]
+====
+* The configured service namespace applies to both traces and logs.
+* For Standalone and Hybrid Standalone, setting a custom service namespace applies to all the applications deployed. Setting `service.name` isn't supported.
+====
+
+=== Custom Attributes
+
+Adding custom data to the generated spans and log records can help with:
+
+* Adding business context, such as an order ID.
+* Adding user context, such as a user ID.
+* Enabling filtering and search.
+* Improving troubleshooting by adding debug or diagnostic data.
+
+Mule runtime supports adding custom attributes to the spans and log records through the xref:tracing-module::tracing-module-reference.adoc[Tracing Module].
+Any logging variable set by the module is also added as an attribute of the span that represents the `set-logging-variable` operation and the spans and log records that are generated during the flow's execution, including subflows, flow references, and error handlers.
+
+While useful, custom attributes must be handled with care. Include only relevant information. Too many attributes can affect performance and reach the feature limits:
+
+* Maximum of 128 attributes per span (additional attributes are dropped).
+* Maximum length of 16384 bytes per attribute (exceeding bytes are truncated).
+
+=== Backpressure Strategy
+
+Backpressure occurs when the export queues for traces or logs are full. Backpressure strategies define how to handle that event:
+
+* Traces backpressure: When the queue is full, non-fitting spans are dropped.
+* Logs backpressure: The default strategy is to block until space becomes available for the log records that can't enter the export queue, matching the Log4j logging behavior. You can also configure a drop strategy.
+
+When spans or log records are dropped, a WARNING log is generated. Internal statistics track the total drops, so the log messages show the totals.
+
+Example of a drop warning:
+
+"Export queue overflow: 1 spans have been dropped. Total spans dropped since the export started: 2"
+
+=== Tracing Levels
+
+Mule runtime tracing offers a base tracing level that can be `OVERVIEW`, `MONITORING`, or `DEBUG`.
+Every generated span has a corresponding tracing level. Flow spans are of level `OVERVIEW`, flow component spans are of level `MONITORING`, and flow component internal execution spans (parameter resolution, connection obtention) are of level `DEBUG`.
+
+Normally, the default tracing level is the best choice. Use `DEBUG` for troubleshooting.
+
+=== Tracing Sampling
+
+Sampling helps reduce the costs of observability. Mule runtime supports Head sampling, which makes the sampling decision as early as possible. In a Mule application, that moment is when a message source is about to trigger message processing, for example, when an HTTP Listener receives a request.
+
+Head sampling is a good fit for:
+
+* Supporting use cases with limited budgets for observability.
+* Supporting use cases where no collector is involved, such as early development stages, where observability is still useful.
+
+Mule runtime currently supports these standardized OpenTelemetry decision algorithms: `AlwaysOn`, `AlwaysOff`, `ParentBasedAlwaysOn`, `ParentBasedAlwaysOff`, `TraceIdRatio`, and `ParentBasedTraceIdRatio`.
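+
+For example, here is an illustrative sketch of tuning Head sampling with the tracing export tuning properties described earlier. The ratio is a placeholder value, and the lowercase names are the property-value spellings of the algorithms listed above:
+
+[source,properties]
+----
+# Illustrative only: honor an upstream sampling decision when one exists,
+# otherwise sample about 20% of trace IDs at the message source.
+mule.openTelemetry.tracer.exporter.sampler=parentbased_traceidratio
+mule.openTelemetry.tracer.exporter.sampler.arg=0.2
+----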
+
+While Head sampling is simple and efficient, it makes its decision before the trace completes, so it can't support advanced cases such as:
+
+* Always sampling traces that contain an error
+* Sampling traces based on overall latency
+* Sampling traces based on specific attributes on one or more spans in a trace, for example, sampling more traces originating from a newly deployed service
+* Applying different sampling rates to traces based on certain criteria, for example, when traces only come from low-volume services versus traces with high-volume services
+
+Tail sampling supports these advanced cases, but it's costly and can affect performance, so Mule runtime supports only Head sampling.
+If needed, implement Tail sampling in the OpenTelemetry Collector.
+
+=== Retry Logic
+
+When errors occur during export requests, Mule runtime uses an exponential backoff with jitter mechanism to avoid overwhelming the destination while it recovers.
+Only errors defined as retryable by the OpenTelemetry standard trigger the retry logic. See https://opentelemetry.io/docs/specs/otlp/#failures[HTTP transport protocol reference] and https://opentelemetry.io/docs/specs/otlp/#retryable-response-codes[gRPC transport protocol reference] for more information.
+
 == See Also
 
-* xref:monitoring::telemetry-exporter.adoc[]
\ No newline at end of file
+* xref:monitoring::telemetry-exporter.adoc[]