Commit b8e6102 (1 parent: 795a913)
Showing 5 changed files with 77 additions and 70 deletions.
...ers/kafka-binder/spring-cloud-stream-binder-kafka-reactive/src/test/resources/logback.xml (28 changes: 14 additions & 14 deletions)
<configuration>
    <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{ISO8601} %5p %t %c{2}:%L - %m%n</pattern>
        </encoder>
    </appender>
    <logger name="org.apache.kafka" level="WARN"/>
    <logger name="reactor.kafka" level="DEBUG"/>
    <logger name="org.springframework.integration.kafka" level="INFO"/>
    <logger name="org.springframework.kafka" level="DEBUG"/>
    <logger name="org.springframework.cloud.stream" level="INFO" />
    <logger name="org.springframework.integration.channel" level="DEBUG" />
    <logger name="kafka.server.ReplicaFetcherThread" level="ERROR"/>
    <logger name="kafka.server.LogDirFailureChannel" level="FATAL"/>
    <logger name="kafka.server.BrokerMetadataCheckpoint" level="ERROR"/>
    <logger name="kafka.utils.CoreUtils$" level="ERROR"/>
    <root level="WARN">
        <appender-ref ref="stdout"/>
    </root>
</configuration>
docs/modules/ROOT/pages/kafka/kafka-reactive-binder/reactive_observability.adoc (89 changes: 50 additions & 39 deletions)
[[reactive-kafka-binder-observability]]
= Observability in Reactive Kafka Binder

This section describes how Micrometer-based observability is enabled in the reactive Kafka binder.

== Producer Binding

There is built-in support for observability in producer binding.
To enable it, set the following property:

```
spring.cloud.stream.kafka.binder.enable-observation
```
When this property is set to `true`, you can observe the publishing of records.
Both publishing records using `StreamBridge` and regular `Supplier<?>` beans can be observed.
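As a sketch, assuming a standard Spring Boot `application.yml` (the property name is taken from this page; the YAML layout is an assumption about how the flat property maps to the file):

```
spring:
  cloud:
    stream:
      kafka:
        binder:
          enable-observation: true
```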
== Consumer Binding

Enabling observability on the consumer side is more complex than on the producer side.
There are two starting points for consumer binding:

1. A topic where data is published via a producer binding
2. A topic where data is produced outside of Spring Cloud Stream

In the first case, the application ideally wants to carry the observability headers down to the consumer inbound.
In the second case, if there was no upstream observation started, a new observation will be started.

=== Example: Function with Observability
```
@Bean
Function<Flux<ReceiverRecord<byte[], byte[]>>, Flux<Message<String>>> receive(ObservationRegistry observationRegistry) {
    return s -> s.flatMap(record -> {
        Observation receiverObservation = KafkaReceiverObservation.RECEIVER_OBSERVATION.start(
                null,
                KafkaReceiverObservation.DefaultKafkaReceiverObservationConvention.INSTANCE,
                () -> new KafkaRecordReceiverContext(record, "user.receiver", "localhost:9092"),
                observationRegistry
        );

        return Mono.deferContextual(contextView -> Mono.just(record)
                .map(rec -> new String(rec.value()).toLowerCase())
                .map(rec -> MessageBuilder.withPayload(rec)
                        .setHeader(IntegrationMessageHeaderAccessor.REACTOR_CONTEXT, contextView)
                        .build()))
                .doOnTerminate(receiverObservation::stop)
                .doOnError(receiverObservation::error)
                .contextWrite(context -> context.put(ObservationThreadLocalAccessor.KEY, receiverObservation));
    });
}
```
In this example:

1. When a record is received, an observation is created.
2. If there is an upstream observation, it will be part of the `KafkaRecordReceiverContext`.
3. A `Mono` is created with the context deferred.
4. When the `map` operation is invoked, the context has access to the correct observation.
5. The result of the `flatMap` operation is sent back to the binding as `Flux<Message<?>>`.
6. The outbound record will have the same observability headers as the input binding.
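The per-record lifecycle above (start the observation, run the processing step inside it, stop it on terminate) can be illustrated with a JDK-only analogue. This is a hypothetical sketch, not binder or Micrometer API: `process` stands in for the `map` step, and the `open` counter stands in for the observation's start/stop bookkeeping.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LifecycleSketch {

    // Number of currently "open" observations; analogue of started-but-not-stopped scopes.
    static final AtomicInteger open = new AtomicInteger();

    // Analogue of the per-record pipeline: open a scope, transform the payload,
    // and guarantee the scope is closed when processing terminates.
    static String process(String record) {
        open.incrementAndGet();           // Observation.start(...)
        try {
            return record.toLowerCase();  // the map(...) step runs inside the scope
        } finally {
            open.decrementAndGet();       // doOnTerminate -> observation stop
        }
    }

    public static void main(String[] args) {
        System.out.println(process("HELLO"));
        System.out.println(open.get());   // every started scope was closed
    }
}
```

The point of the `finally` block mirrors the point of `doOnTerminate`: the scope must close whether processing succeeds or fails, or open observations leak.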
=== Example: Consumer with Observability
```
@Bean
Consumer<Flux<ReceiverRecord<?, String>>> receive(ObservationRegistry observationRegistry, @Value("${spring.kafka.bootstrap-servers}") String bootstrap) {
    return f -> f.doOnNext(record -> KafkaReceiverObservation.RECEIVER_OBSERVATION.observation(
            null,
            KafkaReceiverObservation.DefaultKafkaReceiverObservationConvention.INSTANCE,
            () -> new KafkaRecordReceiverContext(record, "user.receiver", bootstrap),
            observationRegistry).observe(() -> System.out.println(record)))
        .subscribe();
}
```

In this case:

1. Since there is no output binding, `doOnNext` is used on the `Flux` instead of `flatMap`.
2. The direct call to `observe` starts the observation and properly shuts it down when finished.
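The guarantee in point 2 can be sketched with a JDK-only analogue (hypothetical names, not the Micrometer API): `observe` opens a scope, runs the callback, and closes the scope even when the callback throws, which is why no manual `stop` call is needed in the consumer example.

```java
public class ObserveSketch {

    // Number of currently open scopes; analogue of started-but-not-stopped observations.
    static int openScopes = 0;

    // Analogue of Observation.observe(Runnable): the scope is closed
    // regardless of whether the callback completes or throws.
    static void observe(Runnable work) {
        openScopes++;       // observation started
        try {
            work.run();     // user callback (the System.out.println above)
        } finally {
            openScopes--;   // observation stopped, even on error
        }
    }

    public static void main(String[] args) {
        observe(() -> System.out.println("record"));
        try {
            observe(() -> { throw new RuntimeException("boom"); });
        } catch (RuntimeException expected) {
            // the error propagates, but the scope was still closed
        }
        System.out.println(openScopes);
    }
}
```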