diff --git a/docs/modules/ROOT/pages/kafka/kafka-binder/partitions.adoc b/docs/modules/ROOT/pages/kafka/kafka-binder/partitions.adoc
index 382fd5704..da50563e0 100644
--- a/docs/modules/ROOT/pages/kafka/kafka-binder/partitions.adoc
+++ b/docs/modules/ROOT/pages/kafka/kafka-binder/partitions.adoc
@@ -73,7 +73,7 @@ You can override this default by using the `partitionSelectorExpression` or `par
 Since partitions are natively handled by Kafka, no special configuration is needed on the consumer side.
 Kafka allocates partitions across the instances.
 
-NOTE: The partitionCount for a kafka topic may change during runtime (e.g. due to an adminstration task).
+NOTE: The partitionCount for a Kafka topic may change during runtime (e.g. due to an administration task).
 The calculated partitions will be different after that (e.g. new partitions will be used then).
 Since 4.0.3 of Spring Cloud Stream runtime changes of partition count will be supported.
 See also parameter 'spring.kafka.producer.properties.metadata.max.age.ms' to configure update interval.
diff --git a/docs/modules/ROOT/pages/kafka/kafka-binder/retry-dlq.adoc b/docs/modules/ROOT/pages/kafka/kafka-binder/retry-dlq.adoc
index 4d814c90d..5267386b0 100644
--- a/docs/modules/ROOT/pages/kafka/kafka-binder/retry-dlq.adoc
+++ b/docs/modules/ROOT/pages/kafka/kafka-binder/retry-dlq.adoc
@@ -1,7 +1,7 @@
 [[retry-and-dlq-processing]]
 = Retry and Dead Letter Processing
 
-By default, when you configure retry (e.g. `maxAttemts`) and `enableDlq` in a consumer binding, these functions are performed within the binder, with no participation by the listener container or Kafka consumer.
+By default, when you configure retry (e.g. `maxAttempts`) and `enableDlq` in a consumer binding, these functions are performed within the binder, with no participation by the listener container or Kafka consumer.
 
 There are situations where it is preferable to move this functionality to the listener container, such as:
 
diff --git a/docs/modules/ROOT/pages/kafka/kafka-reactive-binder/consuming.adoc b/docs/modules/ROOT/pages/kafka/kafka-reactive-binder/consuming.adoc
index bd521b30d..2d5a1e4ec 100644
--- a/docs/modules/ROOT/pages/kafka/kafka-reactive-binder/consuming.adoc
+++ b/docs/modules/ROOT/pages/kafka/kafka-reactive-binder/consuming.adoc
@@ -53,7 +53,7 @@ spring.cloud.stream.kafka.bindings.lowercase-in-0.consumer.converterBeanName=ful
 ```
 
 `lowercase-in-0` is the input binding name for our `lowercase` function.
-For the outbound (`lowecase-out-0`), we still use the regular `MessagingMessageConverter`.
+For the outbound (`lowercase-out-0`), we still use the regular `MessagingMessageConverter`.
 
 In the `toMessage` implementation above, we receive the raw `ConsumerRecord` (`ReceiverRecord` since we are in a reactive binder context) and then wrap it inside a `Message`.
 Then that message payload which is the `ReceiverRecord` is provided to the user method.
diff --git a/docs/modules/ROOT/pages/kafka/kafka-reactive-binder/examples.adoc b/docs/modules/ROOT/pages/kafka/kafka-reactive-binder/examples.adoc
index ecd10286f..50a38d1a0 100644
--- a/docs/modules/ROOT/pages/kafka/kafka-reactive-binder/examples.adoc
+++ b/docs/modules/ROOT/pages/kafka/kafka-reactive-binder/examples.adoc
@@ -50,7 +50,7 @@ In these cases, the acknowledgment header is not present.
 
 IMPORTANT: 4.0.2 also provided `reactiveAutoCommit`, but the implementation was incorrect, it behaved similarly to `reactiveAtMostOnce`.
 
-The following is an example of how to use `reaciveAutoCommit`.
+The following is an example of how to use `reactiveAutoCommit`.
 
 [source, java]
 ----
diff --git a/docs/modules/ROOT/pages/kafka/kafka-reactive-binder/pattern.adoc b/docs/modules/ROOT/pages/kafka/kafka-reactive-binder/pattern.adoc
index 540361083..6583d702d 100644
--- a/docs/modules/ROOT/pages/kafka/kafka-reactive-binder/pattern.adoc
+++ b/docs/modules/ROOT/pages/kafka/kafka-reactive-binder/pattern.adoc
@@ -2,4 +2,4 @@
 = Destination is Pattern
 
 Starting with version 4.0.3, the `destination-is-pattern` Kafka binding consumer property is now supported.
-The receiver options are conigured with a regex `Pattern`, allowing the binding to consume from any topic that matches the pattern.
+The receiver options are configured with a regex `Pattern`, allowing the binding to consume from any topic that matches the pattern.
diff --git a/docs/modules/ROOT/pages/kafka/kafka-streams-binder/accessing-metrics.adoc b/docs/modules/ROOT/pages/kafka/kafka-streams-binder/accessing-metrics.adoc
index 48f313020..1e6f89830 100644
--- a/docs/modules/ROOT/pages/kafka/kafka-streams-binder/accessing-metrics.adoc
+++ b/docs/modules/ROOT/pages/kafka/kafka-streams-binder/accessing-metrics.adoc
@@ -8,5 +8,5 @@ For Spring Boot version 2.2.x, the metrics support is provided through a custom
 For Spring Boot version 2.3.x, the Kafka Streams metrics support is provided natively through Micrometer.
 
 When accessing metrics through the Boot actuator endpoint, make sure to add `metrics` to the property `management.endpoints.web.exposure.include`.
-Then you can access `/acutator/metrics` to get a list of all the available metrics, which then can be individually accessed through the same URI (`/actuator/metrics/`).
+Then you can access `/actuator/metrics` to get a list of all the available metrics, which then can be individually accessed through the same URI (`/actuator/metrics/`).
 
diff --git a/docs/modules/ROOT/pages/kafka/kafka-streams-binder/configuration-options.adoc b/docs/modules/ROOT/pages/kafka/kafka-streams-binder/configuration-options.adoc
index a342ab0d6..970d8a747 100644
--- a/docs/modules/ROOT/pages/kafka/kafka-streams-binder/configuration-options.adoc
+++ b/docs/modules/ROOT/pages/kafka/kafka-streams-binder/configuration-options.adoc
@@ -118,7 +118,7 @@ Default: See the discussion above on outbound partition support.
 producedAs::
 Custom name for the sink component to which the processor is producing to.
 +
-Deafult: `none` (generated by Kafka Streams)
+Default: `none` (generated by Kafka Streams)
 
 [[kafka-streams-consumer-properties]]
 == Kafka Streams Consumer Properties
diff --git a/docs/modules/ROOT/pages/kafka/kafka-streams-binder/event-type-based-routing-in-applications.adoc b/docs/modules/ROOT/pages/kafka/kafka-streams-binder/event-type-based-routing-in-applications.adoc
index b2cbe5af0..8ad054a7f 100644
--- a/docs/modules/ROOT/pages/kafka/kafka-streams-binder/event-type-based-routing-in-applications.adoc
+++ b/docs/modules/ROOT/pages/kafka/kafka-streams-binder/event-type-based-routing-in-applications.adoc
@@ -32,7 +32,7 @@ For instance, if we want to change the header key on this binding to `my_event`
 
 `spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.eventTypeHeaderKey=my_event`.
 
-When using the event routing feature in Kafkfa Streams binder, it uses the byte array `Serde` to deserialze all incoming records.
+When using the event routing feature in Kafka Streams binder, it uses the byte array `Serde` to deserialize all incoming records.
If the record headers match the event type, then only it uses the actual `Serde` to do a proper deserialization using either the configured or the inferred `Serde`. This introduces issues if you set a deserialization exception handler on the binding as the expected deserialization only happens down the stack causing unexpected errors. In order to address this issue, you can set the following property on the binding to force the binder to use the configured or inferred `Serde` instead of byte array `Serde`. diff --git a/docs/modules/ROOT/pages/kafka/kafka-streams-binder/programming-model.adoc b/docs/modules/ROOT/pages/kafka/kafka-streams-binder/programming-model.adoc index 8059c189c..4935c0de3 100644 --- a/docs/modules/ROOT/pages/kafka/kafka-streams-binder/programming-model.adoc +++ b/docs/modules/ROOT/pages/kafka/kafka-streams-binder/programming-model.adoc @@ -353,15 +353,15 @@ spring.cloud.function.definition=foo|bar;foo;bar The composed function's default binding names in this example becomes `foobar-in-0` and `foobar-out-0`. -[[limitations-of-functional-composition-in-kafka-streams-bincer]] -==== Limitations of functional composition in Kafka Streams bincer +[[limitations-of-functional-composition-in-kafka-streams-binder]] +==== Limitations of functional composition in Kafka Streams binder When you have `java.util.function.Function` bean, that can be composed with another function or multiple functions. The same function bean can be composed with a `java.util.function.Consumer` as well. In this case, consumer is the last component composed. A function can be composed with multiple functions, then end with a `java.util.function.Consumer` bean as well. When composing the beans of type `java.util.function.BiFunction`, the `BiFunction` must be the first function in the definition. -The composed entities must be either of type `java.util.function.Function` or `java.util.funciton.Consumer`. +The composed entities must be either of type `java.util.function.Function` or `java.util.function.Consumer`. In other words, you cannot take a `BiFunction` bean and then compose with another `BiFunction`. You cannot compose with types of `BiConsumer` or definitions where `Consumer` is the first component. diff --git a/docs/modules/ROOT/pages/kafka/kafka-streams-binder/streamsbuilderfactorybean-customizer.adoc b/docs/modules/ROOT/pages/kafka/kafka-streams-binder/streamsbuilderfactorybean-customizer.adoc index 0ed2ede92..89900fb57 100644 --- a/docs/modules/ROOT/pages/kafka/kafka-streams-binder/streamsbuilderfactorybean-customizer.adoc +++ b/docs/modules/ROOT/pages/kafka/kafka-streams-binder/streamsbuilderfactorybean-customizer.adoc @@ -104,7 +104,7 @@ If you have multiple processors, you want to attach the global state store to th == Using StreamsBuilderFactoryBeanConfigurer to register a production exception handler In the error handling section, we indicated that the binder does not provide a first class way to deal with production exceptions. -Though that is the case, you can still use the `StreamsBuilderFacotryBean` customizer to register production exception handlers. See below. +Though that is the case, you can still use the `StreamsBuilderFactoryBean` customizer to register production exception handlers. See below. ``` @Bean
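
The final hunk above is cut off by the diff context right at the `@Bean` line, so the customizer example it refers to is not visible here. As a point of reference only, a minimal sketch of registering a production exception handler through a `StreamsBuilderFactoryBeanConfigurer` might look like the following; `CustomProductionExceptionHandler` is a hypothetical stand-in for your own implementation of Kafka Streams' `org.apache.kafka.streams.errors.ProductionExceptionHandler`.

```
import org.apache.kafka.streams.StreamsConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanConfigurer;

@Bean
public StreamsBuilderFactoryBeanConfigurer streamsBuilderFactoryBeanConfigurer() {
    // Register the handler through the factory bean's streams configuration;
    // the binder populates this configuration before starting the processor.
    // CustomProductionExceptionHandler is a hypothetical class implementing
    // org.apache.kafka.streams.errors.ProductionExceptionHandler.
    return factoryBean -> factoryBean.getStreamsConfiguration().put(
            StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG,
            CustomProductionExceptionHandler.class);
}
```

Because the configurer is applied to the binder-created `StreamsBuilderFactoryBean` before the Kafka Streams processor starts, the handler is in effect from the first record produced.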