NOISSUE - Kafka support #1579
Conversation
Codecov Report
@@ Coverage Diff @@
## master #1579 +/- ##
==========================================
+ Coverage 67.14% 67.21% +0.07%
==========================================
Files 120 122 +2
Lines 9145 9341 +196
==========================================
+ Hits 6140 6279 +139
- Misses 2368 2413 +45
- Partials 637 649 +12
pkg/messaging/nats/pubsub.go (outdated)
const (
	chansPrefix = "channels"
	// SubjectAllChannels represents the subject to subscribe to for all the channels.
	SubjectAllChannels = "channels.>"
)
I would construct this one as chansPrefix + ".>"
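The suggested construction would look like this (a minimal sketch; deriving the wildcard subject from `chansPrefix` keeps the two constants from drifting apart if the prefix ever changes):

```go
package main

import "fmt"

const (
	chansPrefix = "channels"
	// SubjectAllChannels is derived from chansPrefix rather than
	// hard-coded, as suggested in the review.
	SubjectAllChannels = chansPrefix + ".>"
)

func main() {
	fmt.Println(SubjectAllChannels) // channels.>
}
```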
That is what is implemented currently across all brokers. Should we change?
@0x6f736f646f Do not forget to update
Please fix CI.
CI is failing.
Another problem is that on Mainflux, when we publish a message for the first time, it throws the same error. This is a Kafka-side issue, as it is also experienced by users of other libraries such as Sarama. My recommendation would be to retry the publish method for up to 10s and only surface the error after that.
Can you point out the exact section for this?
Looks like the problem is actually that first-time writing always raises this error, not because Kafka is not ready, but because we do not have a topic created. I guess Kafka creates the topic internally when we try to publish, and the next attempt passes because on the first attempt Kafka took care of the topic for us. Please investigate whether it works like that and how to correctly handle topic lifetime (we do not remove topics when we unsubscribe, and I assume there are other ways to configure Kafka to handle this for us).
You can check it from here
Yes it works like that.
topicConfigs := []kafka.TopicConfig{
	{
		Topic:             subject,
		NumPartitions:     1,
		ReplicationFactor: 1,
	},
}

err = pub.conn.CreateTopics(topicConfigs...)
if err != nil {
	return err
}
after initially creating the Kafka connection object.
Topic management
Kafka topics are the categories used to organize messages. Each topic has a name that is unique across the entire Kafka cluster.
Creation
Deletion
We are currently deleting topics when we close the publisher interface.
Publishing message
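The delete-on-close behaviour described above could be structured like this (a sketch with hypothetical names; with segmentio/kafka-go the deleter role would be played by the connection's `DeleteTopics` method):

```go
package main

import "fmt"

// topicDeleter abstracts the broker connection; with segmentio/kafka-go
// this would be satisfied by (*kafka.Conn).DeleteTopics.
type topicDeleter interface {
	DeleteTopics(topics ...string) error
}

// publisher tracks the topics it created so Close can remove them again,
// as described in the conversation.
type publisher struct {
	conn   topicDeleter
	topics []string
}

func (p *publisher) Close() error {
	if len(p.topics) == 0 {
		return nil
	}
	return p.conn.DeleteTopics(p.topics...)
}

// fakeConn records deletions, standing in for a real broker connection.
type fakeConn struct{ deleted []string }

func (f *fakeConn) DeleteTopics(topics ...string) error {
	f.deleted = append(f.deleted, topics...)
	return nil
}

func main() {
	conn := &fakeConn{}
	p := &publisher{conn: conn, topics: []string{"channels.1", "channels.2"}}
	if err := p.Close(); err != nil {
		panic(err)
	}
	fmt.Println(conn.deleted)
}
```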
LGTM
From this, it appears that in the segmentio library you can list all topics with a bit of a workaround. But the problem is that when the MQTT forwarder subscribes to the broker, it uses the wildcard to subscribe to all topics, yet there are no topics in the Kafka broker.
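The listing workaround mentioned above hinges on the fact that segmentio/kafka-go exposes partition metadata rather than a topic list (via `Conn.ReadPartitions`), so listing topics means deduplicating topic names from one entry per partition. A sketch of that deduplication, using a minimal local stand-in for the partition type:

```go
package main

import (
	"fmt"
	"sort"
)

// partition mirrors the one field we need from kafka-go's kafka.Partition;
// in real code the slice would come from conn.ReadPartitions().
type partition struct{ Topic string }

// topicNames deduplicates partition metadata into a sorted topic list,
// skipping internal topics such as __consumer_offsets.
func topicNames(parts []partition) []string {
	seen := map[string]bool{}
	var topics []string
	for _, p := range parts {
		if seen[p.Topic] || (len(p.Topic) > 1 && p.Topic[:2] == "__") {
			continue
		}
		seen[p.Topic] = true
		topics = append(topics, p.Topic)
	}
	sort.Strings(topics)
	return topics
}

func main() {
	parts := []partition{
		{"channels.1"}, {"channels.1"}, // two partitions, one topic
		{"__consumer_offsets"},         // internal, skipped
		{"channels.2"},
	}
	fmt.Println(topicNames(parts)) // [channels.1 channels.2]
}
```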
Regarding Kafka support for wildcards, I have tested using confluent-kafka-python and it supports them. I think the issue is the library we are using. I have tried to pass a regex as the topic and it throws this error:
2023/02/07 14:24:37 Failed to join group groupID2: [17] Invalid Topic: a request which attempted to access an invalid topic (e.g. one which has an illegal name), or if an attempt was made to write to an internal topic (such as the consumer offsets topic)
2023/02/07 14:24:37 Leaving group groupID2, member main@r0x6f736f646f (github.com/segmentio/kafka-go)-77b03394-27c9-4846-89c2-fe1fb51f7034
I have tried with another library, https://github.com/confluentinc/confluent-kafka-go, and it works. I want to write the implementation on this Kafka PR and update after testing it on Mainflux.
Let's address the subscriber problem and research our options in a separate PR and make this PR only a
This PR might be a better fit for https://github.com/absmach/mg-contrib since Kafka does not support all the features MG messaging requires, but may also be useful for integration with systems that use Kafka and do not need those features. |
…ncies. Modified Close function in Kafka messaging package and updated pubsub struct and NewPubSub function.
Signed-off-by: Rodney Osodo <[email protected]>
As already commented here, this may be a better fit for the
What does this do?
Adds Kafka support. Since we are introducing a multi-broker system, we need to add support for Kafka. This allows us to use Kafka as a message broker for our system. Kafka has been chosen because it is the most popular legacy message broker. This allows us to support legacy systems that use Kafka.
Which issue(s) does this PR fix/relate to?
Resolves #990
List any changes that modify/break current functionality
N/A
Have you included tests for your changes?
Yes
Did you document any new/modified functionality?
absmach/supermq-docs#157
Notes
The publisher implementation is based on the
segmentio/kafka-go
library. Publishing messages is well supported by the library, but subscribing to topics is not: the library does not provide a way to subscribe to all topics, only to a specific topic. This is a problem because the Mainflux platform uses a topic per channel, and the number of channels is not known in advance. The solution is to use the Zookeeper library to get a list of all topics and then subscribe to each of them. The list of topics is obtained by connecting to the Zookeeper server and reading the list of topics from the /brokers/topics node. The first message received on a topic can be lost if the subscription is closely followed by publishing; once the subscription has settled, all subsequent messages are guaranteed to be received.
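The fan-out over the Zookeeper topic list can be sketched as follows (a pure-Go sketch: `children` stands in for the child nodes read from the /brokers/topics znode via a Zookeeper client, and `subscribe` is a hypothetical per-topic handler; no real ZK or Kafka connection is made here):

```go
package main

import "fmt"

// subscribeAll emulates a wildcard subscription by subscribing to each
// concrete topic in turn, since the Kafka client library can only
// subscribe to one topic at a time. It returns how many topics were
// subscribed and the first error encountered, if any.
func subscribeAll(children []string, subscribe func(topic string) error) (int, error) {
	n := 0
	for _, topic := range children {
		if err := subscribe(topic); err != nil {
			return n, fmt.Errorf("subscribe %s: %w", topic, err)
		}
		n++
	}
	return n, nil
}

func main() {
	// Hypothetical znode children; a real list would come from the
	// Zookeeper server's /brokers/topics node.
	children := []string{"channels.1", "channels.2", "channels.3"}
	var subscribed []string
	n, err := subscribeAll(children, func(t string) error {
		subscribed = append(subscribed, t)
		return nil
	})
	fmt.Println(n, err, subscribed)
}
```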