[Bug]: Mirror Maker OAuth Authentication with Client Assertion expiration #11002
I don't think this is a bug. The client connector is instantiated once and is not reconfigured by Kafka MirrorMaker2/Connect.
Agree this is more of a generic question about design. Nevertheless, when the access token expires, reauthentication is needed, and for that a fresh client assertion is needed. I was thinking about using clientAssertionPath instead, where I would use a volume shared with an additional custom pod which would take care of updating the secret (client assertion). Would this work? If there is no way to periodically update the client assertion, it essentially makes it unusable, but there might be some gap in my understanding of the process.
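For illustration, a hypothetical sketch of the path-based variant described here. The field name clientAssertionPath is taken from this thread, and the paths, alias, and endpoint placeholders are invented; none of this is verified against the Strimzi CRD schema:

```yaml
# Hypothetical sketch: target-cluster OAuth authentication reading the
# client assertion from a file that a companion pod keeps refreshed.
# Field names, paths, and placeholders are assumptions, not a verified schema.
spec:
  clusters:
    - alias: target
      authentication:
        type: oauth
        tokenEndpointUri: https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token
        clientId: <client-id>
        # Path inside the container where the refreshed JWT would be written
        clientAssertionPath: /mnt/assertion/jwt.txt
```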
Maybe you need to use it in a way that allows the library to refresh the token - as far as I know, if you configure it with a client ID and secret, it will auto-refresh the tokens.
I am looking for an approach that is as secure as possible, and the use of certificates/client assertions is preferred for that reason. I understand the Strimzi CRD supports feeding the client assertion to the underlying Kafka libraries via a YAML file, and I would expect the Kafka libraries to be able to request a new token. Can you please help me understand where exactly the gap that prevents reauthentication is? Is it in the underlying Kafka or in Strimzi? Thank you.
As I said, the assertion is loaded when the container starts and not updated later. |
Sorry for the delay, I needed some time for testing. I have explored the approach with clientAssertionPath. I was able to validate that if I change the client assertion value on the provided path, the reauthentication of MirrorMaker against Event Hubs works. However, there is an additional problem with this approach. I was not able to figure out how to create a volume mapping for the Mirror Maker container spun up through the Strimzi CRD. Is this possible? I would like to have a standalone container or pod taking care of refreshing the client assertion and propagating the update to the Mirror Maker pod via a volume.
The assertion is mounted as a file from a mounted Secret, right? That normally auto-updates when you update the Secret. But the assertion is loaded at startup, so I do not think you can reload it regardless of how you update the file.
Actually, I guess it depends whether you use it in the source or target cluster. For the source cluster, it is loaded through the FileConfigProvider when the connector (task?) is started. But that will definitely not do any periodic reloading of the file either.
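To make the FileConfigProvider behaviour concrete, here is a hedged sketch of how it could be wired into the Connect worker config through the KafkaMirrorMaker2 spec. The paths, file, and key names are illustrative assumptions; the provider class itself is part of Apache Kafka. Note that `${file:<path>:<key>}` placeholders are resolved from a properties file only when the connector/task (re)starts - the provider does not watch the file for changes, which matches the comment above:

```yaml
# Sketch: registering Kafka's FileConfigProvider in the worker config
# (paths and key names below are illustrative, not from the thread).
spec:
  config:
    config.providers: file
    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
# A connector option could then reference a value such as
#   ${file:/mnt/assertion/assertion.properties:client.assertion}
# which is read from the properties file at connector startup.
```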
So this is my auth specification for target cluster Event Hub
I produce the assertion jwt.txt file using a cron job baked into a custom Mirror Maker 2 image, which I reference in the Strimzi Operator config:
And this works: I am not seeing the authentication errors I saw when referencing the client assertion from a Secret. Nevertheless, I'd like to avoid creating a custom Docker image and running the cron job and, as I mentioned, rather have an additional custom pod generating the assertion and writing it into a volume or a Secret. I was trying to achieve this, but I am unable to add a volume to the Mirror Maker pod. In the Mirror Maker 2 CRD spec I found the option to add a volume and volume mounts like so:
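The original snippet is not preserved in this thread; as a hedged sketch, additional volumes and mounts in the KafkaMirrorMaker2 template section could look roughly like the following (Secret, volume, and mount-path names are invented, and the exact schema is not verified here):

```yaml
# Hypothetical sketch of an extra volume plus mount in the
# KafkaMirrorMaker2 template section (all names are illustrative).
spec:
  template:
    pod:
      volumes:
        - name: my-assertion-volume
          secret:
            secretName: my-secret
    connectContainer:
      volumeMounts:
        - name: my-assertion-volume
          mountPath: /mnt/my-secret
```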
but no matter what mount path I use, I get the following error from the operator when trying to spin up the Mirror Maker: Message: Pod "mymirrormaker-mirrormaker2-0" is invalid: spec.containers[0].volumeMounts[3].mountPath: Invalid value: "/mnt/my-secret": must be unique. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=spec.containers[0].volumeMounts[3].mountPath, message=Invalid value: "/mnt/my-secret": must be unique, reason=FieldValueInvalid, additionalProperties={})], group=null, kind=Pod, name=mymirrormaker-mirrormaker2-0, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=Pod "mymirrormaker-mirrormaker2-0" is invalid: spec.containers[0].volumeMounts[3].mountPath: Invalid value: "/mnt/my-secret": must be unique, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}).
I guess you would need to share the full custom resource to understand why you get the error.
Of course, here is the Mirror Maker definition I use:
There is some misalignment in the YAML. But I guess that was caused by copy-pasting, as it would otherwise be invalid. But I do not seem to be able to reproduce the error.
Yes, sorry, a copy-paste error.
Ok, I managed to reproduce it. I will look into it deeper. But this is likely some bug in the code. |
I understand, thank you Jakub for your support |
FYI: I found the issue. I will open a PR later today. But I'm afraid it is a bug and there is no workaround for it. So it will be fixed in the next release only. |
Thank you Jakub. Do you have an estimate of when it could be released?
I opened #11022 to fix it. I think it should be backported to 0.45.0 release as the 0.46.0 release is still far away. But I think right now there is no exact plan for a patch release and it would need to be discussed. |
I started a Slack thread to see what other issues we might have for a possible 0.45.1 patch release: https://cloud-native.slack.com/archives/C018247K8T0/p1736344099810809 |
Triaged 9.1.2025: @MarekLani is there anything else that you find issues with? |
Thank you, nothing at this point. Just one kind request: would it be possible to update this thread once you have an estimated fix release date? Thank you!
It was discussed in the community call today and the overall feeling was that we will not do a patch release yet. But we can keep you posted. |
Understood. If no patch release is done, can we expect roughly 2 months between minor version releases? I am just judging based on the previous cadence.
The next minor version depends on Kafka 4.0. I think we will do the patch release sooner or later. The question is more when exactly. That will likely depend on whether more bugs in 0.45 are found, how things look with Kafka 4.0, etc.
Bug Description
I am using OAuth authentication with a client assertion from within Mirror Maker. I have configured Mirror Maker to pick up the client assertion from a Kubernetes Secret. Since the client assertion JWT has an expiration, I have a mechanism which generates new client assertions and updates the Kubernetes Secret. Mirror Maker, however, seems not to have a mechanism to pick up the updated version of the Secret; it locks to the value of the Secret set during pod/container start and naturally fails to authenticate to the target Kafka cluster after some time (in my case it is the Azure Event Hubs Kafka interface).
Given the limited time validity of client assertions, I would need Mirror Maker to be able to force-refresh the client assertion value. What is the suggested approach here?
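For context, a minimal sketch of the kind of configuration described above: OAuth authentication for the target cluster with the client assertion fed from a Kubernetes Secret. All names, the alias, and the endpoint are placeholders, and the field layout follows the Strimzi OAuth authentication schema as I understand it, not a config taken from this report:

```yaml
# Sketch: client assertion sourced from a Secret (placeholder names).
# The Secret value is read when the container starts, which is the
# behaviour this issue is about.
spec:
  clusters:
    - alias: target
      bootstrapServers: <eventhubs-namespace>.servicebus.windows.net:9093
      authentication:
        type: oauth
        tokenEndpointUri: https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token
        clientId: <client-id>
        clientAssertion:
          secretKeyRef:
            name: client-assertion-secret
            key: assertion
```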
Steps to reproduce
See above and provided configuration.
Expected behavior
I would expect Mirror Maker to pick up the new value of the client assertion secret.
Strimzi version
0.44
Kubernetes version
1.30.6
Installation method
Helm Chart
Infrastructure
Azure Kubernetes Service
Configuration files and logs
Mirror Maker configuration:
Additional context
No response