Fix additional volumes in MirrorMaker 2 #11022

Merged: 1 commit merged into strimzi:main on Jan 9, 2025

Conversation

scholzj (Member) commented on Jan 8, 2025

Type of change

  • Bugfix

Description

The additional volume mounts in MirrorMaker 2 deployments are currently added to the volume mount list twice, because they are already inherited from the underlying Connect cluster and then added again. As a result, when you try to use them, Pod creation fails with the following error:

2025-01-08 13:07:17 ERROR StrimziPodSetController:380 - Reconciliation #627(watch) StrimziPodSet(myproject/my-mirror-maker-2-mirrormaker2): StrimziPodSet my-mirror-maker-2-mirrormaker2 in namespace myproject reconciliation failed
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://10.96.0.1:443/api/v1/namespaces/myproject/pods. Message: Pod "my-mirror-maker-2-mirrormaker2-0" is invalid: spec.containers[0].volumeMounts[9].mountPath: Invalid value: "/mnt/my-secret": must be unique. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=spec.containers[0].volumeMounts[9].mountPath, message=Invalid value: "/mnt/my-secret": must be unique, reason=FieldValueInvalid, additionalProperties={})], group=null, kind=Pod, name=my-mirror-maker-2-mirrormaker2-0, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=Pod "my-mirror-maker-2-mirrormaker2-0" is invalid: spec.containers[0].volumeMounts[9].mountPath: Invalid value: "/mnt/my-secret": must be unique, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}).
	at io.fabric8.kubernetes.client.KubernetesClientException.copyAsCause(KubernetesClientException.java:205) ~[io.fabric8.kubernetes-client-api-7.0.1.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.waitForResult(OperationSupport.java:507) ~[io.fabric8.kubernetes-client-7.0.1.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.handleResponse(OperationSupport.java:524) ~[io.fabric8.kubernetes-client-7.0.1.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.handleCreate(OperationSupport.java:340) ~[io.fabric8.kubernetes-client-7.0.1.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.handleCreate(BaseOperation.java:754) ~[io.fabric8.kubernetes-client-7.0.1.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.handleCreate(BaseOperation.java:98) ~[io.fabric8.kubernetes-client-7.0.1.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.CreateOnlyResourceOperation.create(CreateOnlyResourceOperation.java:42) ~[io.fabric8.kubernetes-client-7.0.1.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.create(BaseOperation.java:1155) ~[io.fabric8.kubernetes-client-7.0.1.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.create(BaseOperation.java:98) ~[io.fabric8.kubernetes-client-7.0.1.jar:?]
	at io.strimzi.operator.cluster.operator.assembly.StrimziPodSetController.maybeCreateOrPatchPod(StrimziPodSetController.java:441) ~[io.strimzi.cluster-operator-0.46.0-SNAPSHOT.jar:0.46.0-SNAPSHOT]
	at io.strimzi.operator.cluster.operator.assembly.StrimziPodSetController.reconcile(StrimziPodSetController.java:369) ~[io.strimzi.cluster-operator-0.46.0-SNAPSHOT.jar:0.46.0-SNAPSHOT]
	at io.strimzi.operator.cluster.operator.assembly.StrimziPodSetController.run(StrimziPodSetController.java:536) ~[io.strimzi.cluster-operator-0.46.0-SNAPSHOT.jar:0.46.0-SNAPSHOT]
	at java.lang.Thread.run(Thread.java:840) ~[?:?]
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://10.96.0.1:443/api/v1/namespaces/myproject/pods. Message: Pod "my-mirror-maker-2-mirrormaker2-0" is invalid: spec.containers[0].volumeMounts[9].mountPath: Invalid value: "/mnt/my-secret": must be unique. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=spec.containers[0].volumeMounts[9].mountPath, message=Invalid value: "/mnt/my-secret": must be unique, reason=FieldValueInvalid, additionalProperties={})], group=null, kind=Pod, name=my-mirror-maker-2-mirrormaker2-0, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=Pod "my-mirror-maker-2-mirrormaker2-0" is invalid: spec.containers[0].volumeMounts[9].mountPath: Invalid value: "/mnt/my-secret": must be unique, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}).
	at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:642) ~[io.fabric8.kubernetes-client-7.0.1.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:622) ~[io.fabric8.kubernetes-client-7.0.1.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.assertResponseCode(OperationSupport.java:582) ~[io.fabric8.kubernetes-client-7.0.1.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.lambda$handleResponse$0(OperationSupport.java:549) ~[io.fabric8.kubernetes-client-7.0.1.jar:?]
	at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:646) ~[?:?]
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) ~[?:?]
	at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147) ~[?:?]
	at io.fabric8.kubernetes.client.http.StandardHttpClient.lambda$completeOrCancel$10(StandardHttpClient.java:141) ~[io.fabric8.kubernetes-client-api-7.0.1.jar:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) ~[?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) ~[?:?]
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) ~[?:?]
	at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147) ~[?:?]
	at io.fabric8.kubernetes.client.utils.AsyncUtils.lambda$retryWithExponentialBackoff$3(AsyncUtils.java:91) ~[io.fabric8.kubernetes-client-api-7.0.1.jar:?]
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) ~[?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) ~[?:?]
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) ~[?:?]
	at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:614) ~[?:?]
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:844) ~[?:?]
	at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:482) ~[?:?]
	... 1 more
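
For context, the sketch below reproduces the failure mode outside the operator, using only the fabric8 model classes that appear in the stack trace above. It is an illustration rather than the operator's actual code path: all names except `/mnt/my-secret` are placeholders, and the final branch only shows the intended end state where each mount appears once.

```java
import io.fabric8.kubernetes.api.model.VolumeMount;
import io.fabric8.kubernetes.api.model.VolumeMountBuilder;

import java.util.ArrayList;
import java.util.List;

// Simplified illustration of the bug, not the operator's actual code:
// the MirrorMaker 2 container starts from the Connect volume mounts,
// which already include the user's additional mounts, and the additional
// mounts are then appended again, so the same mountPath appears twice
// and the Kubernetes API server rejects the Pod (code 422).
public class DuplicateMountSketch {
    static VolumeMount mount(String name, String path) {
        return new VolumeMountBuilder().withName(name).withMountPath(path).build();
    }

    public static void main(String[] args) {
        VolumeMount additional = mount("my-secret", "/mnt/my-secret");

        // Mounts inherited from the underlying Connect model; the
        // additional mount is already part of this list.
        List<VolumeMount> inheritedFromConnect = List.of(
                mount("strimzi-tmp", "/tmp"),   // placeholder for the built-in mounts
                additional);

        // Buggy behaviour: the additional mounts are appended a second time.
        List<VolumeMount> buggy = new ArrayList<>(inheritedFromConnect);
        buggy.add(additional);                  // duplicate /mnt/my-secret => invalid Pod

        // Intended end state: each additional mount ends up in the list once.
        List<VolumeMount> fixed = new ArrayList<>(inheritedFromConnect);
        if (fixed.stream().noneMatch(vm -> vm.getMountPath().equals(additional.getMountPath()))) {
            fixed.add(additional);
        }

        System.out.println("buggy: " + buggy.size() + " mounts, fixed: " + fixed.size() + " mounts");
    }
}
```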

This PR fixes it and makes sure they are added only once. It also fixes the tests to check the full volume mount list rather than just the presence of individual volumes, which is what allowed this bug to slip through the unit tests.
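
To illustrate the point about the tests, the sketch below shows why asserting on the full volume mount list catches a duplicate while a simple presence check does not. It uses Hamcrest matchers directly; the names are placeholders and this is not the actual Strimzi test code.

```java
import static org.hamcrest.Matchers.contains;
import static org.hamcrest.Matchers.hasItem;

import io.fabric8.kubernetes.api.model.VolumeMount;
import io.fabric8.kubernetes.api.model.VolumeMountBuilder;

import java.util.List;

// Presence check vs. full-list check on a list with a duplicated mount.
// Placeholder names; not the actual Strimzi tests.
public class FullListAssertionSketch {
    static VolumeMount mount(String name, String path) {
        return new VolumeMountBuilder().withName(name).withMountPath(path).build();
    }

    public static void main(String[] args) {
        VolumeMount tmp = mount("strimzi-tmp", "/tmp");
        VolumeMount secret = mount("my-secret", "/mnt/my-secret");

        // Buggy generated list: the additional mount appears twice.
        List<VolumeMount> generated = List.of(tmp, secret, secret);

        // Presence check passes, so the duplicate goes unnoticed.
        System.out.println(hasItem(secret).matches(generated));       // true

        // Full-list check fails, because the list has three entries
        // instead of the expected two, so the duplicate is caught.
        System.out.println(contains(tmp, secret).matches(generated)); // false
    }
}
```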

This was reported in #11002.

This should be backported to the 0.45.0 branch.

Checklist

  • Write tests
  • Make sure all tests pass
  • Try your changes from Pod inside your Kubernetes and OpenShift cluster, not just locally
  • Reference relevant issue(s) and close them after merging

scholzj (Member, Author) commented on Jan 8, 2025

/azp run regression


Azure Pipelines successfully started running 1 pipeline(s).

scholzj merged commit ac319a6 into strimzi:main on Jan 9, 2025
21 checks passed
scholzj deleted the fix-additional-volumes-in-MM2 branch on January 9, 2025 at 09:38
scholzj added a commit to scholzj/strimzi-kafka-operator that referenced this pull request on Jan 13, 2025