-
Hello, I really don't know whether this is a bug or not. This is the scenario I am experiencing: I start an embedded etcd cluster of 5 pods on k8s. When I scale the deployment down to 3, I start observing the following log message:
All embedded etcd servers are started with the same initial-cluster and token, with the cluster state set to new. My expectation was that the cluster would auto-adjust. I am sure I am missing something here or doing something wrong.
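For context, here is a minimal sketch (in Go, using the embed package) of how each server in such a setup might be started; the member names, URLs, and data directory below are illustrative assumptions rather than my actual configuration:
```go
package main

import (
	"log"
	"net/url"
	"time"

	"go.etcd.io/etcd/server/v3/embed"
)

func main() {
	// Hypothetical single member; every pod uses the same initial-cluster,
	// token, and cluster state "new", as described above.
	cfg := embed.NewConfig()
	cfg.Name = "etcd-0"
	cfg.Dir = "/var/lib/etcd/etcd-0"

	peerURL, _ := url.Parse("http://etcd-0:2380")
	clientURL, _ := url.Parse("http://etcd-0:2379")
	cfg.LPUrls = []url.URL{*peerURL}    // listen-peer-urls (v3.5 field names)
	cfg.APUrls = []url.URL{*peerURL}    // initial-advertise-peer-urls
	cfg.LCUrls = []url.URL{*clientURL}  // listen-client-urls
	cfg.ACUrls = []url.URL{*clientURL}  // advertise-client-urls

	cfg.InitialCluster = "etcd-0=http://etcd-0:2380,etcd-1=http://etcd-1:2380," +
		"etcd-2=http://etcd-2:2380,etcd-3=http://etcd-3:2380,etcd-4=http://etcd-4:2380"
	cfg.InitialClusterToken = "etcd-cluster"
	cfg.ClusterState = embed.ClusterStateFlagNew

	e, err := embed.StartEtcd(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer e.Close()

	select {
	case <-e.Server.ReadyNotify():
		log.Println("embedded etcd is ready")
	case <-time.After(60 * time.Second):
		e.Server.Stop() // trigger a shutdown if startup stalls
		log.Println("embedded etcd took too long to start")
	}
	log.Fatal(<-e.Err()) // block until the server stops or errors
}
```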
-
Hey @Tochemey - Thanks for raising this question. When you say you scaled the deployment down from 5 to 3, was this purely by reducing the number of replicas on the k8s side, or did you also update the etcd cluster configuration to remove the peers that are no longer desired? Additionally, can you please confirm what etcd version you are running?
-
@jmhbnz I only scaled down the deployment from 5 to 3. I am using etcd v3.5.9.
-
Just scaling etcd instances in/out (the deployment replicas in your example) isn't enough. The deleted etcd instances are still in the member list, so you should also remove them from the member list using the
etcdctl member remove <ID>
command or the etcd client SDK interface MemberRemove. Similarly, you need to execute
etcdctl member add [flags]
or use MemberAdd when adding a new member. Please also read https://etcd.io/docs/v3.5/op-guide/runtime-configuration/
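If you prefer to do this from code rather than etcdctl, a rough sketch with the Go client (go.etcd.io/etcd/client/v3) could look like the following; the endpoints and the member ID are placeholders you would replace with real values from your cluster:
```go
package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Endpoints of the members that are still running; these names are assumptions.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://etcd-0:2379", "http://etcd-1:2379", "http://etcd-2:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// 1. List members to find the IDs of the pods that were scaled away.
	list, err := cli.MemberList(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, m := range list.Members {
		log.Printf("member id=%x name=%s peerURLs=%v", m.ID, m.Name, m.PeerURLs)
	}

	// 2. Remove a departed member by ID (equivalent to `etcdctl member remove <ID>`).
	//    The ID below is a placeholder; use a real one from the list above.
	const departedID uint64 = 0xabcdef1234567890
	if _, err := cli.MemberRemove(ctx, departedID); err != nil {
		log.Fatal(err)
	}

	// 3. When scaling back out, register the new member first (equivalent to
	//    `etcdctl member add`), then start that instance with
	//    --initial-cluster-state=existing as described in the runtime
	//    reconfiguration doc linked above.
	if _, err := cli.MemberAdd(ctx, []string{"http://etcd-3:2380"}); err != nil {
		log.Fatal(err)
	}
}
```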
-
@Tochemey Can we close this discussion?