I was going to ask why you do it this way, but I've just seen that you're following the CF doc.
It's better to update the labels in the deployment than the selector in the service. With your approach, there might be some downtime while the selector changes and the endpoints are updated.
1.- Deploy version A, labelled color=blue,access=public
2.- Create a service PUBLIC with selector access=public
3.- Deploy version B, labelled color=green
4.- Create a service INTERNAL with selector color=green
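The setup above could be sketched with manifests like these (service names, ports, and fields are illustrative, not taken from the CF doc):

```yaml
# Hypothetical sketch of the two services described above.
apiVersion: v1
kind: Service
metadata:
  name: public
spec:
  selector:
    access: public     # initially matches only version A (blue)
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: internal
spec:
  selector:
    color: green       # matches only version B (green)
  ports:
    - port: 80
      targetPort: 8080
```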
This is the setup until Step 2 in the CF page. Now we have to update the access, that is, version B has to be reachable via the PUBLIC service. To do so, you add the access label to version B's pods:
kubectl label pods -l color=green access=public
Now you have Step 3: public traffic is sent to both sets of pods, green and blue. Then we remove the blue pods (version A) from the PUBLIC service:
kubectl label pods -l color=blue access-
etc... Using labels at the right level will guarantee no issues 😄
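The "right level" here would be the Deployment's pod template: if the labels live there, new pods come up already carrying them. A minimal sketch of what version B's deployment might look like (names and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-green            # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      color: green
  template:
    metadata:
      labels:
        color: green         # version identity, matched by the INTERNAL service
        access: public       # presence of this label routes PUBLIC traffic here
    spec:
      containers:
        - name: app
          image: example/app:green   # illustrative image
```

One trade-off worth noting: changing the template labels triggers a rollout (pods are replaced), whereas labelling running pods directly with kubectl, as above, takes effect immediately but is lost when a pod is recreated.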
@ipedrazas can you elaborate on why you think there would be downtime with the current approach? To my understanding, a Service selector change should be reflected in kube-proxy by a single, consistent update operation on the corresponding iptables rules, leaving no room for downtime (as long as the new deployment has completed successfully).
Swapping selectors is basically replacing one set of endpoints with another. You're right that the iptables update happens in a single operation; however, the chain as a whole is not atomic: you update the selector, which updates the endpoints, which in turn updates the iptables rules.
To me, it feels safer to add both labels and then remove the old one than to swap one selector for another.
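Concretely, the two alternatives under discussion might look like this (assuming a service named public; the patch variant supposes PUBLIC selects by color directly, as in the CF doc):

```shell
# Selector swap (CF-doc style): a single patch, but the PUBLIC service's
# endpoints are recomputed from scratch when the selector changes.
kubectl patch service public -p '{"spec":{"selector":{"color":"green"}}}'

# Label-based cutover (the approach above): widen first, so both versions
# receive traffic, then narrow by dropping the old pods from the selector.
kubectl label pods -l color=green access=public
kubectl label pods -l color=blue access-
```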