Is it possible to update credentials of an existing context? #1022
Like the title says: I use SSO on my cluster, which means I authenticate with tokens that are valid for 24 hours.
As far as I know, whenever the token invalidates I'd have to log in again, which adds another context, then rename that context and delete the old one.
Is there any way to preserve the context in kubecm while updating the token?

Comments
As far as I know, each time the token is updated, the context for the corresponding cluster in the kubeconfig is updated automatically. kubecm only reads the current kubeconfig, so this won't affect its usage.
Thanks for the quick reply. On second thought, I think you're right. I just started using kubecm, so I'll test it out further starting Monday.
Hopefully this tool will be of help to you. If you have any questions, please open an issue for discussion.
Yeah, so whenever I perform the login again, it creates a new context, which means I do have to rename the new one, etc. But I'm not sure what kubecm can do about this, since it just uses the kubeconfig.
@Stef16Robbe I'm not using SSO. But as far as I know, it's the authentication information (such as the token or auth-provider) that changes each time, while the other parts (cluster configuration, contexts, etc.) usually remain unchanged. So you can use all of kubecm's features normally.
I can give you an example if you're interested. Let's say I have this kubeconfig, where I've renamed my only context:

```yaml
apiVersion: v1
kind: Config
current-context: ...
preferences: {}
clusters:
- cluster:
    server: https://dev.com:6443
  name: api-dev-com:6443
contexts:
- context:
    cluster: api-dev-com:6443
    namespace: default
    user: stef16robbe/api-dev-com:6443
  name: dev-cluster
users:
- name: stef16robbe/api-dev-com:6443
  user:
    token: sha256~xxx
```

As you can see, this uses an OpenShift token, which is valid for 24 hours. When the token expires, it needs to be refreshed. The way I have always done it is to go to my OpenShift console, request a token, and copy the login command it gives me:
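(The copied command itself isn't preserved in the thread; from the OpenShift console it is normally the standard login command, along the lines of `oc login --token=sha256~yyy --server=https://dev.com:6443`, with the freshly issued token.)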
After logging in, my kubeconfig looks like this:

```yaml
apiVersion: v1
kind: Config
current-context: ...
preferences: {}
clusters:
- cluster:
    server: https://dev.com:6443
  name: api-dev-com:6443
contexts:
- context:
    cluster: api-dev-com:6443
    namespace: default
    user: stef16robbe/api-dev-com:6443
  name: dev-cluster
- context:
    cluster: api-dev-com:6443
    namespace: default
    user: stef16robbe/api-dev-com:6443
  name: long-default-name-blabla
users:
- name: stef16robbe/api-dev-com:6443
  user:
    token: sha256~xxx
- name: stef16robbe/api-dev-com:6443
  user:
    token: sha256~yyy
```

It generates a new context without the easy-to-use name that I set via kubecm, so I'd have to delete the old context and rename the new one every time. Ideally I'd like to overwrite the token of the existing context/user. I'm actually writing a little CLI to do this myself, because I don't think kubecm supports this, and I think it's a very specific use case of mine.
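For what it's worth, `kubectl config set-credentials <user> --token=<new-token>` can already overwrite a credential in place without touching contexts. A minimal Go sketch of the kind of CLI described above, assuming client-go's clientcmd package (the program and its flags are hypothetical, not part of kubecm):

```go
// token-refresh: overwrite the token of an existing kubeconfig user entry
// in place, leaving clusters and contexts untouched. Hypothetical tool;
// only the client-go clientcmd API used here is real.
package main

import (
	"flag"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	user := flag.String("user", "", "name of the user entry to update")
	token := flag.String("token", "", "new bearer token")
	flag.Parse()

	// Resolve the kubeconfig the same way kubectl does
	// ($KUBECONFIG or ~/.kube/config) and load it.
	pathOpts := clientcmd.NewDefaultPathOptions()
	config, err := pathOpts.GetStartingConfig()
	if err != nil {
		log.Fatalf("loading kubeconfig: %v", err)
	}

	authInfo, ok := config.AuthInfos[*user]
	if !ok {
		log.Fatalf("user %q not found in kubeconfig", *user)
	}

	// Overwrite only the credential; context names chosen via kubecm
	// are preserved because no context entry is modified.
	authInfo.Token = *token

	if err := clientcmd.ModifyConfig(pathOpts, *config, true); err != nil {
		log.Fatalf("writing kubeconfig: %v", err)
	}
	log.Printf("updated token for user %q", *user)
}
```

Running it with `-user stef16robbe/api-dev-com:6443 -token sha256~yyy` would update only the `users` entry, so the `dev-cluster` context keeps its name.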
This is a very good example. If I have time, I'll take a look at the OpenShift source code to see how it's implemented.