Panicking on new node pool names for show #344
I have a k3s cluster that had a large node, and I later added a node pool using the new ability to name pools. There are issues with the show command: it panics, apparently while iterating over the named pool.
Comments
It's also kind of odd that the mids pool lists two entries when kubectl, civo-ui, and civo-cli all show only 2 nodes total (the other being one large node). I speculate that's an artifact left over from prior nodes in that pool, before I scaled them down:

civo k8s node-pool ls test01 && k get nodes
Node Pool 9897d03f-4e5c-4a8d-8107-61e12d318948:
+--------------------------------------+----------------+-------+--------+--------+
| Name | Size | Count | Labels | Taints |
+--------------------------------------+----------------+-------+--------+--------+
| 9897d03f-4e5c-4a8d-8107-61e12d318948 | g4s.kube.large | 1 | null | null |
+--------------------------------------+----------------+-------+--------+--------+
Node Pool mids:
+------+-----------------+-------+--------+--------+
| Name | Size | Count | Labels | Taints |
+------+-----------------+-------+--------+--------+
| mids | g4s.kube.medium | 1 | {} | [] |
+------+-----------------+-------+--------+--------+
| mids | g4s.kube.medium | 1 | {} | [] |
+------+-----------------+-------+--------+--------+
NAME                                          STATUS                     ROLES    AGE     VERSION
k3s-test01-c25a-b952af-node-pool-50a8-xy15b   Ready,SchedulingDisabled   <none>   2d13h   v1.27.1+k3s1
k3s-test01-c25a-b952af-node-pool-7807-iy4yw   Ready                      <none>   11d     v1.27.1+k3s1

Trying to remove the node pool, it seems like an issue with the short name:

civo k8s node-pool delete test01 mids
Please check if you are using the latest version of CLI and retry the command
If you are still facing issues, please report it on our community slack or open a GitHub issue (https://github.com/civo/cli/issues)
Error: Please provide the node pool ID with at least 6 characters for mids
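Based only on that error text, the delete path appears to enforce a minimum-length check before trying to resolve the pool. A hypothetical sketch of that kind of guard (the function name is illustrative, not the actual CLI source):

```go
package main

import "fmt"

// validatePoolID mirrors the error text above: reject pool IDs shorter
// than six characters before attempting to match a pool.
// Hypothetical sketch only; not the actual CLI implementation.
func validatePoolID(id string) error {
	if len(id) < 6 {
		return fmt.Errorf("please provide the node pool ID with at least 6 characters for %s", id)
	}
	return nil
}

func main() {
	fmt.Println(validatePoolID("mids")) // errors: "mids" is only 4 characters
}
```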
Seems like a 6-character minimum is required, but somehow I created a 4-character-named pool (mids) at some point. Creating a pool with an even shorter name panics:

civo k8s node-pool create test01 --name tv -s g4s.kube.xsmall
panic: runtime error: slice bounds out of range [:6] with length 2
goroutine 1 [running]:
github.com/civo/cli/cmd/kubernetes.glob..func15(0x1af45e0?, {0xc0007965f0, 0x1, 0x5?})
/home/runner/work/cli/cli/cmd/kubernetes/kubernetes_nodepool_create.go:78 +0x7d0
github.com/spf13/cobra.(*Command).execute(0x1af45e0, {0xc0007965a0, 0x5, 0x5})
/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:854 +0x663
github.com/spf13/cobra.(*Command).ExecuteC(0x1ae6960)
/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:958 +0x39c
github.com/spf13/cobra.(*Command).Execute(...)
/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:895
github.com/civo/cli/cmd.Execute()
/home/runner/work/cli/cli/cmd/root.go:121 +0x25
main.main()
/home/runner/work/cli/cli/main.go:27 +0x17
OK, actually the node pool does get created anyway, even with a one-character name:

civo k8s node-pool create test01 --name x -s g4s.kube.xsmall -n 1
panic: runtime error: slice bounds out of range [:6] with length 1
goroutine 1 [running]:
github.com/civo/cli/cmd/kubernetes.glob..func15(0x1af45e0?, {0xc000278a10, 0x1, 0x7?})
/home/runner/work/cli/cli/cmd/kubernetes/kubernetes_nodepool_create.go:78 +0x7d0
github.com/spf13/cobra.(*Command).execute(0x1af45e0, {0xc0002789a0, 0x7, 0x7})
/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:854 +0x663
github.com/spf13/cobra.(*Command).ExecuteC(0x1ae6960)
/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:958 +0x39c
github.com/spf13/cobra.(*Command).Execute(...)
/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:895
github.com/civo/cli/cmd.Execute()
/home/runner/work/cli/cli/cmd/root.go:121 +0x25
main.main()
/home/runner/work/cli/cli/main.go:27 +0x17
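Both traces point at the same line, kubernetes_nodepool_create.go:78, and the message (slice bounds out of range [:6]) suggests an unguarded six-character slice of the user-supplied name: the create path slices where the delete path validates. A minimal sketch of the suspected failure and a guarded alternative (function names are illustrative, not the actual CLI code):

```go
package main

import "fmt"

// shortID mimics the suspected bug: blindly taking the first six characters
// of the pool name. shortID("x") panics with
// "slice bounds out of range [:6] with length 1", matching the traces above.
func shortID(name string) string {
	return name[:6]
}

// shortIDSafe guards the slice so names shorter than six characters pass
// through unchanged instead of panicking.
func shortIDSafe(name string) string {
	if len(name) > 6 {
		return name[:6]
	}
	return name
}

func main() {
	fmt.Println(shortIDSafe("x"))          // "x"
	fmt.Println(shortIDSafe("mypool-abc")) // "mypool"
}
```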
Hey, this PR should have closed this issue. Feel free to re-open if you encounter it again.