I have a Terraform module for creating an RKE cluster. Creation works fine, but when I run a plan afterwards it says it is going to re-create the kube_cluster_yaml resource:
-/+ resource "local_file" "kube_cluster_yaml" {
~ content = (sensitive value) # forces replacement
~ content_base64sha256 = "O5/gq0ppoyYX6PaB61JZMMK7QDruPOxHAhblDDBocG8=" -> (known after apply)
~ content_base64sha512 = "/d7UFN+ZZvRhHwqI+NVVHI9G0Un2Ct3o1JnmT/Or85lI0wCk0zASmlPlVJt9VpQMs1PlUm34B7FPkGzAnOb1rg==" -> (known after apply)
~ content_md5 = "3551bcb3590523e5d065f889abe6ae3e" -> (known after apply)
~ content_sha1 = "032b78ac94907ea24655ca5602e66170dc2928a6" -> (known after apply)
~ content_sha256 = "3b9fe0ab4a69a32617e8f681eb525930c2bb403aee3cec470216e50c3068706f" -> (known after apply)
~ content_sha512 = "fdded414df9966f4611f0a88f8d5551c8f46d149f60adde8d499e64ff3abf39948d300a4d330129a53e5549b7d56940cb353e5526df807b14f906cc09ce6f5ae" -> (known after apply)
~ id = "032b78ac94907ea24655ca5602e66170dc2928a6" -> (known after apply)
# (3 unchanged attributes hidden)
}
# module.rke.rke_cluster.cluster will be updated in-place
~ resource "rke_cluster" "cluster" {
~ cluster_cidr = "10.42.0.0/16" -> (known after apply)
~ cluster_dns_server = "10.43.0.10" -> (known after apply)
~ cluster_domain = "cluster.local" -> (known after apply)
id = "92fa3c9c-8cc2-4f16-8779-659db5433548"
~ kube_config_yaml = (sensitive value)
~ rke_cluster_yaml = (sensitive value)
~ rke_state = (sensitive value)
# (22 unchanged attributes hidden)
# (12 unchanged blocks hidden)
}
Plan: 1 to add, 1 to change, 1 to destroy.
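For reference, the file is generated by a local_file resource wired to the module's kubeconfig output, roughly like this (a simplified sketch, not the exact module code; the output name and path are illustrative):
# Sketch only: writes the kubeconfig produced by the RKE module to disk.
# "module.rke.kube_config_yaml" and the filename are assumed names.
resource "local_file" "kube_cluster_yaml" {
  content         = module.rke.kube_config_yaml
  filename        = "${path.root}/kube_config_cluster.yml"
  file_permission = "0600"
}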
I saw that there are other issues related to this, but none of them solve my problem.
This is the debug output from the plan:
time="2023-08-03T09:48:05+03:00" level=info msg="Reading RKE cluster 92fa3c9c-8cc2-4f16-8779-659db5433548 ..."
time="2023-08-03T09:48:05+03:00" level=debug msg="audit log policy found in cluster.yml"
time="2023-08-03T09:48:05+03:00" level=debug msg="Checking if cluster version [1.26.4-rancher2-1] needs to have kube-api audit log enabled"
time="2023-08-03T09:48:05+03:00" level=debug msg="Cluster version [1.26.4-rancher2-1] needs to have kube-api audit log enabled"
time="2023-08-03T09:48:05+03:00" level=debug msg="Enabling kube-api audit log for cluster version [v1.26.4-rancher2-1]"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.11 has role: controlplane"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.11 has role: worker"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.11 has role: etcd"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.12 has role: controlplane"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.12 has role: worker"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.12 has role: etcd"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.13 has role: controlplane"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.13 has role: worker"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.13 has role: etcd"
time="2023-08-03T09:48:05+03:00" level=debug msg="Checking cri-dockerd for cluster version [v1.26.4-rancher2-1]"
time="2023-08-03T09:48:05+03:00" level=debug msg="cri-dockerd is enabled for cluster version [v1.26.4-rancher2-1]"
time="2023-08-03T09:48:05+03:00" level=debug msg="Checking PodSecurityPolicy for cluster version [v1.26.4-rancher2-1]"
time="2023-08-03T09:48:05+03:00" level=debug msg="Checking PodSecurity for cluster version [v1.26.4-rancher2-1]"
time="2023-08-03T09:48:05+03:00" level=debug msg="Checking if cluster version [1.26.4-rancher2-1] needs to have kube-api audit log enabled"
time="2023-08-03T09:48:05+03:00" level=debug msg="Cluster version [1.26.4-rancher2-1] needs to have kube-api audit log enabled"
time="2023-08-03T09:48:05+03:00" level=debug msg="Enabling kube-api audit log for cluster version [v1.26.4-rancher2-1]"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.11 has role: controlplane"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.11 has role: worker"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.11 has role: etcd"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.12 has role: controlplane"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.12 has role: worker"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.12 has role: etcd"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.13 has role: controlplane"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.13 has role: worker"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.13 has role: etcd"
time="2023-08-03T09:48:05+03:00" level=debug msg="Checking cri-dockerd for cluster version [v1.26.4-rancher2-1]"
time="2023-08-03T09:48:05+03:00" level=debug msg="cri-dockerd is enabled for cluster version [v1.26.4-rancher2-1]"
time="2023-08-03T09:48:05+03:00" level=debug msg="Checking PodSecurityPolicy for cluster version [v1.26.4-rancher2-1]"
time="2023-08-03T09:48:05+03:00" level=debug msg="Checking PodSecurity for cluster version [v1.26.4-rancher2-1]"
Please, can you advise where the problem is, or whether this is a bug?
Thanks!
Hello, I'm looking into this a bit and was only able to reproduce it after editing the kubeconfig (e.g. with kubectx/kubens) or moving the file from its expected path.
It seems to only re-download the file when the local copy doesn't match the state. So if the file gets modified and/or moved out of the directory, Terraform re-creates it, which is expected behavior and not a bug.
Does that sound like what you're running into? Or is it for some reason always re-downloading the local file after consecutive terraform apply runs?
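If it helps to narrow it down, a quick drift check like the following at the root module can show whether the on-disk file still matches what Terraform recorded (a sketch on my side; adjust the resource address if yours differs):
# Hypothetical helper: compares the hash of the file currently on disk
# with the content_sha256 recorded for local_file.kube_cluster_yaml.
output "kube_config_drift_check" {
  value = {
    on_disk  = fileexists(local_file.kube_cluster_yaml.filename) ? filesha256(local_file.kube_cluster_yaml.filename) : "missing"
    in_state = local_file.kube_cluster_yaml.content_sha256
  }
}
If on_disk differs from in_state (or shows "missing"), the replacement in the plan is just Terraform restoring the managed file.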
Terraform: v1.5.3
RKE Provider: 1.4.2
RKE Cluster: v1.26.4-rancher2-1