Hi all! I've searched for hours and hours and tried a lot of stuff but unfortunately without success. Maybe someone here has the same problem and can help me :)
Terraform Version, Provider Version and Kubernetes Version
I've deployed an Azure Kubernetes Service Cluster via Terraform and initialized the Kubernetes Provider on the same level. With the help of the initialized Kubernetes Provider I'm creating multiple resources inside the cluster itself. Additionally, I'm passing the provider down to a module that contains a FluxCD deployment. The initialization of the Provider only happens on the same level where the Azure Kubernetes Service Cluster is created. The Azure Kubernetes Service Cluster itself resides in a module (and is called three times), but above this, no Kubernetes Provider or anything like it is created/initialized.
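As a rough sketch of the layout described above (every name below is an assumption, not taken from the reporter's actual configuration), the AKS module creates the cluster, configures the Kubernetes provider from that cluster resource's own outputs, and hands the provider down to the FluxCD module:

```hcl
# aks/main.tf -- hypothetical sketch of the described layout; all names
# and values are assumptions, not the actual configuration.

resource "azurerm_kubernetes_cluster" "this" {
  name                = var.cluster_name
  location            = var.location
  resource_group_name = var.resource_group_name
  dns_prefix          = var.cluster_name

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}

# Provider configured from attributes of a resource in the same module --
# these attributes can be unknown while Terraform refreshes state.
provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.this.kube_config[0].host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.this.kube_config[0].client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.this.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.this.kube_config[0].cluster_ca_certificate)
}

# One of the in-cluster resources (the name matches the error message below;
# the provisioner is an assumption).
resource "kubernetes_storage_class_v1" "elastic" {
  metadata {
    name = "elastic"
  }
  storage_provisioner = "disk.csi.azure.com"
}

# The FluxCD module receives the same provider configuration.
module "fluxcd" {
  source = "./fluxcd"
  providers = {
    kubernetes = kubernetes
  }
}
```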
The Terraform deployment runs inside a Gitlab Terraform Container Image with the help of the gitlab-terraform wrapper script.
The initial deployment works without problems, but after this, I'm not able to establish a connection to one of the clusters and I'm getting multiple error messages when refreshing the resources. The resources in the other two clusters can be refreshed without errors or warnings.
Below you can see how I initialize the provider, create a resource, and how I'm passing the provider down to the module. I've already tried the -parallelism=1 and -refresh=false parameters, but I definitely need the refresh, so this isn't a solution.
Expected Behavior
Terraform should refresh cluster resources and update them, if needed.
Actual Behavior
Terraform plan/apply fails because it (probably) tries to authenticate against localhost. When I initialize the state locally, it also tries to connect to localhost.
│ Error: storageclasses.storage.k8s.io "test" is forbidden: User "system:serviceaccount:runner:default" cannot get resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
│
│ with module.aks_webshop.kubernetes_storage_class_v1.elastic,
│ on aks/main.tf line 375, in resource "kubernetes_storage_class_v1" "elastic":
│ 375: resource "kubernetes_storage_class_v1" "elastic" {
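The identity in the error above ("system:serviceaccount:runner:default") hints that the provider is no longer using the AKS credentials at all, but has fallen back to the in-cluster service-account configuration of the GitLab runner pod — a typical symptom when the provider block is configured from resource attributes that are unknown at refresh time. One commonly suggested workaround (a sketch under assumed names, not a confirmed fix for this report) is to configure the provider from a data source whose arguments are static strings, so the credentials are always known:

```hcl
# Hypothetical workaround sketch: the cluster name and resource group are
# static strings, so this data source can be read during refresh and the
# provider should not fall back to in-cluster/localhost defaults.
data "azurerm_kubernetes_cluster" "webshop" {
  name                = "aks-webshop" # assumed cluster name
  resource_group_name = "rg-webshop"  # assumed resource group
}

provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.webshop.kube_config[0].host
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.webshop.kube_config[0].client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.webshop.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.webshop.kube_config[0].cluster_ca_certificate)
}
```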
Important Factoids
Azure Kubernetes Service
References
I don't know for sure if it is related, but the behavior is almost the same:
mkambeck changed the title from "Refresh of resources inside AKS cluster stops after initial deployment (probably connection against localhost after this)" to "Refresh of resources inside AKS cluster stops after initial deployment - Connection against localhost?" on May 31, 2023
Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!
Debug Output
N/A
Panic Output
N/A
Steps to Reproduce
terraform plan
--> Plan is created
terraform apply
--> Resources are created
terraform plan
--> Plan fails with errors