Existing resources that use the kubernetes provider (and, by implication, the helm provider) do not use upstream cluster details if the upstream cluster is referenced indirectly. Which is to say:

- The initial deployment always succeeds.
- Subsequent "apply"s keep working for as long as "endpoint", "cluster_ca_certificate", or "token" remain known.
- Using "data" resources to look up the cluster details does NOT work, even with "depends_on" directives.
- Using the upstream resources directly does work.
Terraform Version, Provider Version and Kubernetes Version

Terraform Configuration Files
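The full configuration isn't reproduced here. As a minimal sketch of the indirect-lookup pattern in question, assuming the resource and data source names that appear in the errors below (aws_eks_cluster.cluster, data.aws_eks_cluster.cluster, data.aws_eks_cluster_auth.eks_auth), the provider is wired up roughly like this:

```hcl
# Upstream cluster (arguments elided).
resource "aws_eks_cluster" "cluster" {
  # ...
}

# Indirect lookups of the cluster details; depends_on does not help here.
data "aws_eks_cluster" "cluster" {
  name       = aws_eks_cluster.cluster.name
  depends_on = [aws_eks_cluster.cluster]
}

data "aws_eks_cluster_auth" "eks_auth" {
  name       = aws_eks_cluster.cluster.name
  depends_on = [aws_eks_cluster.cluster]
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.eks_auth.token
}
```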
Steps to Reproduce

Assuming an AWS account:

1. terraform init
2. terraform apply
3. Set authentication_mode = "API_AND_CONFIG_MAP" in terraform.tfvars to trigger a change to the cluster.
4. terraform plan and observe the error.
5. Remove "data" from the host= directive (see the sketch after this list).
6. terraform plan and observe the error.
7. Remove "data" from the cluster_ca_certificate= directive.
8. terraform plan and observe the error.
9. Delete "token=" and use "exec" to generate the cluster authentication.
10. terraform plan and observe success.
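For example, after step 5 the provider block looks roughly like this (a sketch based on the configuration above; the CA and token are still read indirectly until steps 7 and 9):

```hcl
provider "kubernetes" {
  # Step 5: endpoint read directly from the upstream resource
  # (was: data.aws_eks_cluster.cluster.endpoint).
  host                   = aws_eks_cluster.cluster.endpoint

  # Still indirect; replaced in steps 7 and 9 respectively.
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.eks_auth.token
}
```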
Expected Behavior
With this provider configuration we reference the upstream resource directly for the endpoint and certificate, and use exec to retrieve the token rather than relying on either of the data objects.
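A sketch of a provider block matching that description (the exec arguments assume the standard aws eks get-token flow):

```hcl
provider "kubernetes" {
  host                   = aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.cluster.certificate_authority[0].data)

  # Fetch a fresh token at plan/apply time instead of reading a
  # possibly-stale data.aws_eks_cluster_auth.eks_auth.token from state.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", aws_eks_cluster.cluster.name]
  }
}
```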
Actual Behavior

With the given code, the plan fails with the "localhost" fallback discussed under References below. With aws_eks_cluster.cluster.endpoint in place of data.aws_eks_cluster.cluster.endpoint, it now finds the correct endpoint, and the following results:

│ Error: Get "https://5FE1EB3BB63BA7C813D2DDA68E593F88.gr7.eu-west-1.eks.amazonaws.com/api/v1/namespaces/test-ns": tls: failed to verify certificate: x509: “kube-apiserver” certificate is not trusted
│
│ with kubernetes_namespace.ns,
│ on main.tf line 110, in resource "kubernetes_namespace" "ns":
│ 110: resource "kubernetes_namespace" "ns" {
With aws_eks_cluster.cluster.certificate_authority in place of data.aws_eks_cluster.cluster.certificate_authority, it uses the correct endpoint and cert, but this error results:
│ Error: namespaces "test-ns" is forbidden: User "system:anonymous" cannot get resource "namespaces" in API group "" in the namespace "test-ns"
│
│ with kubernetes_namespace.ns,
│ on main.tf line 111, in resource "kubernetes_namespace" "ns":
│ 111: resource "kubernetes_namespace" "ns" {
With exec in place of data.aws_eks_cluster_auth.eks_auth.token, it resolves the correct token and the plan succeeds.
References
The following tickets reference the "localhost" fallback but don't mention how to fix the certificate or token errors.
This is a common problem of needing the output of one apply to configure another provider. Unfortunately at this time the prescribed advice is to break your workspace into separate steps and apply "progressively". It is something we are trying to address in the future but it's a complex problem.
The weird thing is, it actually works, as long as you use the upstream objects directly to initialise the provider; if you use data objects that merely "depends_on" the upstream, it fails. The fact that it works at all, but fails with data objects, which should honour depends_on, suggests this is a bug rather than a feature.