
data.kubernetes_service should fetch values in "plan" phase rather than "apply" phase #353

Closed
heidemn opened this issue Mar 6, 2019 · 4 comments

heidemn commented Mar 6, 2019

Terraform Version

Terraform v0.11.11

  • provider.aws v1.60.0
  • provider.external v1.0.0
  • provider.helm v0.8.0
  • provider.kubernetes v1.5.1

Affected Resource(s)

  • data.kubernetes_service

Expected Behavior

"terraform plan" should first contact Kubernetes to fetch the data values, then identify that nothing has changed, and report "no changes".
Similarly, "terraform apply" should only report that changes will be made if there actually are any changes.

This seems to work fine for AWS data sources, but not for data.kubernetes_service.
This is bad, because when you run terraform apply, you have no idea whether any changes will be made.

Actual Behavior

Instead, "terraform apply" says, "We don't know yet what we'll do, we first have to talk to Kubernetes,
Please confirm.
We'll then apply the changes immediately, if any, and tell you afterwards..."

More specifically, Terraform shows a section "<= read (data resources)".
This section is useless to me; instead, the data resources should be read first and the result used immediately when calculating the plan.

See below:

Plan: 0 to add, 2 to change, 0 to destroy.
...
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Steps to Reproduce

1. Run terraform apply.
2. Don't make any changes.
3. Run terraform apply again, or run terraform plan.

Important Factoids

Setting up DNS for an EKS cluster.

Code

# Get DNS name of the ELBv1 created by the Ingress Controller.
data "kubernetes_service" "load_balancer" {
  metadata {
    namespace = "default"
    name = "nginx-ingress-controller"
  }

  depends_on = ["helm_release.ingress_l7_elbv1"]
}

locals {
  #    "00000000000000000000000000000000-1111111111.us-east-1.elb.amazonaws.com"
  # -> "00000000000000000000000000000000"  (unique name of the Load Balacer)
  load_balancer_name = "${substr(data.kubernetes_service.load_balancer.load_balancer_ingress.0.hostname, 0, 32)}"
}

# Get additional attributes of the ELBv1: We need the "Zone ID" for the DNS record.
data "aws_elb" "elbv1" {
  name = "${local.load_balancer_name}"
}

# DNS records pointing to Load Balancer:
resource "aws_route53_record" "elbv1" {
  count   = "${length(var.dns_zone_ids)}"

  zone_id = "${var.dns_zone_ids[count.index]}"
  name    = "${var.dns_names[count.index]}"
  type    = "A"

  alias {
    zone_id = "${data.aws_elb.elbv1.zone_id}"
    name    = "${data.aws_elb.elbv1.dns_name}"
    evaluate_target_health = false
  }
}


Result of "terraform apply"

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place
 <= read (data resources)

Terraform will perform the following actions:

 <= module.shared-eks.data.aws_elb.elbv1
      id:                                      <computed>
      access_logs.#:                           <computed>
      availability_zones.#:                    <computed>
      connection_draining:                     <computed>
      connection_draining_timeout:             <computed>
      cross_zone_load_balancing:               <computed>
      dns_name:                                <computed>
      health_check.#:                          <computed>
      idle_timeout:                            <computed>
      instances.#:                             <computed>
      internal:                                <computed>
      listener.#:                              <computed>
      name:                                    "${local.load_balancer_name}"
      security_groups.#:                       <computed>
      source_security_group:                   <computed>
      source_security_group_id:                <computed>
      subnets.#:                               <computed>
      tags.%:                                  <computed>
      zone_id:                                 <computed>

 <= module.shared-eks.data.kubernetes_service.load_balancer
      id:                                      <computed>
      load_balancer_ingress.#:                 <computed>
      metadata.#:                              "1"
      metadata.0.generation:                   <computed>
      metadata.0.name:                         "nginx-ingress-controller"
      metadata.0.namespace:                    "default"
      metadata.0.resource_version:             <computed>
      metadata.0.self_link:                    <computed>
      metadata.0.uid:                          <computed>
      spec.#:                                  <computed>

  ~ module.shared-eks.aws_route53_record.elbv1[0]
      alias.1827061976.evaluate_target_health: "false" => "false"
      alias.1827061976.name:                   "REMOVED-REMOVED.us-east-1.elb.amazonaws.com" => ""
      alias.1827061976.zone_id:                "REMOVED" => ""
      alias.~321608504.evaluate_target_health: "" => "false"
      alias.~321608504.name:                   "" => "${data.aws_elb.elbv1.dns_name}"
      alias.~321608504.zone_id:                "" => "${data.aws_elb.elbv1.zone_id}"

  ~ module.shared-eks.aws_route53_record.elbv1[1]
      alias.1827061976.evaluate_target_health: "false" => "false"
      alias.1827061976.name:                   "REMOVED-REMOVED.us-east-1.elb.amazonaws.com" => ""
      alias.1827061976.zone_id:                "REMOVED" => ""
      alias.~321608504.evaluate_target_health: "" => "false"
      alias.~321608504.name:                   "" => "${data.aws_elb.elbv1.dns_name}"
      alias.~321608504.zone_id:                "" => "${data.aws_elb.elbv1.zone_id}"


Plan: 0 to add, 2 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.shared-eks.data.kubernetes_service.load_balancer: Refreshing state...
module.shared-eks.data.aws_elb.elbv1: Refreshing state...

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Releasing state lock. This may take a few moments...
heidemn commented Mar 6, 2019

Please note that it is really data.kubernetes_service that is causing my problem here.
It has nothing to do with the aws_route53_record resources.

If I replace
load_balancer_name = "${substr(data.kubernetes_service.load_balancer.load_balancer_ingress.0.hostname, 0, 32)}"
with
load_balancer_name = "HARDCODED_32_DIGIT_NAME"
then the DNS records are correctly identified as unchanged in the plan phase (see the sketch below).
Terraform still tells me that it wants to read the value from Kubernetes, though.
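
For reference, the diagnostic substitution in the locals block looks like this (the hardcoded value is only a placeholder for the real 32-character ELB name):

locals {
  # Original: plan reports "2 to change" even though nothing changed.
  #load_balancer_name = "${substr(data.kubernetes_service.load_balancer.load_balancer_ingress.0.hostname, 0, 32)}"

  # Hardcoded for diagnosis: plan correctly reports no changes to the DNS records.
  load_balancer_name = "HARDCODED_32_DIGIT_NAME"
}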


heidemn commented Mar 6, 2019

This might actually be a general limitation of depends_on... Will investigate tomorrow.

hashicorp/terraform-provider-archive#11
hashicorp/terraform#17034
hashicorp/terraform#11806

Yes, this is actually the problem. Fixed by replacing depends_on with an implicit dependency:

resource "helm_release" "ingress_l7_elbv1" {
  depends_on = ["kubernetes_cluster_role_binding.tiller"]

  name   = "nginx-ingress"
  chart  = "stable/nginx-ingress"
  values = ["${local.nginx_ingress_config}"]
}

# Get DNS name of the ELBv1 created by the Ingress Controller.
data "kubernetes_service" "nginx" {
  metadata {
    namespace = "${helm_release.ingress_l7_elbv1.namespace}"
    name = "nginx-ingress-controller"
  }

  # https://github.com/terraform-providers/terraform-provider-kubernetes/issues/353
  # A data source combined with depends_on delays its evaluation until apply, so we introduce an implicit
  # dependency via the namespace attribute instead.
  #depends_on = ["helm_release.ingress_l7_elbv1"]
}
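
With the data source renamed to "nginx", the downstream references from the issue body change accordingly. A minimal sketch, assuming the rest of the configuration stays the same:

locals {
  load_balancer_name = "${substr(data.kubernetes_service.nginx.load_balancer_ingress.0.hostname, 0, 32)}"
}

data "aws_elb" "elbv1" {
  name = "${local.load_balancer_name}"
}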

heidemn closed this as completed Mar 7, 2019
brockoffdev commented:

@heidemn thanks for that post, I was driving myself crazy with something similar. This fixed it for me as well.

chris-brace commented:

@heidemn lovely, this was killing me. I don't know why I didn't think of making an implicit dependency!

ghost locked and limited conversation to collaborators Apr 21, 2020