Overly aggressive change detection on AWS subnet_id property with count #4768

Closed
sethp-jive opened this issue Jan 20, 2016 · 3 comments

@sethp-jive (Contributor)
Hello! When I have a terraform config defined as:

resource "aws_vpc" "r" {
  cidr_block = "172.16.0.0/16"
}

resource "aws_subnet" "r" {
  vpc_id = "${aws_vpc.r.id}"
  cidr_block = "${cidrsubnet(aws_vpc.r.cidr_block, 8, count.index)}"

  count = 1  // With the intention of setting this to 3 "later"
}

resource "aws_route_table" "r" {
  vpc_id = "${aws_vpc.r.id}"
}

resource "aws_route_table_association" "r" {
  subnet_id = "${element(aws_subnet.r.*.id, count.index)}"
  route_table_id = "${aws_route_table.r.id}"
}

resource "aws_instance" "r" {
  ami = "ami-ad40329d"
  instance_type = "t1.micro"
  subnet_id = "${element(aws_subnet.r.*.id, count.index)}"
}
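
For reference, cidrsubnet here extends the VPC's /16 prefix by 8 bits, carving out one /24 per subnet:

    // cidrsubnet("172.16.0.0/16", 8, count.index) selects the count.index-th /24:
    //   count.index = 0  =>  172.16.0.0/24
    //   count.index = 1  =>  172.16.1.0/24
    //   count.index = 2  =>  172.16.2.0/24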

I have found that changing the count parameter to 3 produces an undesirable result:

Refreshing Terraform state prior to plan...

aws_vpc.r: Refreshing state... (ID: vpc-XXXXXXXX)
aws_route_table.r: Refreshing state... (ID: rtb-XXXXXXXX)
aws_subnet.r.0: Refreshing state... (ID: subnet-XXXXXXXX)
aws_instance.r: Refreshing state... (ID: i-XXXXXXXX)
aws_route_table_association.r: Refreshing state... (ID: rtbassoc-XXXXXXXX)

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

-/+ aws_instance.r
    ami:                      "ami-ad40329d" => "ami-ad40329d"
    availability_zone:        "us-west-2b" => "<computed>"
    ebs_block_device.#:       "0" => "<computed>"
    ephemeral_block_device.#: "0" => "<computed>"
    instance_type:            "t1.micro" => "t1.micro"
    key_name:                 "" => "<computed>"
    placement_group:          "" => "<computed>"
    private_dns:              "ip-172-16-0-241.us-west-2.compute.internal" => "<computed>"
    private_ip:               "172.16.0.241" => "<computed>"
    public_dns:               "" => "<computed>"
    public_ip:                "" => "<computed>"
    root_block_device.#:      "1" => "<computed>"
    security_groups.#:        "0" => "<computed>"
    source_dest_check:        "true" => "1"
    subnet_id:                "subnet-XXXXXXXX" => "${element(aws_subnet.r.*.id, count.index)}" (forces new resource)
    tenancy:                  "default" => "<computed>"
    vpc_security_group_ids.#: "1" => "<computed>"

-/+ aws_route_table_association.r
    route_table_id: "rtb-XXXXXXXX" => "rtb-XXXXXXXX"
    subnet_id:      "subnet-XXXXXXXX" => "${element(aws_subnet.r.*.id, count.index)}" (forces new resource)

+ aws_subnet.r.1
    availability_zone:       "" => "<computed>"
    cidr_block:              "" => "172.16.1.0/24"
    map_public_ip_on_launch: "" => "0"
    vpc_id:                  "" => "vpc-XXXXXXXX"

+ aws_subnet.r.2
    availability_zone:       "" => "<computed>"
    cidr_block:              "" => "172.16.2.0/24"
    map_public_ip_on_launch: "" => "0"
    vpc_id:                  "" => "vpc-XXXXXXXX"


Plan: 4 to add, 0 to change, 2 to destroy.

Note the changes to "aws_route_table_association.r" and "aws_instance.r" – both of these resources will go through a destroy/create cycle only to end up right back in the same subnet.

I would have expected a plan of "2 to add, 0 to change, 0 to destroy" for the addition of the two as-yet-unreferenced subnets.

A valid workaround appears to be:

  1. Change the element(..., count.index) references to aws_subnet.r.0.id like so:

    resource "aws_route_table_association" "r" {
      // subnet_id = "${element(aws_subnet.r.*.id, count.index)}"
      subnet_id = "${aws_subnet.r.0.id}"
      route_table_id = "${aws_route_table.r.id}"
    }
    
    resource "aws_instance" "r" {
      ami = "ami-ad40329d"
      instance_type = "t1.micro"
      // subnet_id = "${element(aws_subnet.r.*.id, count.index)}"
      subnet_id = "${aws_subnet.r.0.id}"
    }
    
  2. Run terraform apply

  3. Change the aws_subnet.r.0.id back to element(..., count.index)

At this point, terraform plan claims my infrastructure is up to date without requiring the destruction & recreation of my route table association or instance.

Also of note is that the reverse, changing 3 back to 1, produces the expected result:

Refreshing Terraform state prior to plan...

aws_subnet.r.2: Refreshing state... (ID: subnet-XXXXXXXX)
aws_subnet.r.1: Refreshing state... (ID: subnet-XXXXXXXX)
aws_vpc.r: Refreshing state... (ID: vpc-XXXXXXXX)
aws_subnet.r: Refreshing state... (ID: subnet-XXXXXXXX)
aws_route_table.r: Refreshing state... (ID: rtb-XXXXXXXX)
aws_route_table_association.r: Refreshing state... (ID: rtbassoc-XXXXXXXX)
aws_instance.r: Refreshing state... (ID: i-XXXXXXXX)

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

- aws_subnet.r.1

- aws_subnet.r.2


Plan: 0 to add, 0 to change, 2 to destroy.

I haven't conducted an exhaustive survey of what resource types exhibit this behavior, but for what it's worth I have not observed similar problems with properties whose schemas are not flagged as ForceNew: true.

@apparentlymart (Contributor)

Sorry to see you're experiencing over-zealous diffing.

I think this might be the same thing being discussed in #3449, but I'm not sure.

In your first example, do you have count set on the aws_route_table_association and aws_instance resources too? It's strange to see count.index used in a resource that doesn't have count set, but maybe that was just something you simplified away when posting this example?

The issue for #3449 is that a * interpolation "depends on" all of the matched resources, even if you subsequently select only one via element; Terraform's dependency resolver isn't smart enough to understand and record that element picks only one item from the list (see the sketch after this list):

  • When you raise the count, you add new "computed" items to the end of that list.
  • If a list contains any items that are computed, then Terraform considers the entire list to be computed.
  • If a function is called with a computed value as an argument then the function's result is always considered computed, regardless of the function.
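
To make that concrete, here's a sketch of what Terraform sees after count is raised to 3 (not runnable config; the known subnet ID is a placeholder):

    // The splat list holds one known ID and two yet-to-be-created subnets:
    //   aws_subnet.r.*.id  =>  ["subnet-XXXXXXXX", <computed>, <computed>]
    // One computed item makes the whole list computed, so even a selection
    // whose value cannot actually change is treated as computed:
    subnet_id = "${element(aws_subnet.r.*.id, count.index)}"  // => <computed> (forces new resource)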

Does that seem like a plausible explanation for what you're seeing? If so, I think this is a duplicate and I'd like to consolidate the discussion in the other ticket.

In the meantime, a workaround for this issue would be to plan with -target=aws_subnet.r[1] -target=aws_subnet.r[2] to encourage Terraform to create the two new subnets without modifying anything else. Once Terraform has finished creating them it will no longer consider the list to be computed, so it should be able to understand that the subnet id didn't change and skip recreating the route table association and instance.
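
Concretely, that targeted two-step might look like the following (a sketch against the config above; some shells require quoting the brackets):

    terraform plan  -target=aws_subnet.r[1] -target=aws_subnet.r[2]
    terraform apply -target=aws_subnet.r[1] -target=aws_subnet.r[2]

    # With all three subnet IDs now known, a full plan should report no
    # changes to the route table association or the instance:
    terraform plan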

On a longer arc, the new feature discussed in #4149 could ultimately (with some further architectural changes to how interpolations are resolved) help Terraform handle this situation better by automatically detecting that it doesn't have enough information to apply this whole config in a single step.

@sethp-jive (Contributor, Author)

Ah ha – there it is! Indeed, this behavior looks to be a dupe of #3449 (or at least would seem to be addressed by the fix proposed there).

Thanks again for the prompt and thorough reply! :)

@ghost commented Apr 28, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost locked and limited conversation to collaborators Apr 28, 2020