Apparent regression for providers in child modules in 0.6.8 #4218
Hi @apparentlymart! This does look nasty. I've tried to reproduce the provider part of it using this configuration:

`main.tf`:

```hcl
provider "aws" {
  region = "us-east-1"
}

module "child" {
  source = "./child"
}

resource "aws_vpc" "parent_test" {
  cidr_block = "10.0.0.0/16"
}
```

`child/main.tf`:

```hcl
provider "aws" {
  region = "us-west-2"
}

resource "aws_vpc" "test" {
  cidr_block = "10.0.0.0/16"
}
```

This results in a successful application and the VPCs get created as expected. Changing the child module to use […] I think I must be misunderstanding your description of what is happening here? Are you able to share the graph output indicating which providers are enabled/disabled and/or some failing config?
@jen20 I am currently stuck in some other work so I've not been able to dig in any deeper yet, but when I get some spare moments I will try to reduce this to a smaller example that exhibits the problem. One crucial detail may be that I hit this problem with a pre-existing infrastructure that already had state entries for all of the resources, as opposed to a fresh create from an empty state. The graph for the particular infrastructure I was using here is quite large and full of distractions so I won't post it here, but I will try to reduce it to a smaller example and share its graph at some point in the next few weeks.
Looks like this is a dupe of #3114.

@apparentlymart I believe an orphaned (removed from config, present in state) module is the key detail in #3114 - did your scenario involve one as well?

@phinze I'm not sure, but it seems pretty likely, since we were doing some quite drastic restructuring of the configuration when it occurred.
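For anyone unfamiliar with the "orphaned module" situation discussed above, a minimal sketch of how one arises (the module name `child` here is illustrative, not taken from the real configuration):

```hcl
# Step 1: the configuration includes a child module, and an apply
# records its resources in state under module.child.*
module "child" {
  source = "./child"
}

# Step 2: the module block above is deleted from the configuration,
# but the state still contains module.child.* entries. Terraform then
# treats that module as "orphaned": present in state, absent from
# config, so its resources must be destroyed on the next apply.
```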
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Since upgrading to v0.6.8 some of my configurations no longer work, and fail in this manner:
I happened to still have a v0.6.7 binary on one of my systems and verified that these errors do not show up with that version.
I have not yet investigated the provisioner-related errors, but the provider-related errors seem to be caused by child modules instantiating their own providers.
We have two different patterns here:

- For the `aws` provider, the root module has a `provider "aws"` block, but some of our child modules also have `provider "aws"` blocks that override the root one, e.g. to choose a different region. (In practice all of the modules here happen to set the region the same as the root, so I'm not sure if this was working before or just failing silently and inheriting from the root.)
- For the `chef` provider (a local build of the provider from Chef provider #3084), the root module doesn't configure the provider at all, but multiple different child modules use it.

From my initial light investigation it seems that Terraform is now instantiating each provider just once but then trying to initialize it once per module, rather than the (presumed?) previous behavior of creating an entirely separate instance for each module where an overridden configuration was present.
I've not yet had time to dive in deep and try to trace through the code. I may do that later, but I wanted to share what I have so far in case someone is able to quickly identify a recent change that would explain this new behavior.
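The second (`chef`) pattern described above can be sketched as follows. All module names and provider arguments here are illustrative placeholders, not taken from the real configuration:

```hcl
# Root module: note there is no provider "chef" block here at all.
module "app_a" {
  source = "./app_a"
}

module "app_b" {
  source = "./app_b"
}

# app_a/main.tf and app_b/main.tf each contain their own
# provider configuration, something like:
provider "chef" {
  server_url = "https://chef.example.com/"  # hypothetical value
  node_name  = "terraform"                  # hypothetical value
}
```

Under the behavior described in this report, Terraform would create a single `chef` provider instance and attempt to initialize it once per module, rather than one independent instance per module that configures it.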