
Apparent regression for providers in child modules in 0.6.8 #4218

Closed

apparentlymart opened this issue Dec 8, 2015 · 6 comments

@apparentlymart (Contributor)

Since upgrading to v0.6.8, some of my configurations no longer work, failing in this manner:

$ terraform plan -input=false 
There are warnings and/or errors related to your configuration. Please
fix these before continuing.

Errors:

  * Provisioner 'local-exec' already initialized
  * Provisioner 'local-exec' already initialized
  * Provisioner 'file' already initialized
  * Provider 'aws' already initialized
  * Provider 'chef' already initialized
  * Provisioner 'remote-exec' already initialized
  * Provider 'aws' already initialized
  * Provisioner 'chef' already initialized
  * Provider 'chef' already initialized
  * Provisioner 'chef' already initialized
  * Provisioner 'file' already initialized
  * Provisioner 'file' already initialized
  * Provisioner 'remote-exec' already initialized
  * Provider 'aws' already initialized
  * Provisioner 'chef' already initialized
  * Provisioner 'local-exec' already initialized
  * Provisioner 'local-exec' already initialized
  * Provisioner 'file' already initialized
  * Provisioner 'remote-exec' already initialized
  * Provider 'chef' already initialized
  * Provisioner 'file' already initialized
  * Provisioner 'remote-exec' already initialized
  * Provisioner 'local-exec' already initialized
  * Provisioner 'remote-exec' already initialized
  * Provisioner 'chef' already initialized
  * Provider 'chef' already initialized
  * Provisioner 'file' already initialized
  * Provider 'aws' already initialized
  * Provisioner 'remote-exec' already initialized
  * Provisioner 'chef' already initialized
  * Provisioner 'chef' already initialized
  * Provider 'aws' already initialized
  * Provisioner 'remote-exec' already initialized
  * Provisioner 'local-exec' already initialized
  * Provider 'aws' already initialized
  * Provisioner 'file' already initialized
  * Provisioner 'local-exec' already initialized
  * Provisioner 'local-exec' already initialized
  * Provisioner 'chef' already initialized
  * Provisioner 'chef' already initialized
  * Provisioner 'remote-exec' already initialized
  * Provisioner 'file' already initialized

I happened to still have a v0.6.7 binary on one of my systems and verified that these errors do not show up with that version.

I have not yet investigated the provisioner-related errors, but the provider-related errors seem to be caused by child modules instantiating their own providers.

We have two different patterns here (sketched in the configuration after this list):

  • In the case of the aws provider, the root module has a provider "aws" block but some of our child modules also have provider "aws" blocks that override the root one, e.g. to choose a different region. (In practice all of the modules here happen to set the region the same as the root, so I'm not sure if this was working before or just failing silently and inheriting from the root.)
  • In the case of the chef provider (a local build of the provider from Chef provider #3084), the root module doesn't configure the provider at all, but multiple different child modules use it.
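
Roughly, the two layouts look like this (module names and argument values below are placeholders, not taken from the real failing configuration):

main.tf (root)

provider "aws" {
    region = "us-east-1"
}

module "network" {
    source = "./network"
}

module "app" {
    source = "./app"
}

network/main.tf (first pattern: overrides the inherited aws provider)

provider "aws" {
    region = "us-east-1"    # in practice the same region as the root
}

app/main.tf (second pattern: the root never configures chef; several child modules each declare it)

provider "chef" {
    server_url = "https://chef.example.com"    # placeholder value
}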

From my initial light investigation, it seems that Terraform now instantiates each provider just once but then tries to initialize it once per module, rather than the (presumed?) previous behavior of creating an entirely separate instance for each module where an overridden configuration was present.

I've not yet had time to dive in deeply and trace through the code. I may do that later, but I wanted to share what I have so far in case someone can quickly identify a recent change that would explain this new behavior.

@jen20 (Contributor) commented Dec 9, 2015

Hi @apparentlymart! This does look nasty. I've tried to reproduce the provider part of it using this configuration:

main.tf

provider "aws" {
    region = "us-east-1"
}

module "child" {
    source = "./child"
}

resource "aws_vpc" "parent_test" {
    cidr_block = "10.0.0.0/16"
}

child/main.tf

provider "aws" {
    region = "us-west-2"
}

resource "aws_vpc" "test" {
    cidr_block = "10.0.0.0/16"
}

This applies successfully and the VPCs are created as expected. Changing the child module to use us-east-1 also succeeds.

I think I must be misunderstanding your description of what is happening here. Are you able to share the graph output indicating which providers are enabled/disabled, and/or some failing config?
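
For example, assuming Graphviz is installed, something like this should produce a shareable rendering of the graph:

$ terraform graph > graph.dot
$ dot -Tsvg graph.dot > graph.svg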

jen20 self-assigned this Dec 9, 2015
@apparentlymart (Contributor, Author)

@jen20 I am currently tied up with some other work, so I've not been able to dig in any deeper yet, but when I get some spare moments I will try to reduce this to a smaller example that exhibits the problem.

But maybe one crucial detail is that I hit this problem on pre-existing infrastructure that already had state entries for all of the resources, rather than on a fresh create from an empty state.

The graph for the particular infrastructure I was using here is quite large and full of distractions, so I won't post it here, but I will try to reduce this to a smaller example and share its graph at some point in the next few weeks.

@apparentlymart (Contributor, Author)

Looks like this is a dupe of #3114.

@phinze (Contributor) commented Dec 15, 2015

@apparentlymart I believe an orphaned (removed from config, present in state) module is the key detail in #3114 - did your scenario involve one as well?
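
For anyone following along: an orphaned module is one whose module block has been removed from the configuration while its resources are still recorded in the state. A minimal illustration, with a placeholder name:

main.tf, before restructuring

module "legacy" {
    source = "./legacy"
}

After this block is deleted, terraform.tfstate still holds the resources under module.legacy, so Terraform treats the module as an orphan until a subsequent run removes them.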

@apparentlymart (Contributor, Author)

@phinze I'm not sure, but it seems pretty likely, since we were doing some quite drastic restructuring of the configuration when it occurred.

@ghost commented Apr 29, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators Apr 29, 2020