
Unable to build new EKS cluster from scratch #1782

Closed
jacobball11 opened this issue Jan 14, 2022 · 3 comments

@jacobball11

Description

I am trying to create an EKS platform for launching our application code. Currently we have nothing running in AWS that is managed by Terraform. Ideally we want to spin up and tear down entire environments with a single set of Terraform configuration. At the moment I cannot get the EKS module to produce a successful plan.

When trying to plan or apply, the module fails with an "Invalid for_each argument" error on the for_each expression in main.tf, line 207, and again in the sub-module eks-managed-node-group\main.tf, line 427. (See the error output below.)

╷
│ Error: Invalid for_each argument
│
│   on <PATH TO EKS MODULE>\main.tf line 207, in resource "aws_iam_role_policy_attachment" "this":
│  207:   for_each = var.create && var.create_iam_role ? toset(compact(distinct(concat([
│  208:     "${local.policy_arn_prefix}/AmazonEKSClusterPolicy",
│  209:     "${local.policy_arn_prefix}/AmazonEKSVPCResourceController",
│  210:   ], var.iam_role_additional_policies)))) : toset([])
│     ├────────────────
│     │ local.policy_arn_prefix is a string, known only after apply
│     │ var.create is true
│     │ var.create_iam_role is true
│     │ var.iam_role_additional_policies is empty list of string
│
│ The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot
│ predict how many instances will be created. To work around this, use the -target argument to first apply only the
│ resources that the for_each depends on.
╵
╷
│ Error: Invalid for_each argument
│
│   on <PATH TO EKS MODULE>\modules\eks-managed-node-group\main.tf line 427, in resource "aws_iam_role_policy_attachment" "this":
│  427:   for_each = var.create && var.create_iam_role ? toset(compact(distinct(concat([
│  428:     "${local.policy_arn_prefix}/AmazonEKSWorkerNodePolicy",
│  429:     "${local.policy_arn_prefix}/AmazonEC2ContainerRegistryReadOnly",
│  430:     "${local.policy_arn_prefix}/AmazonEKS_CNI_Policy",
│  431:   ], var.iam_role_additional_policies)))) : toset([])
│     ├────────────────
│     │ local.policy_arn_prefix is a string, known only after apply
│     │ var.create is true
│     │ var.create_iam_role is true
│     │ var.iam_role_additional_policies is empty list of string
│
│ The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot
│ predict how many instances will be created. To work around this, use the -target argument to first apply only the
│ resources that the for_each depends on.
╵
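As a workaround (not a fix), the error message suggests applying the resources the for_each depends on first with -target. A sketch, using the module address from the configuration below; the exact target addresses would need to be adapted:

```shell
# Apply the VPC module the EKS module's inputs depend on first...
terraform apply -target=module.heliponix-vpc

# ...then plan/apply everything else once those values are known.
terraform apply
```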

Versions

  • Terraform:
 Terraform v1.1.2
on windows_amd64
  • Provider(s):
+ provider registry.terraform.io/hashicorp/aws v3.72.0
+ provider registry.terraform.io/hashicorp/cloudinit v2.2.0
+ provider registry.terraform.io/hashicorp/helm v2.4.1
+ provider registry.terraform.io/hashicorp/kubernetes v2.7.0
+ provider registry.terraform.io/hashicorp/tls v3.1.0
  • Module:
module "heliponix-eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name          = var.cluster-name
  cluster_version       = var.k8s-version
  cluster_enabled_log_types = [
    "api",
    "audit",
    "authenticator",
    "controllerManager",
    "scheduler",
  ]

  create_iam_role = true
  iam_role_name = "${var.cluster-name}ClusterRole"

  vpc_id = module.heliponix-vpc[var.vpc-network-type].vpc_id
  subnet_ids = concat(
    module.heliponix-vpc[var.vpc-network-type].public_subnet_ids,
    module.heliponix-vpc[var.vpc-network-type].private_subnet_ids,
  )

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = true
  cluster_endpoint_public_access_cidrs = [
    "0.0.0.0/0",
  ]
  cluster_security_group_id = module.heliponix-vpc[var.vpc-network-type].control_plane_security_group_id
  cluster_service_ipv4_cidr = "172.16.0.0/12"

  enable_irsa                 = true # IAM Role for Service Accounts (IRSA)
  openid_connect_audiences    = ["sts.amazonaws.com"]

  eks_managed_node_groups = {
    managed-data-plane = {
      iam_role_arn            = aws_iam_role.node-instance.arn
      desired_capacity        = var.desired-node-count
      max_capacity            = var.max-node-count
      min_capacity            = var.min-node-count
      launch_template_id      = aws_launch_template.worker-node.id
      launch_template_version = "1"
      subnets                 = module.heliponix-vpc[var.vpc-network-type].private_subnet_ids
      update_config = {
        max_unavailable_percentage = 50
      }
    }
  }
  depends_on = [
    module.heliponix-vpc,
    aws_iam_role.node-instance,
    aws_launch_template.worker-node,
  ]
}

Reproduction

Steps to reproduce the behavior:

Currently there are no AWS resources created in the region

  1. terraform init
  2. terraform plan

Expected behavior

The plan should succeed; the module's internal for_each expressions should not fail on values that are only known after apply.

Actual behavior

The IAM role policy attachment's for_each uses local.policy_arn_prefix, which is built from data.aws_partition.current.partition and is therefore known only after apply.
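For context, the module derives that prefix roughly like this (paraphrased from the error output above; the exact local names in the module source may differ):

```hcl
# The AWS partition ("aws", "aws-cn", "aws-us-gov") comes from a data source,
# so anything built from it can end up "known only after apply" when the
# provider configuration itself depends on unapplied resources.
data "aws_partition" "current" {}

locals {
  # e.g. "arn:aws:iam::aws:policy" in commercial regions
  policy_arn_prefix = "arn:${data.aws_partition.current.partition}:iam::aws:policy"
}
```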

Terminal Output

(See the error output in the Description above.)

@bryantbiggs
Member

Duplicate of #1753 - see thread for more details

@bryantbiggs
Member

Note - you shouldn't need your depends_on block; you already reference those resources in the cluster definition, and Terraform will create the appropriate dependencies from those references. You can double-check by comparing graph outputs.
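For example, a sketch of comparing the dependency graphs with and without the depends_on block (output filenames are arbitrary):

```shell
# Graph with the explicit depends_on block in place
terraform graph > with-depends-on.dot

# Remove the depends_on block from the module, then:
terraform graph > without-depends-on.dot

# If the graphs are equivalent, the explicit depends_on was redundant
diff with-depends-on.dot without-depends-on.dot
```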

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 15, 2022