
Allow EKS cluster creation when multiple AWS profiles are defined #13

Merged: 2 commits merged into main from aws-profiles-for-eks on Dec 6, 2024

Conversation

keyvaann (Collaborator)

Aiming to fix #5.
Also added the cluster prefix to the node groups so they are easier to distinguish in the EC2 instances list. Note: this will delete and recreate the node groups, which might cause slight disruptions.

@keyvaann requested a review from baixiac on December 11, 2023 15:21
@@ -11,7 +12,7 @@ provider "kubernetes" {

   exec {
     api_version = "client.authentication.k8s.io/v1beta1"
-    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name, "--region", var.AWS_REGION]
+    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name, "--region", var.AWS_REGION, "--profile", var.AWS_PROFILE]
baixiac (Member):
This means the profile will take precedence over the AWS_* environment variables that Terraform used to create the AWS resources. The two approaches could well be using different users/roles. Would it be possible to make the profile optional?

FYI, https://registry.terraform.io/providers/hashicorp/aws/latest/docs
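For illustration, one way to keep the profile optional (a sketch, assuming var.AWS_PROFILE is declared with default = null) is to append --profile only when a profile is actually set:

exec {
  api_version = "client.authentication.k8s.io/v1beta1"
  command     = "aws"
  # Append --profile only when var.AWS_PROFILE is set; otherwise fall back to
  # whatever credentials the AWS_* environment variables provide.
  args = concat(
    ["eks", "get-token", "--cluster-name", module.eks.cluster_name, "--region", var.AWS_REGION],
    var.AWS_PROFILE != null ? ["--profile", var.AWS_PROFILE] : []
  )
}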

baixiac (Member):

Further tested the PR today and found that the profile also takes precedence in provider "aws", so that's probably fine. But if TF_VAR_AWS_REGION is set to eu-west-1 while the default profile has its region set to eu-west-2, the following error occurs:

│ Error: Have got the following error while validating the existence of the ConfigMap "aws-auth": Unauthorized
│ 
│   with module.eks.kubernetes_config_map_v1_data.aws_auth[0],
│   on .terraform/modules/eks/main.tf line 553, in resource "kubernetes_config_map_v1_data" "aws_auth":
│  553: resource "kubernetes_config_map_v1_data" "aws_auth" {

This makes me think the configuration is better set via either env vars alone or a profile alone, but not both. Any thoughts?
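A hedged sketch of enforcing that rule (Terraform >= 1.5; the variable names here are assumptions and may not match the repo) could be a check block that rejects mixed configurations:

check "aws_credentials_source" {
  assert {
    # Exactly one credential source should be configured: a named profile
    # or static credentials, but not both.
    condition     = (var.AWS_PROFILE != null) != (var.AWS_ACCESS_KEY_ID != null)
    error_message = "Set either AWS_PROFILE or static AWS credentials, not both."
  }
}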

keyvaann (Collaborator, author):

I'm not sure what the best practice is for handling Terraform variables, but at the moment I'm using tfvars files to store the configuration, which also includes the AWS variables. As long as it's possible to specify a profile via tfvars files, the rest shouldn't be an issue and we can implement it differently.
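As a sketch, a tfvars file along these lines (the profile name is a placeholder) would keep the profile next to the other AWS settings:

# terraform.tfvars
AWS_REGION  = "eu-west-1"
AWS_PROFILE = "radar-base-dev"   # placeholder profile name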

baixiac (Member) commented on Dec 14, 2023:

Another finding is the limit on the length of name_prefix for a node group. If the cluster name is set to "dev-radar-base-cluster", which used to work, the following errors occur:

│ Error: expected length of name_prefix to be in the range (1 - 38), got dev-radar-base-cluster-dmz-eks-node-group-
│ 
│   with module.eks.module.eks_managed_node_group["dmz"].aws_iam_role.this[0],
│   on .terraform/modules/eks/modules/eks-managed-node-group/main.tf line 421, in resource "aws_iam_role" "this":
│  421:   name_prefix = var.iam_role_use_name_prefix ? "${local.iam_role_name}-" : null
│ 
│ Error: expected length of name_prefix to be in the range (1 - 38), got dev-radar-base-cluster-worker-eks-node-group-
│ 
│   with module.eks.module.eks_managed_node_group["worker"].aws_iam_role.this[0],
│   on .terraform/modules/eks/modules/eks-managed-node-group/main.tf line 421, in resource "aws_iam_role" "this":
│  421:   name_prefix = var.iam_role_use_name_prefix ? "${local.iam_role_name}-" : null

keyvaann (Collaborator, author):
I don't know where the eks-node-group part is coming from, since I couldn't find it in the code, but if we change it to eks-ng it should cover most names.

baixiac (Member) commented on Dec 15, 2023:

That part must come from the eks module, and longer names could violate the AWS rule on naming the node IAM role. I can see the "Security group name" column does contain the cluster name, although that's probably not ideal.
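Based on the eks-managed-node-group code quoted in the error above, one possible workaround (node group and role names here are illustrative, not the repo's actual values) is to turn off the generated prefix and give each node group a shorter explicit IAM role name:

eks_managed_node_groups = {
  worker = {
    # Skip the "<cluster>-worker-eks-node-group-" prefix that exceeds the
    # 38-character name_prefix limit and use a short explicit role name instead.
    iam_role_use_name_prefix = false
    iam_role_name            = "dev-radar-base-worker-ng"
  }
}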

keyvaann force-pushed the aws-profiles-for-eks branch from 7fd2490 to b4939d8 on December 4, 2024 20:11
keyvaann (Collaborator, author) commented on Dec 4, 2024:

Fixed the conflicts and things seem to be working now.
Had to make the access key and secret key optional so that Terraform reads the AWS profile.
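A minimal sketch of what "optional" could look like here (the exact variable names in the repo may differ):

variable "AWS_ACCESS_KEY_ID" {
  type      = string
  default   = null   # unset by default so the provider can fall back to the profile
  sensitive = true
}

variable "AWS_SECRET_ACCESS_KEY" {
  type      = string
  default   = null
  sensitive = true
}

provider "aws" {
  region     = var.AWS_REGION
  profile    = var.AWS_PROFILE
  access_key = var.AWS_ACCESS_KEY_ID
  secret_key = var.AWS_SECRET_ACCESS_KEY
}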

keyvaann (Collaborator, author) commented on Dec 6, 2024:

@baixiac this PR is also ready for review.

baixiac (Member) left a review comment:

LGTM

keyvaann merged commit 8fd4433 into main on Dec 6, 2024
2 checks passed
keyvaann deleted the aws-profiles-for-eks branch on December 6, 2024 17:04
Successfully merging this pull request may close these issues.

Getting credentials: exec: executable aws failed with exit code 255