From 398b8ba01696d45183fba4342d0dd6157fdcb78e Mon Sep 17 00:00:00 2001
From: Xia Zhao
Date: Mon, 28 Oct 2024 22:34:05 -0700
Subject: [PATCH 01/25] Initial RFC Draft WIP

---
 text/0605-eks-rewrite.md | 303 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 303 insertions(+)
 create mode 100644 text/0605-eks-rewrite.md

diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md
new file mode 100644
index 000000000..589c01267
--- /dev/null
+++ b/text/0605-eks-rewrite.md
@@ -0,0 +1,303 @@
# EKS L2 Re-write

* **Original Author(s)**: @xazhao
* **Tracking Issue**: [#605](https://github.com/aws/aws-cdk-rfcs/issues/605)
* **API Bar Raiser**:

eks-v2-alpha is a re-write of the existing aws-eks module. It uses native L1 CFN resources instead of custom resources to create the EKS cluster and Fargate Profile. This re-write provides a better customer experience, including faster deployment, less complexity and cost, and more features such as escape hatching and better VPC support. It also requires less maintenance compared to the existing module.

## Working Backwards
This construct library is a re-write of the `aws-eks` library. The new construct library uses the native L1 resource AWS::EKS::Cluster to provision an EKS cluster instead of relying on a custom resource as the old aws-eks library does.

This RFC focuses on the differences between the new module and the original EKS L2. Detailed use cases will be published in the README of the new eks-alpha-v2 module. Any feature of the existing EKS construct that is not covered in this RFC behaves the same as in the existing EKS construct.

## Architecture
```
 +-----------------------------------------------+
 | EKS Cluster      | kubectl |  |
 | -----------------|<--------+| Kubectl Handler |
 | AWS::EKS::Cluster                              |
 | +--------------------+ +-----------------+    |
 | |                    | |                 |    |
 | | Managed Node Group | | Fargate Profile |    |
 | |                    | |                 |    |
 | +--------------------+ +-----------------+    |
 +-----------------------------------------------+
      ^
      | connect self managed capacity
      +
 +--------------------+
 | Auto Scaling Group |
 +--------------------+
 ```

 In a nutshell:

* EKS Cluster - The cluster endpoint created by EKS.
* Managed Node Group - EC2 worker nodes managed by EKS.
* Fargate Profile - Fargate worker nodes managed by EKS.
* Auto Scaling Group - EC2 worker nodes managed by the user.
* Kubectl Handler - Lambda function for invoking kubectl commands on the cluster - created by CDK.

### Difference from original EKS L2

The ClusterHandler custom resource is removed in the new implementation because the native L1 resource AWS::EKS::Cluster is used to create the EKS cluster. Along with this resource change, the following properties are removed from the Cluster construct:

* clusterHandlerEnvironment
* clusterHandlerSecurityGroup
* onEventLayer

## Resource Provisioning
This change is not directly visible in the API or construct props; it is an implementation detail. Two custom resources will be replaced with native CFN L1 resources:

* `Custom::AWSCDK-EKS-Cluster` will be replaced with `AWS::EKS::Cluster`
* `Custom::AWSCDK-EKS-FargateProfile` will be replaced with `AWS::EKS::FargateProfile`

The resource type change will be reflected in the `cdk synth` output template.

## Authentication
`ConfigMap` authentication mode has been deprecated by EKS and the recommended mode is API. The new EKS L2 goes a step further and only supports the API authentication mode. All grant functions in EKS will use Access Entries to grant permissions to an IAM role/user.
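For illustration, the following is a hedged sketch of the kind of resource such a grant could synthesize under the hood. Only the L1 `CfnAccessEntry` (`AWS::EKS::AccessEntry`) and the documented `AmazonEKSClusterAdminPolicy` access policy are existing pieces; the surrounding names are assumptions and the final v2 API may differ.

```
import { aws_eks as eks, aws_iam as iam } from 'aws-cdk-lib';

declare const clusterName: string;  // assumption: name of the cluster being granted against
declare const adminRole: iam.IRole; // assumption: the role that should receive cluster admin

// One access entry per principal, associating it with an EKS access policy
// scoped to the whole cluster.
new eks.CfnAccessEntry(this, 'AdminAccessEntry', {
  clusterName,
  principalArn: adminRole.roleArn,
  accessPolicies: [{
    policyArn: 'arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy',
    accessScope: { type: 'cluster' },
  }],
});
```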
`AwsAuth` was a construct developed to manage mappings between IAM users/roles and the Kubernetes RBAC configuration through the ConfigMap. It is exposed through the `awsAuth` attribute of the cluster construct. With the deprecation of the `ConfigMap` mode, the AwsAuth construct and the attribute are removed in the new EKS module.

`grant()` functions are introduced to replace `awsAuth`. They are implemented using Access Entries.

### Difference from original EKS L2
Before, using awsAuth
```
cluster.awsAuth.addMastersRole(role);
```
After, using Access Entry
```
cluster.grantAdmin('adminAccess', roleArn, eks.AccessScopeType.CLUSTER);

// A general grant function is also provided
cluster.grantAccess('adminAccess', roleArn, [
  eks.AccessPolicy.fromAccessPolicyName('AmazonEKSClusterAdminPolicy', {
    accessScopeType: eks.AccessScopeType.CLUSTER,
  }),
]);
```
## Cluster Configuration

### Logging Configuration
The logging property is renamed from `clusterLogging` to `logging` since there is only one logging property in the construct.

Before
```
const cluster = new eks.Cluster(this, 'Cluster', {
  // ...
  version: eks.KubernetesVersion.V1_31,
  clusterLogging: [
    eks.ClusterLoggingTypes.API,
    eks.ClusterLoggingTypes.AUTHENTICATOR,
  ],
});
```
After
```
const cluster = new eks.Cluster(this, 'Cluster', {
  version: eks.KubernetesVersion.V1_31,
  logging: [
    eks.ClusterLoggingTypes.API,
    eks.ClusterLoggingTypes.AUTHENTICATOR,
  ],
});
```

### Output Configuration
A new property `outputInfo` will replace the current 3 output properties. Although the 3 separate output properties allow fine-grained customization of the output configuration, they increase cognitive load without providing much value. The proposal is to have a single flag that controls all of them.

Before
```
const cluster = new eks.Cluster(this, 'Cluster', {
  version: eks.KubernetesVersion.V1_31,
  outputMastersRoleArn: true,
  outputClusterName: true,
  outputConfigCommand: true,
});
```
After
```
const cluster = new eks.Cluster(this, 'Cluster', {
  version: eks.KubernetesVersion.V1_31,
  outputInfo: true,
});
```

### Kubectl Handler Configuration
KubectlProvider is a Lambda function that CDK deploys alongside the EKS cluster in order to execute kubectl commands against the cluster.

A common scenario is that users create a CDK app that deploys the EKS cluster, which is then imported in other apps in order to deploy resources onto the cluster.

Difference

Before
```
const kubectlProvider = eks.KubectlProvider.fromKubectlProviderAttributes(this, 'KubectlProvider', {
  functionArn,
  kubectlRoleArn: 'arn:aws:iam::123456789012:role/kubectl-role',
  handlerRole,
});
```
After
```
const kubectlProvider = eks.KubectlProvider.fromKubectlProviderArn(this, 'KubectlProvider', {
  functionArn, // Required. ARN of the original kubectl function
});
```
The following parameters are removed:

* kubectlRoleArn
* handlerRole

`fromKubectlProviderAttributes()` is renamed to `fromKubectlProviderArn()`.

Reason: when the KubectlProvider was created in another stack, its Lambda execution role already has permissions to access the cluster.

Besides that, KubectlProvider-specific properties are moved into KubectlProviderOptions to better organize the props.
+``` +export interface KubectlProviderOptions { + readonly role?: iam.IRole; + readonly awscliLayer?: lambda.ILayerVersion; + readonly kubectlLayer?: lambda.ILayerVersion; + readonly memory?: Size; + readonly environment?: { [key: string]: string }; + /** + * Wich subnets should the provider functions be placed in. + */ + readonly vpcSubnets?: ec2.SubnetSelection; +} +``` + +Before +``` +new eks.Cluster(this, 'MyCluster', { + version: eks.KubernetesVersion.V1_31, + kubectlMemory: Size.gibibytes(4), + kubectlLayer: new KubectlV31Layer(this, 'kubectl'), + kubectlEnvironment: { + 'http_proxy': 'http://proxy.myproxy.com', + }, + kubectlRole: iam.Role.fromRoleArn(this, 'MyRole', 'arn:aws:iam::123456789012:role/lambda-role'); +}); +``` +After +``` +new eks.Cluster(this, 'MyCluster', { + version: eks.KubernetesVersion.V1_31, + kubectlProviderOptions: { + memory: Size.gibibytes(4), + kubectlLayer: new KubectlV31Layer(this, 'kubectl'), + environment: { + 'http_proxy': 'http://proxy.myproxy.com', + }, + role: iam.Role.fromRoleArn(this, 'MyRole', 'arn:aws:iam::123456789012:role/lambda-role'); +}); +``` + +## Migration Path +Note: We can't guarantee it's a safe migration. + +Due to the fact that switching from a custom resource (Custom::AWSCDK-EKS-Cluster) to a native L1 (AWS::EKS::Cluster) resource requires cluster replacement, CDK users who need to preserve their cluster will have to take additional actions. + +1. Set the authentication mode of cluster from `AuthenticationMode.CONFIG_MAP` to `AuthenticationMode.API_AND_CONFIG_MAP` and deploy +2. Set the authentication mode of cluster from `AuthenticationMode.API_AND_CONFIG_MAP` to `AuthenticationMode.API` and deploy +3. Set removal policy to RETAIN on the existing cluster (and manifests) and deploy. +4. Remove cluster definition from their CDK app and deploy +5. Add new cluster definition using the new constructs(EKSV2). +6. Follow cdk import to import the existing cluster as the new definition. + 1. All relevant EKS resources support import. + 2. AWS::EKS::Cluster + 3. AWS::EKS::FargateProfile + 4. AWS::EKS::Nodegroup +7. Add Manifests. + +## Public FAQ + +### What are we launching today? + +We’re launching a new EKS module aws-eksv2-alpha. It's a rewrite of existing `aws-eks` module using native CFN L1 cluster resource instead of custom resource. + +### Why should I use this feature? + +The new EKS module provides faster deployment, less complexity, less cost and more features (e.g. isolated VPC and escape hatching). + +## Internal FAQ + +### Why are we doing this? +This feature has been highly requested by the community since Feb 2023. The current implementation using custom resource has some limitations and is harder to maintain. EKS L2 is a widely used module and we should rewrite it. + +### Why should we _not_ do this? +The migration for customers is not easy and we can't guarantee it's a safe migration without down time. + +### Is this a breaking change? + +Yes it's breaking change hence it's put into a new alpha module. A few other breaking changes are shipped together to make it more ergonomic and aligned with the new cluster implementation. + +### What is the high-level project plan? 
+ +- [ ] Create prototype for design +- [ ] Gather feedback on the RFC +- [ ] Get bar raiser to sign off on RFC +- [ ] Implement the construct in a separate repository +- [ ] Make pull request to aws-cdk repository +- [ ] Iterate and respond to PR feedback +- [ ] Merge new construct and related changes + +### Are there any open issues that need to be addressed later? + +TBD + +## Appendix +#### EKS Cluster Props Difference +Same props + +``` +readonly version: KubernetesVersion; +readonly vpc: ec2.IVpc; +readonly vpcSubnets: ec2.SubnetSelection[]; +readonly albController?: AlbController; +readonly clusterName: string; +readonly coreDnsComputeType?: CoreDnsComputeType; +readonly defaultCapacity?: autoscaling.AutoScalingGroup; +readonly defaultCapacityInstance?: ec2.InstanceType; +readonly defaultCapacityType?: DefaultCapacityType; +readonly endpointAccess: EndpointAccess; +readonly ipFamily?: IpFamily; +readonly prune?: boolean; +readonly role?: iam.IRole; +readonly secretsEncryptionKey?: kms.IKey; +readonly securityGroup?: ec2.ISecurityGroup; +readonly serviceIpv4Cidr?: string; +readonly tags?: { [key: string]: string }; +readonly mastersRole?: iam.IRole; +readonly bootstrapClusterCreatorAdminPermissions?: boolean; +``` +Props only in old EKS +``` +readonly clusterLogging?: ClusterLoggingTypes[]; + +readonly awscliLayer?: lambda.ILayerVersion; +readonly kubectlEnvironment?: { [key: string]: string }; +readonly kubectlLambdaRole?: iam.IRole; +readonly kubectlLayer?: lambda.ILayerVersion; +readonly kubectlMemory?: Size; + +readonly outputMastersRoleArn?: boolean; +readonly outputClusterName?: boolean; +readonly outputConfigCommand?: boolean; + +readonly authenticationMode?: AuthenticationMode; +readonly clusterHandlerEnvironment?: { [key: string]: string }; +readonly clusterHandlerSecurityGroup?: ec2.ISecurityGroup; +readonly onEventLayer?: lambda.ILayerVersion; +readonly clusterHandlerSecurityGroup?: ec2.ISecurityGroup; +``` +Props only in new EKS +``` +readonly logging?: ClusterLoggingTypes[]; +readonly kubectlProviderOptions?: KubectlProviderOptions; +readonly outputInfo?: boolean; +``` From 5c672f8476faff23be443564623424792abf466e Mon Sep 17 00:00:00 2001 From: Xia Zhao Date: Wed, 30 Oct 2024 00:02:44 -0700 Subject: [PATCH 02/25] adding support for isolated VPC --- text/0605-eks-rewrite.md | 61 ++++++++++++++++++++++++++++++++++++---- 1 file changed, 55 insertions(+), 6 deletions(-) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index 589c01267..44e90015d 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -81,6 +81,51 @@ cluster.grantAccess('adminAccess', roleArn, [ ``` ## Cluster Configuration +### New Feat: Create EKS Cluster in an isolated VPC +To create a EKS Cluster in an isolated VPC, vpc endpoints need to be set for different AWS services (EC2, S3, STS, ECR and anything the service needs). 
+``` +const vpc = new ec2.Vpc(this, 'vpc', { + subnetConfiguration: [ + { + cidrMask: 24, + name: 'Private', + subnetType: ec2.SubnetType.PRIVATE_ISOLATED, + }, + ], + gatewayEndpoints: { + S3: { + service: ec2.GatewayVpcEndpointAwsService.S3, + }, + }, +}); +vpc.addInterfaceEndpoint('stsEndpoint', { + service: ec2.InterfaceVpcEndpointAwsService.STS, + open: true, +}); + +vpc.addInterfaceEndpoint('ec2Endpoint', { + service: ec2.InterfaceVpcEndpointAwsService.EC2, + open: true, +}); + +vpc.addInterfaceEndpoint('ecrEndpoint', { + service: ec2.InterfaceVpcEndpointAwsService.ECR, + open: true, +}); + +vpc.addInterfaceEndpoint('ecrDockerEndpoint', { + service: ec2.InterfaceVpcEndpointAwsService.ECR_DOCKER, + open: true, +}); + +const cluster = new eks.Cluster(this, 'MyMycluster123', { + version: eks.KubernetesVersion.V1_31, + authenticationMode: eks.AuthenticationMode.API, + vpc, + vpcSubnets: [{ subnetType: ec2.SubnetType.PRIVATE_ISOLATED }] +}); +``` + ### Logging Configuration Logging property is renamed from clusterLogging to logging since there is only one logging property in the construct. @@ -218,11 +263,16 @@ Due to the fact that switching from a custom resource (Custom::AWSCDK-EKS-Cluste ### What are we launching today? -We’re launching a new EKS module aws-eksv2-alpha. It's a rewrite of existing `aws-eks` module using native CFN L1 cluster resource instead of custom resource. +We’re launching a new EKS module `aws-eksv2-alpha`. It's a rewrite of existing `aws-eks` module with some breaking changes based on community feedbacks. ### Why should I use this feature? -The new EKS module provides faster deployment, less complexity, less cost and more features (e.g. isolated VPC and escape hatching). +The new EKS module provides faster deployment, less complexity, less cost and more features (e.g. isolated VPC and escape hatching). + +### What's the future plan for existing `aws-eks` module? + +- When the new alpha module is published, `aws-eks` module will enter `maintainence` mode which means we will only work on bugs on `aws-eks` module. New features will only be added to the new `aws-eksv2-alpha` module. (Note: this is the general guideline and we might be flexible on this) +- When the new alpha module is stablized, `aws-eks` module will enter `deprecation` mode which means customers should migrate to the new module. They can till use the old module but we will not invest on features/bug fixes on it. ## Internal FAQ @@ -237,14 +287,13 @@ The migration for customers is not easy and we can't guarantee it's a safe migra Yes it's breaking change hence it's put into a new alpha module. A few other breaking changes are shipped together to make it more ergonomic and aligned with the new cluster implementation. ### What is the high-level project plan? - -- [ ] Create prototype for design +- [ ] Publish the RFC - [ ] Gather feedback on the RFC - [ ] Get bar raiser to sign off on RFC -- [ ] Implement the construct in a separate repository +- [ ] Create the new eksv2 alpha module and implementation - [ ] Make pull request to aws-cdk repository - [ ] Iterate and respond to PR feedback -- [ ] Merge new construct and related changes +- [ ] Merge new module ### Are there any open issues that need to be addressed later? 
From 43d95eed956472fca00c0d1224d585f1918977bc Mon Sep 17 00:00:00 2001 From: Xia Zhao Date: Tue, 5 Nov 2024 23:22:32 -0800 Subject: [PATCH 03/25] format and adding quick start --- text/0605-eks-rewrite.md | 258 +++++++++++++++++++++++++++++---------- 1 file changed, 191 insertions(+), 67 deletions(-) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index 44e90015d..b00124c39 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -1,17 +1,49 @@ # EKS L2 Re-write -* **Original Author(s)**: @xazhao -* **Tracking Issue**: [#605](https://github.com/aws/aws-cdk-rfcs/issues/605) -* **API Bar Raiser**: - -eks-v2-alpha is a re-write of existing aws-eks module. It uses native L1 CFN resource instead of custom resource to create EKS cluster and Fargate Profile. This re-write provides a better customer experience including faster deployment, less complexity/cost and more features like escape hatching and better VPC support. It also require less maintenance compare to the existing module. +- **Original Author(s)**: @xazhao +- **Tracking Issue**: [#605](https://github.com/aws/aws-cdk-rfcs/issues/605) +- **API Bar Raiser**: + +The `eks-v2-alpha` module is a rewrite of the existing `aws-eks` module. This +new iteration leverages native L1 CFN resources, replacing the previous custom +resource approach for creating EKS clusters and Fargate Profiles. Beyond the +resource types change, the module introduces several other breaking changes +designed to better align the L2 construct with its updated implementation. These +modifications collectively enhance the module's functionality and integration +within the AWS ecosystem, providing a more robust and streamlined solution for +managing Elastic Kubernetes Service (EKS) resources. ## Working Backwards -This construct library is a re-write of `aws-eks` library. The new construct library uses the native L1 resource AWS::EKS::Cluster to provision a EKS cluster instead of relying on custom resource in old aws-eks library. -This RFC is focus on difference between the new module and original EKS L2. Detailed use case will be published in the README of new eks-alpha-v2 module. If a feature of existing EKS construct is not included in this RFC, it will be same as existing EKS construct. +This RFC primarily addresses the distinctions between the new module and the +original EKS L2 construct. Comprehensive use cases and examples will be +available in the README file of the forthcoming `eks-alpha-v2` module. + +It's important to note that any features of the existing EKS construct not +explicitly mentioned in this RFC will remain unchanged and function as they do +in the current implementation. This approach ensures clarity on the +modifications while maintaining continuity for unaffected features. + +## Quick start + +Here is the minimal example of defining an AWS EKS cluster + +``` +import { KubectlV31Layer } from '@aws-cdk/lambda-layer-kubectl-v31'; + +// provisioning a cluster +const cluster = new eksv2.Cluster(this, 'hello-eks', { + version: eks.KubernetesVersion.V1_31, + kubectlLayer: new KubectlV31Layer(this, 'kubectl'), +}); +``` + +Note: Compared to the previous L2, `kubectlLayer` property is required now The +reason is if we set a default version, that version will be outdated one day and +updating default version at that time will be a breaking change. 
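To make the quick start concrete, here is a hedged sketch of deploying a Kubernetes manifest onto that cluster. It assumes the v2 module keeps the `addManifest()` method of the current `aws-eks` construct; the manifest content is only an example.

```
declare const cluster: eks.Cluster;

// Applies the manifest through the cluster's kubectl handler
// (method assumed to carry over unchanged from aws-eks).
cluster.addManifest('hello-namespace', {
  apiVersion: 'v1',
  kind: 'Namespace',
  metadata: { name: 'hello' },
});
```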
## Architecture + ``` +-----------------------------------------------+ | EKS Cluster | kubectl | | @@ -21,54 +53,73 @@ This RFC is focus on difference between the new module and original EKS L2. Deta | | | | | | | | Managed Node Group | | Fargate Profile | | | | | | | | - | +--------------------+ +-----------------+ | - +-----------------------------------------------+ - ^ - | connect self managed capacity - + - +--------------------+ - | Auto Scaling Group | + | +--------------------+ +-----------------+ | + +-----------------------------------------------+ + ^ + | connect self managed capacity + + +--------------------+ - ``` + | Auto Scaling Group | + +--------------------+ +``` - In a nutshell: +In a nutshell: -* EKS Cluster - The cluster endpoint created by EKS. -* Managed Node Group - EC2 worker nodes managed by EKS. -* Fargate Profile - Fargate worker nodes managed by EKS. -* Auto Scaling Group - EC2 worker nodes managed by the user. -* Kubectl Handler - Lambda function for invoking kubectl commands on the cluster - created by CDK +- EKS Cluster - The cluster endpoint created by EKS. +- Managed Node Group - EC2 worker nodes managed by EKS. +- Fargate Profile - Fargate worker nodes managed by EKS. +- Auto Scaling Group - EC2 worker nodes managed by the user. +- Kubectl Handler - Lambda function for invoking kubectl commands on the + cluster - created by CDK ### Difference from original EKS L2 -ClusterHandler is removed in the new implementation because it uses native L1 resource AWS::EKS::Cluster to create the EKS cluster resource. Along with resource change, following properties on Cluster construct are removed: +ClusterHandler is removed in the new implementation because it uses native L1 +resource AWS::EKS::Cluster to create the EKS cluster resource. Along with +resource change, following properties on Cluster construct are removed: -* clusterHandlerEnvironment -* clusterHandlerSecurityGroup -* clusterHandlerSecurityGroup -* onEventLayer +- clusterHandlerEnvironment +- clusterHandlerSecurityGroup +- clusterHandlerSecurityGroup +- onEventLayer ## Resource Provisioning -This change is not directly visible in API or construct props, but in implementation details. Two custom resources will be replaced with native CFN L1 resources: -* `Custom::AWSCDK-EKS-Cluster` will be replaced with `AWS::EKS::Cluster` -* `Custom::AWSCDK-EKS-FargateProfile` will be replaced with `AWS::EKS::FargateProfile` +This change is not directly visible in API or construct props, but in +implementation details. Two custom resources will be replaced with native CFN L1 +resources: + +- `Custom::AWSCDK-EKS-Cluster` will be replaced with `AWS::EKS::Cluster` +- `Custom::AWSCDK-EKS-FargateProfile` will be replaced with + `AWS::EKS::FargateProfile` The resource type change will be reflected in cdk synth output template. ## Authentication -`ConfigMap` authentication mode has been deprecated by EKS and the recommend mode is API. The new EKS L2 will go a step further and only support API authentication mode. All grant functions in EKS will use Access Entry to grant permissions to an IAM role/user. -`AwsAuth` construct was developed to manage mappings between IAM users/roles to Kubernetes RBAC configuration through ConfigMap. It’s exposed with awsAuth attribute of cluster construct. With the deprecation of `ConfigMap` mode, AwsAuth construct and the attribute are removed in the new EKS module. +`ConfigMap` authentication mode has been deprecated by EKS and the recommend +mode is API. 
The new EKS L2 will go a step further and only support API +authentication mode. All grant functions in EKS will use Access Entry to grant +permissions to an IAM role/user. + +`AwsAuth` construct was developed to manage mappings between IAM users/roles to +Kubernetes RBAC configuration through ConfigMap. It’s exposed with awsAuth +attribute of cluster construct. With the deprecation of `ConfigMap` mode, +AwsAuth construct and the attribute are removed in the new EKS module. -`grant()` function are introduced to replace the awsAuth. It’s implemented using Access Entry. +`grant()` function are introduced to replace the awsAuth. It’s implemented using +Access Entry. ### Difference from original EKS L2 + Before using awsAuth + ``` cluster.awsAuth.addMastersRole(role); ``` + After using Access Entry + ``` cluster.grantAdmin('adminAccess', roleArn, eks.AccessScopeType.CLUSTER); @@ -79,10 +130,14 @@ cluster.grantAccess('adminAccess', roleArn, [ }), ]); ``` + ## Cluster Configuration ### New Feat: Create EKS Cluster in an isolated VPC -To create a EKS Cluster in an isolated VPC, vpc endpoints need to be set for different AWS services (EC2, S3, STS, ECR and anything the service needs). + +To create a EKS Cluster in an isolated VPC, vpc endpoints need to be set for +different AWS services (EC2, S3, STS, ECR and anything the service needs). + ``` const vpc = new ec2.Vpc(this, 'vpc', { subnetConfiguration: [ @@ -127,9 +182,12 @@ const cluster = new eks.Cluster(this, 'MyMycluster123', { ``` ### Logging Configuration -Logging property is renamed from clusterLogging to logging since there is only one logging property in the construct. + +Logging property is renamed from clusterLogging to logging since there is only +one logging property in the construct. Before + ``` const cluster = new eks.Cluster(this, 'Cluster', { // ... @@ -140,7 +198,9 @@ const cluster = new eks.Cluster(this, 'Cluster', { ], }); ``` + After + ``` const cluster = new eks.Cluster(this, 'Cluster', { version: eks.KubernetesVersion.V1_31, @@ -152,18 +212,25 @@ const cluster = new eks.Cluster(this, 'Cluster', { ``` ### Output Configuration -A new property `outputInfo` will replace the current 3 output properties. Although 3 separate output properties provide customization on output configuration, it increased the cognitive load and doesn’t provide a clean usage. The proposal here is to have one single flag to control all of them. + +A new property `outputInfo` will replace the current 3 output properties. +Although 3 separate output properties provide customization on output +configuration, it increased the cognitive load and doesn’t provide a clean +usage. The proposal here is to have one single flag to control all of them. Before + ``` const cluster = new eks.Cluster(this, 'Cluster', { version: eks.KubernetesVersion.V1_31, - outputMastersRoleArn: true, + outputMastersRoleArn: true, outputClusterName: true, outputConfigCommand: true, }); ``` + After + ``` const cluster = new eks.Cluster(this, 'Cluster', { version: eks.KubernetesVersion.V1_31, @@ -172,13 +239,18 @@ const cluster = new eks.Cluster(this, 'Cluster', { ``` ### Kubectl Handler Configuration -KubectlProvider is a lambda function that CDK deploys alongside the EKS cluster in order to execute kubectl commands against the cluster. -A common scenarios is that users create a CDK app that deploys the EKS cluster, which is then imported in other apps in order to deploy resources onto the cluster. 
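As a hedged sketch of the producing side of that scenario, the app that owns the cluster can export the values a consuming app needs. Only `CfnOutput` is existing CDK API here; how the v2 cluster exposes its kubectl handler function is an assumption.

```
import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';

declare const cluster: eks.Cluster;
declare const kubectlHandler: lambda.IFunction; // assumption: the kubectl handler, however it is surfaced

// Export what other CDK apps need in order to import this cluster.
new cdk.CfnOutput(this, 'ClusterNameOutput', { value: cluster.clusterName });
new cdk.CfnOutput(this, 'KubectlHandlerArnOutput', { value: kubectlHandler.functionArn });
```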
+KubectlProvider is a lambda function that CDK deploys alongside the EKS cluster +in order to execute kubectl commands against the cluster. + +A common scenarios is that users create a CDK app that deploys the EKS cluster, +which is then imported in other apps in order to deploy resources onto the +cluster. Difference Before + ``` const kubectlProvider = eks.KubectlProvider.fromKubectlProviderAttributes(this, 'KubectlProvider', { functionArn, @@ -186,22 +258,31 @@ const kubectlProvider = eks.KubectlProvider.fromKubectlProviderAttributes(this, handlerRole, }); ``` + After + ``` const kubectlProvider = eks.KubectlProvider.fromKubectlProviderArn(this, 'KubectlProvider', { functionArn: // Required. ARN of the original kubectl function }); ``` + Following parameters are removed: -* kubectlRoleArn -* handlerRole +- kubectlRoleArn +- handlerRole + +`fromKubectlProviderAttributes()` is renamed to +`fromKubectlProviderFunctionArn()`. + +Reason: when the KubectlProvider was created in another stack, the lambda +execution role already has permissions to access the cluster. - `fromKubectlProviderAttributes()` is renamed to `fromKubectlProviderFunctionArn()`. +Besides that, optional KubectlProvider specific properties are moved into +KubectlProviderOptions to better organize properties. -Reason: when the KubectlProvider was created in another stack, the lambda execution role already has permissions to access the cluster. +KubectlProviderOptions Definition -Besides that, KubectlProvider specific properties are moved into KubectlProviderOptions to better organize properties. ``` export interface KubectlProviderOptions { readonly role?: iam.IRole; @@ -210,83 +291,119 @@ export interface KubectlProviderOptions { readonly memory?: Size; readonly environment?: { [key: string]: string }; /** - * Wich subnets should the provider functions be placed in. + * Which subnets should the provider functions be placed in. */ readonly vpcSubnets?: ec2.SubnetSelection; } ``` -Before +Usage - Before + ``` new eks.Cluster(this, 'MyCluster', { version: eks.KubernetesVersion.V1_31, - kubectlMemory: Size.gibibytes(4), + kubectlMemory: Size.gigabytes(4), kubectlLayer: new KubectlV31Layer(this, 'kubectl'), kubectlEnvironment: { 'http_proxy': 'http://proxy.myproxy.com', }, - kubectlRole: iam.Role.fromRoleArn(this, 'MyRole', 'arn:aws:iam::123456789012:role/lambda-role'); + kubectlRole: iam.Role.fromRoleArn( + this, + 'MyRole', + 'arn:aws:iam::123456789012:role/lambda-role' + ); }); ``` -After + +Usage - After + ``` new eks.Cluster(this, 'MyCluster', { version: eks.KubernetesVersion.V1_31, + kubectlLayer: new KubectlV31Layer(this, 'kubectl'), kubectlProviderOptions: { - memory: Size.gibibytes(4), - kubectlLayer: new KubectlV31Layer(this, 'kubectl'), + memory: Size.gigabytes(4), environment: { 'http_proxy': 'http://proxy.myproxy.com', }, - role: iam.Role.fromRoleArn(this, 'MyRole', 'arn:aws:iam::123456789012:role/lambda-role'); + role: iam.Role.fromRoleArn( + this, + 'MyRole', + 'arn:aws:iam::123456789012:role/lambda-role' + ); }); ``` ## Migration Path -Note: We can't guarantee it's a safe migration. -Due to the fact that switching from a custom resource (Custom::AWSCDK-EKS-Cluster) to a native L1 (AWS::EKS::Cluster) resource requires cluster replacement, CDK users who need to preserve their cluster will have to take additional actions. +Note: We can't guarantee it's a safe migration. -1. Set the authentication mode of cluster from `AuthenticationMode.CONFIG_MAP` to `AuthenticationMode.API_AND_CONFIG_MAP` and deploy -2. 
Set the authentication mode of cluster from `AuthenticationMode.API_AND_CONFIG_MAP` to `AuthenticationMode.API` and deploy -3. Set removal policy to RETAIN on the existing cluster (and manifests) and deploy. +Due to the fact that switching from a custom resource +(Custom::AWSCDK-EKS-Cluster) to a native L1 (AWS::EKS::Cluster) resource +requires cluster replacement, CDK users who need to preserve their cluster will +have to take additional actions. + +1. Set the authentication mode of cluster from `AuthenticationMode.CONFIG_MAP` + to `AuthenticationMode.API_AND_CONFIG_MAP` and deploy +2. Set the authentication mode of cluster from + `AuthenticationMode.API_AND_CONFIG_MAP` to `AuthenticationMode.API` and + deploy +3. Set removal policy to RETAIN on the existing cluster (and manifests) and + deploy. 4. Remove cluster definition from their CDK app and deploy 5. Add new cluster definition using the new constructs(EKSV2). 6. Follow cdk import to import the existing cluster as the new definition. - 1. All relevant EKS resources support import. - 2. AWS::EKS::Cluster - 3. AWS::EKS::FargateProfile - 4. AWS::EKS::Nodegroup + 1. All relevant EKS resources support import. + 2. AWS::EKS::Cluster + 3. AWS::EKS::FargateProfile + 4. AWS::EKS::Nodegroup 7. Add Manifests. ## Public FAQ ### What are we launching today? -We’re launching a new EKS module `aws-eksv2-alpha`. It's a rewrite of existing `aws-eks` module with some breaking changes based on community feedbacks. +We’re launching a new EKS module `aws-eksv2-alpha`. It's a rewrite of existing +`aws-eks` module with some breaking changes based on community feedbacks. ### Why should I use this feature? -The new EKS module provides faster deployment, less complexity, less cost and more features (e.g. isolated VPC and escape hatching). +The new EKS module provides faster deployment, less complexity, less cost and +more features (e.g. isolated VPC and escape hatching). ### What's the future plan for existing `aws-eks` module? -- When the new alpha module is published, `aws-eks` module will enter `maintainence` mode which means we will only work on bugs on `aws-eks` module. New features will only be added to the new `aws-eksv2-alpha` module. (Note: this is the general guideline and we might be flexible on this) -- When the new alpha module is stablized, `aws-eks` module will enter `deprecation` mode which means customers should migrate to the new module. They can till use the old module but we will not invest on features/bug fixes on it. +- When the new alpha module is published, `aws-eks` module will enter + `maintenance` mode which means we will only work on bugs on `aws-eks` module. + New features will only be added to the new `aws-eksv2-alpha` module. (Note: + this is the general guideline and we might be flexible on this) +- When the new alpha module is stabilized, `aws-eks` module will transition into + a deprecation phase. This implies that customers should plan to migrate their + workloads to the new module. While they can continue using the old module for + the time being, CDK team will prioritize new features/bug fix on the new + module ## Internal FAQ ### Why are we doing this? -This feature has been highly requested by the community since Feb 2023. The current implementation using custom resource has some limitations and is harder to maintain. EKS L2 is a widely used module and we should rewrite it. + +This feature has been highly requested by the community since Feb 2023. 
The +current implementation using custom resource has some limitations and is harder +to maintain. EKS L2 is a widely used module and we should rewrite it. ### Why should we _not_ do this? -The migration for customers is not easy and we can't guarantee it's a safe migration without down time. + +The migration for customers is not easy and we can't guarantee it's a safe +migration without down time. ### Is this a breaking change? -Yes it's breaking change hence it's put into a new alpha module. A few other breaking changes are shipped together to make it more ergonomic and aligned with the new cluster implementation. +Yes it's breaking change hence it's put into a new alpha module. A few other +breaking changes are shipped together to make it more ergonomic and aligned with +the new cluster implementation. ### What is the high-level project plan? + - [ ] Publish the RFC - [ ] Gather feedback on the RFC - [ ] Get bar raiser to sign off on RFC @@ -300,7 +417,9 @@ Yes it's breaking change hence it's put into a new alpha module. A few other bre TBD ## Appendix + #### EKS Cluster Props Difference + Same props ``` @@ -324,7 +443,9 @@ readonly tags?: { [key: string]: string }; readonly mastersRole?: iam.IRole; readonly bootstrapClusterCreatorAdminPermissions?: boolean; ``` + Props only in old EKS + ``` readonly clusterLogging?: ClusterLoggingTypes[]; @@ -344,9 +465,12 @@ readonly clusterHandlerSecurityGroup?: ec2.ISecurityGroup; readonly onEventLayer?: lambda.ILayerVersion; readonly clusterHandlerSecurityGroup?: ec2.ISecurityGroup; ``` + Props only in new EKS + ``` readonly logging?: ClusterLoggingTypes[]; +readonly kubectlLayer: lambda.ILayerVersion; readonly kubectlProviderOptions?: KubectlProviderOptions; readonly outputInfo?: boolean; ``` From 51823a3b154a87651dff6b3f5454e029bb4b9eeb Mon Sep 17 00:00:00 2001 From: Xia Zhao Date: Tue, 5 Nov 2024 23:25:20 -0800 Subject: [PATCH 04/25] update heading size --- text/0605-eks-rewrite.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index b00124c39..3086d528c 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -418,7 +418,7 @@ TBD ## Appendix -#### EKS Cluster Props Difference +### EKS Cluster Props Difference Same props From f45edee854d81c75f660c808225ce4f6d7e2e505 Mon Sep 17 00:00:00 2001 From: Xia Zhao Date: Wed, 13 Nov 2024 00:15:59 -0800 Subject: [PATCH 05/25] address some feedbacks --- text/0605-eks-rewrite.md | 198 +++++++++++---------------------------- 1 file changed, 57 insertions(+), 141 deletions(-) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index 3086d528c..d4185cda5 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -34,7 +34,6 @@ import { KubectlV31Layer } from '@aws-cdk/lambda-layer-kubectl-v31'; // provisioning a cluster const cluster = new eksv2.Cluster(this, 'hello-eks', { version: eks.KubernetesVersion.V1_31, - kubectlLayer: new KubectlV31Layer(this, 'kubectl'), }); ``` @@ -74,11 +73,21 @@ In a nutshell: ### Difference from original EKS L2 -ClusterHandler is removed in the new implementation because it uses native L1 -resource AWS::EKS::Cluster to create the EKS cluster resource. Along with +1. `Kubectl Handler` will only be created when you pass in `kubectlProviderOptions` property. By default, it will not create the custom resource. 
+``` +const cluster = new eks.Cluster(this, 'hello-eks', { + version: eks.KubernetesVersion.V1_31, + kubectlProviderOptions: { + kubectlLayer: new KubectlV31Layer(this, 'kubectl'), + } +}); +``` +2. ClusterHandler is removed in the new implementation because it uses native L1 +resource `AWS::EKS::Cluster` to create the EKS cluster resource. + 3. Along with resource change, following properties on Cluster construct are removed: -- clusterHandlerEnvironment +- clusterHandlerEnvironment - clusterHandlerSecurityGroup - clusterHandlerSecurityGroup - onEventLayer @@ -136,7 +145,7 @@ cluster.grantAccess('adminAccess', roleArn, [ ### New Feat: Create EKS Cluster in an isolated VPC To create a EKS Cluster in an isolated VPC, vpc endpoints need to be set for -different AWS services (EC2, S3, STS, ECR and anything the service needs). +different AWS services (EC2, S3, STS, ECR and anything the service needs). See https://docs.aws.amazon.com/eks/latest/userguide/private-clusters.html for more details. ``` const vpc = new ec2.Vpc(this, 'vpc', { @@ -181,64 +190,7 @@ const cluster = new eks.Cluster(this, 'MyMycluster123', { }); ``` -### Logging Configuration - -Logging property is renamed from clusterLogging to logging since there is only -one logging property in the construct. - -Before - -``` -const cluster = new eks.Cluster(this, 'Cluster', { - // ... - version: eks.KubernetesVersion.V1_31, - clusterLogging: [ - eks.ClusterLoggingTypes.API, - eks.ClusterLoggingTypes.AUTHENTICATOR, - ], -}); -``` - -After - -``` -const cluster = new eks.Cluster(this, 'Cluster', { - version: eks.KubernetesVersion.V1_31, - logging: [ - eks.ClusterLoggingTypes.API, - eks.ClusterLoggingTypes.AUTHENTICATOR, - ], -}); -``` - -### Output Configuration - -A new property `outputInfo` will replace the current 3 output properties. -Although 3 separate output properties provide customization on output -configuration, it increased the cognitive load and doesn’t provide a clean -usage. The proposal here is to have one single flag to control all of them. - -Before - -``` -const cluster = new eks.Cluster(this, 'Cluster', { - version: eks.KubernetesVersion.V1_31, - outputMastersRoleArn: true, - outputClusterName: true, - outputConfigCommand: true, -}); -``` - -After - -``` -const cluster = new eks.Cluster(this, 'Cluster', { - version: eks.KubernetesVersion.V1_31, - outputInfo: true, -}); -``` - -### Kubectl Handler Configuration +### Use existing Cluster/Kubectl Handler KubectlProvider is a lambda function that CDK deploys alongside the EKS cluster in order to execute kubectl commands against the cluster. @@ -247,96 +199,43 @@ A common scenarios is that users create a CDK app that deploys the EKS cluster, which is then imported in other apps in order to deploy resources onto the cluster. -Difference - -Before +To deploy manifest on imported clusters, you can decide whether to create `kubectl Handler` by using `kubectlProviderOptions` property. +1. Import the cluster without kubectl Handler. It can't invoke AddManifest(). ``` -const kubectlProvider = eks.KubectlProvider.fromKubectlProviderAttributes(this, 'KubectlProvider', { - functionArn, - kubectlRoleArn: 'arn:aws:iam::123456789012:role/kubectl-role', - handlerRole, +const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { + clusterName: 'my-cluster-name', }); +cluster.addManifest(); # X - not working ``` -After - +2. 
Import the cluster and create a new kubectl Handler ``` -const kubectlProvider = eks.KubectlProvider.fromKubectlProviderArn(this, 'KubectlProvider', { - functionArn: // Required. ARN of the original kubectl function +const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { + clusterName: 'my-cluster-name', + kubectlProviderOptions: { + kubectlLayer: new KubectlV31Layer(this, 'kubectl'), + } }); -``` - -Following parameters are removed: - -- kubectlRoleArn -- handlerRole - -`fromKubectlProviderAttributes()` is renamed to -`fromKubectlProviderFunctionArn()`. - -Reason: when the KubectlProvider was created in another stack, the lambda -execution role already has permissions to access the cluster. - -Besides that, optional KubectlProvider specific properties are moved into -KubectlProviderOptions to better organize properties. - -KubectlProviderOptions Definition ``` -export interface KubectlProviderOptions { - readonly role?: iam.IRole; - readonly awscliLayer?: lambda.ILayerVersion; - readonly kubectlLayer?: lambda.ILayerVersion; - readonly memory?: Size; - readonly environment?: { [key: string]: string }; - /** - * Which subnets should the provider functions be placed in. - */ - readonly vpcSubnets?: ec2.SubnetSelection; -} -``` - -Usage - Before +3. Import the cluster/kubectl Handler ``` -new eks.Cluster(this, 'MyCluster', { - version: eks.KubernetesVersion.V1_31, - kubectlMemory: Size.gigabytes(4), - kubectlLayer: new KubectlV31Layer(this, 'kubectl'), - kubectlEnvironment: { - 'http_proxy': 'http://proxy.myproxy.com', - }, - kubectlRole: iam.Role.fromRoleArn( - this, - 'MyRole', - 'arn:aws:iam::123456789012:role/lambda-role' - ); +const kubectlProvider = eks.KubectlProvider.fromKubectlProviderArn(this, 'KubectlProvider', { + functionArn: '' }); -``` -Usage - After - -``` -new eks.Cluster(this, 'MyCluster', { - version: eks.KubernetesVersion.V1_31, - kubectlLayer: new KubectlV31Layer(this, 'kubectl'), - kubectlProviderOptions: { - memory: Size.gigabytes(4), - environment: { - 'http_proxy': 'http://proxy.myproxy.com', - }, - role: iam.Role.fromRoleArn( - this, - 'MyRole', - 'arn:aws:iam::123456789012:role/lambda-role' - ); +const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { + clusterName: 'my-cluster-name', + kubectlProvider: kubectlProvider }); + +cluster.addManifest(); ``` -## Migration Path -Note: We can't guarantee it's a safe migration. +## Migration Path (TBD) Due to the fact that switching from a custom resource (Custom::AWSCDK-EKS-Cluster) to a native L1 (AWS::EKS::Cluster) resource @@ -404,13 +303,13 @@ the new cluster implementation. ### What is the high-level project plan? -- [ ] Publish the RFC -- [ ] Gather feedback on the RFC +- [X] Publish the RFC +- [X] Gather feedback on the RFC - [ ] Get bar raiser to sign off on RFC -- [ ] Create the new eksv2 alpha module and implementation -- [ ] Make pull request to aws-cdk repository -- [ ] Iterate and respond to PR feedback +- [ ] Implementation - [ ] Merge new module +- [ ] Publish migration guide/develop migration tool +- [ ] Stabilize the module once it's ready ### Are there any open issues that need to be addressed later? 
@@ -474,3 +373,20 @@ readonly kubectlLayer: lambda.ILayerVersion; readonly kubectlProviderOptions?: KubectlProviderOptions; readonly outputInfo?: boolean; ``` + + +KubectlProviderOptions Definition + +``` +export interface KubectlProviderOptions { + readonly role?: iam.IRole; + readonly awscliLayer?: lambda.ILayerVersion; + readonly kubectlLayer?: lambda.ILayerVersion; + readonly memory?: Size; + readonly environment?: { [key: string]: string }; + /** + * Which subnets should the provider functions be placed in. + */ + readonly vpcSubnets?: ec2.SubnetSelection; +} +``` From 81e056f3a12e2aef6585899b2f8f3e8f5b8c0a52 Mon Sep 17 00:00:00 2001 From: Xia Zhao Date: Thu, 14 Nov 2024 00:20:27 -0800 Subject: [PATCH 06/25] Refactor to readme style --- text/0605-eks-rewrite.md | 274 +++++++++++++++++++++++---------------- 1 file changed, 164 insertions(+), 110 deletions(-) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index d4185cda5..5ecf59cee 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -13,41 +13,53 @@ modifications collectively enhance the module's functionality and integration within the AWS ecosystem, providing a more robust and streamlined solution for managing Elastic Kubernetes Service (EKS) resources. -## Working Backwards - This RFC primarily addresses the distinctions between the new module and the original EKS L2 construct. Comprehensive use cases and examples will be available in the README file of the forthcoming `eks-alpha-v2` module. -It's important to note that any features of the existing EKS construct not -explicitly mentioned in this RFC will remain unchanged and function as they do -in the current implementation. This approach ensures clarity on the -modifications while maintaining continuity for unaffected features. +Compared to the original EKS module, it has following major changes: + +- Use native L1 `AWS::EKS::Cluster` resource to replace custom resource `Custom::AWSCDK-EKS-Cluster` +- Use native L1 `AWS::EKS::FargateProfile` resource to replace custom resource `Custom::AWSCDK-EKS-FargateProfile` +- `Kubectl Handler` will not be created by default. It will only be created if users specify it. +- Deprecate `AwsAuth` construct. Permissions to the cluster will be managed by Access Entry. +- API changes to make them more ergonomic. +- Remove nested stacks + +## Working Backwards + +## Readme + +Note: Full readme is too long for this RFC. This readme is simplified version that only focus on +use cases that are different from the original EKS module. Full readme will be published in +the alpha module. + +This library is a rewrite of existing EKS module including breaking changes to +address some pain points on the existing EKS module. It allows you to define +Amazon Elastic Container Service for Kubernetes (EKS) clusters. In addition, +the library also supports defining Kubernetes resource manifests within EKS clusters. + ## Quick start Here is the minimal example of defining an AWS EKS cluster ``` -import { KubectlV31Layer } from '@aws-cdk/lambda-layer-kubectl-v31'; +import * as eks from '@aws-cdk/aws-eksv2-alpha'; // provisioning a cluster -const cluster = new eksv2.Cluster(this, 'hello-eks', { +const cluster = new eks.Cluster(this, 'hello-eks', { version: eks.KubernetesVersion.V1_31, }); ``` -Note: Compared to the previous L2, `kubectlLayer` property is required now The -reason is if we set a default version, that version will be outdated one day and -updating default version at that time will be a breaking change. 
- ## Architecture ``` +-----------------------------------------------+ | EKS Cluster | kubectl | | | -----------------|<--------+| Kubectl Handler | - | AWS::EKS::Cluster | + | AWS::EKS::Cluster (Optional) | | +--------------------+ +-----------------+ | | | | | | | | | Managed Node Group | | Fargate Profile | | @@ -68,70 +80,103 @@ In a nutshell: - Managed Node Group - EC2 worker nodes managed by EKS. - Fargate Profile - Fargate worker nodes managed by EKS. - Auto Scaling Group - EC2 worker nodes managed by the user. -- Kubectl Handler - Lambda function for invoking kubectl commands on the +- Kubectl Handler (Optional) - Lambda function for invoking kubectl commands on the cluster - created by CDK -### Difference from original EKS L2 +## Provisioning cluster +Creating a new cluster is done using the `Cluster` constructs. The only required property is the kubernetes version. +``` +new eks.Cluster(this, 'HelloEKS', { + version: eks.KubernetesVersion.V1_31, +}); +``` +You can also use `FargateCluster` to provision a cluster that uses only fargate workers. +``` +new eks.FargateCluster(this, 'HelloEKS', { + version: eks.KubernetesVersion.V1_31, +}); +``` -1. `Kubectl Handler` will only be created when you pass in `kubectlProviderOptions` property. By default, it will not create the custom resource. +**Note: Unlike the previous EKS cluster, `Kubectl Handler` will not +be created by default. It will only be deployed when `kubectlProviderOptions` +property is used.** ``` -const cluster = new eks.Cluster(this, 'hello-eks', { +new eks.Cluster(this, 'hello-eks', { version: eks.KubernetesVersion.V1_31, + # Using this property will create `Kubectl Handler` as custom resource handler kubectlProviderOptions: { kubectlLayer: new KubectlV31Layer(this, 'kubectl'), } }); ``` -2. ClusterHandler is removed in the new implementation because it uses native L1 -resource `AWS::EKS::Cluster` to create the EKS cluster resource. - 3. Along with -resource change, following properties on Cluster construct are removed: +### VPC Support -- clusterHandlerEnvironment -- clusterHandlerSecurityGroup -- clusterHandlerSecurityGroup -- onEventLayer +You can specify the VPC of the cluster using the vpc and vpcSubnets properties: +``` +declare const vpc: ec2.Vpc; +new eks.Cluster(this, 'HelloEKS', { + version: eks.KubernetesVersion.V1_31, + vpc, + vpcSubnets: [{ subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS }], +}); +``` +If you do not specify a VPC, one will be created on your behalf, which you can then access via cluster.vpc. The cluster VPC will be associated to any EKS managed capacity (i.e Managed Node Groups and Fargate Profiles). -## Resource Provisioning +The cluster can be placed inside an isolated VPC. The cluster’s VPC subnets must have a VPC interface endpoint for any AWS services that your Pods need access to. See https://docs.aws.amazon.com/eks/latest/userguide/private-clusters.html +``` +const vpc = new ec2.Vpc(this, 'vpc', { + subnetConfiguration: [ + { + cidrMask: 24, + name: 'Private', + subnetType: ec2.SubnetType.PRIVATE_ISOLATED, + }, + ] +}); -This change is not directly visible in API or construct props, but in -implementation details. 
Two custom resources will be replaced with native CFN L1 -resources: +const cluster = new eks.Cluster(this, 'MyMycluster123', { + version: eks.KubernetesVersion.V1_31, + vpc, + vpcSubnets: [{ subnetType: ec2.SubnetType.PRIVATE_ISOLATED }] +}); +``` -- `Custom::AWSCDK-EKS-Cluster` will be replaced with `AWS::EKS::Cluster` -- `Custom::AWSCDK-EKS-FargateProfile` will be replaced with - `AWS::EKS::FargateProfile` +### Kubectl Support +You can choose to have CDK create a `Kubectl Handler` - a Python Lambda Function to +apply k8s manifests using `kubectl apply`. This handler will not be created by default. -The resource type change will be reflected in cdk synth output template. +To create a `Kubectl Handler`, use `kubectlProviderOptions` when creating the cluster. +`kubectlLayer` is the only required property in `kubectlProviderOptions`. +``` +new eks.Cluster(this, 'hello-eks', { + version: eks.KubernetesVersion.V1_31, + # Using this property will create `Kubectl Handler` as custom resource handler + kubectlProviderOptions: { + kubectlLayer: new KubectlV31Layer(this, 'kubectl'), + } +}); -## Authentication +`Kubectl Handler` created along with the cluster will be granted admin permissions to the cluster. +``` -`ConfigMap` authentication mode has been deprecated by EKS and the recommend -mode is API. The new EKS L2 will go a step further and only support API -authentication mode. All grant functions in EKS will use Access Entry to grant +## Permissions and Security +Amazon EKS supports three modes of authentication: CONFIG_MAP, API_AND_CONFIG_MAP, and API. +`ConfigMap` authentication mode has been deprecated by EKS and the recommended mode is API. +The new EKS L2 will go a step further and only support API authentication mode. +All grant functions in EKS will use Access Entry to grant permissions to an IAM role/user. -`AwsAuth` construct was developed to manage mappings between IAM users/roles to -Kubernetes RBAC configuration through ConfigMap. It’s exposed with awsAuth -attribute of cluster construct. With the deprecation of `ConfigMap` mode, -AwsAuth construct and the attribute are removed in the new EKS module. - -`grant()` function are introduced to replace the awsAuth. It’s implemented using +As a result, `AwsAuth` construct in the previous EKS module will not be provided in the new module. +`grant()` functions are introduced to replace the awsAuth. It’s implemented using Access Entry. -### Difference from original EKS L2 - -Before using awsAuth - +Grant Admin Access to an IAM role ``` -cluster.awsAuth.addMastersRole(role); +cluster.grantAdmin('adminAccess', roleArn, eks.AccessScopeType.CLUSTER); ``` - -After using Access Entry - +You can also use general `grantAccess()` to attach a policy to an IAM role/user. +See https://docs.aws.amazon.com/eks/latest/userguide/access-policies.html for all access policies ``` -cluster.grantAdmin('adminAccess', roleArn, eks.AccessScopeType.CLUSTER); - # A general grant function is also provided cluster.grantAccess('adminAccess', roleArn, [ eks.AccessPolicy.fromAccessPolicyName('AmazonEKSClusterAdminPolicy', { @@ -140,76 +185,56 @@ cluster.grantAccess('adminAccess', roleArn, [ ]); ``` -## Cluster Configuration - -### New Feat: Create EKS Cluster in an isolated VPC +### Use existing Cluster/Kubectl Handler -To create a EKS Cluster in an isolated VPC, vpc endpoints need to be set for -different AWS services (EC2, S3, STS, ECR and anything the service needs). See https://docs.aws.amazon.com/eks/latest/userguide/private-clusters.html for more details. 
+This module allows defining Kubernetes resources such as Kubernetes manifests and Helm charts +on clusters that are not defined as part of your CDK app. +There are 2 scenarios here: +1. Import the cluster without creating a new kubectl Handler ``` -const vpc = new ec2.Vpc(this, 'vpc', { - subnetConfiguration: [ - { - cidrMask: 24, - name: 'Private', - subnetType: ec2.SubnetType.PRIVATE_ISOLATED, - }, - ], - gatewayEndpoints: { - S3: { - service: ec2.GatewayVpcEndpointAwsService.S3, - }, - }, -}); -vpc.addInterfaceEndpoint('stsEndpoint', { - service: ec2.InterfaceVpcEndpointAwsService.STS, - open: true, +const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { + clusterName: 'my-cluster-name', }); +``` +This imported cluster is not associated with a `Kubectl Handler`. It means we won't be able to +invoke `addManifest()` function on the cluster. -vpc.addInterfaceEndpoint('ec2Endpoint', { - service: ec2.InterfaceVpcEndpointAwsService.EC2, - open: true, +To apply a manifest, you need to import the kubectl handler and attach it to the cluster +``` +const kubectlProvider = eks.KubectlProvider.fromKubectlProviderArn(this, 'KubectlProvider', { + functionArn: '' }); -vpc.addInterfaceEndpoint('ecrEndpoint', { - service: ec2.InterfaceVpcEndpointAwsService.ECR, - open: true, +const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { + clusterName: 'my-cluster-name', + kubectlProvider: kubectlProvider }); -vpc.addInterfaceEndpoint('ecrDockerEndpoint', { - service: ec2.InterfaceVpcEndpointAwsService.ECR_DOCKER, - open: true, -}); +cluster.addManifest(); +``` -const cluster = new eks.Cluster(this, 'MyMycluster123', { - version: eks.KubernetesVersion.V1_31, - authenticationMode: eks.AuthenticationMode.API, - vpc, - vpcSubnets: [{ subnetType: ec2.SubnetType.PRIVATE_ISOLATED }] +2. Import the cluster and create a new kubectl Handler +``` +const cluster = eks.Cluster.fromClusterWithKubectlProvider(this, 'MyCluster', { + clusterName: 'my-cluster-name', + kubectlProviderOptions: { + kubectlLayer: new KubectlV31Layer(this, 'kubectl'), + } }); ``` +This import function will always create a new kubectl handler for the cluster. -### Use existing Cluster/Kubectl Handler - -KubectlProvider is a lambda function that CDK deploys alongside the EKS cluster -in order to execute kubectl commands against the cluster. +#### Alternative Solution +We can have one single `fromClusterAttributes()` and have different behaviors based on the input. -A common scenarios is that users create a CDK app that deploys the EKS cluster, -which is then imported in other apps in order to deploy resources onto the -cluster. - -To deploy manifest on imported clusters, you can decide whether to create `kubectl Handler` by using `kubectlProviderOptions` property. - -1. Import the cluster without kubectl Handler. It can't invoke AddManifest(). +- Import the cluster without kubectl Handler. It can't invoke AddManifest(). ``` const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { clusterName: 'my-cluster-name', }); -cluster.addManifest(); # X - not working ``` - -2. Import the cluster and create a new kubectl Handler +- Import the cluster and create a new kubectl Handler ``` const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { clusterName: 'my-cluster-name', @@ -217,23 +242,45 @@ const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { kubectlLayer: new KubectlV31Layer(this, 'kubectl'), } }); - ``` - -3. 
Import the cluster/kubectl Handler +- Import the cluster/kubectl Handler ``` const kubectlProvider = eks.KubectlProvider.fromKubectlProviderArn(this, 'KubectlProvider', { functionArn: '' }); - const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { clusterName: 'my-cluster-name', kubectlProvider: kubectlProvider }); +``` + +Note: `fromKubectlProviderAttributes()` is renamed to +`fromKubectlProviderFunctionArn()`. +Before -cluster.addManifest(); ``` +const kubectlProvider = eks.KubectlProvider.fromKubectlProviderAttributes(this, 'KubectlProvider', { + functionArn, + kubectlRoleArn: 'arn:aws:iam::123456789012:role/kubectl-role', + handlerRole, +}); +``` + +After + +``` +const kubectlProvider = eks.KubectlProvider.fromKubectlProviderArn(this, 'KubectlProvider', { + functionArn: // Required. ARN of the original kubectl function +}); +``` + +Following parameters are removed: + +- kubectlRoleArn +- handlerRole +Reason: when the KubectlProvider was created in another stack, the lambda +execution role already has permissions to access the cluster. ## Migration Path (TBD) @@ -288,7 +335,14 @@ more features (e.g. isolated VPC and escape hatching). This feature has been highly requested by the community since Feb 2023. The current implementation using custom resource has some limitations and is harder -to maintain. EKS L2 is a widely used module and we should rewrite it. +to maintain. We can also use this chance to solve some major pain points in the current EKS L2. + +Issues will be solved: +- https://github.com/aws/aws-cdk/issues/24059 +- https://github.com/aws/aws-cdk/issues/25544 +- https://github.com/aws/aws-cdk/issues/24174 +- https://github.com/aws/aws-cdk/issues/19753 +- https://github.com/aws/aws-cdk/issues/19218 ### Why should we _not_ do this? From 910afd765c00a7d2c8ba5244d984bbe31adbcc6a Mon Sep 17 00:00:00 2001 From: Xia Zhao Date: Thu, 14 Nov 2024 15:15:41 -0800 Subject: [PATCH 07/25] WIP --- text/0605-eks-rewrite.md | 137 +++++++++++++++++---------------------- 1 file changed, 61 insertions(+), 76 deletions(-) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index 5ecf59cee..dccf283be 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -2,7 +2,7 @@ - **Original Author(s)**: @xazhao - **Tracking Issue**: [#605](https://github.com/aws/aws-cdk-rfcs/issues/605) -- **API Bar Raiser**: +- **API Bar Raiser**: @iliapolo The `eks-v2-alpha` module is a rewrite of the existing `aws-eks` module. This new iteration leverages native L1 CFN resources, replacing the previous custom @@ -24,6 +24,7 @@ Compared to the original EKS module, it has following major changes: - `Kubectl Handler` will not be created by default. It will only be created if users specify it. - Deprecate `AwsAuth` construct. Permissions to the cluster will be managed by Access Entry. - API changes to make them more ergonomic. +- Remove the limit of 1 cluster per stack - Remove nested stacks ## Working Backwards @@ -254,34 +255,6 @@ const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { }); ``` -Note: `fromKubectlProviderAttributes()` is renamed to -`fromKubectlProviderFunctionArn()`. -Before - -``` -const kubectlProvider = eks.KubectlProvider.fromKubectlProviderAttributes(this, 'KubectlProvider', { - functionArn, - kubectlRoleArn: 'arn:aws:iam::123456789012:role/kubectl-role', - handlerRole, -}); -``` - -After - -``` -const kubectlProvider = eks.KubectlProvider.fromKubectlProviderArn(this, 'KubectlProvider', { - functionArn: // Required. 
ARN of the original kubectl function -}); -``` - -Following parameters are removed: - -- kubectlRoleArn -- handlerRole - -Reason: when the KubectlProvider was created in another stack, the lambda -execution role already has permissions to access the cluster. - ## Migration Path (TBD) Due to the fact that switching from a custom resource @@ -310,12 +283,14 @@ have to take additional actions. ### What are we launching today? We’re launching a new EKS module `aws-eksv2-alpha`. It's a rewrite of existing -`aws-eks` module with some breaking changes based on community feedbacks. +`aws-eks` module with some breaking changes to address pain points in `aws-eks` module. ### Why should I use this feature? -The new EKS module provides faster deployment, less complexity, less cost and -more features (e.g. isolated VPC and escape hatching). +The new EKS module has following benefits: +- faster deployment +- option to not use custom resource +- remove limitations on the previous EKS module (isolated VPC, 1 cluster limit per stack etc) ### What's the future plan for existing `aws-eks` module? @@ -337,17 +312,17 @@ This feature has been highly requested by the community since Feb 2023. The current implementation using custom resource has some limitations and is harder to maintain. We can also use this chance to solve some major pain points in the current EKS L2. -Issues will be solved: -- https://github.com/aws/aws-cdk/issues/24059 -- https://github.com/aws/aws-cdk/issues/25544 -- https://github.com/aws/aws-cdk/issues/24174 -- https://github.com/aws/aws-cdk/issues/19753 -- https://github.com/aws/aws-cdk/issues/19218 +Issues will be solved with the new module: +- https://github.com/aws/aws-cdk/issues/24059 (Custom Resource) +- https://github.com/aws/aws-cdk/issues/25544 (Custom Resource related) +- https://github.com/aws/aws-cdk/issues/24174 (Custom Resource related) +- https://github.com/aws/aws-cdk/issues/19753 (ConfigMap) +- https://github.com/aws/aws-cdk/issues/19218 (ConfigMap) ### Why should we _not_ do this? -The migration for customers is not easy and we can't guarantee it's a safe -migration without down time. +Some customer might be happy with the current EKS module and don't need to migrate to the new module. +Therefore, we should write a blog post/tool to help the migration. ### Is this a breaking change? @@ -361,20 +336,18 @@ the new cluster implementation. - [X] Gather feedback on the RFC - [ ] Get bar raiser to sign off on RFC - [ ] Implementation -- [ ] Merge new module +- [ ] Merge new alpha module - [ ] Publish migration guide/develop migration tool -- [ ] Stabilize the module once it's ready +- [ ] Prioritize make the module stable after 3 months bake time ### Are there any open issues that need to be addressed later? 
-TBD +N/A ## Appendix ### EKS Cluster Props Difference -Same props - ``` readonly version: KubernetesVersion; readonly vpc: ec2.IVpc; @@ -395,42 +368,26 @@ readonly serviceIpv4Cidr?: string; readonly tags?: { [key: string]: string }; readonly mastersRole?: iam.IRole; readonly bootstrapClusterCreatorAdminPermissions?: boolean; -``` - -Props only in old EKS - -``` readonly clusterLogging?: ClusterLoggingTypes[]; -readonly awscliLayer?: lambda.ILayerVersion; -readonly kubectlEnvironment?: { [key: string]: string }; -readonly kubectlLambdaRole?: iam.IRole; -readonly kubectlLayer?: lambda.ILayerVersion; -readonly kubectlMemory?: Size; - -readonly outputMastersRoleArn?: boolean; -readonly outputClusterName?: boolean; -readonly outputConfigCommand?: boolean; +readonly kubectlProviderOptions?: KubectlProviderOptions; # new property -readonly authenticationMode?: AuthenticationMode; -readonly clusterHandlerEnvironment?: { [key: string]: string }; -readonly clusterHandlerSecurityGroup?: ec2.ISecurityGroup; -readonly onEventLayer?: lambda.ILayerVersion; -readonly clusterHandlerSecurityGroup?: ec2.ISecurityGroup; -``` - -Props only in new EKS +readonly outputMastersRoleArn?: boolean; # will be removed +readonly outputClusterName?: boolean; # will be removed +readonly outputConfigCommand?: boolean; # will be removed +readonly authenticationMode?: AuthenticationMode; # will be removed +readonly clusterHandlerEnvironment?: { [key: string]: string }; # will be removed +readonly clusterHandlerSecurityGroup?: ec2.ISecurityGroup; # will be removed +readonly onEventLayer?: lambda.ILayerVersion; # will be removed +readonly clusterHandlerSecurityGroup?: ec2.ISecurityGroup; # will be removed +readonly awscliLayer?: lambda.ILayerVersion; # move to kubectlProviderOptions +readonly kubectlEnvironment?: { [key: string]: string }; # move to kubectlProviderOptions +readonly kubectlLambdaRole?: iam.IRole; # move to kubectlProviderOptions +readonly kubectlLayer?: lambda.ILayerVersion; # move to kubectlProviderOptions +readonly kubectlMemory?: Size; # move to kubectlProviderOptions ``` -readonly logging?: ClusterLoggingTypes[]; -readonly kubectlLayer: lambda.ILayerVersion; -readonly kubectlProviderOptions?: KubectlProviderOptions; -readonly outputInfo?: boolean; -``` - - -KubectlProviderOptions Definition - +### KubectlProviderOptions Definition ``` export interface KubectlProviderOptions { readonly role?: iam.IRole; @@ -444,3 +401,31 @@ export interface KubectlProviderOptions { readonly vpcSubnets?: ec2.SubnetSelection; } ``` + +Note: `fromKubectlProviderAttributes()` is renamed to +`fromKubectlProviderFunctionArn()`. +Before + +``` +const kubectlProvider = eks.KubectlProvider.fromKubectlProviderAttributes(this, 'KubectlProvider', { + functionArn, + kubectlRoleArn: 'arn:aws:iam::123456789012:role/kubectl-role', + handlerRole, +}); +``` + +After + +``` +const kubectlProvider = eks.KubectlProvider.fromKubectlProviderArn(this, 'KubectlProvider', { + functionArn: // Required. ARN of the original kubectl function +}); +``` + +Following parameters are removed: + +- kubectlRoleArn +- handlerRole + +Reason: when the KubectlProvider was created in another stack, the lambda +execution role already has permissions to access the cluster. 
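+
+As an illustration of that cross-stack flow, here is a rough sketch (not the final API surface;
+`clusterStack`, `appStack` and the exported function ARN are hypothetical placeholders, and
+imports are elided as in the other snippets):
+
+```
+// Stack A: create the cluster together with its kubectl handler.
+const cluster = new eks.Cluster(clusterStack, 'Cluster', {
+  version: eks.KubernetesVersion.V1_31,
+  kubectlProviderOptions: {
+    kubectlLayer: new KubectlV31Layer(clusterStack, 'kubectl'),
+  },
+});
+
+// Stack B: import the cluster and the existing handler by ARN only.
+// No kubectlRoleArn/handlerRole is passed - the handler's execution role
+// was already granted access to the cluster when Stack A was deployed.
+const kubectlProvider = eks.KubectlProvider.fromKubectlProviderArn(appStack, 'KubectlProvider', {
+  functionArn: importedFunctionArn, // hypothetical value exported by Stack A
+});
+const imported = eks.Cluster.fromClusterAttributes(appStack, 'ImportedCluster', {
+  clusterName: 'my-cluster-name',
+  kubectlProvider,
+});
+
+// Manifests can now be applied through the imported handler.
+imported.addManifest('app-namespace', {
+  apiVersion: 'v1',
+  kind: 'Namespace',
+  metadata: { name: 'my-app' },
+});
+```
+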
From d44ff262bd7933af9e53cc6ed3c00cf225b3c283 Mon Sep 17 00:00:00 2001 From: Xia Zhao Date: Sun, 17 Nov 2024 19:41:56 -0800 Subject: [PATCH 08/25] wip --- text/0605-eks-rewrite.md | 44 +++++++++++++++++++++++++++------------- 1 file changed, 30 insertions(+), 14 deletions(-) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index dccf283be..606108af0 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -23,9 +23,11 @@ Compared to the original EKS module, it has following major changes: - Use native L1 `AWS::EKS::FargateProfile` resource to replace custom resource `Custom::AWSCDK-EKS-FargateProfile` - `Kubectl Handler` will not be created by default. It will only be created if users specify it. - Deprecate `AwsAuth` construct. Permissions to the cluster will be managed by Access Entry. -- API changes to make them more ergonomic. - Remove the limit of 1 cluster per stack - Remove nested stacks +- API changes to make them more ergonomic. + +With the new EKS module, customers can deploy an EKS cluster without custom resources. ## Working Backwards @@ -48,7 +50,6 @@ Here is the minimal example of defining an AWS EKS cluster ``` import * as eks from '@aws-cdk/aws-eksv2-alpha'; -// provisioning a cluster const cluster = new eks.Cluster(this, 'hello-eks', { version: eks.KubernetesVersion.V1_31, }); @@ -255,28 +256,43 @@ const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { }); ``` -## Migration Path (TBD) +## Migration Guide +**This is a draft plan. The migration guide will not be included in the alpha module release at the beginning. +Instead we should test the migration thoroughly and publish with a blog post before stabilizing the module. +We can potentially provide a tool to help the migration.** + +**Prerequisite:** Exposed removal policy in the current EKS Cluster construct so that customers can remove +the cluster definition from CDK app without actual deleting the cluster. + +Tracking issue: https://github.com/aws/aws-cdk/issues/25544 Due to the fact that switching from a custom resource (Custom::AWSCDK-EKS-Cluster) to a native L1 (AWS::EKS::Cluster) resource requires cluster replacement, CDK users who need to preserve their cluster will have to take additional actions. -1. Set the authentication mode of cluster from `AuthenticationMode.CONFIG_MAP` - to `AuthenticationMode.API_AND_CONFIG_MAP` and deploy -2. Set the authentication mode of cluster from - `AuthenticationMode.API_AND_CONFIG_MAP` to `AuthenticationMode.API` and - deploy -3. Set removal policy to RETAIN on the existing cluster (and manifests) and - deploy. -4. Remove cluster definition from their CDK app and deploy -5. Add new cluster definition using the new constructs(EKSV2). -6. Follow cdk import to import the existing cluster as the new definition. +1. Change the authentication mode of cluster from `CONFIG_MAP` to `API`. + + This is a two steps change. First you need to change `CONFIG_MAP` to `API_AND_CONFIG_MAP` to enable access entry. + Then for all mappings in aws-auth ConfigMap, you can migrate to access entries. + After this migration is done, change `API_AND_CONFIG_MAP` to `API` to disable `ConfigMap`. + +2. Set removal policy to RETAIN on the existing cluster (and manifests) and deploy. + + This is to make sure the cluster won't be deleted when we clean up CDK app definitions in the next step. + +3. Remove cluster definition from their CDK app and deploy. After cleaning up cluster definition in CDK, + EKS resources will still be there instead of being deleted. + +4. 
Add new cluster definition using the new constructs(EKSV2). + +5. Follow cdk import to import the existing cluster as the new definition. 1. All relevant EKS resources support import. 2. AWS::EKS::Cluster 3. AWS::EKS::FargateProfile 4. AWS::EKS::Nodegroup -7. Add Manifests. + +6. After `cdk import`, running `cdk diff` to see if there's any unexpected changes. ## Public FAQ From bca02bee7b127be114ce1384c9641bad643668fb Mon Sep 17 00:00:00 2001 From: Xia Zhao Date: Mon, 18 Nov 2024 08:59:55 -0800 Subject: [PATCH 09/25] push --- text/0605-eks-rewrite.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index 606108af0..e868150aa 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -334,6 +334,7 @@ Issues will be solved with the new module: - https://github.com/aws/aws-cdk/issues/24174 (Custom Resource related) - https://github.com/aws/aws-cdk/issues/19753 (ConfigMap) - https://github.com/aws/aws-cdk/issues/19218 (ConfigMap) +- https://github.com/aws/aws-cdk/issues/31942 (One cluster per stack limit) ### Why should we _not_ do this? @@ -352,8 +353,8 @@ the new cluster implementation. - [X] Gather feedback on the RFC - [ ] Get bar raiser to sign off on RFC - [ ] Implementation -- [ ] Merge new alpha module -- [ ] Publish migration guide/develop migration tool +- [ ] Publish new alpha module +- [ ] Publish migration guide/blog post/develop migration tool - [ ] Prioritize make the module stable after 3 months bake time ### Are there any open issues that need to be addressed later? From 685b454a528c8008288dffe4ba2c62a8a6bda0ea Mon Sep 17 00:00:00 2001 From: Xia Zhao Date: Mon, 18 Nov 2024 16:10:08 -0800 Subject: [PATCH 10/25] lint --- text/0605-eks-rewrite.md | 75 ++++++++++++++++++++++++++++++---------- 1 file changed, 56 insertions(+), 19 deletions(-) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index e868150aa..3cd339785 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -37,18 +37,22 @@ Note: Full readme is too long for this RFC. This readme is simplified version th use cases that are different from the original EKS module. Full readme will be published in the alpha module. -This library is a rewrite of existing EKS module including breaking changes to -address some pain points on the existing EKS module. It allows you to define -Amazon Elastic Container Service for Kubernetes (EKS) clusters. In addition, -the library also supports defining Kubernetes resource manifests within EKS clusters. +This library is a rewrite of existing EKS module including breaking changes to +address some pain points on the existing EKS module including: + +- Can't deploy EKS cluster without custom resources +- The stack uses nested stacks +- Can't create multiple cluster per stack +It allows you to define Amazon Elastic Container Service for Kubernetes (EKS) clusters. In addition, +the library also supports defining Kubernetes resource manifests within EKS clusters. ## Quick start Here is the minimal example of defining an AWS EKS cluster ``` -import * as eks from '@aws-cdk/aws-eksv2-alpha'; +import * as eks from '@aws-cdk/aws-eks-v2-alpha'; const cluster = new eks.Cluster(this, 'hello-eks', { version: eks.KubernetesVersion.V1_31, @@ -86,13 +90,17 @@ In a nutshell: cluster - created by CDK ## Provisioning cluster + Creating a new cluster is done using the `Cluster` constructs. The only required property is the kubernetes version. 
+ ``` new eks.Cluster(this, 'HelloEKS', { version: eks.KubernetesVersion.V1_31, }); ``` + You can also use `FargateCluster` to provision a cluster that uses only fargate workers. + ``` new eks.FargateCluster(this, 'HelloEKS', { version: eks.KubernetesVersion.V1_31, @@ -102,6 +110,7 @@ new eks.FargateCluster(this, 'HelloEKS', { **Note: Unlike the previous EKS cluster, `Kubectl Handler` will not be created by default. It will only be deployed when `kubectlProviderOptions` property is used.** + ``` new eks.Cluster(this, 'hello-eks', { version: eks.KubernetesVersion.V1_31, @@ -111,9 +120,11 @@ new eks.Cluster(this, 'hello-eks', { } }); ``` + ### VPC Support You can specify the VPC of the cluster using the vpc and vpcSubnets properties: + ``` declare const vpc: ec2.Vpc; new eks.Cluster(this, 'HelloEKS', { @@ -122,9 +133,13 @@ new eks.Cluster(this, 'HelloEKS', { vpcSubnets: [{ subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS }], }); ``` -If you do not specify a VPC, one will be created on your behalf, which you can then access via cluster.vpc. The cluster VPC will be associated to any EKS managed capacity (i.e Managed Node Groups and Fargate Profiles). -The cluster can be placed inside an isolated VPC. The cluster’s VPC subnets must have a VPC interface endpoint for any AWS services that your Pods need access to. See https://docs.aws.amazon.com/eks/latest/userguide/private-clusters.html +If you do not specify a VPC, one will be created on your behalf, which you can then access via cluster.vpc. +The cluster VPC will be associated to any EKS managed capacity (i.e Managed Node Groups and Fargate Profiles). + +The cluster can be placed inside an isolated VPC. The cluster’s VPC subnets must have a VPC interface endpoint +for any AWS services that your Pods need access to. See https://docs.aws.amazon.com/eks/latest/userguide/private-clusters.html + ``` const vpc = new ec2.Vpc(this, 'vpc', { subnetConfiguration: [ @@ -136,7 +151,7 @@ const vpc = new ec2.Vpc(this, 'vpc', { ] }); -const cluster = new eks.Cluster(this, 'MyMycluster123', { +const cluster = new eks.Cluster(this, 'Mycluster', { version: eks.KubernetesVersion.V1_31, vpc, vpcSubnets: [{ subnetType: ec2.SubnetType.PRIVATE_ISOLATED }] @@ -144,11 +159,13 @@ const cluster = new eks.Cluster(this, 'MyMycluster123', { ``` ### Kubectl Support + You can choose to have CDK create a `Kubectl Handler` - a Python Lambda Function to apply k8s manifests using `kubectl apply`. This handler will not be created by default. To create a `Kubectl Handler`, use `kubectlProviderOptions` when creating the cluster. `kubectlLayer` is the only required property in `kubectlProviderOptions`. + ``` new eks.Cluster(this, 'hello-eks', { version: eks.KubernetesVersion.V1_31, @@ -162,8 +179,9 @@ new eks.Cluster(this, 'hello-eks', { ``` ## Permissions and Security -Amazon EKS supports three modes of authentication: CONFIG_MAP, API_AND_CONFIG_MAP, and API. -`ConfigMap` authentication mode has been deprecated by EKS and the recommended mode is API. + +Amazon EKS supports three modes of authentication: CONFIG_MAP, API_AND_CONFIG_MAP, and API. +`ConfigMap` authentication mode has been deprecated by EKS and the recommended mode is API. The new EKS L2 will go a step further and only support API authentication mode. All grant functions in EKS will use Access Entry to grant permissions to an IAM role/user. @@ -173,11 +191,14 @@ As a result, `AwsAuth` construct in the previous EKS module will not be provided Access Entry. 
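+The grant functions shown below are expected to synthesize EKS access entries under the hood.
+As a rough sketch only (assuming the `AWS::EKS::AccessEntry` L1, `CfnAccessEntry`, is available
+as an escape hatch; the role and construct id are hypothetical), granting cluster admin would be
+roughly equivalent to:
+
+```
+// Approximate L1 equivalent of a cluster-scoped admin grant (sketch).
+new CfnAccessEntry(this, 'AdminAccessEntry', {
+  clusterName: cluster.clusterName,
+  principalArn: role.roleArn,
+  accessPolicies: [{
+    policyArn: 'arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy',
+    accessScope: { type: 'cluster' },
+  }],
+});
+```
+
+Users normally would not write this by hand; the grant functions below cover the common cases.
+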
Grant Admin Access to an IAM role + ``` cluster.grantAdmin('adminAccess', roleArn, eks.AccessScopeType.CLUSTER); ``` + You can also use general `grantAccess()` to attach a policy to an IAM role/user. See https://docs.aws.amazon.com/eks/latest/userguide/access-policies.html for all access policies + ``` # A general grant function is also provided cluster.grantAccess('adminAccess', roleArn, [ @@ -193,16 +214,20 @@ This module allows defining Kubernetes resources such as Kubernetes manifests an on clusters that are not defined as part of your CDK app. There are 2 scenarios here: + 1. Import the cluster without creating a new kubectl Handler + ``` const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { clusterName: 'my-cluster-name', }); ``` -This imported cluster is not associated with a `Kubectl Handler`. It means we won't be able to + +This imported cluster is not associated with a `Kubectl Handler`. It means we won't be able to invoke `addManifest()` function on the cluster. To apply a manifest, you need to import the kubectl handler and attach it to the cluster + ``` const kubectlProvider = eks.KubectlProvider.fromKubectlProviderArn(this, 'KubectlProvider', { functionArn: '' @@ -217,6 +242,7 @@ cluster.addManifest(); ``` 2. Import the cluster and create a new kubectl Handler + ``` const cluster = eks.Cluster.fromClusterWithKubectlProvider(this, 'MyCluster', { clusterName: 'my-cluster-name', @@ -225,18 +251,23 @@ const cluster = eks.Cluster.fromClusterWithKubectlProvider(this, 'MyCluster', { } }); ``` + This import function will always create a new kubectl handler for the cluster. #### Alternative Solution + We can have one single `fromClusterAttributes()` and have different behaviors based on the input. - Import the cluster without kubectl Handler. It can't invoke AddManifest(). + ``` const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { clusterName: 'my-cluster-name', }); ``` + - Import the cluster and create a new kubectl Handler + ``` const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { clusterName: 'my-cluster-name', @@ -245,7 +276,9 @@ const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { } }); ``` + - Import the cluster/kubectl Handler + ``` const kubectlProvider = eks.KubectlProvider.fromKubectlProviderArn(this, 'KubectlProvider', { functionArn: '' @@ -257,9 +290,9 @@ const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { ``` ## Migration Guide -**This is a draft plan. The migration guide will not be included in the alpha module release at the beginning. -Instead we should test the migration thoroughly and publish with a blog post before stabilizing the module. -We can potentially provide a tool to help the migration.** + +**This is a general guideline. After migrating to the new construct, run `cdk diff` to make sure no unexpected changes.** +**If you encounter any issues during the migration, please open a Github issue at [CDK repo](https://github.com/aws/aws-cdk)** **Prerequisite:** Exposed removal policy in the current EKS Cluster construct so that customers can remove the cluster definition from CDK app without actual deleting the cluster. @@ -274,17 +307,17 @@ have to take additional actions. 1. Change the authentication mode of cluster from `CONFIG_MAP` to `API`. This is a two steps change. First you need to change `CONFIG_MAP` to `API_AND_CONFIG_MAP` to enable access entry. - Then for all mappings in aws-auth ConfigMap, you can migrate to access entries. 
+ Then for all mappings in aws-auth ConfigMap, you can migrate to access entries. After this migration is done, change `API_AND_CONFIG_MAP` to `API` to disable `ConfigMap`. 2. Set removal policy to RETAIN on the existing cluster (and manifests) and deploy. This is to make sure the cluster won't be deleted when we clean up CDK app definitions in the next step. -3. Remove cluster definition from their CDK app and deploy. After cleaning up cluster definition in CDK, +3. Remove cluster definition from their CDK app and deploy. After cleaning up cluster definition in CDK, EKS resources will still be there instead of being deleted. -4. Add new cluster definition using the new constructs(EKSV2). +4. Add new cluster definition using the new constructs(EKS-V2). 5. Follow cdk import to import the existing cluster as the new definition. 1. All relevant EKS resources support import. @@ -298,12 +331,13 @@ have to take additional actions. ### What are we launching today? -We’re launching a new EKS module `aws-eksv2-alpha`. It's a rewrite of existing +We’re launching a new EKS module `aws-eks-v2-alpha`. It's a rewrite of existing `aws-eks` module with some breaking changes to address pain points in `aws-eks` module. ### Why should I use this feature? The new EKS module has following benefits: + - faster deployment - option to not use custom resource - remove limitations on the previous EKS module (isolated VPC, 1 cluster limit per stack etc) @@ -329,6 +363,7 @@ current implementation using custom resource has some limitations and is harder to maintain. We can also use this chance to solve some major pain points in the current EKS L2. Issues will be solved with the new module: + - https://github.com/aws/aws-cdk/issues/24059 (Custom Resource) - https://github.com/aws/aws-cdk/issues/25544 (Custom Resource related) - https://github.com/aws/aws-cdk/issues/24174 (Custom Resource related) @@ -354,7 +389,7 @@ the new cluster implementation. - [ ] Get bar raiser to sign off on RFC - [ ] Implementation - [ ] Publish new alpha module -- [ ] Publish migration guide/blog post/develop migration tool +- [ ] Publish migration guide/blog post - [ ] Prioritize make the module stable after 3 months bake time ### Are there any open issues that need to be addressed later? @@ -404,7 +439,9 @@ readonly kubectlLambdaRole?: iam.IRole; # move to kubectlProviderOptions readonly kubectlLayer?: lambda.ILayerVersion; # move to kubectlProviderOptions readonly kubectlMemory?: Size; # move to kubectlProviderOptions ``` + ### KubectlProviderOptions Definition + ``` export interface KubectlProviderOptions { readonly role?: iam.IRole; From 8dfe946edb83df92b13269bfb6d3345ca217e4c1 Mon Sep 17 00:00:00 2001 From: Xia Zhao Date: Tue, 19 Nov 2024 00:13:49 -0800 Subject: [PATCH 11/25] refine RFC --- text/0605-eks-rewrite.md | 38 ++++++++++++++++++++++++-------------- 1 file changed, 24 insertions(+), 14 deletions(-) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index 3cd339785..3a90c0a20 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -304,28 +304,38 @@ Due to the fact that switching from a custom resource requires cluster replacement, CDK users who need to preserve their cluster will have to take additional actions. -1. Change the authentication mode of cluster from `CONFIG_MAP` to `API`. +1. **Set removal policy to RETAIN on the existing cluster and deploy (this feature will be added later).** + + To make sure the cluster is not being deleted, set the removal policy to `RETAIN` on the cluster. 
+ It will keep EKS related resources from being deleted when we clean up previous EKS constructs in the stack. + + ``` + new eks.Cluster(this, 'hello-eks', { + ... + removalPolicy: RemovalPolicy.RETAIN, + }); + ``` + +2. **Change the authentication mode of cluster from `CONFIG_MAP` to `API`.** + + Since the new EKS module will only support `API` authentication mode, you will need to migrate your cluster to `API` mode. This is a two steps change. First you need to change `CONFIG_MAP` to `API_AND_CONFIG_MAP` to enable access entry. Then for all mappings in aws-auth ConfigMap, you can migrate to access entries. After this migration is done, change `API_AND_CONFIG_MAP` to `API` to disable `ConfigMap`. -2. Set removal policy to RETAIN on the existing cluster (and manifests) and deploy. - - This is to make sure the cluster won't be deleted when we clean up CDK app definitions in the next step. - -3. Remove cluster definition from their CDK app and deploy. After cleaning up cluster definition in CDK, - EKS resources will still be there instead of being deleted. +3. **Remove cluster definition from their CDK app and deploy.** -4. Add new cluster definition using the new constructs(EKS-V2). +4. **Add new cluster definition using the new constructs(eks-v2-alpha).** -5. Follow cdk import to import the existing cluster as the new definition. - 1. All relevant EKS resources support import. - 2. AWS::EKS::Cluster - 3. AWS::EKS::FargateProfile - 4. AWS::EKS::Nodegroup +5. **User `cdk import` to import the existing cluster as the new definition.** + `cdk import` will ask you for id/arn/name for EKS related resources. It may include following: + - `AWS::EKS::Cluster` + - `AWS::EKS::FargateProfile` + - `AWS::EKS::Nodegroup` + - `AWS::EKS::AccessEntry` -6. After `cdk import`, running `cdk diff` to see if there's any unexpected changes. +6. **After `cdk import`, running `cdk diff` to see if there's any unexpected changes.** ## Public FAQ From b1da241f4af576715c6ac8c00c95cfcbc64feab1 Mon Sep 17 00:00:00 2001 From: Xia Zhao Date: Tue, 19 Nov 2024 00:22:34 -0800 Subject: [PATCH 12/25] nit --- text/0605-eks-rewrite.md | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index 3a90c0a20..a2350bf66 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -289,6 +289,11 @@ const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { }); ``` +With this solution, there are two mutual exclusive properties `kubectlProvider` and `kubectlProviderOptions`. +- `kubectlProvider` means we will pass in a kubectl provider so don't create one. +- `kubectlProviderOptions` means please create a kubectl provider for the cluster. +This solution utilize a single API for importing cluster but could possibly cause some confusions. + ## Migration Guide **This is a general guideline. 
After migrating to the new construct, run `cdk diff` to make sure no unexpected changes.** From 43f64ebca89639adc1c5c99b1e724fe22a8c1db6 Mon Sep 17 00:00:00 2001 From: Xia Zhao <78883180+xazhao@users.noreply.github.com> Date: Tue, 19 Nov 2024 14:15:57 -0800 Subject: [PATCH 13/25] Update text/0605-eks-rewrite.md Co-authored-by: Eli Polonsky --- text/0605-eks-rewrite.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index a2350bf66..677461d5b 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -22,7 +22,7 @@ Compared to the original EKS module, it has following major changes: - Use native L1 `AWS::EKS::Cluster` resource to replace custom resource `Custom::AWSCDK-EKS-Cluster` - Use native L1 `AWS::EKS::FargateProfile` resource to replace custom resource `Custom::AWSCDK-EKS-FargateProfile` - `Kubectl Handler` will not be created by default. It will only be created if users specify it. -- Deprecate `AwsAuth` construct. Permissions to the cluster will be managed by Access Entry. +- Remove `AwsAuth` construct. Permissions to the cluster will be managed by Access Entry. - Remove the limit of 1 cluster per stack - Remove nested stacks - API changes to make them more ergonomic. From 7ea339a1f605a880028da441e46ea9dc58ec4c59 Mon Sep 17 00:00:00 2001 From: Xia Zhao <78883180+xazhao@users.noreply.github.com> Date: Tue, 19 Nov 2024 14:16:12 -0800 Subject: [PATCH 14/25] Update text/0605-eks-rewrite.md Co-authored-by: Eli Polonsky --- text/0605-eks-rewrite.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index 677461d5b..b1b00800f 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -86,7 +86,7 @@ In a nutshell: - Managed Node Group - EC2 worker nodes managed by EKS. - Fargate Profile - Fargate worker nodes managed by EKS. - Auto Scaling Group - EC2 worker nodes managed by the user. -- Kubectl Handler (Optional) - Lambda function for invoking kubectl commands on the +- Kubectl Handler (Optional) - Custom resource (i.e Lambda Function) for invoking kubectl commands on the cluster - created by CDK ## Provisioning cluster From cf62c80ed1fe53b5ad5472e3ef995b3f603bb283 Mon Sep 17 00:00:00 2001 From: Xia Zhao Date: Tue, 19 Nov 2024 23:11:10 -0800 Subject: [PATCH 15/25] address feedback --- text/0605-eks-rewrite.md | 62 ++++++---------------------------------- 1 file changed, 9 insertions(+), 53 deletions(-) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index b1b00800f..4b287f0cf 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -213,7 +213,7 @@ cluster.grantAccess('adminAccess', roleArn, [ This module allows defining Kubernetes resources such as Kubernetes manifests and Helm charts on clusters that are not defined as part of your CDK app. -There are 2 scenarios here: +There are 3 scenarios here: 1. Import the cluster without creating a new kubectl Handler @@ -226,7 +226,7 @@ const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { This imported cluster is not associated with a `Kubectl Handler`. It means we won't be able to invoke `addManifest()` function on the cluster. -To apply a manifest, you need to import the kubectl handler and attach it to the cluster +2. 
Import the cluster and the kubectl Handler ``` const kubectlProvider = eks.KubectlProvider.fromKubectlProviderArn(this, 'KubectlProvider', { @@ -234,66 +234,22 @@ const kubectlProvider = eks.KubectlProvider.fromKubectlProviderArn(this, 'Kubect }); const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { - clusterName: 'my-cluster-name', - kubectlProvider: kubectlProvider -}); - -cluster.addManifest(); -``` - -2. Import the cluster and create a new kubectl Handler - -``` -const cluster = eks.Cluster.fromClusterWithKubectlProvider(this, 'MyCluster', { - clusterName: 'my-cluster-name', - kubectlProviderOptions: { - kubectlLayer: new KubectlV31Layer(this, 'kubectl'), - } -}); -``` - -This import function will always create a new kubectl handler for the cluster. - -#### Alternative Solution - -We can have one single `fromClusterAttributes()` and have different behaviors based on the input. - -- Import the cluster without kubectl Handler. It can't invoke AddManifest(). - -``` -const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { - clusterName: 'my-cluster-name', + clusterName: 'my-cluster', + kubectlProvider }); ``` -- Import the cluster and create a new kubectl Handler - +3. Import the cluster and create a new kubectl Handler ``` -const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { - clusterName: 'my-cluster-name', - kubectlProviderOptions: { - kubectlLayer: new KubectlV31Layer(this, 'kubectl'), - } +const kubectlProvider = new eks.KubectlProvider(this, 'KubectlProvier', { + clusterName: 'my-cluster', }); -``` - -- Import the cluster/kubectl Handler -``` -const kubectlProvider = eks.KubectlProvider.fromKubectlProviderArn(this, 'KubectlProvider', { - functionArn: '' -}); const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { - clusterName: 'my-cluster-name', - kubectlProvider: kubectlProvider + clusterName: 'my-cluster', + kubectlProvider }); ``` - -With this solution, there are two mutual exclusive properties `kubectlProvider` and `kubectlProviderOptions`. -- `kubectlProvider` means we will pass in a kubectl provider so don't create one. -- `kubectlProviderOptions` means please create a kubectl provider for the cluster. -This solution utilize a single API for importing cluster but could possibly cause some confusions. - ## Migration Guide **This is a general guideline. After migrating to the new construct, run `cdk diff` to make sure no unexpected changes.** From 9a04d99c1f0848ecec253d26b7d11a2b32f05649 Mon Sep 17 00:00:00 2001 From: Xia Zhao Date: Wed, 27 Nov 2024 17:09:33 -0800 Subject: [PATCH 16/25] lint --- text/0605-eks-rewrite.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index 4b287f0cf..6fbeff759 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -240,6 +240,7 @@ const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { ``` 3. Import the cluster and create a new kubectl Handler + ``` const kubectlProvider = new eks.KubectlProvider(this, 'KubectlProvier', { clusterName: 'my-cluster', @@ -250,6 +251,7 @@ const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { kubectlProvider }); ``` + ## Migration Guide **This is a general guideline. 
After migrating to the new construct, run `cdk diff` to make sure no unexpected changes.** From 531a47bbd1f9be0a97da6a73ef2970df0a2d1a9f Mon Sep 17 00:00:00 2001 From: Xia Zhao Date: Mon, 2 Dec 2024 16:02:07 -0800 Subject: [PATCH 17/25] remove the project plan and only focus on API changes --- text/0605-eks-rewrite.md | 31 +++++-------------------------- 1 file changed, 5 insertions(+), 26 deletions(-) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index 6fbeff759..620a8e994 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -13,9 +13,10 @@ modifications collectively enhance the module's functionality and integration within the AWS ecosystem, providing a more robust and streamlined solution for managing Elastic Kubernetes Service (EKS) resources. -This RFC primarily addresses the distinctions between the new module and the -original EKS L2 construct. Comprehensive use cases and examples will be -available in the README file of the forthcoming `eks-alpha-v2` module. +This RFC primarily focus on API changes in the new module. The project plan about the new module +e.g. how does it exists with existing EKS module is out of scope and will be decided later. + +Comprehensive use cases and examples will be available in the README file of the forthcoming `eks-alpha-v2` module. Compared to the original EKS module, it has following major changes: @@ -309,24 +310,12 @@ We’re launching a new EKS module `aws-eks-v2-alpha`. It's a rewrite of existin ### Why should I use this feature? -The new EKS module has following benefits: +The new EKS alpha module has following benefits: - faster deployment - option to not use custom resource - remove limitations on the previous EKS module (isolated VPC, 1 cluster limit per stack etc) -### What's the future plan for existing `aws-eks` module? - -- When the new alpha module is published, `aws-eks` module will enter - `maintenance` mode which means we will only work on bugs on `aws-eks` module. - New features will only be added to the new `aws-eksv2-alpha` module. (Note: - this is the general guideline and we might be flexible on this) -- When the new alpha module is stabilized, `aws-eks` module will transition into - a deprecation phase. This implies that customers should plan to migrate their - workloads to the new module. While they can continue using the old module for - the time being, CDK team will prioritize new features/bug fix on the new - module - ## Internal FAQ ### Why are we doing this? @@ -355,16 +344,6 @@ Yes it's breaking change hence it's put into a new alpha module. A few other breaking changes are shipped together to make it more ergonomic and aligned with the new cluster implementation. -### What is the high-level project plan? - -- [X] Publish the RFC -- [X] Gather feedback on the RFC -- [ ] Get bar raiser to sign off on RFC -- [ ] Implementation -- [ ] Publish new alpha module -- [ ] Publish migration guide/blog post -- [ ] Prioritize make the module stable after 3 months bake time - ### Are there any open issues that need to be addressed later? 
N/A From 265a837f0e23b1f508a8de9a54ff051811fafbdb Mon Sep 17 00:00:00 2001 From: Xia Zhao <78883180+xazhao@users.noreply.github.com> Date: Tue, 3 Dec 2024 22:40:22 -0800 Subject: [PATCH 18/25] Update text/0605-eks-rewrite.md Co-authored-by: Eli Polonsky --- text/0605-eks-rewrite.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index 620a8e994..9ffd49c18 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -14,7 +14,7 @@ within the AWS ecosystem, providing a more robust and streamlined solution for managing Elastic Kubernetes Service (EKS) resources. This RFC primarily focus on API changes in the new module. The project plan about the new module -e.g. how does it exists with existing EKS module is out of scope and will be decided later. +e.g. how does it co-exists with the current EKS module is out of scope and will be decided later. Comprehensive use cases and examples will be available in the README file of the forthcoming `eks-alpha-v2` module. From c8208b0a2fcc2b307d3ee97a8628355e6fc3cb5e Mon Sep 17 00:00:00 2001 From: Xia Zhao <78883180+xazhao@users.noreply.github.com> Date: Tue, 3 Dec 2024 22:40:53 -0800 Subject: [PATCH 19/25] Update text/0605-eks-rewrite.md Co-authored-by: Eli Polonsky --- text/0605-eks-rewrite.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index 9ffd49c18..462300385 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -18,7 +18,7 @@ e.g. how does it co-exists with the current EKS module is out of scope and will Comprehensive use cases and examples will be available in the README file of the forthcoming `eks-alpha-v2` module. -Compared to the original EKS module, it has following major changes: +Compared to the original EKS module, it has the following major changes: - Use native L1 `AWS::EKS::Cluster` resource to replace custom resource `Custom::AWSCDK-EKS-Cluster` - Use native L1 `AWS::EKS::FargateProfile` resource to replace custom resource `Custom::AWSCDK-EKS-FargateProfile` From ad664059ec9d89e8612af31f4127e4a1d8214c0e Mon Sep 17 00:00:00 2001 From: Xia Zhao <78883180+xazhao@users.noreply.github.com> Date: Tue, 3 Dec 2024 22:41:12 -0800 Subject: [PATCH 20/25] Update text/0605-eks-rewrite.md Co-authored-by: Eli Polonsky --- text/0605-eks-rewrite.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index 462300385..b55c7b629 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -16,7 +16,7 @@ managing Elastic Kubernetes Service (EKS) resources. This RFC primarily focus on API changes in the new module. The project plan about the new module e.g. how does it co-exists with the current EKS module is out of scope and will be decided later. -Comprehensive use cases and examples will be available in the README file of the forthcoming `eks-alpha-v2` module. +Comprehensive use cases and examples will be available in the README file of the forthcoming `eks-v2-alpha` module. 
Compared to the original EKS module, it has the following major changes: From e1579c4e835c0676b5eab421d73fc50c916131be Mon Sep 17 00:00:00 2001 From: Xia Zhao <78883180+xazhao@users.noreply.github.com> Date: Tue, 3 Dec 2024 22:41:29 -0800 Subject: [PATCH 21/25] Update text/0605-eks-rewrite.md Co-authored-by: Eli Polonsky --- text/0605-eks-rewrite.md | 1 + 1 file changed, 1 insertion(+) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index b55c7b629..1d14a9ab3 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -44,6 +44,7 @@ address some pain points on the existing EKS module including: - Can't deploy EKS cluster without custom resources - The stack uses nested stacks - Can't create multiple cluster per stack +- Can't use escape hatches It allows you to define Amazon Elastic Container Service for Kubernetes (EKS) clusters. In addition, the library also supports defining Kubernetes resource manifests within EKS clusters. From 42743601fc628e6ee1e2cfa9c0e89a50ab6ce75b Mon Sep 17 00:00:00 2001 From: Xia Zhao <78883180+xazhao@users.noreply.github.com> Date: Tue, 3 Dec 2024 22:41:41 -0800 Subject: [PATCH 22/25] Update text/0605-eks-rewrite.md Co-authored-by: Eli Polonsky --- text/0605-eks-rewrite.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index 1d14a9ab3..8fdc39f5a 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -306,7 +306,7 @@ have to take additional actions. ### What are we launching today? -We’re launching a new EKS module `aws-eks-v2-alpha`. It's a rewrite of existing +We’re launching a new EKS alpha module `aws-eks-v2-alpha`. It's a rewrite of existing `aws-eks` module with some breaking changes to address pain points in `aws-eks` module. ### Why should I use this feature? From 3164549a89c9d84ef85473abf723b5fa9f963b7e Mon Sep 17 00:00:00 2001 From: Xia Zhao <78883180+xazhao@users.noreply.github.com> Date: Tue, 3 Dec 2024 23:14:35 -0800 Subject: [PATCH 23/25] Update text/0605-eks-rewrite.md Co-authored-by: Eli Polonsky --- text/0605-eks-rewrite.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index 8fdc39f5a..fccf0d44e 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -202,7 +202,7 @@ You can also use general `grantAccess()` to attach a policy to an IAM role/user. See https://docs.aws.amazon.com/eks/latest/userguide/access-policies.html for all access policies ``` -# A general grant function is also provided +# A general grant function is also provided, where you can explicitly set policies. cluster.grantAccess('adminAccess', roleArn, [ eks.AccessPolicy.fromAccessPolicyName('AmazonEKSClusterAdminPolicy', { accessScopeType: eks.AccessScopeType.CLUSTER, From f8189e5049a5dc8dc0bfc607c44f103da7039ca6 Mon Sep 17 00:00:00 2001 From: Xia Zhao Date: Tue, 3 Dec 2024 23:20:04 -0800 Subject: [PATCH 24/25] address comments --- text/0605-eks-rewrite.md | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index fccf0d44e..97eee4132 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -231,9 +231,7 @@ invoke `addManifest()` function on the cluster. 2. 
Import the cluster and the kubectl Handler ``` -const kubectlProvider = eks.KubectlProvider.fromKubectlProviderArn(this, 'KubectlProvider', { - functionArn: '' -}); +const kubectlProvider = eks.KubectlProvider.fromKubectlProviderArn(this, 'KubectlProvider', 'function-arn'); const cluster = eks.Cluster.fromClusterAttributes(this, 'MyCluster', { clusterName: 'my-cluster', From ee682d027d2ce623e443ad470e4d1a02f4ac20e8 Mon Sep 17 00:00:00 2001 From: Xia Zhao Date: Wed, 4 Dec 2024 09:39:39 -0800 Subject: [PATCH 25/25] address comments --- text/0605-eks-rewrite.md | 18 ++++++++++-------- 1 file changed, 10 insertions(+), 8 deletions(-) diff --git a/text/0605-eks-rewrite.md b/text/0605-eks-rewrite.md index 97eee4132..47c4521af 100644 --- a/text/0605-eks-rewrite.md +++ b/text/0605-eks-rewrite.md @@ -192,22 +192,24 @@ As a result, `AwsAuth` construct in the previous EKS module will not be provided `grant()` functions are introduced to replace the awsAuth. It’s implemented using Access Entry. -Grant Admin Access to an IAM role +Grant Cluster Admin Access to an IAM role ``` -cluster.grantAdmin('adminAccess', roleArn, eks.AccessScopeType.CLUSTER); +cluster.grantClusterAdmin('adminAccess', role); ``` -You can also use general `grantAccess()` to attach a policy to an IAM role/user. +You can also use general `grant()` to attach a policy to an IAM role/user. See https://docs.aws.amazon.com/eks/latest/userguide/access-policies.html for all access policies +For example, grant Admin Access to a namespace using general `grant()` method + ``` -# A general grant function is also provided, where you can explicitly set policies. -cluster.grantAccess('adminAccess', roleArn, [ - eks.AccessPolicy.fromAccessPolicyName('AmazonEKSClusterAdminPolicy', { - accessScopeType: eks.AccessScopeType.CLUSTER, +cluster.grant('namespaceAdmin', role, [ + eks.AccessPolicy.fromAccessPolicyName('AmazonEKSAdminPolicy', { + accessScopeType: eks.AccessScopeType.NAMESPACE, + namespaces: ['foo', 'bar'], }), -]); +]) ``` ### Use existing Cluster/Kubectl Handler