
Feature - Support for Pod-Level Identity Using AWS Credentials #334

Open · aminmr opened this issue Jan 11, 2025 · 3 comments

Labels: enhancement (New feature or request)

Comments

aminmr commented Jan 11, 2025

/feature

Is your feature request related to a problem? Please describe.
The Mountpoint for Amazon S3 CSI Driver currently supports pod-level credentials only through IRSA. This is limiting for users of S3-compatible storage solutions such as Ceph: they cannot make effective use of pod-level credentials and are restricted to driver-level credential configuration.

Describe the solution you'd like in detail
I would like the ability to define pod-level credentials using the following AWS environment variables:

AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN

For security purposes, these variables could be passed to the pods via Kubernetes secrets. This feature would improve flexibility for S3-compatible storage while allowing credentials to be managed at the pod level.
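For illustration, such a Secret could look like the sketch below. All names here are placeholders; how the driver would actually consume the Secret is exactly what this feature would need to define:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: s3-pod-credentials # illustrative name
  namespace: my-app        # illustrative namespace
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: AKIAEXAMPLEKEY
  AWS_SECRET_ACCESS_KEY: exampleSecretKey
  AWS_SESSION_TOKEN: exampleSessionToken # optional
```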

Describe alternatives you've considered
The only current alternative is to configure credentials at the driver level. However, this approach lacks the granularity needed for pod-level configurations and does not meet the requirements for non-EKS Kubernetes environments or S3-compatible backends.

I am also willing to contribute to the development of this feature if guidance and support from the maintainers are available.

unexge added the enhancement (New feature or request) label on Jan 13, 2025
vladem (Contributor) commented Jan 14, 2025

Hi @aminmr, thanks for the feature request! We'll be happy to gather more feedback on it before starting the implementation. Generally, the use of static credentials is not recommended, but we recognize that there are use cases where they are required.

With IRSA, service accounts are annotated with the role they need to assume. If we decide to proceed with the implementation, we'll need to define a way of associating static credentials with the service account.
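For context, the existing IRSA association uses the `eks.amazonaws.com/role-arn` annotation; a static-credential variant would need some analogous association. In the sketch below the second annotation is purely hypothetical, just to illustrate the shape such an association could take:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app-sa
  annotations:
    # Existing IRSA mechanism: annotate the service account with the role to assume.
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-role
    # Hypothetical analogue for static credentials, e.g. a reference to a Secret:
    # s3.csi.aws.com/credentials-secret: s3-pod-credentials
```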

aminmr (Author) commented Jan 19, 2025

Hi @vladem,

Thanks for the response! I understand the concerns about static credentials. If you decide to move forward, I'm happy to help define how to associate them with the service account through annotations.

Thanks!

pawanpnvda commented Jan 24, 2025

We have successfully used pod-level IRSA to mount S3 buckets on EKS clusters. But as mentioned above, some S3-compatible storage solutions, such as Ceph or OCI buckets, cannot use this approach.

Instead of configuring a single set of AWS credentials at the CSI driver level, an alternative could be configuring AWS credentials per PersistentVolume. I am proposing the use of the "nodePublishSecretRef" field in the PersistentVolume object to specify the Kubernetes Secret that stores the required access key and secret key. We can introduce a new authenticationSource (besides "driver" and "pod") called "persistentVolume".

Workflow

  1. User creates a Kubernetes Secret containing the S3 credentials.
  2. User creates a PV specification referencing the Secret via nodePublishSecretRef.
  3. CSI driver reads the Secret during volume mount operations (by parsing the Secrets map in NodePublishVolumeRequest).
  4. CSI driver uses the credentials from the Secret to authenticate with the S3-compatible storage system.
```sh
kubectl create secret generic s3-compatible-storage-secret \
    --namespace <NAMESPACE> \
    --from-literal "key_id=${AWS_ACCESS_KEY_ID}" \
    --from-literal "access_key=${AWS_SECRET_ACCESS_KEY}"
```
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-compatible-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: s3.csi.aws.com
    volumeHandle: <unique-volume-id>
    volumeAttributes:
      bucketName: <S3_COMPATIBLE_BUCKET_NAME>
      authenticationSource: persistentVolume # A new authentication source
    nodePublishSecretRef:
      name: s3-compatible-storage-secret # The name of the K8S secret created above
      namespace: <NAMESPACE> # The namespace in which the K8S secret was created
```
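For completeness, a claim binding to this PV via static provisioning might look like the following sketch (names follow the example above):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-compatible-pvc
  namespace: <NAMESPACE>
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: "" # disable dynamic provisioning for this claim
  resources:
    requests:
      storage: 5Gi
  volumeName: s3-compatible-pv # bind directly to the PV defined above
```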

CSI Driver Changes

  1. Modify the NodePublishVolume function to parse the Secrets field in NodePublishVolumeRequest. This field will be populated with the contents of the Kubernetes Secret configured in nodePublishSecretRef.
  2. Extract the access key and secret key from the Secret.
  3. Use these credentials for S3 authentication instead of driver-level secrets.
  4. Note that the Kubernetes service account associated with the CSI driver needs a ClusterRole with permission to read Secrets from the given namespace (a sketch follows this list).
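If, as step 4 notes, the driver's service account must read Secrets directly, the RBAC grant might be sketched as follows. All names here are illustrative; the actual service account name and namespace depend on how the driver is deployed:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: s3-csi-driver-secret-reader # illustrative name
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: s3-csi-driver-secret-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: s3-csi-driver-secret-reader
subjects:
  - kind: ServiceAccount
    name: s3-csi-driver-sa # assumed driver service account name
    namespace: kube-system # assumed deployment namespace
```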

Proposed Patch

```diff
diff --git a/pkg/driver/node/mounter/credential_provider.go b/pkg/driver/node/mounter/credential_provider.go
index 12cc4a8..3e37e2b 100644
--- a/pkg/driver/node/mounter/credential_provider.go
+++ b/pkg/driver/node/mounter/credential_provider.go
@@ -31,9 +31,10 @@ type AuthenticationSource = string
 const (
 	// This is when users don't provide a `authenticationSource` option in their volume attributes.
 	// We're defaulting to `driver` in this case.
-	AuthenticationSourceUnspecified AuthenticationSource = ""
-	AuthenticationSourceDriver      AuthenticationSource = "driver"
-	AuthenticationSourcePod         AuthenticationSource = "pod"
+	AuthenticationSourceUnspecified      AuthenticationSource = ""
+	AuthenticationSourceDriver           AuthenticationSource = "driver"
+	AuthenticationSourcePod              AuthenticationSource = "pod"
+	AuthenticationSourcePersistentVolume AuthenticationSource = "persistentVolume"
 )
 
 const (
@@ -50,6 +51,13 @@ const serviceAccountRoleAnnotation = "eks.amazonaws.com/role-arn"
 const podLevelCredentialsDocsPage = "https://github.com/awslabs/mountpoint-s3-csi-driver/blob/main/docs/CONFIGURATION.md#pod-level-credentials"
 const stsConfigDocsPage = "https://github.com/awslabs/mountpoint-s3-csi-driver/blob/main/docs/CONFIGURATION.md#configuring-the-sts-region"
 
+const (
+	// Keys used to look up credentials in the Secrets map of NodePublishVolumeRequest.
+	// These are the same keys the driver already accepts for driver-level credentials.
+	keyId           string = "key_id"
+	secretAccessKey string = "access_key"
+)
+
 var errUnknownRegion = errors.New("NodePublishVolume: Pod-level: unknown region")
 
 type Token struct {
@@ -84,7 +92,7 @@ func (c *CredentialProvider) CleanupToken(volumeID string, podID string) error {
 
 // Provide provides mount credentials for given volume and volume context.
 // Depending on the configuration, it either returns driver-level or pod-level credentials.
-func (c *CredentialProvider) Provide(ctx context.Context, volumeID string, volumeCtx map[string]string, mountpointArgs []string) (*MountCredentials, error) {
+func (c *CredentialProvider) Provide(ctx context.Context, volumeID string, volumeCtx map[string]string, mountpointArgs []string, secrets map[string]string) (*MountCredentials, error) {
 	if volumeCtx == nil {
 		return nil, status.Error(codes.InvalidArgument, "Missing volume context")
 	}
@@ -93,6 +101,8 @@ func (c *CredentialProvider) Provide(ctx context.Context, volumeID string, volum
 	switch authenticationSource {
 	case AuthenticationSourcePod:
 		return c.provideFromPod(ctx, volumeID, volumeCtx, mountpointArgs)
+	case AuthenticationSourcePersistentVolume:
+		return c.provideFromPersistentVolume(secrets)
 	case AuthenticationSourceUnspecified, AuthenticationSourceDriver:
 		return c.provideFromDriver()
 	default:
@@ -119,6 +129,25 @@ func (c *CredentialProvider) provideFromDriver() (*MountCredentials, error) {
 	}, nil
 }
 
+// provideFromPersistentVolume parses credentials from the persistent volume's secret map.
+// If the required keys are not present, it returns an error.
+func (c *CredentialProvider) provideFromPersistentVolume(secrets map[string]string) (*MountCredentials, error) {
+	klog.V(4).Infof("NodePublishVolume: Using secrets from persistent volume secret map")
+	accessKeyID, keyIDPresent := secrets[keyId]
+	secretKey, secretKeyPresent := secrets[secretAccessKey] // local name avoids shadowing the secretAccessKey constant
+	if !keyIDPresent || !secretKeyPresent {
+		return nil, fmt.Errorf("missing access key or secret access key in persistent volume secret map")
+	}
+	region := os.Getenv(regionEnv)
+
+	return &MountCredentials{
+		AuthenticationSource: AuthenticationSourcePersistentVolume,
+		AccessKeyID:          accessKeyID,
+		SecretAccessKey:      secretKey,
+		Region:               region,
+	}, nil
+}
+
 func (c *CredentialProvider) provideFromPod(ctx context.Context, volumeID string, volumeCtx map[string]string, mountpointArgs []string) (*MountCredentials, error) {
 	klog.V(4).Infof("NodePublishVolume: Using pod identity")
 
diff --git a/pkg/driver/node/node.go b/pkg/driver/node/node.go
index b26670e..8f3ed4e 100644
--- a/pkg/driver/node/node.go
+++ b/pkg/driver/node/node.go
@@ -140,7 +140,7 @@ func (ns *S3NodeServer) NodePublishVolume(ctx context.Context, req *csi.NodePubl
 		mountpointArgs = compileMountOptions(mountpointArgs, mountFlags)
 	}
 
-	credentials, err := ns.credentialProvider.Provide(ctx, req.VolumeId, req.VolumeContext, mountpointArgs)
+	credentials, err := ns.credentialProvider.Provide(ctx, req.VolumeId, req.VolumeContext, mountpointArgs, req.GetSecrets())
 	if err != nil {
 		klog.Errorf("NodePublishVolume: failed to provide credentials: %v", err)
 		return nil, err
```
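A unit-test sketch for the new code path could look like the following. It assumes the test lives in the same package (so the unexported method is reachable) and that a zero-value CredentialProvider suffices, since the method does not touch any of the provider's fields:

```go
package mounter // assumed package name for pkg/driver/node/mounter

import "testing"

func TestProvideFromPersistentVolume(t *testing.T) {
	var c CredentialProvider // zero value is fine: the method only reads the secrets map

	creds, err := c.provideFromPersistentVolume(map[string]string{
		"key_id":     "AKIAEXAMPLE",
		"access_key": "exampleSecret",
	})
	if err != nil {
		t.Fatalf("expected credentials, got error: %v", err)
	}
	if creds.AccessKeyID != "AKIAEXAMPLE" || creds.SecretAccessKey != "exampleSecret" {
		t.Fatalf("unexpected credentials: %+v", creds)
	}

	// Missing either key should be rejected.
	if _, err := c.provideFromPersistentVolume(map[string]string{"key_id": "AKIAEXAMPLE"}); err == nil {
		t.Fatal("expected an error when access_key is missing")
	}
}
```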
