Adding Networkpolicy to ClusterSets #125
Conversation
Adds types for cluster sets, which allow constraining a few elements of clusters, including overall resource usage and which nodes they can use.
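As a rough sketch of the shape these types take (only MaxLimits is confirmed by the reconciler diff below, where it is copied into a ResourceQuota; the package name and the node-restriction fields are not shown in this conversation, so they are omitted or assumed here):

```go
package v1alpha1 // package name is an assumption

import v1 "k8s.io/api/core/v1"

// ClusterSetSpec sketches the new spec. MaxLimits appears in the reconciler
// below, where it becomes a ResourceQuota's Spec.Hard, so it must be a
// v1.ResourceList.
type ClusterSetSpec struct {
	// MaxLimits caps aggregate resource usage for clusters in the set.
	MaxLimits v1.ResourceList `json:"maxLimits,omitempty"`
}
```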
Signed-off-by: galal-hussein <[email protected]>
charts/k3k/values.yaml (Outdated)

@@ -11,6 +11,10 @@ imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

host:
  # clusterCIDR specifies the clusterCIDR that will be added to the default networkpolicy for clustersets
Might be good to comment/define the behavior for cases where people don't supply the clusterCIDR but do try to enable network policies (i.e. do we then try to extract information from the available nodes)?
Also I think it would be good to provide more information to help people connect this to the specific values in the k8s distribution that they are using. Meaning, this is a pod CIDR - so it would be good to state that specifically to make it easier for people to look it up on the cluster.
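On the first point, a minimal sketch of what a fallback could look like when host.clusterCIDR is left empty: list the host cluster's nodes and take the pod CIDR from their spec. lookupClusterCIDR is a hypothetical helper, not code from this PR, and it assumes the host's controller manager allocates node CIDRs:

```go
package clusterset

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// lookupClusterCIDR derives a cluster (pod) CIDR from the host's nodes when
// no explicit value was configured.
func lookupClusterCIDR(ctx context.Context, c client.Client) (string, error) {
	var nodes v1.NodeList
	if err := c.List(ctx, &nodes); err != nil {
		return "", fmt.Errorf("unable to list nodes: %w", err)
	}
	for _, node := range nodes.Items {
		// Spec.PodCIDR is only populated when node CIDR allocation is
		// enabled, so it can legitimately be empty.
		if node.Spec.PodCIDR != "" {
			return node.Spec.PodCIDR, nil
		}
	}
	return "", fmt.Errorf("no node reports a pod CIDR; set host.clusterCIDR explicitly")
}
```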
main.go (Outdated)

klog.Info("adding networkpolicy node controller")
if err := clusterset.AddNodeController(ctx, mgr, clusterCIDR); err != nil {
	klog.Fatalf("Failed to add the clusterset controller: %v", err)
klog.Fatalf("Failed to add the clusterset controller: %v", err) | |
klog.Fatalf("Failed to add the clusterset node controller: %v", err) |
main.go (Outdated)

if clusterCIDR == "" {
	clusterCIDR = os.Getenv(clusterCIDREnvVar)
}
The CLI uses urfave/cli to read values from either the env var or the CLI flag. This part of the code doesn't do that, and because of that it has to parse both places manually. I think that we should do what the CLI does here, to unify the approaches and avoid duplicating the code.
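For reference, a sketch of the unified approach, assuming urfave/cli v2 (the flag and env var names here are illustrative, not necessarily the ones the CLI already defines):

```go
package main

import "github.com/urfave/cli/v2"

var clusterCIDR string

// Binding the flag to an env var lets urfave/cli handle precedence
// (an explicit flag wins over the environment), so no manual os.Getenv
// fallback is needed.
var clusterCIDRFlag = &cli.StringFlag{
	Name:        "cluster-cidr",
	Usage:       "cluster CIDR added to the default clusterset networkpolicy",
	EnvVars:     []string{"CLUSTER_CIDR"},
	Destination: &clusterCIDR,
}
```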
}

func (c *ClusterSetReconciler) Reconcile(ctx context.Context, req reconcile.Request) (reconcile.Result, error) {
	klog.Infof("%#v", req)
Nit: I think that we can remove this debug statement now.
	return reconcile.Result{}, fmt.Errorf("unable to get the clusterset: %w", err)
}

klog.Infof("got a clusterset: %v", clusterSet)
Nit: I think that we can remove this debug statement now.
if clusterSet.Spec.MaxLimits != nil {
	quota := v1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{
			Name: "clusterset-quota",
Right now there's no limit on the number of clustersets that a user can create in a given namespace. However, each clusterset will try to create a ResourceQuota with the same name, which could cause problems.
		},
	}
	quota.Spec.Hard = clusterSet.Spec.MaxLimits
	if err := c.Client.Create(ctx, &quota); err != nil {
I think that this needs to handle the case where the quota already exists but we want to update the limits (i.e. check for conflict, and on conflict, update). Do see the above note on the name though: if we don't make the name exclusive to one clusterset, that could cause problems. A sketch combining both fixes is below.
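The sketch uses controller-runtime's controllerutil.CreateOrUpdate so an existing quota gets its limits updated instead of the Create failing; ensureQuota and the per-clusterset name scheme are assumptions, not the PR's code:

```go
package clusterset

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// ensureQuota creates or updates the clusterset's quota. Deriving the name
// from the clusterset keeps multiple clustersets in one namespace from
// fighting over a single "clusterset-quota" object.
func ensureQuota(ctx context.Context, c client.Client, setName, namespace string, maxLimits v1.ResourceList) error {
	quota := v1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{
			Name:      fmt.Sprintf("clusterset-quota-%s", setName),
			Namespace: namespace,
		},
	}
	// CreateOrUpdate fetches the quota, creates it if missing, and otherwise
	// runs the mutate function and issues an Update, so changed MaxLimits
	// propagate instead of failing with an AlreadyExists error.
	_, err := controllerutil.CreateOrUpdate(ctx, c, &quota, func() error {
		quota.Spec.Hard = maxLimits
		return nil
	})
	return err
}
```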
TypeMeta: metav1.TypeMeta{
	Kind:       "NetworkPolicy",
	APIVersion: "networking.k8s.io/v1",
},
Nit: Is this necessary?
I copied that convention from the wrangler controller code; I'm not really sure if it's different with the controller-runtime client, but in wrangler they always write the GVK on the resources they create programmatically.
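For what it's worth, the controller-runtime typed client resolves the GVK from the scheme the client was built with, so TypeMeta can be omitted on typed objects. A minimal sketch (the policy name here is an assumption):

```go
package clusterset

import (
	"context"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// createNetworkPolicy creates a typed NetworkPolicy without setting TypeMeta;
// the client looks the GVK up in its scheme.
func createNetworkPolicy(ctx context.Context, c client.Client, namespace string) error {
	np := networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "clusterset-networkpolicy",
			Namespace: namespace,
		},
	}
	return c.Create(ctx, &np)
}
```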
* Adding cluster set types

  Adds types for cluster sets, which allows constraining a few elements of clusters including: overall resource usage, and which nodes it can use.

* Add networkpolicy to clustersets
* Fix comments
* Fix linting issues
* fixing node controller logic and nit fixes
* more fixes
* fix main cli
* Comment the resource quota for clustersets

---------

Signed-off-by: galal-hussein <[email protected]>
Co-authored-by: Michael Bolot <[email protected]>
The PR:
Issue: