feat: use hive clusterdeployment for creating spoke clusters #472
Conversation
Force-pushed eda03ab to 11fb95a
Co-authored-by: Alejandro Villegas <[email protected]> Signed-off-by: Tomer Figenblat <[email protected]>
Force-pushed 30f7aae to f9bf1f7
Co-authored-by: Alejandro Villegas <[email protected]> Signed-off-by: Tomer Figenblat <[email protected]>
Force-pushed 277860f to 5b4e903
LGTM ;)
  name: {{ $deploymentName }}
  namespace: {{ $deploymentName }}
  labels:
    vendor: OpenShift
We should add the following here:

annotations:
  argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true

because if you do a deployment from scratch (i.e. one where ACM gets installed at the same time), the multiclusterhub (which provides these CRDs) has not been installed yet, so the ACM app gets stuck in Argo with:
The Kubernetes API could not find hive.openshift.io/ClusterDeployment for requested resource first-spoke-deployments/first-spoke-deployments. Make sure the "ClusterDeployment" CRD is installed on the destination cluster.
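For reference, this is roughly what the ClusterDeployment template's metadata block would look like with the suggested sync-option in place. A sketch only; the surrounding fields and exact placement are assumed, not the final implementation:

# Hypothetical sketch: ClusterDeployment metadata with the Argo CD sync-option,
# so Argo skips the dry-run while the hive.openshift.io CRDs are still missing.
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: {{ $deploymentName }}
  namespace: {{ $deploymentName }}
  labels:
    vendor: OpenShift
  annotations:
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true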
good to know - thanks!
Resolved in a073b8c.
---
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
Same as above with ClusterDeployment. We should probably add the annotation:

annotations:
  argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true

otherwise during an install from scratch we get:

The Kubernetes API could not find cluster.open-cluster-management.io/ManagedCluster for requested resource open-cluster-management/first-spoke-deployments. Make sure the "ManagedCluster" CRD is installed on the destination cluster.
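As a rough sketch, the ManagedCluster template could carry the same annotation. The name field shown here is an assumption, mirroring the ClusterDeployment snippet above:

# Hypothetical sketch: the same Argo CD sync-option on the ManagedCluster, so a
# from-scratch install does not fail the dry-run before the
# cluster.open-cluster-management.io CRDs exist.
---
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: {{ $deploymentName }}  # assumed; whichever name the template already uses
  annotations:
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true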
Resolved in a073b8c.
{{- $group := . }}

{{- range $group.clusterDeployments}}
{{ $cluster := . }}
One thing that we could add here (because I am an idiot and I got bitten by it) is the following:
--- a/common/acm/templates/provision/clusterdeployment.yaml
+++ b/common/acm/templates/provision/clusterdeployment.yaml
@@ -3,6 +3,12 @@
{{- range $group.clusterDeployments}}
{{ $cluster := . }}
+{{- if (eq $cluster.name nil) }}
+{{- fail (printf "managedClusterGroup clusterDeployment cluster name is empty: %s" $cluster) }}
+{{- end }}
+{{- if (eq $group.name nil) }}
+{{- fail (printf "managedClusterGroup clusterDeployment group name is empty: %s" $cluster) }}
+{{- end }}
{{- $deploymentName := print $cluster.name "-" $group.name }}
{{- $cloud := "None" }}
Reason was that I did not add the name attribute under the group, and when running make preview-acm I saw a bunch of <nil> in the object names, which surprised me a bit ;)
We can also do it on top later, just writing it here so I don't forget
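For concreteness, a hypothetical illustration of the symptom; the values layout and names here are made up, inferred from the template variables above:

# Hypothetical values snippet: with the group name omitted,
# {{ print $cluster.name "-" $group.name }} happily renders "first-spoke-<nil>"
# instead of failing, which is what the proposed nil checks catch early.
managedClusterGroups:
  - clusterDeployments:       # group "name" accidentally omitted
      - name: first-spoke
# rendered object name: first-spoke-<nil>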
ohh good catch! we'll add it here.
Resolved in a073b8c.
Should we do the same for clusterpool.yaml?
Yeah, let's do it later in another PR maybe
  name: {{ .name }}
spec:
  clusterSelector:
    selectorType: LegacyClusterSetLabel
Now my deployment is stuck on the following error:
ManagedClusterSet.cluster.open-cluster-management.io "deployments" is invalid: spec.clusterSelector.selectorType: Unsupported value: "LegacyClusterSetLabel": supported values: "ExclusiveClusterSetLabel", "LabelSelector"
I suspect this selectorType is only valid when we use clusterPools and not clusterDeployments?
I couldn't find any documentation on that one. I suspect it was deprecated for a while and removed in v1beta2. I think we accidentally modified the version from v1beta1 but didn't modify the spec. I'll look around for some docs.
@mbaldessari I did some investigation.
This issue describes the addition of v1beta2, deprecating LegacyClusterSetLabel (the previous default) and adding ExclusiveClusterSetLabel (the current default). Later on, v1beta1 was removed, which explains our issue.
Currently available options for SelectorType:
- ExclusiveClusterSetLabel, the default value, means that ManagedCluster resources will be selected based on the label cluster.open-cluster-management.io/clusterset: <ManagedClusterSet Name>.
- LabelSelector requires specifying a labelSelector spec key that defines a custom label for selection.
Considering both LegacyClusterSetLabel and ExclusiveClusterSetLabel are the default for their respective versions, we should delete this spec for backward compatibility.
What do you think?
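For illustration, a rough sketch of the two variants the current API supports; the resource names here are hypothetical:

# Variant 1: omit spec entirely and rely on the default ExclusiveClusterSetLabel
# selector, which matches ManagedClusters labeled
# cluster.open-cluster-management.io/clusterset: <ManagedClusterSet name>.
---
apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSet
metadata:
  name: deployments
# Variant 2: an explicit LabelSelector, picking clusters by an arbitrary label.
---
apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSet
metadata:
  name: deployments-by-label
spec:
  clusterSelector:
    selectorType: LabelSelector
    labelSelector:
      matchLabels:
        vendor: OpenShift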
Agree, it seems we should just drop this part for the time being
And thanks for the excellent investigation on this btw ;)
My pleasure!
Removed in c6ffd0e.
Force-pushed 186a25e to 656e213
…for clusterdeployments Signed-off-by: Tomer Figenblat <[email protected]>
Force-pushed 656e213 to a073b8c
@mbaldessari, can you please take a look at the failed workflow? Tests are passing on my end. Based on the failure message, I tried aligning the
Aye, we can worry about / fix this one up later on. We fixed a bug in there in the meantime; I suspect that is why the test is failing. I did try this PR + the drop of legacyClusterSelector and for some reason the additional cluster could not be spun up by ACM. I might have done something wrong. I'll poke you next week about it if I can't figure it out
Signed-off-by: Tomer Figenblat <[email protected]>
Force-pushed 8fd42b7 to c6ffd0e
Have a great weekend! :-)
Force-pushed 0471ed5 to 2ef395d
…er-namespace Co-authored-by: Michele Baldessari <[email protected]> Co-authored-by: Alejandro Villegas <[email protected]> Signed-off-by: Tomer Figenblat <[email protected]>
Force-pushed 2ef395d to 6cd4e85
@mbaldessari @r2dedios
Thanks!
Thanks @mbaldessari
Thank you @mbaldessari and @r2dedios!
- clusterDeployments for a group, used clusterPools as a template.
- clusterDeployments on the same multiclustergroup with clusterPools and on its own.

Example configuration:
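The example configuration itself did not survive extraction. What follows is only a hypothetical sketch of such a values entry, with all names, domains, and provider settings invented and the key layout inferred from the template variables discussed in the review; it is not the actual example from the PR:

# Hypothetical values sketch, NOT the original example from the PR description.
# Key names are inferred from $group.name, $group.clusterDeployments, and
# $cluster.name; provider settings are placeholders.
managedClusterGroups:
  - name: deployments
    clusterDeployments:
      - name: first-spoke          # renders as deployment name "first-spoke-deployments"
        baseDomain: example.com    # placeholder
        platform:
          aws:
            region: us-east-1      # placeholder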