Refactor Ceilometer services' status objects #561
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: paramite. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Removing required status fields that exist in older APIs and adding new required fields will break OLM updates. So we shouldn't be doing what is proposed in the CR, I think.
Unless I missed something, this is only removing required fields; it's not adding new required fields. Is removing required fields also an issue for OLM?
Build failed (check pipeline). Post https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/924588bcc2944382af315ff10227e06f ✔️ openstack-k8s-operators-content-provider SUCCESS in 2h 01m 30s
Ah, I missed the …
Build failed (check pipeline). Post https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/8d2b9ccb738140f292e671ccf1194121 ✔️ openstack-k8s-operators-content-provider SUCCESS in 2h 37m 11s
Some of the conditions in the ceilometer CR are in an "unknown" state, which is why the CI fails.
This patch moves status data about kube-state-metrics to CeilometerStatus, because currently KSMStatus does not update properly, and if helper.PatchInstance is made to update KSMStatus, the Ceilometer object ends up in a never-ending reconciliation loop.
CeilometerStatus was created for the sake of consistency with KSMStatus, which is now deprecated and unused because of the functionality issues. Keeping the name for Ceilometer does not make sense any more.
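The loop described above can be sketched conceptually. This is illustrative Go, not the operator's actual reconcile code (the function name `reconcilePasses` and the loop model are made up for the sketch): when every reconcile pass unconditionally patches a status object, the write produces a new watch event, which queues another reconcile, and so on. Only patching when the status actually changed lets the work queue drain.

```go
package main

import "fmt"

// reconcilePasses simulates a reconcile loop. With checkBeforeWrite=false
// the status is patched on every pass, and each patch re-triggers the
// watch, so the loop never settles (capped at 10 passes here). With
// checkBeforeWrite=true the patch is skipped once the status matches,
// so the queue drains after two passes.
func reconcilePasses(checkBeforeWrite bool) int {
	status, events, passes := "", 1, 0
	for events > 0 && passes < 10 {
		events--
		passes++
		if !checkBeforeWrite || status != "ready" {
			status = "ready" // status write...
			events++         // ...generates another watch event
		}
	}
	return passes
}

func main() {
	fmt.Println(reconcilePasses(false), reconcilePasses(true)) // 10 2
}
```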
Build failed (check pipeline). Post https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/76fac7385261481d9b28677828fed632 ✔️ openstack-k8s-operators-content-provider SUCCESS in 2h 06m 03s
recheck
Could not find any reason related to this patch, and from [1] it seems that the deployments succeeded.
otherwise this lgtm
api/v1beta1/ceilometer_types.go (outdated)

```go
}

// KSMStatus defines the observed state of kube-state-metrics
// NOTE(mmagr): remove with API version increment
```
This comment will show in a description of the ksmStatus, for example when executing `oc explain ceilometer`. Not sure if that was intentional. An empty line between the 2 comments should hide the top line from users.
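The reviewer's point can be demonstrated with the standard library: in Go, only the comment block immediately above a declaration (no blank line in between) is its doc comment, and tools like controller-gen copy that doc comment into the CRD field description. The snippet below is a self-contained illustration, not the PR's actual file; it parses a small source string and prints the doc comment that remains attached after the blank line is inserted.

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// A blank line separates the NOTE from the declaration, so the NOTE is a
// free-floating comment and no longer part of the type's doc comment.
const src = `package api

// NOTE(mmagr): remove with API version increment

// KSMStatus defines the observed state of kube-state-metrics
type KSMStatus struct{}
`

// docText returns the doc comment attached to the first documented
// declaration in the given source.
func docText(source string) string {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "types.go", source, parser.ParseComments)
	if err != nil {
		panic(err)
	}
	for _, d := range f.Decls {
		if g, ok := d.(*ast.GenDecl); ok && g.Doc != nil {
			return g.Doc.Text()
		}
	}
	return ""
}

func main() {
	// Prints only the "KSMStatus defines ..." line; the NOTE is hidden.
	fmt.Print(docText(src))
}
```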
Ah, good catch. That is certainly not the intention. Will fix.
The issue is with operators restarting. This is happening everywhere across the CI, downstream and upstream. I tried to point out the issue to people, but I didn't have much luck; I guess I just don't know who to talk to, and personally I don't know what to do with it. IMO it's not good that the operators are restarting, but we're the only ones actually testing it.
Build failed (check pipeline). Post https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/00f1a0a2af014bfaa2e3c1ecea719792 ❌ openstack-k8s-operators-content-provider FAILURE in 14m 56s
recheck
Build failed (check pipeline). Post https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/5b03e81ea6ac48328e17c03b08c2cbd3 ✔️ openstack-k8s-operators-content-provider SUCCESS in 2h 05m 41s
recheck
/retest
/test telemetry-operator-build-deploy-kuttl
Seems like the image pull is failing in the kuttl job: 13m Warning FailedToRetrieveImagePullSecret pod/keystone8594-account-delete-fg842 Unable to retrieve some image pull secrets (galera-openstack-dockercfg-8b97z); attempting to pull the image may not succeed.
/test telemetry-operator-build-deploy-kuttl
/lgtm
This patch joins the status objects, uses prefixed conditions instead, and splits the hash object per service.
This change is required as it fixes the malfunctioning KSMStatus and adds the missing hashing of KSM configuration.
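The shape of the joined status can be sketched as follows. This is a hypothetical illustration, not the PR's actual API: the field names, condition strings, and hash keys below are assumptions made for the example. The point is that kube-state-metrics data now lives inside CeilometerStatus, condition types are prefixed per service, and each service owns its own hash map.

```go
package main

import "fmt"

// CeilometerStatus is an illustrative sketch of the merged status object
// (field and condition names are made up, not copied from the PR).
type CeilometerStatus struct {
	Conditions       []string          // prefixed per service, e.g. "CeilometerReady", "KSMReady"
	CeilometerHashes map[string]string // config hashes for ceilometer itself
	KSMHashes        map[string]string // config hashes for kube-state-metrics
}

func main() {
	s := CeilometerStatus{
		Conditions:       []string{"CeilometerReady", "KSMReady"},
		CeilometerHashes: map[string]string{"config-data": "abc123"},
		KSMHashes:        map[string]string{"config-data": "def456"},
	}
	// A single status object means one helper.PatchInstance call covers
	// both services, instead of two writers fighting over two statuses.
	fmt.Println(len(s.Conditions), len(s.CeilometerHashes), len(s.KSMHashes)) // 2 1 1
}
```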