[plugin-chart-test] Improve Helm Test Cleanup and Logging #589
@richardtief and I aligned on introducing a separate status field in the Plugin for the Helm chart test results:

```yaml
status:
  helmChartTestSucceeded:
    lastTransitionTime: "2025-02-13T21:28:47Z"
    status: "False"
    conditions:
    - message: Verify that there is a service named perses
      succeeded: true
    - message: Verify that there is a configmap named perses
      succeeded: true
    - message: Verify failure deployment and running status of the perses pod
      succeeded: false
```

@IvoGoman Do you have any comments on this structure?
I am not sure about adding an additional field to the Plugin status. We already have this condition as part of the existing StatusConditions, and I would prefer reusing it. Ideally, the message could contain a comma-separated list of the failed test cases. This would fit in with the current dashboard and also make the condition easier to parse, as it only lists the failed tests. The nested structure requires reading through all conditions and looking for the one(s) that failed. Depending on the number of tests, this list could also become quite long, and for debugging only the failed ones are relevant.
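A flattened variant of that condition might look like the following sketch (the `HelmChartTestSucceeded` type name and the message wording are illustrative assumptions, not the implemented format):

```yaml
status:
  statusConditions:
    conditions:
    - type: HelmChartTestSucceeded   # assumed condition type name
      status: "False"
      lastTransitionTime: "2025-02-13T21:28:47Z"
      # Comma-separated list containing only the failed test cases:
      message: "Verify failure deployment and running status of the perses pod"
```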
I have created a draft PR and successfully retrieved the logs of the failed test pod. Below is an example of the captured logs from the controller:

```json
{
  "controller": "plugin",
  "controllerGroup": "greenhouse.sap",
  "controllerKind": "Plugin",
  "Plugin": {
    "name": "prometheus",
    "namespace": "demo"
  },
  "namespace": "demo",
  "name": "prometheus",
  "reconcileID": "4bb73e73-9ba7-47e1-ab96-1d1dc5e5dd58",
  "logs": "POD LOGS: prometheus-test\n1..9\nnot ok 1 Verify successful deployment and running status of the prometheus-operator pod\n# (from function `verify' in file /usr/lib/bats/bats-detik/detik.bash, line 193,\n# in test file /tests/run.sh, line 10)\n# `verify \"there is 1 deployment named 'prometheus'\"' failed with status 3\n# Valid expression. Verification in progress...\n# Found 0 deployment named prometheus (instead of 1 expected).\nok 2 Verify successful deployment and running status of the prometheus-prometheus service and pod\nok 3 Verify successful deployment and running status of the prometheus-node-exporter pod\nok 4 Verify successful deployment and running status of the prometheus-kube-state-metrics pod\nok 5 Verify successful creation and bound status of prometheus persistent volume claims\nok 6 Verify successful creation and available replicas of prometheus Prometheus resource\nok 7 Verify creation of the prometheus-prometheus statefulset\nok 8 Verify creation of the prometheus-pod-sd PodMonitor\nok 9 Verify creation of required custom resource definitions (CRDs) for prometheus\n\n",
  "error": "test failed"
}
```

One more suggestion from my side: a mix of
Priority
None
Is your feature request related to a problem?
Currently, when a Helm chart test fails, operators must run `helm test` in remote clusters to debug test failures, leading to delays in troubleshooting.

Acceptance Criteria
- Test Pod Logs in Plugin CR Status Field: logs from failed test pods are captured in the `status` field of the plugin custom resource (CR).
- Resource Cleanup
- Update Documentation
Additional context
No response
Reference Issues
Depends on cloudoperators/greenhouse#585