Add namespace declaration to manifests #140

Open
wants to merge 1 commit into main

Conversation

mustdiechik

What this PR does / why we need it:

  • follows best practice
  • prevents ArgoCD errors like "Namespace for $name networking.k8s.io/v1, Kind=Ingress is missing." (see the illustrative sketch below)
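
For illustration only (the resource kind and name helper here are generic placeholders, not the actual diff), the proposed change amounts to one extra line in each template's metadata:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-web        # placeholder; the chart's real name helpers apply
  # Renders the namespace chosen at install time (helm --namespace, or the
  # ArgoCD destination namespace) instead of leaving metadata.namespace unset:
  namespace: {{ .Release.Namespace }}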

@fibbs
Contributor

fibbs commented Dec 26, 2024

Hey, I do not agree that this is good practice. Moreover, my understanding of how Helm handles the rendered manifest's metadata.namespace is as follows:

• When you deploy a Helm chart with helm install or helm upgrade, you can specify a namespace using the --namespace flag.
• If you do not specify the --namespace flag, Helm defaults to the default namespace unless overridden in the chart configuration (see the example below).
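
As an illustration of this behaviour (release and namespace names are invented, and it assumes the chart repository has been added as zabbix-community):

# helm install zabbix zabbix-community/zabbix --namespace monitoring --create-namespace
# records "monitoring" as the release namespace and creates the objects there, even
# though the rendered manifests themselves contain no metadata.namespace field.
# Without --namespace, the release goes to the current context's namespace, usually "default".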

The error you are seeing must originate from something else in your ArgoCD setup. I am not an ArgoCD expert, as I work with Flux and Fleet on a daily basis, but I have deployed several Helm charts with ArgoCD (charts developed specifically for a customer application, but also an older version of this one), and neither my own charts nor this Zabbix chart ever needed the changes you are suggesting in this PR.

@fibbs
Contributor

fibbs commented Dec 26, 2024

@aeciopires your thoughts?

@aeciopires
Member

Hi, @fibbs!

I think @mustdiechik is right. The PR does not hard-code a namespace name; the chart continues to behave as you described, only the name of the chosen namespace is now rendered into the YAML.

I'm going to test it here using kind and ArgoCD + GitHub + Helm to validate whether this is needed, but at first glance it seems to make sense.

I looked at the Prometheus Helm chart (which I believe is also widely used with ArgoCD), and its template files do contain this namespace line.

References:

@aeciopires
Member

Thank you @mustdiechik for your contribution.

Can you please remove the changes in the Chart.yaml file?

Only maintainers should change this file, and only right before cutting a new Helm chart release... We recently changed the process to allow changes and pull requests to accumulate in the same version.

Changes to the Chart.yaml and README.md files in different PRs were causing a lot of merge conflicts in the Git repository.

See the new process in CONTRIBUTING.md.

@fibbs
Contributor

fibbs commented Dec 30, 2024

I am still not convinced this is a good idea. I have read in several places that it is "bad practice" to define the manifest's namespace within the templates and that it should not be necessary, as Helm takes care of that by itself.
I myself have several deployments running in ArgoCD where the namespace is not explicitly set in any of the templates. Could I see the ArgoCD Application definition you are using, @mustdiechik?

@fibbs
Contributor

fibbs commented Dec 30, 2024

I have spun up a test cluster, installed ArgoCD, and will do some tests, also for #139.

@aeciopires
Member

Hi @fibbs.

It really is bad practice to hard-code the namespace value. You are right on that point.

But my understanding of the PR is that the value of the namespace field is dynamic: it takes whatever namespace Helm or ArgoCD uses at installation time.

I ran Helm with dry-run manually, and Helm does not actually render the namespace in the resulting YAML files... I think that is what ArgoCD is complaining about...
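
For reference, an illustrative sketch of that check (names invented, not the exact output):

# helm template zabbix zabbix-community/zabbix --namespace monitoring
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  # only name and labels are rendered; there is no namespace key here,
  # which seems to be what ArgoCD complains about
  name: zabbix-zabbix-web
[...]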

I am finishing configuring ArgoCD and I will test it with and without the suggested change... then we will evaluate it with more information.

@fibbs
Contributor

fibbs commented Dec 30, 2024

(Screenshot, 2024-12-30 17:10:56)

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  creationTimestamp: "2024-12-30T15:33:01Z"
  generation: 49
  name: zabbix1
  namespace: argocd
  resourceVersion: "1631027"
  uid: 0e313ecc-d9b5-47c8-8938-7dfeecf8768f
spec:
  destination:
    namespace: zabbix1
    server: https://kubernetes.default.svc
  project: default
  source:
    chart: zabbix
    helm:
      parameters:
      - name: ingress.enabled
        value: "true"
      - name: ingress.hosts[0].host
        value: zabbix1.local
      - name: ingress.hosts[0].paths[0].paths
        value: /
      - name: ingress.hosts[0].paths[0].pathType
        value: ImplementationSpecific
      - name: zabbixServer.zabbixServerHA.enabled
        value: "false"
    repoURL: https://zabbix-community.github.io/helm-zabbix
    targetRevision: 6.1.2

@aeciopires
Member

@fibbs I had the same result.

My ArgoCD application.yaml is:

# References:
#
# https://blog.argoproj.io/draft-argo-cd-v2-6-release-candidate-ced1853bbfdb
# https://github.com/argoproj/argo-cd/issues/2531
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: zabbix
  namespace: argocd
  annotations:
    # https://argo-cd.readthedocs.io/en/stable/user-guide/sync-waves/
    argocd.argoproj.io/sync-wave: "6"
  finalizers:
    # https://argo-cd.readthedocs.io/en/stable/user-guide/app_deletion/
    - resources-finalizer.argocd.argoproj.io
# https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: monitoring
  # https://argo-cd.readthedocs.io/en/release-2.6/user-guide/projects/
  project: default
  # https://argo-cd.readthedocs.io/en/release-2.6/user-guide/multiple_sources/
  sources:
  - repoURL: https://zabbix-community.github.io/helm-zabbix
    chart: zabbix
    targetRevision: 6.1.2
    helm:
      valueFiles:
      - $values/argocd/zabbix/zabbix_values.yaml
  - repoURL: https://gitlab.com/aeciopires/repo-test.git
    # Branch or tag name
    targetRevision: main
    ref: values
  # https://argo-cd.readthedocs.io/en/stable/user-guide/auto_sync/
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    # https://argo-cd.readthedocs.io/en/stable/user-guide/sync-options/
    syncOptions:
    - CreateNamespace=true

I created a Git repository for the values. The main difference is:

I didn't use ingress. I tested with MetalLB using the LoadBalancer service type:

[...]

zabbixWeb:
  enabled: true
  ZBX_SERVER_HOST: zabbix-zabbix-server
  ZBX_SERVER_PORT: 10051
  service:
    # Using MetalLB
    type: LoadBalancer
    externalTrafficPolicy: Cluster
    nodePort: 31080
    port: 80
[...]

ingress:
  enabled: false
[...]

@mustdiechik can you share more information about your installation?

@mustdiechik
Author

spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: monitoring

Since you have set "namespace" here, you will not see the ArgoCD error, because ArgoCD checks against this namespace.
In my common approach, I did not set the namespace, because in one release I install manifests into several namespaces: zbx, network policies, etc.

This PR does not break your standard machinery, but it makes the chart usable in non-standard approaches (CI/CD).
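
For illustration only (namespace names invented): a single release that spans namespaces only works cleanly when each manifest carries its own metadata.namespace, for example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: zabbix-web
  namespace: zbx                 # main workloads
[...]
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: zabbix-shared-config
  namespace: zbx-extras          # invented second namespace for auxiliary objects
[...]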

@aeciopires
Copy link
Member

Hi @mustdiechik!

Thanks for sharing more information.

We are currently facing an issue on the main branch regarding the deployment of Zabbix with postgresql.

To resolve this, @fibbs is putting a lot of effort into PR #148, which contains a breaking change but definitively solves the issue.

No further PRs will be merged until the new version with the breaking change and fix is released.

After that, we will review other PRs and issues again and release hotfix versions with the necessary fixes and features.

We apologize for the inconvenience and appreciate your understanding.

@aeciopires added the "enhancement" (New feature or request) and "good first issue" (Good for newcomers) labels on Jan 11, 2025