
docs(MADR): meshexternalservice routed through the specific zone #11853

Open · wants to merge 5 commits into master from madr-mes-through-zone

Conversation

@lukidzi (Contributor) commented Oct 24, 2024

Motivation

There is a use case where a service is not part of the mesh and is only accessible or resolvable within a single Datacenter (DC). Without the proposed functionality, there is no way to expose this service to other zones.
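For illustration, a minimal sketch of such a resource, following the existing MeshExternalService shape; the name, hostname, and ports are placeholders for a service that only resolves inside a single DC:

```yaml
apiVersion: kuma.io/v1alpha1
kind: MeshExternalService
metadata:
  name: dc1-database            # placeholder name
  namespace: kuma-system
  labels:
    kuma.io/mesh: default
spec:
  match:
    type: HostnameGenerator
    port: 5432
    protocol: tcp
  endpoints:
    - address: db.dc1.internal  # placeholder; resolvable only within this DC
      port: 5432
```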

Implementation information

MADR https://docs.google.com/document/d/12YrUy-kV3JZu9K4tSv6V6q4Dx8v7mp41_SdxJ0kQ_tc/edit?tab=t.0#heading=h.n6cmlf1eel2z

Supporting documentation

part of #11071

@lukidzi lukidzi added the ci/skip-test (PR: Don't run unit and e2e tests — maybe this is just a doc change) and ci/skip-e2e-test (PR: Don't run e2e tests) labels Oct 24, 2024
@lukidzi lukidzi requested a review from a team as a code owner October 24, 2024 20:38
@lukidzi lukidzi requested review from jijiechen and Automaat and removed request for a team October 24, 2024 20:38
@lukidzi lukidzi changed the title docs(madr): meshexternalservice routed through the specific zone docs(MADR): meshexternalservice routed through the specific zone Oct 25, 2024
@jijiechen (Member) commented

I would prefer creating the resource on the global CP with a label.

When we ask the user to create a resource in the zone, it implies the resource is "owned/maintained" by the zone. But that is not the case here.

Zones are physically located in different clusters/places, so they may have very different permissions on the underlying Kubernetes/VM cluster. However, an external service that is only accessible through a zone does not belong to that zone. When the mesh admin/operator decides to open this accessibility to other zones in the mesh, the responsibility of managing/maintaining the relationship with the external service should be shared by the whole mesh. So the MeshExternalService should be shown/managed through the global CP.
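To illustrate, the resource applied on the global CP could look roughly like this; the `kuma.io/proxy-zone` label name is only a placeholder to show the idea, not a settled API:

```yaml
apiVersion: kuma.io/v1alpha1
kind: MeshExternalService
metadata:
  name: dc1-database            # placeholder name, applied on the global CP
  namespace: kuma-system
  labels:
    kuma.io/mesh: default
    kuma.io/proxy-zone: zone-1  # hypothetical label: the zone that can reach the service
spec:
  match:
    type: HostnameGenerator
    port: 5432
    protocol: tcp
  endpoints:
    - address: db.dc1.internal  # placeholder; resolvable only from zone-1
      port: 5432
```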

Let me raise an example scenario:

The external service fails because its owner changed its IP address, and the consumers within the mesh have just discovered this. They want to correct the address in the MeshExternalService defined in the mesh.

In this case, I think they should go to the global CP admin instead of the zone admin.

It's slightly different from the MeshService case. MeshServices are normally generated automatically and originate from the Service objects on a Kubernetes cluster.

So it's reasonable for MeshServices to be "owned" by zones, because:

  1. They originate from Services, which are owned by zones
  2. People don't need to manage/maintain MeshServices directly, so the operational cost is low

@lukidzi (Contributor, Author) commented Oct 25, 2024

That makes sense; I agree with most of it. I have another use case that could involve a dedicated database team maintaining a cluster independently. In this scenario, permissions for exposing services to other zones would be managed by the Mesh Operator, while the database team could handle the creation and maintenance of MeshExternalService resources. This setup would provide flexibility, allowing the database team to manage their resources without impacting broader mesh configuration.

@lukidzi lukidzi force-pushed the madr-mes-through-zone branch from 04589d2 to ba32b22 on October 25, 2024
@lukidzi (Contributor, Author) commented Oct 28, 2024

I am going to move it to the Google doc.

@lukidzi lukidzi marked this pull request as draft October 28, 2024 10:46
@lukidzi lukidzi marked this pull request as ready for review October 28, 2024 13:27
@slonka (Contributor) left a comment

@lukidzi I feel like there is nothing more to add from my side, but there are some discussions that are not closed. Should we set up a meeting and talk through them?

@jakubdyszkiewicz jakubdyszkiewicz removed their request for review January 9, 2025 14:02
github-actions bot commented Jan 9, 2025

Reviewer Checklist

🔍 Each of these sections needs to be checked by the reviewer of the PR 🔍:
If something doesn't apply, please check the box and add a justification if the reason is non-obvious.

  • Is the PR title satisfactory? Is this part of a larger feature and should be grouped using > Changelog?
  • PR description is clear and complete. It links to the relevant issue as well as docs and UI issues
  • This will not break child repos: it doesn't hardcode values (e.g. "kumahq" as an image registry)
  • IPv6 is taken into account (e.g. no string concatenation of host and port)
  • Tests (unit tests, E2E tests, manual tests on universal and k8s)
    • Don't forget ci/ labels to run additional/fewer tests
  • Does this contain a change that needs to be communicated to users? In that case, UPGRADE.md should be updated.
  • Does it need to be backported according to the backporting policy? (This GH action will add the "backport" label based on these file globs; to prevent it from adding the "backport" label, use the no-backport-autolabel label.)
