Stabilize scs-0214-v1
Signed-off-by: Matthias Büchse <[email protected]>
mbuechse committed Nov 20, 2024
1 parent 3274fff commit e827574
Showing 4 changed files with 5 additions and 18 deletions.
13 changes: 3 additions & 10 deletions Standards/scs-0214-v2-k8s-node-distribution.md
@@ -1,7 +1,8 @@
 ---
 title: Kubernetes Node Distribution and Availability
 type: Standard
-status: Draft
+status: Stable
+stabilized_at: 2024-11-21
 replaces: scs-0214-v1-k8s-node-distribution.md
 track: KaaS
 ---
@@ -100,18 +101,10 @@ These labels MUST be kept up to date with the current state of the deployment.
 The field gets autopopulated most of the time by either the kubelet or external mechanisms
 like the cloud controller.

-- `topology.scs.community/host-id`
-
-  This is an SCS-specific label; it MUST contain the hostID of the physical machine running
-  the hypervisor (NOT: the hostID of a virtual machine). Here, the hostID is an arbitrary identifier,
-  which need not contain the actual hostname, but it should nonetheless be unique to the host.
-  This helps identify the distribution over underlying physical machines,
-  which would be masked if VM hostIDs were used.
-
 ## Conformance Tests

 The script `k8s-node-distribution-check.py` checks the nodes available with a user-provided
-kubeconfig file. Based on the labels `topology.scs.community/host-id`,
+kubeconfig file. Based on the labels
 `topology.kubernetes.io/zone`, `topology.kubernetes.io/region` and `node-role.kubernetes.io/control-plane`,
 the script then determines whether the nodes are distributed according to this standard.
 If this isn't the case, the script produces an error.
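The check described in this hunk can be sketched roughly as follows. This is a hypothetical simplification, not the actual `k8s-node-distribution-check.py`; the function name, sample node data, and the pass criterion are assumptions for illustration only:

```python
# Hypothetical sketch of a label-based distribution check (NOT the real
# k8s-node-distribution-check.py, which reads nodes via a kubeconfig file).

CONTROL_PLANE = "node-role.kubernetes.io/control-plane"
ZONE = "topology.kubernetes.io/zone"


def control_plane_distributed(nodes):
    """Return True if the control-plane nodes span at least two zones.

    `nodes` is a list of label dicts, as one might extract from the
    `metadata.labels` field of each Node object.
    """
    zones = {n[ZONE] for n in nodes if CONTROL_PLANE in n and ZONE in n}
    return len(zones) >= 2


nodes = [
    {ZONE: "az-1", CONTROL_PLANE: ""},
    {ZONE: "az-2", CONTROL_PLANE: ""},
    {ZONE: "az-1"},  # worker node: no control-plane label, not counted
]
print(control_plane_distributed(nodes))  # True: control plane spans az-1 and az-2
```

The real script additionally considers `topology.kubernetes.io/region` and reports an error rather than returning a boolean.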
@@ -16,11 +16,7 @@ Worker nodes can also be distributed over "failure zones", but this isn't a requirement
 Distribution must be shown through labelling, so that users can access these information.

 Node distribution metadata is provided through the usage of the labels
-`topology.kubernetes.io/region`, `topology.kubernetes.io/zone` and
-`topology.scs.community/host-id` respectively.
-
-At the moment, not all labels are set automatically by most K8s cluster utilities, which incurs
-additional setup and maintenance costs.
+`topology.kubernetes.io/region` and `topology.kubernetes.io/zone`.

 ## Automated tests

2 changes: 1 addition & 1 deletion Tests/kaas/k8s-node-distribution/check_nodes_test.py
@@ -52,7 +52,7 @@ def test_no_distribution(yaml_key, caplog, load_testdata):
     assert record.levelname == "ERROR"


-def test_missing_label(caplog, load_testdata):
+def notest_missing_label(caplog, load_testdata):
     data = load_testdata["missing-labels"]
     assert check_nodes(data.values()) == 2
     hostid_missing_records = [
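Renaming the function to `notest_missing_label` hides it from pytest's default `test_*` collection. An alternative sketch (assuming pytest; the reason string is illustrative) would keep the name and mark the test as skipped, so it still shows up in the test report:

```python
import pytest


# Sketch only: marking the test skipped instead of renaming it keeps it
# visible in pytest's summary as "1 skipped" rather than silently dropped.
@pytest.mark.skip(reason="host-id label removed in scs-0214 v2")
def test_missing_label():
    # Body omitted; the skip mark prevents execution either way.
    raise AssertionError("never runs")
```

Renaming is the lighter-touch choice for a quick stabilization commit; a skip mark documents the intent in test output.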
@@ -22,7 +22,6 @@
 and does require these labels to be set, but should yield overall pretty
 good initial results.
-    topology.scs.openstack.org/host-id  # previously kubernetes.io/hostname
     topology.kubernetes.io/zone
     topology.kubernetes.io/region
     node-role.kubernetes.io/control-plane
@@ -47,7 +46,6 @@
 LABELS = (
     "topology.kubernetes.io/region",
     "topology.kubernetes.io/zone",
-    "topology.scs.community/host-id",
 )

 logger = logging.getLogger(__name__)
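After this change the `LABELS` tuple carries only the two standard topology labels. A rough sketch of how such a tuple might be used to filter a node's metadata (the function and sample data are illustrative, not the script's actual code):

```python
# Illustrative sketch: filter a Node object's labels down to the ones the
# distribution check cares about, driven by a LABELS tuple as in the script.

LABELS = (
    "topology.kubernetes.io/region",
    "topology.kubernetes.io/zone",
)


def relevant_labels(node):
    """Extract only the labels listed in LABELS from one node's metadata."""
    labels = node.get("metadata", {}).get("labels", {})
    return {key: value for key, value in labels.items() if key in LABELS}


node = {
    "metadata": {
        "labels": {
            "topology.kubernetes.io/region": "region-1",
            "topology.kubernetes.io/zone": "az-2",
            "kubernetes.io/hostname": "node-0",  # not in LABELS, dropped
        }
    }
}
print(relevant_labels(node))
```

Keeping the label names in one tuple means dropping `topology.scs.community/host-id` is a one-line change, as this commit shows.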
