From 7b2d1457146f221ae93f2971facb1f0d759ce087 Mon Sep 17 00:00:00 2001 From: Eirik A Date: Sun, 3 Apr 2022 09:10:54 +0100 Subject: [PATCH] Canonicalise more webpage docs + fix relative links (#868) * Canonicalise links for embedded website docs Signed-off-by: clux * same for security Signed-off-by: clux * move architecture to website Signed-off-by: clux * forgot to remove maintainers as well Signed-off-by: clux * move governance as well Signed-off-by: clux --- CHANGELOG.md | 2 +- CONTRIBUTING.md | 6 +- SECURITY.md | 2 +- architecture.md | 211 ------------------------------------------------ governance.md | 55 ------------- maintainers.md | 23 ------ 6 files changed, 5 insertions(+), 294 deletions(-) delete mode 100644 architecture.md delete mode 100644 governance.md delete mode 100644 maintainers.md diff --git a/CHANGELOG.md b/CHANGELOG.md index 42e54aa0b..5d22cd6bb 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -275,7 +275,7 @@ The following breaking changes were made as a part of an effort to refine errors * `kube::runtime::wait::conditions` added `is_crd_established` helper - [#659](https://github.com/kube-rs/kube-rs/issues/659) * `kube::CustomResource` derive can now take an arbitrary `#[kube(kube_core)]` path for `kube::core` - [#658](https://github.com/kube-rs/kube-rs/issues/658) * `kube::core` consistently re-exported across crates - * docs: major overhaul + [architecture.md](./architecture.md) - [#416](https://github.com/kube-rs/kube-rs/issues/416) via [#652](https://github.com/kube-rs/kube-rs/issues/652) + * docs: major overhaul + [architecture.md](https://kube.rs/architecture/) - [#416](https://github.com/kube-rs/kube-rs/issues/416) via [#652](https://github.com/kube-rs/kube-rs/issues/652) [0.61.0](https://github.com/kube-rs/kube-rs/releases/tag/0.61.0) / 2021-10-09 =================== diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index dd936540e..5d87a9e33 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -13,7 +13,7 @@ More information about `DCO` can be found [here](https://developercertificate.or All code that is contributed to kube-rs must go through the Pull Request (PR) process. To contribute a PR, fork this project, create a new branch, make changes on that branch, and then use GitHub to open a pull request with your changes. -Every PR must be reviewed by at least one [Maintainer](./maintainers.md) of the project. +Every PR must be reviewed by at least one [Maintainer](https://kube.rs/maintainers/) of the project. Once a PR has been marked "Approved" by a Maintainer (and no other Maintainer has an open "Rejected" vote), the PR may be merged. While it is fine for non-maintainers to contribute their own code reviews, those reviews do not satisfy the above requirement. @@ -75,7 +75,7 @@ All public interfaces should have doc tests with examples for [docs.rs](https:// When adding new non-trivial pieces of logic that results in a drop in coverage you should add a test. -Cross-reference with the coverage build [![coverage build](https://codecov.io/gh/kube-rs/kube-rs/branch/master/graph/badge.svg?token=9FCqEcyDTZ)](https://codecov.io/gh/kube-rs/kube-rs) and go to your branch. Coverage can also be run locally with [`cargo tarpaulin`](https://github.com/xd009642/tarpaulin) at project root. This will use our [tarpaulin.toml](./tarpaulin.toml) config, and **will run both unit and integration** tests. 
+Cross-reference with the coverage build [![coverage build](https://codecov.io/gh/kube-rs/kube-rs/branch/master/graph/badge.svg?token=9FCqEcyDTZ)](https://codecov.io/gh/kube-rs/kube-rs) and go to your branch. Coverage can also be run locally with [`cargo tarpaulin`](https://github.com/xd009642/tarpaulin) at project root. This will use our [tarpaulin.toml](https://github.com/kube-rs/kube-rs/blob/master/tarpaulin.toml) config, and **will run both unit and integration** tests.
 
 #### What type of test
 
@@ -95,7 +95,7 @@ In general: **use the least powerful method** of testing available to you:
 ## Support
 
 ### Documentation
-The [high-level architecture document](./architecture.md) is written for contributors.
+The [high-level architecture document](https://kube.rs/architecture/) is written for contributors.
 
 ### Contact
 You can ask general questions / share ideas / query the community at the [kube-rs discussions forum](https://github.com/kube-rs/kube-rs/discussions).
diff --git a/SECURITY.md b/SECURITY.md
index 93946c6a4..d9b09da42 100644
--- a/SECURITY.md
+++ b/SECURITY.md
@@ -9,7 +9,7 @@ Once `0.71.1` is released, we will no longer provide updates for `0.69` releases
 
 ## Reporting a Vulnerability
 
-To report a security problem in Kube-rs, please contact at least two [maintainers][./maintainers.md]
+To report a security problem in Kube-rs, please contact at least two [maintainers](https://kube.rs/maintainers/).
 
 These people will help diagnose the severity of the issue and determine how to address the issue.
 Issues deemed to be non-critical will be filed as GitHub issues.
diff --git a/architecture.md b/architecture.md
deleted file mode 100644
index ba8ab91d6..000000000
--- a/architecture.md
+++ /dev/null
@@ -1,211 +0,0 @@
-# Architecture
-This document describes the high-level architecture of kube-rs.
-
-This is intended for contributors or people interested in the architecture.
-
-## Overview
-The kube-rs repository contains 5 main crates, examples and tests.
-
-The main crate that users generally import is `kube`, and it's a straight facade crate that re-exports from the four other crates:
-
-- `kube_core` -> re-exported as `core`
-- `kube_client` -> re-exported as `api` + `client` + `config` + `discovery`
-- `kube_derive` -> re-exported as `CustomResource`
-- `kube_runtime` -> re-exported as `runtime`
-
-In terms of dependencies between these 4:
-
-- `kube_core` is used by `kube_runtime`, `kube_derive` and `kube_client`
-- `kube_client` is used by `kube_runtime`
-- `kube_runtime` is the highest level abstraction
-
-The extra indirection crate `kube` is there to avoid cyclic dependencies between the client and the runtime (if the client re-exported the runtime, then the two crates would be cyclically dependent).
-
-**NB**: We refer to these crates by their `crates.io` names, which use underscores as separators, but the folders use dashes as separators.
-
-When working on features/issues in `kube-rs` you will __generally__ work inside one of these crates at a time, so we will focus on them in isolation, and talk about possible overlaps at the end.
-
-## Kubernetes Ecosystem Considerations
-The Rust ecosystem does not exist in a vacuum, as we take heavy inspiration from the popular Go ecosystem.
-In particular:
-
-- `core` module contains invariants from [apimachinery](https://github.com/kubernetes/apimachinery) that are preserved across individual apis
-- `client::Client` is a re-envisioning of a generic [client-go](https://github.com/kubernetes/client-go)
-- `runtime::Controller` abstraction follows conventions in [controller-runtime](https://github.com/kubernetes-sigs/controller-runtime)
-- `derive::CustomResource` derive macro for [CRDs](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) is loosely inspired by [kubebuilder's annotations](https://book.kubebuilder.io/reference/generating-crd.html)
-
-We do occasionally diverge on matters where following the Go side would be worse for the Rust language, but when it comes to choosing names and deciding where some modules / functionality should reside, a precedent in `client-go`, `apimachinery`, `controller-runtime` and `kubebuilder` goes a long way.
-
-## Generated Structs
-We do not maintain the Kubernetes types generated from the `swagger.json` or the protos at the moment, and we do not handle client-side validation of fields relating to these types (that's left to the api-server).
-
-We generally use k8s-openapi's Rust bindings for Kubernetes' builtin types, see:
-
-- [github.com:k8s-openapi](https://github.com/Arnavion/k8s-openapi/)
-- [docs.rs:k8s-openapi](https://docs.rs/k8s-openapi/*/k8s_openapi/)
-
-We also maintain an experimental set of Protobuf bindings, see [k8s-pb](https://github.com/kazk/k8s-pb).
-
-## Crate Overviews
-### kube-core
-This crate only contains types relevant to the [Kubernetes API](https://kubernetes.io/docs/concepts/overview/kubernetes-api/), abstractions analogous to what you'll find inside [apimachinery](https://github.com/kubernetes/apimachinery/tree/master/pkg), and extra Rust traits that help us with generics further down in `kube-client`.
-
-Starting out with the basic type modules first:
-
-- `metadata`: the various metadata types; `ObjectMeta`, `ListMeta`, `TypeMeta`
-- `request` + `response` + `subresource`: a [sans-IO](https://sans-io.readthedocs.io/) style http interface for the API
-- `watch`: a generic enum and behaviour for the [watch api](https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes)
-- `params`: generic parameters passed to the sans-IO request interface (`ListParams` etc, called `ListOptions` in apimachinery)
-
-Then there are the traits:
-
-- `crd`: a versioned `CustomResourceExt` trait for `kube-derive`
-- `object`: generic conveniences for iterating over typed lists of objects, and objects following spec/status conventions
-- `resource`: a `Resource` trait for `kube-client`'s `Api` + a convenience `ResourceExt` trait for users
-
-The most important export here is the `Resource` trait and its impls. It is a pretty complex trait, with an associated type called `DynamicType` (which is empty by default). Every `ObjectMeta`-using type that comes from `k8s-openapi` gets a blanket impl of `Resource` so we can use them generically (in `kube_client::Api`).
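-
-As an illustration (not part of the original document), a minimal sketch of what the blanket impl buys us; the helper below is hypothetical and only assumes the `kube` facade re-exports of `Resource` and `ResourceExt` as they existed at the time of writing:
-
-```rust
-use kube::{Resource, ResourceExt};
-
-// Works for any `k8s-openapi` type (Pod, Deployment, ...) thanks to the blanket impl,
-// and for `DynamicObject` by passing its `ApiResource` as the `DynamicType`.
-fn describe<K: Resource>(obj: &K, dt: &K::DynamicType) -> String {
-    // `kind` comes from `Resource`, `name` from the `ResourceExt` convenience trait
-    format!("{} '{}'", K::kind(dt), obj.name())
-}
-
-// For statically known types the `DynamicType` is `()`, e.g. `describe(&pod, &())`.
-```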
-
-Finally, there are two modules used by the higher level `discovery` module (in `kube-client`) and they have similar counterparts in [apimachinery/restmapper](https://github.com/kubernetes/apimachinery/blob/master/pkg/api/meta/restmapper.go) + [apimachinery/group_version](https://github.com/kubernetes/apimachinery/blob/master/pkg/runtime/schema/group_version.go):
-
-- `discovery`: types returned by the discovery api; capabilities, verbs, scopes, key info
-- `gvk`: partial type information to infer api types
-
-The main type from these two modules is `ApiResource`, because it can also be used to construct a `kube_client::Api` instance without compile-time type information (both `DynamicObject` and `Object` have `Resource` impls where `DynamicType = ApiResource`).
-
-### kube-client
-
-#### config
-Contains logic for determining the runtime environment (local [kubeconfigs](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) or [in-cluster](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod)) so that we can construct our `Config` from either source.
-
-- `Config` is the source-agnostic type (with all the information needed by our `Client`)
-- `Kubeconfig` is for loading from `~/.kube/config` or from any number of kubeconfig-like files set by the `KUBECONFIG` environment variable.
-- `Config::from_cluster_env` reads environment variables that are injected when running inside a pod
-
-In general this module has similar functionality to the upstream [client-go/clientcmd](https://github.com/kubernetes/client-go/tree/7697067af71046b18e03dbda04e01a5bb17f9809/tools/clientcmd) module.
-
-#### client
-The `Client` is one of the most complicated parts of `kube-rs`, because it has the most generic interface. People can mock the `Client`, people can replace individual components and force-inject headers, people can choose their own tls stack, and - in theory - use whatever http clients they want.
-
-Generally, the `Client` is created from the properties of a `Config`, producing a particular `hyper::Client` wrapped in a pre-configured set of [tower::Layer](https://docs.rs/tower/*/tower/layer/trait.Layer.html)s (see `TryFrom<Config> for Client`), but users can also pass in an arbitrary `tower::Service` (to fully customise or to mock). The signature restrictions on `Client::new` are commensurately large.
-
-The `tls` module contains the `openssl` or `rustls` interfaces to let users pick their tls stacks. The connectors created in that module are passed to `hyper::Client` based on feature selection.
-
-The layers themselves are configured using the properties in the `Config`. Some of our layers come straight from [tower-http](https://docs.rs/tower-http):
-
-- `tower_http::DecompressionLayer` to deal with gzip compression
-- `tower_http::TraceLayer` to propagate http request information onto [tracing](https://docs.rs/tracing) spans.
-- `tower_http::AddAuthorizationLayer` to set bearer tokens / basic auth (when needed)
-
-but we also have our own layers in the `middleware` module (a construction sketch follows this list):
-
-- `BaseUriLayer` prefixes `Config::base_url` to requests
-- `AuthLayer` configures either `AddAuthorizationLayer` or `AsyncFilterLayer` depending on the authentication method in the kubeconfig. `AsyncFilterLayer` is like `AddAuthorizationLayer`, but with a token that's refreshed when necessary.
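-
-For a sense of what this enables, here is a rough, hypothetical sketch (not part of the original document) of a hand-rolled client stack, using the `ConfigExt` helpers described further below and the hyper 0.14 / tower stack kube used at the time of writing; exact feature flags and method names may differ between versions:
-
-```rust
-use kube::{client::ConfigExt, Client, Config};
-use tower::ServiceBuilder;
-
-async fn custom_client() -> Result<Client, Box<dyn std::error::Error>> {
-    let config = Config::infer().await?;
-    // Pick a tls stack (feature-gated): rustls here, openssl is also available.
-    let https = config.rustls_https_connector()?;
-    let service = ServiceBuilder::new()
-        .layer(config.base_uri_layer())     // BaseUriLayer: prefix the cluster url to requests
-        .option_layer(config.auth_layer()?) // AuthLayer: static or refreshed credentials, if any
-        .service(hyper::Client::builder().build(https));
-    Ok(Client::new(service, config.default_namespace))
-}
-```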
-
-(The `middleware` module is kept small to avoid mixing the business logic (the `client::auth` openid connect / oauth provider logic) with the tower layering glue.)
-
-The exported layers and tls connectors are mainly exposed through the `config_ext` module's `ConfigExt` trait, which is only implemented by `Config` (because the config has all the properties needed for this in general, and it helps minimise our api surface).
-
-Finally, the `Client` manages other key IO aspects of the protocol, such as:
-
-- `Client::connect` performs an [HTTP Upgrade](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Upgrade) for specialised verbs
-- `Client::request` handles 90% of all requests
-- `Client::request_events` handles streaming `watch` events using `tokio-util`'s `FramedRead` codec
-- `Client::request_status` handles `Either` responses from kubernetes
-
-#### api
-The generic `Api` type and its methods.
-
-Builds on top of the `Request` / `Response` interface in `kube_core` by parametrising over a generic type `K` that implements `Resource` (plus whatever else is needed).
-
-The `Api` absorbs a `Client` on construction and is then configured with its `Scope` (through its `::namespaced` / `::default_namespaced` or `::all` constructors).
-
-For dynamic types (`Object` and `DynamicObject`) it has slightly more complicated constructors which have the `_with` suffix.
-
-The `core_methods` and most `subresource` methods generally follow this recipe:
-
-- create `Request`
-- store the kubernetes verb in the `http::Extensions` object
-- call the request with the `Client` and tell it what type(s) to deserialize into
-
-Some subresource methods (behind the `ws` feature) use the `remote_command` module's `AttachedProcess` interface, which expects a duplex stream to deal with the specialised websocket verbs (`exec` and `attach`), and call `Client::connect` first to get that stream.
-
-#### discovery
-Deals with dynamic discovery of what apis are available on the api-server.
-Normally this can be used to discover custom resources, but also certain standard resources that vary between providers.
-
-The `Discovery` client can be used to do a full recursive sweep of api-groups into all api resources (through `filter`/`exclude` -> `run`) and then users can periodically re-`run` to keep the cache up to date (as kubernetes is being upgraded behind the scenes).
-
-The `discovery` module also contains a way to run smaller queries through the `oneshot` module; e.g. resolving a resource's name from a group-version-kind, resolving every resource within one specific group, or even one group at a pinned version.
-
-The equivalent Go logic is found in [client-go/discovery](https://github.com/kubernetes/client-go/blob/master/discovery/discovery_client.go).
-
-### kube-derive
-The smallest crate. A simple [derive proc_macro](https://doc.rust-lang.org/reference/procedural-macros.html) to generate Kubernetes wrapper structs and trait impls around a data struct.
-
-Uses `darling` to parse `#[kube(attrs...)]` then uses `syn` and `quote` to produce a suitable syntax tree based on the attributes requested.
-
-It ultimately contains a lot of ugly json coercing from attributes into serialization code, but this is code that everyone working with custom resources needs.
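-
-For concreteness, a minimal (illustrative) use of the derive macro; the group and field names below are made up:
-
-```rust
-use kube::CustomResource;
-use schemars::JsonSchema;
-use serde::{Deserialize, Serialize};
-
-// Generates a wrapper struct `Document` (spec + metadata) along with `Resource` and
-// `CustomResourceExt` impls, so that `Document::crd()` yields the full CRD object.
-#[derive(CustomResource, Serialize, Deserialize, Clone, Debug, JsonSchema)]
-#[kube(group = "kube.rs", version = "v1", kind = "Document", namespaced)]
-pub struct DocumentSpec {
-    title: String,
-    hide: bool,
-}
-```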
-
-It has hooks into `schemars` when using `JsonSchema` to ensure the correct type of [CRD schema](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#specifying-a-structural-schema) is attached to the right part of the generated custom resource definition.
-
-### kube-runtime
-The highest level crate, dealing with high-level abstractions (such as controllers/watchers/reflectors) and specific Kubernetes apis that need common care (finalisers, waiting for conditions, event publishing).
-
-#### watcher
-The `watcher` module contains state machine wrappers around `Api::watch` that will watch and auto-recover on allowable failures.
-The `watcher` fn is the general purpose one that is similar to informers in Go land, and will watch a collection of objects. The `watch_object` fn is a specialised version of this that watches a single object.
-
-#### reflector
-The `reflector` module contains wrappers around `watcher` that will cache objects in memory.
-The `reflector` fn wraps a `watcher` and a state `Store` that is updated on every event emitted by the `watcher`.
-
-The reason for the difference between `watcher::Event` (created by `watcher`) and `kube::api::WatchEvent` (created by `Api::watch`) is that `watcher` will deal with desync errors and do a full relist, whose result is then propagated as a single event, ensuring the `reflector` can do a single, atomic update to its state `Store`.
-
-#### controller
-The `controller` module contains the `Controller` type and its associated definitions.
-
-The `Controller` is configured to watch one root object (configured via `::new`), and several owned objects (via `::owns`), and - once `::run` is called - it will hit a user's `reconcile` function for every change to the root object or any of its child objects (and internally it will traverse up the object tree - usually through owner references - to find the affected root object).
-
-The user is then meant to provide an idempotent `reconcile` fn that does not know which underlying object was changed, and that ensures the state configured in its crd is what can be seen in the world (a minimal user-side wiring sketch follows the steps below).
-
-To manage this, a vector of watchers is converted into a [set of streams](https://docs.rs/futures/0.3.17/futures/stream/struct.SelectAll.html) of the same type by mapping the watchers so they have the same output type. This is why `watches` and `owns` differ: `owns` looks up `OwnerReferences`, but `watches` needs you to define the relation yourself with a `mapper`. The mappers we support are `trigger_owners`, `trigger_self`, and the custom `trigger_with`.
-
-Once we have combined the stream of streams we essentially have a flattened super stream with events from multiple watchers that will act as our input events. With this, the `applier` can start running its fairly complex machinery:
-
-1. new input events get sent to the `scheduler`
-2. scheduled events are then passed through a `Runner`, preventing duplicate parallel requests for the same object
-3. when running, we send the affected object to the user's `reconciler` fn and await that future
-4. a) on success, prepare the user's `Action` (generally a slow requeue several minutes from now)
-4. b) on failure, prepare an `Action` based on the user's error policy (generally a backed-off requeue with a shorter initial delay)
-5. map resulting `Action`s through an ad-hoc `scheduler` channel
-6. the resulting requeue requests are picked up from the channel at the top of `applier` and merged with the input events in step 1.
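-
-A minimal (hypothetical) user-side wiring of the above, shown against recent kube-runtime signatures (older versions passed a `Context` instead of `Arc` and `ListParams` instead of `watcher::Config`); the parent/child relationship here is purely illustrative:
-
-```rust
-use std::{sync::Arc, time::Duration};
-use futures::StreamExt;
-use k8s_openapi::api::{apps::v1::Deployment, core::v1::ConfigMap};
-use kube::{
-    runtime::{controller::Action, watcher, Controller},
-    Api, Client,
-};
-
-// Idempotent business logic: called for changes to the root object or any of its children.
-async fn reconcile(_obj: Arc<ConfigMap>, _ctx: Arc<()>) -> Result<Action, kube::Error> {
-    Ok(Action::requeue(Duration::from_secs(300))) // slow requeue on success
-}
-
-// Error policy: shorter requeue when reconciliation fails.
-fn error_policy(_obj: Arc<ConfigMap>, _err: &kube::Error, _ctx: Arc<()>) -> Action {
-    Action::requeue(Duration::from_secs(5))
-}
-
-#[tokio::main]
-async fn main() -> Result<(), kube::Error> {
-    let client = Client::try_default().await?;
-    let roots = Api::<ConfigMap>::all(client.clone());
-    let children = Api::<Deployment>::all(client);
-    Controller::new(roots, watcher::Config::default())
-        .owns(children, watcher::Config::default()) // children map back via OwnerReferences
-        .run(reconcile, error_policy, Arc::new(()))
-        .for_each(|res| async move { println!("reconciled: {res:?}"); })
-        .await;
-    Ok(())
-}
-```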
-
-Ideally, the process runs forever, and it minimises unnecessary reconcile calls (such as when users change more than one related object while one reconcile is already happening).
-
-#### finalizer
-Contains a helper wrapper `finalizer` for a `reconcile` fn used by a `Controller` when a user is using [finalizers](https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers/) to handle garbage collection.
-
-This lets the user focus on selecting the behaviour they would like based on whether the object is being deleted or just being regularly reconciled (through enum matching on `finalizer::Event`). The user is thereby spared from checking deletion timestamps and from managing the `metadata.finalizers` state machinery, which the helper handles through json patching.
-
-#### wait
-Contains helpers for waiting for `conditions`, or objects to be fully removed (i.e. waiting for finalizers post delete).
-
-These build upon `watch_object` with specific mappers.
-
-#### events
-Contains an event `Recorder` à la [client-go/events](https://github.com/kubernetes/client-go/tree/master/tools/events) that controllers can hook into, to publish events related to their reconciliations.
-
-## Crate Delineation and Overlaps
-When working on the client machinery, it's important to realise that there are effectively 5 layers involved:
-
-1. Sans-IO request builder (in `kube_core::Request`)
-2. IO (in `kube_client::Client`)
-3. Typing (in `kube_client::Api`)
-4. Helpers for using the API correctly (e.g. `kube_runtime::watcher`)
-5. High-level abstractions for specific tasks (e.g. `kube_runtime::controller`)
-
-At level 3, we essentially have what the K8s team calls a basic client. As a consequence, new methods/subresources typically cross 2 crate boundaries (`kube_core`, `kube_client`), and need to touch 3 main modules.
-
-Similarly, there are also the traits and types that define what an api means in `kube_core`, like `Resource` and `ApiResource`.
-If these are modified, changes to `kube-derive` are likely necessary as well, as it implements them directly for users.
-
-These types of cross-crate dependencies are why we expose `kube` as a single versioned facade crate that users can upgrade atomically (without being caught in the middle of a publish cycle). This also gives us better compatibility with `dependabot`.
diff --git a/governance.md b/governance.md
deleted file mode 100644
index 0857c8ab5..000000000
--- a/governance.md
+++ /dev/null
@@ -1,55 +0,0 @@
-# Kube-rs Governance
-
-This document defines project governance for Kube-rs.
-
-## Contributors
-
-Kube-rs is for everyone. Anyone can become a Kube-rs contributor simply by contributing to the project, whether through code, documentation, blog posts, community management, or other means.
-As with all Kube-rs community members, contributors are expected to follow the [Kube-rs Code of Conduct][coc].
-
-All contributions to Kube-rs code, documentation, or other components in the Kube-rs GitHub org must follow the guidelines in [CONTRIBUTING.md][contrib].
-Whether these contributions are merged into the project is the prerogative of the maintainers.
-
-## Maintainer Expectations
-
-Maintainers have the ability to merge code into the project. Anyone can become a Kube-rs maintainer (see "Becoming a maintainer" below).
-
-As such, there are certain expectations for maintainers.
-Kube-rs maintainers are expected to:
-
-* Review pull requests, triage issues, and fix bugs in their areas of expertise, ensuring that all changes go through the project's code review and integration processes.
-* Monitor the Kube-rs Discord and Discussions, and help out when possible.
-* Rapidly respond to any time-sensitive security release processes.
-* Participate in discussions on the roadmap.
-
-If a maintainer is no longer interested in or cannot perform the duties listed above, they should move themselves to emeritus status.
-If necessary, this can also occur through the decision-making process outlined below.
-
-### Maintainer decision-making
-
-Ideally, all project decisions are resolved by maintainer consensus.
-If this is not possible, maintainers may call a vote.
-The voting process is a simple majority in which each maintainer receives one vote.
-
-### Special Tasks
-
-In addition to the abilities and responsibilities outlined above, some maintainers take on additional tasks and responsibilities.
-
-#### Release Tasks
-
-As a maintainer on the release team, you are expected to:
-
-* Cut releases and update the [CHANGELOG](./CHANGELOG.md)
-* Pre-verify big releases against example repos
-* Publish and update versions in example repos
-* Verify the release
-
-### Becoming a maintainer
-
-Anyone can become a Kube-rs maintainer. Maintainers should be highly proficient in Rust; have relevant domain expertise; have the time and ability to meet the maintainer expectations above; and demonstrate the ability to work with the existing maintainers and project processes.
-
-To become a maintainer, start by expressing interest to existing maintainers.
-Existing maintainers will then ask you to demonstrate the qualifications above by contributing PRs, doing code reviews, and other such tasks under their guidance.
-After several months of working together, maintainers will decide whether to grant maintainer status.
-
-[coc]: https://github.com/kube-rs/kube-rs/blob/master/code-of-conduct.md
-[contrib]: https://github.com/kube-rs/kube-rs/blob/master/CONTRIBUTING.md
diff --git a/maintainers.md b/maintainers.md
deleted file mode 100644
index d5a18e8ff..000000000
--- a/maintainers.md
+++ /dev/null
@@ -1,23 +0,0 @@
-# Maintainers
-
-The Kube-rs maintainers are:
-
-* Eirik Albrigtsen @clux
-* Teo Klestrup Röijezon @teozkr
-* Kaz Yoshihara @kazk
-
-## Emeriti
-
-Former maintainers include:
-
-* Ryan Levick @rylev
-