Merge branch 'main' into bgreports
realshuting authored Oct 16, 2024
2 parents 3591587 + 444baa7 commit 07d1291
Showing 13 changed files with 69 additions and 113 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/check-links.yaml
@@ -15,7 +15,7 @@ jobs:

- name: Check unrendered links
id: lychee_unrendered
uses: lycheeverse/lychee-action@2bb232618be239862e31382c5c0eaeba12e5e966 # v2.0.1
uses: lycheeverse/lychee-action@7cd0af4c74a61395d455af97419279d86aafaede # v2.0.2
env:
GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
with:
@@ -48,7 +48,7 @@ jobs:

# - name: Check rendered links
# id: lychee_rendered
# uses: lycheeverse/lychee-action@2bb232618be239862e31382c5c0eaeba12e5e966 # v2.0.1
# uses: lycheeverse/lychee-action@7cd0af4c74a61395d455af97419279d86aafaede # v2.0.2
# env:
# GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
# with:
2 changes: 1 addition & 1 deletion content/en/docs/high-availability/_index.md
@@ -77,4 +77,4 @@ Multiple replicas configured for the cleanup controller can be used for both ava

## Installing Kyverno in HA mode

The Helm chart is the recommended method of installing Kyverno in a production-grade, highly-available fashion as it provides all the necessary Kubernetes resources and configuration options to meet most production needs. For more information on installation of Kyverno in high availability, see the corresponding [installation section](../installation/methods.md#high-availability).
The Helm chart is the recommended method of installing Kyverno in a production-grade, highly-available fashion as it provides all the necessary Kubernetes resources and configuration options to meet most production needs. For more information on installation of Kyverno in high availability, see the corresponding [installation section](../installation/methods.md#high-availability-installation).
11 changes: 3 additions & 8 deletions content/en/docs/installation/_index.md
@@ -40,24 +40,19 @@ A standard Kyverno installation consists of a number of different components, so

## Compatibility Matrix

Kyverno follows the same support policy as the Kubernetes project (N-2 policy) in which the current release and the previous two minor versions are maintained. Although previous versions may work, they are not tested and therefore no guarantees are made as to their full compatibility. The below table shows the compatibility matrix.
Kyverno follows the same support policy as the Kubernetes project (N-2 policy) in which the current release and the previous two minor versions are maintained. Although prior versions may work, they are not tested and therefore no guarantees are made as to their full compatibility. The below table shows the compatibility matrix.

| Kyverno Version | Kubernetes Min | Kubernetes Max |
|--------------------------------|----------------|----------------|
| 1.8.x | 1.23 | 1.25 |
| 1.9.x | 1.24 | 1.26 |
| 1.10.x | 1.24 | 1.26 |
| 1.11.x | 1.25 | 1.28 |
| 1.12.x | 1.26 | 1.29 |
| 1.13.x | 1.28 | 1.31 |

\* Due to a known issue with Kubernetes 1.23.0-1.23.2, support for 1.23 begins at 1.23.3.

**NOTE:** The [Enterprise Kyverno](https://nirmata.com/nirmata-enterprise-for-kyverno/) by Nirmata supports a wide range of Kubernetes versions for any Kyverno version. Refer to the Release Compatibility Matrix for the Enterprise Kyverno [here](https://docs.nirmata.io/docs/n4k/release-compatibility-matrix/) or contact [Nirmata support](mailto:[email protected]) for assistance.
**NOTE:** For long-term compatibility support, select a [commercially supported Kyverno distribution](https://kyverno.io/support/nirmata).

## Security vs Operability

For a production installation, Kyverno should be installed in [high availability mode](../installation/methods.md#high-availability). Regardless of the installation method used for Kyverno, it is important to understand the risks associated with any webhook and how it may impact cluster operations and security especially in production environments. Kyverno configures its resource webhooks by default (but [configurable](../writing-policies/policy-settings.md)) in [fail closed mode](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#failure-policy). This means if the API server cannot reach Kyverno in its attempt to send an AdmissionReview request for a resource that matches a policy, the request will fail. For example, a validation policy exists which checks that all Pods must run as non-root. A new Pod creation request is submitted to the API server and the API server cannot reach Kyverno. Because the policy cannot be evaluated, the request to create the Pod will fail. Care must therefore be taken to ensure that Kyverno is always available or else configured appropriately to exclude certain key Namespaces, specifically that of Kyverno's, to ensure it can receive those API requests. There is a tradeoff between security by default and operability regardless of which option is chosen.
For a production installation, Kyverno should be installed in [high availability mode](../installation/methods.md#high-availability-installation). Regardless of the installation method used for Kyverno, it is important to understand the risks associated with any webhook and how it may impact cluster operations and security especially in production environments. Kyverno configures its resource webhooks by default (but [configurable](../writing-policies/policy-settings.md)) in [fail closed mode](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#failure-policy). This means if the API server cannot reach Kyverno in its attempt to send an AdmissionReview request for a resource that matches a policy, the request will fail. For example, a validation policy exists which checks that all Pods must run as non-root. A new Pod creation request is submitted to the API server and the API server cannot reach Kyverno. Because the policy cannot be evaluated, the request to create the Pod will fail. Care must therefore be taken to ensure that Kyverno is always available or else configured appropriately to exclude certain key Namespaces, specifically that of Kyverno's, to ensure it can receive those API requests. There is a tradeoff between security by default and operability regardless of which option is chosen.

The following combination may result in cluster inoperability if the Kyverno Namespace is not excluded:

1 change: 1 addition & 0 deletions content/en/docs/installation/customization.md
@@ -394,6 +394,7 @@ The following flags can be used to control the advanced behavior of the various
| `logtostderr` (ABCR) | true | Log to standard error instead of files. |
| `maxAPICallResponseLength` (ABCR) | `10000000` | Sets the maximum length of the response body for API calls. |
| `maxAuditWorkers` (A) | `8` | Maximum number of workers for audit policy processing. |
| `maxBackgroundReports` (BR) | `10000` | Maximum number of ephemeralreports created for background policies before creation of new ones stops. |
| `maxAuditCapacity` (A) | `1000` | Maximum capacity of the audit event queue. |
| `maxQueuedEvents` (ABR) | `1000` | Defines the upper limit of events that are queued internally. |
| `metricsPort` (ABCR) | `8000` | Specifies the port to expose prometheus metrics. |
71 changes: 28 additions & 43 deletions content/en/docs/installation/methods.md
@@ -6,44 +6,36 @@ weight: 15

## Install Kyverno using Helm

The Helm chart is the recommended method of installing Kyverno in a production-grade, highly-available fashion as it provides all the necessary Kubernetes resources and configuration options to meet most production needs including platform-specific controls.

Kyverno can be deployed via a Helm chart--the recommended and preferred method for a production install--which is accessible either through the Kyverno repository or on [Artifact Hub](https://artifacthub.io/). Both generally available and pre-releases are available with Helm.

In order to install Kyverno with Helm, first add the Kyverno Helm repository.
Choose one of the installation configuration options based upon your environment type and availability needs.
- For a production installation, see the [High Availability](#high-availability-installation) section below.
- For a non-production installation, see the [Standalone Installation](#standalone-installation) section below.

```sh
helm repo add kyverno https://kyverno.github.io/kyverno/
```
### Standalone Installation

Scan the new repository for charts.
To install Kyverno using Helm in a non-production environment, use:

```sh
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm install kyverno kyverno/kyverno -n kyverno --create-namespace
```

Optionally, show all available chart versions for Kyverno.

```sh
helm search repo kyverno -l
```

Choose one of the installation configuration options based upon your environment type and availability needs.
- For a production installation, see below [High Availability](#high-availability) section.
- For a non-production installation, see below [Standalone](#standalone) section for additional details.

{{% alert title="Note" color="warning" %}}
When deploying Kyverno to certain Kubernetes platforms such as EKS, AKS, or OpenShift; or when using certain GitOps tools such as ArgoCD, additional configuration options may be needed or recommended. See the [Platform-Specific Notes](platform-notes.md) section for additional details.
{{% /alert %}}
### High Availability Installation

After Kyverno is installed, you may choose to also install the Kyverno [Pod Security Standard policies](../../pod-security.md), an optional chart containing the full set of Kyverno policies which implement the Kubernetes [Pod Security Standards](https://kubernetes.io/docs/concepts/security/pod-security-standards/).
Use Helm to create a Namespace and install Kyverno in a highly-available configuration.

```sh
helm install kyverno-policies kyverno/kyverno-policies -n kyverno
helm install kyverno kyverno/kyverno -n kyverno --create-namespace \
--set admissionController.replicas=3 \
--set backgroundController.replicas=2 \
--set cleanupController.replicas=2 \
--set reportsController.replicas=2
```

### High Availability

The Helm chart is the recommended method of installing Kyverno in a production-grade, highly-available fashion as it provides all the necessary Kubernetes resources and configuration options to meet most production needs including platform-specific controls.

Since Kyverno is composed of different controllers where each is contained in separate Kubernetes Deployments, high availability is achieved on a per-controller basis. A default installation of Kyverno provides four separate Deployments each with a single replica. Configure high availability on the controllers where you need the additional availability. Be aware that multiple replicas do not necessarily equate to higher scale or performance across all controllers. Please see the [high availability page](../high-availability/_index.md) for more complete details.

The Helm chart offers parameters to configure multiple replicas for each controller. For example, a highly-available, complete deployment of Kyverno would consist of the following values.
@@ -52,12 +44,11 @@ The Helm chart offers parameters to configure multiple replicas for each control
admissionController:
replicas: 3
backgroundController:
replicas: 2
replicas: 3
cleanupController:
replicas: 2
replicas: 3
reportsController:
replicas: 2

replicas: 3
```
For all of the available values and their defaults, please see the Helm chart [README](https://github.com/kyverno/kyverno/tree/release-1.10/charts/kyverno). You should carefully inspect all available chart values and their defaults to determine what overrides, if any, are necessary to meet the particular needs of your production environment.
@@ -70,30 +61,24 @@ By default, the Kyverno Namespace will be excluded using a namespaceSelector con

See also the [Namespace selectors](customization.md#namespace-selectors) section and especially the [Security vs Operability](_index.md#security-vs-operability) section.

Use Helm to create a Namespace and install Kyverno in a highly-available configuration.

```sh
helm install kyverno kyverno/kyverno -n kyverno --create-namespace \
--set admissionController.replicas=3 \
--set backgroundController.replicas=2 \
--set cleanupController.replicas=2 \
--set reportsController.replicas=2
```
## Platform Specific Settings

### Standalone
When deploying Kyverno to certain Kubernetes platforms such as EKS, AKS, or OpenShift; or when using certain GitOps tools such as ArgoCD, additional configuration options may be needed or recommended. See the [Platform-Specific Notes](platform-notes.md) section for additional details.

A standalone installation of Kyverno is suitable for lab, test/dev, or small environments typically associated with non-production. It configures a single replica for each Kyverno Deployment and omits many of the production-grade components.
### Pre-Release Installations (RC)

Use Helm to create a Namespace and install Kyverno.
To install pre-release versions, such as `alpha`, `beta`, and `rc` (release candidate) builds, add the `--devel` switch to Helm:

```sh
helm install kyverno kyverno/kyverno -n kyverno --create-namespace
helm install kyverno kyverno/kyverno -n kyverno --create-namespace --devel
```

To install pre-releases, add the `--devel` switch to Helm.
## Install Pod Security Standard Policies via Helm

After Kyverno is installed, you may choose to also install the Kyverno [Pod Security Standard policies](../../pod-security.md), an optional chart containing the full set of Kyverno policies which implement the Kubernetes [Pod Security Standards](https://kubernetes.io/docs/concepts/security/pod-security-standards/).

```sh
helm install kyverno kyverno/kyverno -n kyverno --create-namespace --devel
helm install kyverno-policies kyverno/kyverno-policies -n kyverno
```

## Install Kyverno using YAMLs
12 changes: 4 additions & 8 deletions content/en/docs/writing-policies/generate.md
@@ -127,10 +127,6 @@ spec:

For other examples of generate rules, see the [policy library](/policies/?policytypes=generate).

{{% alert title="Note" color="info" %}}
The field `spec.generateExisting` is no longer required for "classic" generate rules, is deprecated, and will be removed in an upcoming version.
{{% /alert %}}

## Clone Source

When a generate policy should take the source from a resource which already exists in the cluster, a `clone` object is used instead of a `data` object. When triggered, the generate policy will clone from the resource name and location defined in the rule to create the new resource. Use of the `clone` object implies no modification during the path from source to destination and Kyverno is not able to modify its contents (aside from metadata used for processing and tracking).
@@ -506,7 +502,7 @@ spec:

Use of a `generate` rule is common when creating net new resources from the point after which the policy was created. For example, a Kyverno `generate` policy is created so that all future Namespaces can receive a standard set of Kubernetes resources. However, it is also possible to generate resources based on **existing** resources. This can be extremely useful especially for Namespaces when deploying Kyverno to an existing cluster where you wish policy to apply retroactively.

Kyverno supports generation for existing resources. Generate existing policies are applied when the policy is created and in the background which creates target resources based on the match statement within the policy. They may also optionally be configured to apply upon updates to the policy itself. By defining the `spec.generateExisting` set to `true`, a generate rule will take effect for existing resources which have the same match characteristics.
Kyverno supports generation for existing resources. "Generate existing" policies are applied when the policy is created and also in the background, creating target resources based on the match statement within the policy. They may also optionally be configured to apply upon updates to the policy itself. By setting `generate[*].generateExisting` to `true`, a generate rule will take effect for existing resources which have the same match characteristics.

Note that the benefit of using a "generate existing" rule applies only at the moment the policy is installed. Once the initial generation effects have been produced, the rule functions like a "standard" generate rule from that point forward. Generate existing rules are therefore primarily useful for one-time use cases when retroactive policy should be applied.

@@ -522,7 +518,6 @@ kind: ClusterPolicy
metadata:
name: generate-resources
spec:
generateExisting: true
rules:
- name: generate-existing-networkpolicy
match:
@@ -531,6 +526,7 @@ spec:
kinds:
- Namespace
generate:
generateExisting: true
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
name: default-deny
@@ -555,7 +551,6 @@ kind: ClusterPolicy
metadata:
name: create-default-pdb
spec:
generateExisting: true
rules:
- name: create-default-pdb
match:
@@ -568,6 +563,7 @@ spec:
namespaces:
- local-path-storage
generate:
generateExisting: true
apiVersion: policy/v1
kind: PodDisruptionBudget
name: "{{request.object.metadata.name}}-default-pdb"
@@ -582,7 +578,7 @@ spec:
```

{{% alert title="Note" color="info" %}}
The field `spec.generateExistingOnPolicyUpdate` has been replaced by `spec.generateExisting`. The former is no longer required, is deprecated, and will be removed in an upcoming version.
The field `spec.generateExisting` has been replaced by `spec.rules[*].generate[*].generateExisting`. The former is no longer required, is deprecated, and will be removed in an upcoming version.
{{% /alert %}}

## How It Works
10 changes: 5 additions & 5 deletions content/en/docs/writing-policies/mutate.md
@@ -470,7 +470,6 @@ kind: ClusterPolicy
metadata:
name: mutate-existing-secret
spec:
mutateExistingOnPolicyUpdate: true
rules:
- name: mutate-secret-on-configmap-event
match:
@@ -483,6 +482,7 @@ spec:
namespaces:
- staging
mutate:
mutateExistingOnPolicyUpdate: true
# ...
targets:
- apiVersion: v1
@@ -508,7 +508,6 @@ kind: ClusterPolicy
metadata:
name: refresh-env-var-in-pods
spec:
mutateExistingOnPolicyUpdate: false
rules:
- name: refresh-from-secret-env
match:
@@ -522,6 +521,7 @@ spec:
operations:
- UPDATE
mutate:
mutateExistingOnPolicyUpdate: false
targets:
- apiVersion: apps/v1
kind: Deployment
@@ -622,7 +622,6 @@ kind: ClusterPolicy
metadata:
name: sync-cms
spec:
mutateExistingOnPolicyUpdate: false
rules:
- name: concat-cm
match:
@@ -635,6 +634,7 @@ spec:
namespaces:
- foo
mutate:
mutateExistingOnPolicyUpdate: false
targets:
- apiVersion: v1
kind: ConfigMap
@@ -660,7 +660,6 @@ kind: ClusterPolicy
metadata:
name: sync-cms
spec:
mutateExistingOnPolicyUpdate: false
rules:
- name: concat-cm
match:
@@ -673,6 +672,7 @@ spec:
namespaces:
- foo
mutate:
mutateExistingOnPolicyUpdate: false
targets:
- apiVersion: v1
kind: ConfigMap
@@ -918,7 +918,6 @@ kind: ClusterPolicy
metadata:
name: demo-cluster-policy
spec:
mutateExistingOnPolicyUpdate: false
rules:
- name: demo-generate
match:
@@ -951,6 +950,7 @@ spec:
matchLabels:
custom/related-namespace: "?*"
mutate:
mutateExistingOnPolicyUpdate: false
targets:
- apiVersion: v1
kind: Namespace
