
Commit

Fix trailing space.
mnlipp committed Jan 30, 2025
1 parent 150b9f2 commit ecd7ba7
Showing 10 changed files with 97 additions and 97 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -6,7 +6,7 @@
# Run Qemu in Kubernetes Pods

The goal of this project is to provide easy-to-use and flexible components
for running Qemu-based VMs in Kubernetes pods.

See the [project's home page](https://jdrupes.org/vm-operator/)
for details.
2 changes: 1 addition & 1 deletion dev-example/Readme.md
@@ -1,7 +1,7 @@
# Example setup for development

The CRD must be deployed independently. Apart from that, the
`kustomize.yaml`

* creates a small CDROM image repository and

2 changes: 1 addition & 1 deletion overview.md
@@ -3,7 +3,7 @@ A Kubernetes operator for running VMs as pods.
VM-Operator
===========

The VM-Operator enables you to easily run Qemu-based VMs as pods
in Kubernetes. It is built on the
[JGrapes](https://mnlipp.github.io/jgrapes/) event-driven framework.

46 changes: 23 additions & 23 deletions webpages/vm-operator/controller.md
@@ -5,12 +5,12 @@ layout: vm-operator

# The Controller

The controller component (which is part of the manager) monitors
custom resources of kind `VirtualMachine`. It creates or modifies
other resources in the cluster as required to get the VM defined
by the CR up and running.

Here is the sample definition of a VM from the
["local-path" example](https://github.com/mnlipp/VM-Operator/tree/main/example/local-path):

```yaml
@@ -28,10 +28,10 @@ spec:
currentCpus: 2
maximumRam: 8Gi
currentRam: 4Gi

networks:
- user: {}

disks:
- volumeClaimTemplate:
metadata:
@@ -58,9 +58,9 @@ spec:
# generateSecret: false
```

## Pod management

The central resource created by the controller is a
[`Pod`](https://kubernetes.io/docs/concepts/workloads/pods/)
with the same name as the VM (`metadata.name`). The pod is created only
if `spec.vm.state` is "Running" (default is "Stopped" which deletes the
@@ -72,7 +72,7 @@ and thus the VM is automatically restarted. If set to `true`, the
VM's state is set to "Stopped" when the VM terminates and the pod is
deleted.

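A minimal sketch of starting a VM by setting this field (the apiVersion
and the VM name are placeholders; only the field path `spec.vm.state` is
taken from the description above):

```yaml
# Hypothetical excerpt of a VirtualMachine resource. Applying it with the
# state set to "Running" makes the controller create the pod; setting it
# back to "Stopped" (the default) deletes the pod again.
apiVersion: "vmoperator.jdrupes.org/v1"
kind: VirtualMachine
metadata:
  name: test-vm
spec:
  vm:
    state: Running
```
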
[^oldSts]: Before version 3.4, the operator created a
[stateful set](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)
that in turn created the pod and the PVCs (see below).

@@ -113,7 +113,7 @@ as shown in this example:
```
The disk will be available as "/dev/*name*-disk" in the VM,
using the string from `.volumeClaimTemplate.metadata.name` as *name*.
If no name is defined in the metadata, then "/dev/disk-*n*"
is used instead, with *n* being the index of the volume claim
template in the list of disks.
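
As an illustration of this naming rule, consider the following sketch (the
storage sizes are placeholders):

```yaml
disks:
- volumeClaimTemplate:
    metadata:
      name: system          # becomes /dev/system-disk in the VM
    spec:
      resources:
        requests:
          storage: 40Gi
- volumeClaimTemplate:      # no metadata.name, so this disk becomes
    spec:                   # /dev/disk-<n>, with <n> being this entry's index
      resources:
        requests:
          storage: 100Gi
```
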
@@ -140,28 +140,28 @@ the PVCs by label in a delete command.

## Choosing an image for the runner

The image used for the runner can be configured with
[`spec.image`](https://github.com/mnlipp/VM-Operator/blob/7e094e720b7b59a5e50f4a9a4ad29a6000ec76e6/deploy/crds/vms-crd.yaml#L19).
This is a mapping with either a single key `source` or a detailed
configuration using the keys `repository`, `path` etc.

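A sketch of the two forms (use one or the other; only the keys `source`,
`repository` and `path` are taken from the text above, everything else,
including the tag handling, is a placeholder):

```yaml
spec:
  # Form 1: a single source reference
  image:
    source: ghcr.io/mnlipp/org.jdrupes.vmoperator.runner.qemu-arch

  # Form 2: detailed configuration (shown commented out; the text mentions
  # further keys beyond repository and path that are not spelled out here)
  # image:
  #   repository: ghcr.io
  #   path: mnlipp/org.jdrupes.vmoperator.runner.qemu-arch
```
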
Currently, two runner images are maintained: one based on
Arch Linux (`ghcr.io/mnlipp/org.jdrupes.vmoperator.runner.qemu-arch`) and a
second based on Alpine (`ghcr.io/mnlipp/org.jdrupes.vmoperator.runner.qemu-alpine`).

Starting with release 1.0, all versions of runner images and managers
that have the same major release number are guaranteed to be compatible.

## Generating cloud-init data

*Since: 2.2.0*

The optional object `.spec.cloudInit` with sub-objects `.cloudInit.metaData`,
`.cloudInit.userData` and `.cloudInit.networkConfig` can be used to provide
data for
[cloud-init](https://cloudinit.readthedocs.io/en/latest/index.html).
The data from the CRD will be made available to the VM by the runner
as a vfat-formatted disk (see the description of
[NoCloud](https://cloudinit.readthedocs.io/en/latest/reference/datasources/nocloud.html)).

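A hedged sketch of such a definition (all values are placeholders; only the
key names `metaData`, `userData` and `networkConfig` under `spec.cloudInit`
are taken from the text above):

```yaml
spec:
  cloudInit:
    metaData:
      local-hostname: test-vm   # instance-id is added automatically if omitted
    userData:
      users:
      - name: demo
        ssh_authorized_keys:
        - ssh-ed25519 AAAA... demo@example.com
    networkConfig: {}           # optional
```
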
If `.metaData.instance-id` is not defined, the controller automatically
@@ -180,9 +180,9 @@ generated automatically by the runner.)
*Since: 2.3.0*

You can define a display password using a Kubernetes secret.
When you start a VM, the controller checks if there is a secret
with labels "app.kubernetes.io/name: vm-runner,
app.kubernetes.io/component: display-secret,
app.kubernetes.io/instance: *vmname*" in the namespace of the
VM definition. The name of the secret can be chosen freely.

Expand All @@ -204,13 +204,13 @@ data:
```

If such a secret for the VM is found, the VM is configured to use
the display password specified. The display password in the secret
can be updated while the VM runs[^delay]. Activating/deactivating
the display password while a VM runs is not supported by Qemu and
therefore requires stopping the VM, adding/removing the secret and
restarting the VM.

[^delay]: Be aware of the possible delay, see e.g.
[here](https://web.archive.org/web/20240223073838/https://ahmet.im/blog/kubernetes-secret-volumes-delay/).

*Since: 3.0.0*
@@ -221,7 +221,7 @@ values are those defined by qemu (`+n` seconds from now, `n` Unix
timestamp, `never` and `now`).

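Putting the pieces together, such a secret might look like the sketch below.
The label names and values are taken from the description above; the data
keys `display-password` and `password-expiry`, the secret's name and the
namespace are assumptions (the full example is elided in this diff).
`stringData` is used instead of base64-encoded `data` for readability only.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-vm-display-secret
  namespace: vmop-demo                 # namespace of the VM definition
  labels:
    app.kubernetes.io/name: vm-runner
    app.kubernetes.io/component: display-secret
    app.kubernetes.io/instance: test-vm
type: Opaque
stringData:
  display-password: change-me
  password-expiry: "+3600"             # +n, a Unix timestamp, "never" or "now"
```
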
Unless `spec.vm.display.spice.generateSecret` is set to `false` in the VM
definition (CRD), the controller creates a secret for the display
password automatically if none is found. The secret is created
with a random password that expires immediately, which makes the
display effectively inaccessible until the secret is modified.
12 changes: 6 additions & 6 deletions webpages/vm-operator/index.md
@@ -15,21 +15,21 @@ The image used for the VM pods combines Qemu and a control program
for starting and managing the Qemu process. This application is called
"[the runner](runner.html)".

While you can deploy a runner manually (or with the help of some
helm templates), the preferred way is to deploy "[the manager](manager.html)"
application which acts as a Kubernetes operator for runners
and thus the VMs.

If you just want to try out things, you can skip the remainder of this
page and proceed to "[the manager](manager.html)".

## Motivation
The project was triggered by a remark in the discussion about Red Hat
[dropping SPICE support](https://bugzilla.redhat.com/show_bug.cgi?id=2030592)
from the RHEL packages, which means that you have to run Qemu in a
container on RHEL and derivatives if you want to continue using SPICE.
So KubeVirt comes to mind. But
[one comment](https://bugzilla.redhat.com/show_bug.cgi?id=2030592#c4)
mentioned that the [KubeVirt](https://kubevirt.io/) project isn't
interested in supporting SPICE either.

@@ -44,7 +44,7 @@ much as possible.
## VMs and Pods

VMs are not the typical workload managed by Kubernetes. You can neither
have replicas nor can the containers simply be restarted without a major
impact on the "application". So there are many features for managing
pods that we cannot make use of. Qemu in its container can only be
deployed as a pod or using a stateful set with replica 1, which is rather
@@ -57,6 +57,6 @@ A second look, however, reveals that Kubernetes has more to offer.
* Its managing features *are* useful for running the component that
manages the pods with the VMs.

And if you use Kubernetes anyway, well, then the VMs within Kubernetes
provide you with a unified view of all (or most of) your workloads,
which simplifies the maintenance of your platform.
46 changes: 23 additions & 23 deletions webpages/vm-operator/manager.md
@@ -7,13 +7,13 @@ layout: vm-operator

The Manager is the program that provides the controller from the
[operator pattern](https://github.com/cncf/tag-app-delivery/blob/eece8f7307f2970f46f100f51932db106db46968/operator-wg/whitepaper/Operator-WhitePaper_v1-0.md#operator-components-in-kubernetes)
together with a web user interface. It should be run in a container in the cluster.

## Installation

A manager instance manages the VMs in its own namespace. The only
common (and therefore cluster-scoped) resource used by all instances
is the CRD. It is available
[here](https://github.com/mnlipp/VM-Operator/raw/main/deploy/crds/vms-crd.yaml)
and must be created first.

@@ -25,24 +25,24 @@ The example above uses the CRD from the main branch. This is okay if
you apply it once. If you want to preserve the link for automatic
upgrades, you should use a link that points to one of the release branches.

The next step is to create a namespace for the manager and the VMs, e.g.
`vmop-demo`.

```sh
kubectl create namespace vmop-demo
```

Finally, you have to create an account, the role, the binding etc. The
default files for creating these resources using the default namespace
can be found in the
[deploy](https://github.com/mnlipp/VM-Operator/tree/main/deploy)
directory. I recommend using
[kustomize](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/) to create your own configuration.

## Initial Configuration

Use one of the `kustomize.yaml` files from the
[example](https://github.com/mnlipp/VM-Operator/tree/main/example) directory
as a starting point. The directory contains two examples. Here's the file
from subdirectory `local-path`:

@@ -91,23 +91,23 @@ patches:
storageClassName: local-path
```
The sample file adds a namespace (`vmop-demo`) to all resource
definitions and patches the PVC `vmop-image-repository`. This is a volume
that is mounted into all pods that run a VM. The volume is intended
to be used as a common repository for CDROM images. The PVC must exist
and it must be bound before any pods can run.

The second patch affects the small volume that is created for each
runner and contains the VM's configuration data such as the EFI vars.
The manager's default configuration causes the PVC for this volume
to be created with no storage class (which causes the default storage
class to be used). The patch provides a new configuration file for
the manager that makes the reconciler use local-path as storage
class for this PVC. Details about the manager configuration can be
found in the next section.

Note that you need none of the patches if you are fine with using your
cluster's default storage class and this class supports ReadOnlyMany as
access mode.

Check that the pod with the manager is running:
@@ -121,30 +121,30 @@ for creating your first VM.

## Configuration Details

The [config map](https://github.com/mnlipp/VM-Operator/blob/main/deploy/vmop-config-map.yaml)
for the manager may provide a configuration file (`config.yaml`) and
a file with logging properties (`logging.properties`). Both files are mounted
into the container that runs the manager and are evaluated by the manager
on startup. If no files are provided, the manager uses built-in defaults.

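Structurally, such a config map might look like the following sketch (the
name and namespace are placeholders; only the two file names are taken from
the text above):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: vm-operator        # placeholder, use the name expected by the deployment
  namespace: vmop-demo
data:
  config.yaml: |
    # manager configuration, see the following sections
  logging.properties: |
    # logging configuration
```
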
The configuration file for the Manager follows the conventions of
the [JGrapes](https://jgrapes.org/) component framework.
The keys that start with a slash select the component within the
application's component hierarchy. The mapping associated with the
selected component configures this component's properties.

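For example, a `config.yaml` following this convention might look like the
sketch below; the component path and the property shown (a storage class for
the runner data PVC, as used in the local-path example above) are
illustrative assumptions, not definitive option names:

```yaml
"/Manager":
  "/Controller":
    "/Reconciler":
      runnerDataPvc:
        storageClassName: local-path
```
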
The available configuration options for the components can be found
in their respective JavaDocs (e.g.
[here](latest-release/javadoc/org/jdrupes/vmoperator/manager/Reconciler.html)
for the Reconciler).

## Development Configuration

The [dev-example](https://github.com/mnlipp/VM-Operator/tree/main/dev-example)
directory contains a `kustomize.yaml` that uses the development namespace
`vmop-dev` and creates a deployment for the manager with 0 replicas.

This environment can be used for running the manager in the IDE. As the
namespace to manage cannot be detected from the environment, you must use
`-c ../dev-example/config.yaml` as an argument when starting the manager. This
configures it to use the namespace `vmop-dev`.