
Kubernetes on KVM

Deploy (small) Kubernetes clusters on KVM, all from scratch! 🚀

✅ Prerequisites

  • KVM (see the how-to for your favourite Linux distribution)
  • OpenTofu (the Terraform fork)

🗒️ Important notes

  • See the scripts/ folder for various utility scripts.
  • You probably want to change user and group to libvirt-qemu and kvm respectively in /etc/libvirt/qemu.conf to mitigate permission issues on storage pools (see the example after this list).
  • All cluster nodes will be running Ubuntu 24.04.
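
For reference, a minimal sketch of the relevant lines in /etc/libvirt/qemu.conf (both keys exist in the stock file, just commented out; restart libvirtd with sudo systemctl restart libvirtd after changing them):

user = "libvirt-qemu"
group = "kvm"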

🏃 Getting started

If you run a ufw firewall locally, you might need to add the following rules before continuing:

ufw allow in on virbr1
ufw allow out on virbr1

Provision cluster nodes

By default we'll deploy a cluster of three nodes, each of which will have both the control-plane and worker roles.

If you're setting lower resource values on each node, you might need to pass --ignore-preflight-errors=mem,numcpu to kubeadm init.

  1. Change the k8s.auto.tfvars to fit your needs!
  2. Run tofu init
  3. Run tofu plan
  4. Run tofu apply
  5. The nodes will be provisioned with everything needed for you to start bootstrapping the cluster (see the quick check below).
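
Once tofu apply finishes, a quick sanity check with virsh should show the new domains and the k8s_net network (the same network name is used in the SSH section further down):

sudo virsh list --all
sudo virsh net-list --all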

Accessing the nodes

Remember to start your VMs after a reboot!
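
For example (domain names come from your tfvars; sudo virsh list --all shows them):

sudo virsh start <VM_NAME>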

SSH to one node at a time

To find out the IP addresses of the VMs you can run the following using virsh:

sudo virsh net-dhcp-leases k8s_net

SSH using the provided helper script

This requires that you have fzf installed.

PRIVATE_SSH_KEY=~/.ssh/kvm-k8s scripts/ssh.sh

Make sure you're using the private key that matches the public key added as part of the cluster node provisioning. We're adding a user called cloud by default that has the provided public key as one of the ssh_authorized_keys.
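
If you'd rather not use the helper script, plain ssh works as well, assuming the key pair used during provisioning and an IP address from the virsh command above:

ssh -i ~/.ssh/kvm-k8s -l cloud <NODE_IP>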

Using tmux and xpanes to SSH to all available nodes

Don't forget to replace <PRIVATE_KEY> below with the path to your private key before running the command:

tmux

sudo virsh net-dhcp-leases k8s_net | tail -n +3 | awk '{print $5 }' | cut -d"/" -f1 | xpanes -l ev -c 'ssh -l cloud -i <PRIVATE_KEY> {}'

Bootstrap the first control-plane node

Please note that you can skip the kube-proxy addon here, since these clusters are meant to use Cilium as the CNI with a configuration that can replace kube-proxy (see the Cilium add-on section below).

CONTROL_PLANE_IP=$(ip addr show ens3 | grep 'inet ' | awk '{print $2}' | cut -d/ -f1)
sudo kubeadm init --control-plane-endpoint "$CONTROL_PLANE_IP:6443" --upload-certs
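
If you do want to go kube-proxy-free, kubeadm lets you skip that addon phase; a minimal sketch with the same flags as above, followed by the kubeconfig steps kubeadm prints after a successful init:

sudo kubeadm init --control-plane-endpoint "$CONTROL_PLANE_IP:6443" --upload-certs --skip-phases=addon/kube-proxy

# Make kubectl work for your regular user (kubeadm prints these steps as well)
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"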

Add a worker node

Add worker nodes to the cluster:

  1. Generate the join command on a control-plane node:
kubeadm token create --print-join-command
  2. Run the generated join command on the worker node.
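
The generated join command looks roughly like this (token and CA certificate hash redacted here):

kubeadm join <CONTROL_PLANE_IP>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>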

Upgrade cluster

As of writing this the clusters are deployed with the latest available patch release of v1.30. The following guide will upgrade the cluster to v1.31.2.

  1. Add the following line to /etc/apt/sources.list.d/kubernetes.list (note the Kubernetes minor version in the URL):
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /" >> /etc/apt/sources.list.d/kubernetes.list
  2. Run apt update.

  3. Check which install candidate we have: apt policy kubeadm

kubeadm:
  Installed: 1.30.6-1.1
  Candidate: 1.31.2-1.1
  Version table:
 *** 1.31.2-1.1 500
        500 https://pkgs.k8s.io/core:/stable:/v1.31/deb  Packages
     1.31.1-1.1 500
        500 https://pkgs.k8s.io/core:/stable:/v1.31/deb  Packages
     1.31.0-1.1 500
        500 https://pkgs.k8s.io/core:/stable:/v1.31/deb  Packages
     1.30.6-1.1 500
        500 https://pkgs.k8s.io/core:/stable:/v1.30/deb  Packages
     1.30.5-1.1 500
        500 https://pkgs.k8s.io/core:/stable:/v1.30/deb  Packages
     1.30.4-1.1 500
        500 https://pkgs.k8s.io/core:/stable:/v1.30/deb  Packages
     1.30.3-1.1 500
        500 https://pkgs.k8s.io/core:/stable:/v1.30/deb  Packages
     1.30.2-1.1 500
        500 https://pkgs.k8s.io/core:/stable:/v1.30/deb  Packages
     1.30.1-1.1 500
        500 https://pkgs.k8s.io/core:/stable:/v1.30/deb  Packages
     1.30.0-1.1 500
        500 https://pkgs.k8s.io/core:/stable:/v1.30/deb  Packages
  4. Install the newer version of kubeadm:
apt install kubeadm=1.31.2-1.1
  5. Drain the control-plane node (drain will also cordon it):
kubectl drain <node name> --ignore-daemonsets --delete-emptydir-data --disable-eviction

Use the --disable-eviction flag with caution; in this case we have a small test cluster that most likely won't accommodate any configured Pod Disruption Budgets.

  6. Check the upgrade plan and proceed with the upgrade:
kubeadm upgrade plan
kubeadm upgrade apply v1.31.2
[upgrade/version] You have chosen to change the cluster version to "v1.31.2"
[upgrade/versions] Cluster version: v1.30.6
[upgrade/versions] kubeadm version: v1.31.2
[upgrade] Are you sure you want to proceed? [y/N]: y
  7. Upgrade kubelet and kubectl (then restart the kubelet and uncordon the node; see the note after this list):
apt install kubelet=1.31.2-1.1 kubectl=1.31.2-1.1
  8. Continue with the rest of the control-plane nodes.
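
Two details from the standard kubeadm upgrade flow are worth keeping in mind here (whether the packages are actually on hold depends on how they were installed on these nodes):

sudo apt-mark unhold kubeadm kubelet kubectl   # before the apt install steps above, if the packages are held
sudo apt-mark hold kubeadm kubelet kubectl     # after the upgrade is done

# After upgrading the kubelet package on a node, restart the kubelet and make
# the node schedulable again:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl uncordon <node name>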

Check versions:

kubectl version
Client Version: v1.31.2
Kustomize Version: v5.4.2
Server Version: v1.31.2

kubectl get nodes
NAME         STATUS   ROLES           AGE     VERSION
k8s01-cp01   Ready    control-plane   4d21h   v1.31.2
k8s01-w01    Ready    <none>          4d20h   v1.30.6

Please note that for workers you'll only install a newer version of the kubelet!

Clean up the cluster

Run the clean-up utility script scripts/clean_up.sh. Please note that this removes everything related to the VMs and the tofu state.

📚 Add-ons

Install Cilium

./scripts/install_cilium.sh

Note that we run Cilium with kubeProxyReplacement=true while kube-proxy is still running; you could remove everything related to kube-proxy manually, or skip the kube-proxy phase during kubeadm init.
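
A quick way to verify the installation, assuming Cilium was installed into the default kube-system namespace (the second check requires the cilium CLI):

# Cilium agent pods should be Running on every node
kubectl -n kube-system get pods -l k8s-app=cilium

# Overall health summary via the cilium CLI, if you have it installed
cilium status --wait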

Install Open Policy Agent Gatekeeper

  1. Install:
./scripts/install_opa.sh
  2. Create a constraint template and a constraint:
kubectl create -f manifests/opa/image-constraint.yaml
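
To check that Gatekeeper and the constraint are in place, something like the following should work (gatekeeper-system is the default namespace; the exact template and constraint names come from the manifest):

# Gatekeeper controller and audit pods
kubectl get pods -n gatekeeper-system

# Installed constraint templates and constraints
kubectl get constrainttemplates
kubectl get constraints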
