Repository for provisioning the K3s container orchestration tool, based on Always Free infrastructure resources from Oracle Cloud Infrastructure.
General overview of the repository structure. Not all files/directories are listed, only those that are specific to the tools used in the repository.
.
├── .github # GitHub config files
│ ├── workflows # GitHub Actions config files
│ └── renovate.json # Renovate config
├── .spacelift # Spacelift config files
│ └── workflow.yml # Spacelift workflow tool config file
├── certificates # Certificates
├── components # Terraform root modules
├── machine-images # Source files for machine images
├── modules # Terraform modules
│ ├── <module-0> # Source files for Terraform <module-0>
│ └── ... # Other modules
├── stacks # Atmos stacks
├── toolbox # Toolbox config files
│ ├── rootfs # Atmos config file dir
│ ├── .gitconfig
│ ├── .mise.toml # Mise config file
│ └── Dockerfile
├── .pre-commit-config.yaml # Pre-commit config file
├── .trivyignore.yaml # Trivy config file
├── README.md
├── vendor.yaml # Atmos vendor config
hajlesilesia/provisioning
A Docker image is the preferred way to distribute the tools used in this repository. It is designed to bring consistency to local and remote usage by being cross-platform (macOS, Linux, WSL), multi-architecture (linux/amd64, linux/arm64), version-controlled and reusable.
Run once:
docker run --rm hajlesilesia/provisioning:latest init | bash
Run every time a new version is released and updated in the workflow files (for consistency with the remote workflows):
docker pull hajlesilesia/provisioning:latest
Run on a daily basis, preferably as the main terminal:
provisioning
Before cloning any repository, create a .env file with the following content in your local organization directory:
#!/usr/bin/env bash
git config --global user.email <email>
git config --global user.name <username>
# Due to volume mounts of the image, permission issues may occur. Observed examples:
# - PyCharm installed on Windows, codebase located in WSL 2 - autosave and backup denied for files created from container.
umask 0
Then, run:
. .env
git clone <repo-name>
cd "$(basename "$_" .git)"
touch .env
Copy the following content into the .env file in your local repository directory:
#!/usr/bin/env bash
. ../.env
# Pre-commit needs to be installed to enable its `git` hooks (e.g. pre-commit, pre-push, etc.)
pre-commit install
Then, run:
. .env
Note: always source the .env file (run: . .env) after starting the container to avoid permission issues between host and container.
Example: static analysis with hooks managed by pre-commit can be executed by running:
pre-commit run --all-files --hook-stage manual
The following CLI tools are contained within the image:
Name | Description |
---|---|
mise | Tool version manager |
k3s | Container orchestration |
oci | Oracle Cloud Infrastructure cloud provider |
terraform | Infrastructure provisioning, static analysis |
atmos | Cloud architecture framework for native Terraform |
tflint | Static analysis |
trivy | Static analysis |
pre-commit | Managing pre-commit hooks |
helm | Container orchestration package manager |
packer | Machine images provisioning |
Renovate is used for automated dependency updates. Although it handles many dependencies out of the box, some are not supported yet and have to be handled separately via the config file. Periodically verify all dependencies against the latest Renovate documentation and config file, to see whether dependency support has been added or separate handling is still needed. See the Renovate console for scanning details.
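For dependencies Renovate does not support natively, a custom regex manager can be declared in `renovate.json`. A minimal sketch, assuming tool versions are pinned in `toolbox/.mise.toml` with a `# renovate:` comment convention (the file path, comment format and capture groups are illustrative, not taken from this repository):

```json
{
  "customManagers": [
    {
      "customType": "regex",
      "fileMatch": ["^toolbox/\\.mise\\.toml$"],
      "matchStrings": [
        "# renovate: datasource=(?<datasource>.*?) depName=(?<depName>.*?)\\n.*?= \"(?<currentValue>.*?)\""
      ]
    }
  ]
}
```

Renovate then extracts `datasource`, `depName` and `currentValue` from each annotated line and raises update PRs as it does for natively supported managers.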
A dedicated Docker image is the preferred way to run static analysis, as it brings consistency to local and remote usage.
Run once, to install hooks in the repository:
pre-commit install
The following static analysis tools are contained within the image, with pre-commit hooks serving as the execution mechanism and the following configuration.
Name | Description |
---|---|
pre-commit for Terraform | Hooks manager |
Terraform fmt | Canonical format check |
Terraform validate | Configuration files validation |
TFLint | Linter |
Trivy | Security vulnerabilities check |
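The hooks above are typically wired together in `.pre-commit-config.yaml`. A hedged sketch of such a configuration, assuming the widely used `pre-commit-terraform` hook collection (the pinned revision is illustrative; this repository's actual file may differ):

```yaml
repos:
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.96.1  # illustrative pin; kept current by Renovate in practice
    hooks:
      - id: terraform_fmt       # canonical format check
      - id: terraform_validate  # configuration files validation
      - id: terraform_tflint    # linter
      - id: terraform_trivy     # security vulnerabilities check
```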
Depending on the phase of software delivery, either Atmos workflows or Spacelift is used to orchestrate infrastructure provisioning.
- Create Oracle Cloud Infrastructure account.
- Upgrade the Free Tier account to a paid account. Paid accounts take precedence over Free Tier accounts when OCI provisions resources (especially instances); for Free Tier accounts provisioning can take hours.
- [Optional] Create budgets to control costs. On a paid account, billing charges the specified payment method according to the resource tier type.
- Generate API key for your user as described here.
- Create S3-compatible backend as described here.
- Generate customer secret key for your profile as described here.
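OCI Object Storage exposes an S3-compatible API that Terraform's `s3` backend can target. A hedged sketch of such a backend block (the bucket, region and namespace are placeholders; the exact set of `skip_*` flags depends on the Terraform version):

```hcl
terraform {
  backend "s3" {
    bucket   = "terraform-state"                # placeholder bucket name
    key      = "components/terraform.tfstate"   # placeholder state key
    region   = "eu-frankfurt-1"                 # placeholder region
    endpoint = "https://<namespace>.compat.objectstorage.eu-frankfurt-1.oraclecloud.com"

    # OCI's S3-compatible endpoint is not AWS, so AWS-specific checks are skipped:
    skip_region_validation      = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    force_path_style            = true
  }
}
```

The customer secret key generated in the step above serves as the AWS-style access/secret key pair for this backend.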
Atmos workflows are used for cold starts. See the configuration here. Execute these commands for a full deployment:
atmos workflow apply-all-components -f foundation
# Create vault secrets here as they are needed for provisioning platform environment
atmos workflow apply-all-components -f plat-env -s plat-fra-prod
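A workflow such as `apply-all-components` is defined in an Atmos workflow manifest. A hedged sketch of what such a definition may look like (file path, component names and ordering are assumptions, not taken from this repository):

```yaml
# workflows/foundation.yaml (illustrative path)
workflows:
  apply-all-components:
    description: Apply all foundation components in dependency order
    steps:
      - command: terraform apply network -auto-approve  # assumed component
      - command: terraform apply vault -auto-approve    # assumed component
```

The `-f` flag selects the workflow file and `-s` the stack, matching the invocations above.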
Spacelift is configured to work with Atmos as described here. See Spacelift console for configuration details. Custom workflow tool is defined here due to Terraform FOSS version constraints. Additional information:
semantic-release is used as a tool for automated version management and package publishing. See configuration here.
To avoid configuration drift and shorten deployment time for newly spun-up instances, immutable infrastructure is the preferred approach to provisioning machine images. HashiCorp Packer is used to build them.
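A build under `machine-images` would use Packer's `oracle-oci` builder. A minimal hedged sketch (variable names, the base image and the provisioning script are assumptions; the `VM.Standard.A1.Flex` shape is the Always Free eligible ARM shape):

```hcl
packer {
  required_plugins {
    oracle = {
      source  = "github.com/hashicorp/oracle"
      version = ">= 1.0.0"
    }
  }
}

source "oracle-oci" "k3s" {
  compartment_ocid    = var.compartment_ocid    # placeholder variable
  availability_domain = var.availability_domain # placeholder variable
  subnet_ocid         = var.subnet_ocid         # placeholder variable
  base_image_ocid     = var.base_image_ocid     # e.g. an Ubuntu ARM image
  shape               = "VM.Standard.A1.Flex"
  image_name          = "k3s-node"              # illustrative image name
}

build {
  sources = ["source.oracle-oci.k3s"]

  provisioner "shell" {
    # Bake K3s into the image without starting it; nodes join on first boot.
    inline = ["curl -sfL https://get.k3s.io | INSTALL_K3S_SKIP_START=true sh -"]
  }
}
```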
OCI Vault is used as the secrets management solution for the cluster. A dedicated secret (see the vault module config) stores the cluster initialization flag, used by server nodes during a cold start (spinning up a new cluster). External Secrets Operator automatically generates cluster secrets from the data stored within the vault, as described here.
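The External Secrets Operator flow described above can be sketched with an `ExternalSecret` that pulls from an OCI Vault-backed store (the store, secret and key names below are assumptions, not this repository's actual configuration):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: cluster-init            # illustrative name
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: oci-vault             # assumed ClusterSecretStore backed by OCI Vault
    kind: ClusterSecretStore
  target:
    name: cluster-init          # Kubernetes Secret to create
  data:
    - secretKey: token
      remoteRef:
        key: cluster-init-flag  # assumed OCI Vault secret name
```

The operator reconciles this resource into a native Kubernetes Secret, so workloads never read the vault directly.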
Hostnames are used for network resources to allow wildcard certificate usage, with the name defined as described here. The certificate is used in the vault setup. Otherwise, the certificate would have to be updated each time the private IP address of a network resource (VCN, subnet, instance, load balancer, etc.) changes.
terraform init
terraform workspace list
terraform workspace select default
mkdir -p ~/.kube
terraform output -raw kubeconfig > ~/.kube/config-oracle-cloud
echo 'export KUBECONFIG=~/.kube/config-oracle-cloud' >> ~/.bashrc
kubectl get nodes