This documentation provides guidance, utilities, and useful commands that can be leveraged for certification preparation and for working quickly through exam exercises.
- Auxiliary Commands & Tips
- Linux helpers
- Docker commands
- Remove all stopped containers in the system
- Retrieve running processes inside a container
- Get a container by id
- Create a docker network
- Save Docker Image and Container
- Import Docker Image and Container
- List containers ordered by creation time
- Inspect docker image layers and filesystem
- Attack surface - understand if I am in a container
- Interact with Remote Docker Daemon
- Security tools
- Kubernetes commands
- Configure autocompletion kubectl
- Installing kubecolor
- Install Krew plugin manager for kubectl
- Setting kubernetes shortnames
- Get information
- Using JSONPath - Example: Retrieve API server IP
- Retrieving the yaml object entirely
- Interacting with ETCD
- Expose Replica Controller
- Getting own kubernetes permissions
To create a file from the command line without using nano, vim, or similar utilities, we can leverage the `cat` command together with a heredoc delimiter such as `EOF` or `EOL`.
cat > <file_name> <<EOL
<file_content>
EOL
# Example
cat > Dockerfile <<EOL
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y nginx
EOL
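A useful variant for reference: quoting the delimiter prevents variable and command expansion inside the heredoc, so the content is written literally (the file name `script.sh` is just an example).

```shell
# Quoting the delimiter ('EOL') writes the content literally:
# $HOME below is NOT expanded while creating the file.
cat > script.sh <<'EOL'
echo "Home directory at runtime: $HOME"
EOL
cat script.sh
```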
docker rm -f $(docker ps -aq -f status=exited)
Explanation:
- With `$()` we capture the output (stdout) of the command inside the parentheses. That command is `docker ps -aq -f status=exited`, where:
  - `docker` is the Docker CLI communicating with the daemon.
  - `ps` lists containers (by default only those running). Two options are used for this command:
    - `-aq`: `a` stands for all containers and `q` indicates that only the container id must be retrieved.
    - `-f status=exited`: with this option we filter to only obtain those containers that have exited.
- Main command: `docker rm -f <list-of-container-ids>`, where:
  - `rm` is the command for removing the containers.
  - `-f` forces container removal.
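An equivalent built-in alternative, for reference: `docker container prune` removes all stopped containers in one step; the `-f` flag skips the confirmation prompt.

```shell
# Remove all stopped containers without the $(docker ps ...) substitution
docker container prune -f
```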
docker top <container_name>
docker container ls -f id=<id_container>
docker network create --driver <driver_name> <network_name>
#Example using default driver bridge
docker network create mynetwork
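Once the network exists, containers can be attached to it at run time; a quick example (container and image names here are illustrative):

```shell
# Run a container attached to the custom network
docker run -d --name web --network mynetwork nginx:latest

# Verify which containers are attached to the network
docker network inspect mynetwork
```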
Docker Save: Use `docker save` to create a tar archive of an image.
docker save -o myimage.tar myimage:latest
Docker Export: Use `docker export` to create a tar archive of a container's filesystem.
docker export -o mycontainer.tar mycontainer_id
Note
Be aware of the limitations when using `docker export` for containers. This command does not preserve the history, container creation metadata, layered information, or volume data associated with the container. For a complete backup or migration, consider using `docker save` for images or Docker's volume backup methods for data persistence.
`docker save` is ideal for sharing images, preserving their history and layers, while `docker export` is for containers, flattening their filesystem into a single layer.
Docker Import for Images: Use `docker import` to create an image from a tar archive previously exported with `docker export`.
docker import mycontainer.tar mynewimage:latest
Docker Load for Images: To load an image saved with `docker save`, use `docker load`.
docker load -i myimage.tar
`docker import` is used to create an image from a flat filesystem, while `docker load` restores an image with its history and layers.
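As a sketch of the export/import round trip described above (container and image names are placeholders): the resulting image is flattened to a single layer, which `docker history` makes visible.

```shell
# Flatten a container's filesystem into a new single-layer image
docker export -o snapshot.tar mycontainer_id
docker import snapshot.tar myflatimage:latest

# The history shows only one imported layer, confirming the flattening
docker history myflatimage:latest
```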
echo '['$(docker container ls --format '{{json .}}' | paste -sd "," -)']' | jq 'sort_by(.CreatedAt) | .[] | {ID: .ID, Image: .Image, CreatedAt: .CreatedAt}'
For this command to run, `jq` must be installed on the system:
sudo apt-get update && sudo apt-get install -y jq
docker run -ti --rm -v /var/run/docker.sock:/var/run/docker.sock wagoodman/dive <your_image>
#Example
docker run -ti --rm -v /var/run/docker.sock:/var/run/docker.sock wagoodman/dive nginx:latest
Note
The following instruction helps to analyze an image stored as a `tar` archive that has not been loaded into the Docker image pool. The tar file must be present in the directory from which you execute the command.
docker run -ti --rm -v /var/run/docker.sock:/var/run/docker.sock -v "$(pwd)"/nginx.tar:/app/nginx.tar wagoodman/dive docker-archive://app/nginx.tar
Caution
This project is no longer actively maintained, so its use is recommended only for testing or complementary purposes.
docker run -t --rm -v /var/run/docker.sock:/var/run/docker.sock:ro pegleg/whaler <your_image>
There are several checks that could indicate if we are inside a container or not:
# Search for .dockerenv or other docker related files
find / -name "*.docker*"
# Check cgroups processes
cat /proc/1/cgroup
cat /proc/self/cgroup
# Check the host name for strange id (e.g., 07f90a194e6a)
hostname
# Check the cgroup of init process
cat /proc/self/mountinfo
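The checks above can be combined into a small heuristic script; a sketch (these indicators are heuristics, not definitive proof, and assume standard Docker/containerd cgroup layouts):

```shell
#!/bin/sh
# Heuristic container detection combining the checks above. Not definitive.
if [ -f /.dockerenv ]; then
  echo "indicator: /.dockerenv present (Docker)"
fi
if grep -qE 'docker|containerd|kubepods' /proc/1/cgroup 2>/dev/null; then
  echo "indicator: container runtime found in /proc/1/cgroup"
fi
# Docker's default hostname is the 12-character container id prefix
if hostname | grep -qE '^[0-9a-f]{12}$'; then
  echo "indicator: hostname looks like a container id"
fi
```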
Interacting with a remote Docker daemon allows you to manage Docker containers and images on a different host from your local machine. This capability is particularly useful for managing multiple Docker hosts or for situations where Docker needs to be controlled from a centralized location.
To interact with a remote Docker daemon, you need to configure the Docker CLI on your local machine. You can achieve this by setting the `DOCKER_HOST` environment variable to point to the remote Docker daemon.
export DOCKER_HOST="tcp://<REMOTE_HOST>:2375"
Replace `<REMOTE_HOST>` with the IP address or hostname of your remote Docker host and `2375` with the port configured for remote access.
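As an alternative to exporting the variable, the host can be passed per command with the `-H` flag; a quick connectivity check (`<REMOTE_HOST>` is a placeholder):

```shell
# One-off commands against the remote daemon, no environment change needed
docker -H tcp://<REMOTE_HOST>:2375 info --format '{{.ServerVersion}}'
docker -H tcp://<REMOTE_HOST>:2375 ps
```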
When enabling remote access to the Docker daemon, it's crucial to secure the communication channel to prevent unauthorized access:
- **Basic Authentication:** Not directly supported for Docker daemon remote access. You would need to set up a reverse proxy (e.g., Nginx) in front of the Docker daemon to handle basic authentication.
- **Token-Based Authentication:** Similar to basic authentication, Docker does not natively support token-based authentication for remote daemon access. Implementing this requires a reverse proxy or a third-party authentication mechanism.
- **TLS Certificates:** Docker supports mutual TLS to secure remote daemon access. Both the client and the server verify each other's identities through certificates.
  - Generate CA, server, and client certificates.
  - Configure the Docker daemon with the `--tlsverify`, `--tlscacert`, `--tlscert`, and `--tlskey` flags pointing to the respective certificates.
  - Use the Docker CLI with the `--tlsverify`, `--tlscacert`, `--tlscert`, and `--tlskey` options, or set the equivalent environment variables (`DOCKER_TLS_VERIFY`, `DOCKER_CERT_PATH`).
- **Firewall Configuration:** Ensure your firewall rules allow traffic on the Docker daemon port only from trusted sources.
- **Docker Context:** Docker 19.03 and later support the `docker context` command, allowing you to easily switch between different Docker daemons, including remote ones, without manually changing environment variables each time.

  docker context create remote --docker "host=tcp://<REMOTE_HOST>:2376"
  docker context use remote

- **Monitoring and Logging:** Implement monitoring and logging for access to the remote Docker daemon to detect and respond to unauthorized access attempts.
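A sketch of the client-side environment for the mTLS setup described above (the host, port, and certificate directory are placeholders):

```shell
# Client-side settings for a TLS-secured remote daemon
export DOCKER_HOST="tcp://<REMOTE_HOST>:2376"
export DOCKER_TLS_VERIFY=1
# The directory must contain ca.pem, cert.pem and key.pem
export DOCKER_CERT_PATH="$HOME/.docker/remote-certs"
docker version   # fails fast if the TLS handshake is rejected
```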
hadolint -f json --error DL3008 --error DL3009 --no-fail Dockerfile | jq -r '.[] | select(.level=="warning") | .code'
cat trivy.json | jq -r '.Results[].Vulnerabilities[] | select(.Severity=="CRITICAL") | .VulnerabilityID'
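For reference, the `trivy.json` file consumed above can be produced with Trivy's JSON output (the image name here is just an example):

```shell
# Write the full scan report as JSON for later jq processing
trivy image -f json -o trivy.json python:3.8
```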
- Create the trivy ignore file.
cat > .trivyignore <<EOL
CVE-2023-6879
CVE-2023-5841
CVE-2023-5841
CVE-2023-45853
CVE-2023-45853
EOL
- Launch the scan in the same folder where the `.trivyignore` file has been created, or use the `--ignorefile` option pointing to the location of your Trivy ignore file.
As a test, I will show the same with `python:3.8`.
- First we analyze the image as it is; the result is 364, as can be seen in the output image.
trivy image python:3.8 -f json | jq -s 'map(.Results[].Vulnerabilities[].VulnerabilityID) | unique | length'
- Then we create the `.trivyignore` file with the contents from the Get all vulnerable IDs of fixed Severity command.
- Then we analyze the `python:3.8` image again.
trivy image python:3.8 -f json | jq -s 'map(.Results[].Vulnerabilities[].VulnerabilityID) | unique | length'
Note
If we remove the `unique` filter in the `jq` count, we will see exactly the 5-count difference between both analyses due to the `.trivyignore` content.
Notice the removal of `.trivyignore` between the two executions.
# Do not forget this option if you want to avoid an exit code error.
--exit-code 0
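For example, the flag keeps the shell exit status at 0 even when vulnerabilities are found (useful in CI steps that must not fail at this stage; image name is illustrative):

```shell
# Scan but always exit 0, regardless of findings
trivy image --exit-code 0 python:3.8
```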
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null
wget https://github.com/hidetatz/kubecolor/releases/download/v0.0.25/kubecolor_0.0.25_Linux_x86_64.tar.gz -O kubecolor.tar.gz
tar xzfv kubecolor.tar.gz
rm LICENSE README.md
sudo mv kubecolor /usr/local/bin
sudo chmod +x /usr/local/bin/kubecolor
Tip
For more details, check Krew installation Guide
(
set -x; cd "$(mktemp -d)" &&
OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
KREW="krew-${OS}_${ARCH}" &&
curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" &&
tar zxvf "${KREW}.tar.gz" &&
./"${KREW}" install krew
)
sudo mv ${HOME}/.krew/bin/kubectl-krew /usr/local/bin
# ${HOME}/.bashrc
alias kubectl="kubecolor"
alias kd="kubectl describe"
alias kdel="kubectl delete"
alias kget="kubectl get"
alias ke="kubectl edit"
alias ka="kubectl apply"
alias kaf="kubectl apply -f"
alias klogs="kubectl logs"
alias ll='ls -alF'
alias la='ls -A'
alias l='ls -CF'
alias k='kubecolor'
alias kg='kubecolor get'
alias kd='kubecolor describe'
alias kn='f() { [ "$1" ] && kubecolor config set-context --current --namespace $1;}; f'
alias kcg='kubectl config view -o=jsonpath={".contexts[*].name"}'
alias kc='f() { [ "$1" ] && kubecolor config use-context $1; }; f'
alias deploy='kubectl get deploy'
alias pods='kubectl get pod'
alias ktaint="kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints'"
complete -F __start_kubectl k
kubectl get pods -n <ns> <pod-id> -o=custom-columns='NAME:metadata.name,NAMESPACE:metadata.namespace'
E.g.
kubectl get pods -n kube-system -l component=kube-apiserver -o=custom-columns='NAME:metadata.name,NAMESPACE:metadata.namespace,CONTAINER:spec.containers[*].name'
export CLUSTERNAME=$(kubectl config view --minify -o jsonpath='{.clusters[].name}')
APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTERNAME\")].cluster.server}")
Tip
`jsonpath` in this command is a query language for JSON, similar to XPath for XML. It allows you to filter and format the output of complex JSON structures.
In this specific command:
APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTERNAME\")].cluster.server}")
The `jsonpath` expression `{.clusters[?(@.name==\"$CLUSTERNAME\")].cluster.server}` is used to extract the server URL of the cluster whose name matches the value of `$CLUSTERNAME`.
Here's a breakdown:
- `.clusters`: navigates to the `clusters` field in the JSON output.
- `[?(@.name==\"$CLUSTERNAME\")]`: filters the clusters array to only include the cluster whose `name` field matches the `$CLUSTERNAME` variable.
- `.cluster.server`: navigates to the `server` field of the `cluster` object of the filtered cluster.
The result is the server URL of the cluster with the name `$CLUSTERNAME`. This value is then assigned to the `APISERVER` variable.
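When only the current context matters, `--minify` reduces the config to that single context, so the name filter can be skipped entirely:

```shell
# Server URL of the current context's cluster, no name filter needed
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
```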
kubectl get pods -n kube-system -l component=etcd -o yaml
As mentioned, ETCD is the datastore available to the whole cluster; its main purpose is to store the cluster's information (replicated) so the state stays consistent.
To interact with etcd, the `etcdctl` command is available.
export ETCDL_SERVER=$(kubectl get pods -n kube-system -l component=etcd -o json | jq .items[].spec.containers[0].command | grep listen-client-urls | cut -d '=' -f 2 | cut -d "," -f 2 | cut -d '"' -f 1)
export ETCDL_POD=$(kubectl get pods -n kube-system -l component=etcd -o=jsonpath={'.items[0].metadata.name'})
# Pick one of the following commands to run (the last export wins)
export ETCDL_COMMAND="get / --prefix --keys-only"
export ETCDL_COMMAND="version"
# Inside the container
kubectl exec -it ${ETCDL_POD} -n kube-system -- sh -c "ETCDCTL_API=3 etcdctl --endpoints ${ETCDL_SERVER} --cacert /etc/kubernetes/pki/etcd/ca.crt --key /etc/kubernetes/pki/etcd/server.key --cert /etc/kubernetes/pki/etcd/server.crt ${ETCDL_COMMAND}"
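Another common `etcdctl` operation in exam scenarios is taking a snapshot; a sketch reusing the same endpoint and certificate paths as above (the backup path `/tmp/etcd-backup.db` is illustrative):

```shell
# Take an etcd snapshot from inside the etcd pod
kubectl exec -it ${ETCDL_POD} -n kube-system -- sh -c \
  "ETCDCTL_API=3 etcdctl --endpoints ${ETCDL_SERVER} \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --key /etc/kubernetes/pki/etcd/server.key \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  snapshot save /tmp/etcd-backup.db"
```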
- Example deployment:
kubectl create -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/deployment.yaml --dry-run=client -o json | jq '.metadata.name = "my-nginx"' | kubectl create -f -
Options explained:
- `kubectl create -f <file>`: this command is used to create resources in Kubernetes from a file. In this case, the file is being fetched directly from a URL.
- `https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/deployment.yaml`: this is the URL of the file that contains the Kubernetes resource definitions. It's hosted on the Kubernetes website's GitHub repository.
- `--dry-run`: this option allows you to see what the command would do without actually applying any changes (recent kubectl versions require the explicit form `--dry-run=client`). It's useful for testing and validating your commands.
- `-o json`: this option changes the output format to JSON. By default, `kubectl` commands output in a human-readable format, but JSON is easier to manipulate programmatically.
- `| jq '.metadata.name = "my-nginx"'`: this part of the command uses the `jq` tool to modify the JSON output from the previous command. It's changing the `name` field in the `metadata` section of the resource definition to "my-nginx".
- `| kubectl create -f -`: this part of the command takes the output from the previous command (the modified JSON) and uses it as the input file for another `kubectl create` command. The `-f -` part tells `kubectl` to read the file from standard input (which is the output of the previous command).
- Getting the container port if defined (in this case we know it, but for further reference):
k get deployments.apps my-nginx -o=jsonpath='{.spec.template.spec.containers[].ports[].containerPort}'
Tip
If the pod template has several containers defined, or a container exposes several ports, use -o=jsonpath='{.spec.template.spec.containers[*].ports[*].containerPort}'
kubectl expose deployment my-nginx --port=80 --target-port=80 --type=ClusterIP --name=my-nginx-svc
kubectl auth can-i --list
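Beyond listing everything, `kubectl auth can-i` can check a single verb/resource, or audit another subject's permissions via impersonation (the `default` service account below is just an example):

```shell
# Can I create pods in the default namespace? Prints "yes" or "no".
kubectl auth can-i create pods -n default

# List a service account's permissions via impersonation
kubectl auth can-i --list --as=system:serviceaccount:default:default
```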