diff --git a/core/mondoo-kubernetes-security.mql.yaml b/core/mondoo-kubernetes-security.mql.yaml
index b31263b9..83626bc3 100644
--- a/core/mondoo-kubernetes-security.mql.yaml
+++ b/core/mondoo-kubernetes-security.mql.yaml
@@ -460,7 +460,7 @@ queries:
       remediation: |
         Update the ownership and permissions:

-        ```
+        ```bash
         chown root:root /etc/kubernetes/kubelet.conf
         chmod 600 /etc/kubernetes/kubelet.conf
         ```
@@ -499,7 +499,7 @@ queries:
       remediation: |
         Update the ownership and permissions:

-        ```
+        ```bash
         chown root:root /etc/srv/kubernetes/pki/ca-certificates.crt
         chmod 600 /etc/srv/kubernetes/pki/ca-certificates.crt
         ```
@@ -529,7 +529,7 @@ queries:
       remediation: |-
         Run this command on the Control Plane node:

-        ```
+        ```bash
         chmod 600 /etc/kubernetes/manifests/kube-apiserver.yaml
         chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml
         ```
@@ -574,13 +574,13 @@ queries:
       remediation: |-
         On the etcd server node, find the etcd data directory, passed via the `--data-dir` argument, using this command:

-        ```
+        ```bash
         ps -ef | grep etcd
         ```

         Then run this command:

-        ```
+        ```bash
         chmod 700 /var/lib/etcd
         ```
       refs:
@@ -612,7 +612,7 @@ queries:
       remediation: |-
         Run this command on the Control Plane node:

-        ```
+        ```bash
         chmod 600 /etc/kubernetes/admin.conf
         chown root:root /etc/kubernetes/admin.conf
         ```
@@ -643,7 +643,7 @@ queries:
       remediation: |-
         Run this command on the Control Plane node:

-        ```
+        ```bash
         chmod 600 /etc/kubernetes/scheduler.conf
         chown root:root /etc/kubernetes/scheduler.conf
         ```
@@ -671,7 +671,7 @@ queries:
       remediation: |-
         Run this command on the Control Plane node:

-        ```
+        ```bash
         chmod 600 /etc/kubernetes/controller-manager.conf
         chown root:root /etc/kubernetes/controller-manager.conf
         ```
@@ -699,13 +699,13 @@ queries:
       remediation: |-
         Run one of these commands on the Control Plane node depending on the location of your PKI/SSL directory:

-        ```
+        ```bash
         chown -R root:root /etc/kubernetes/pki/
         ```

         or

-        ```
+        ```bash
         chown -R root:root /etc/kubernetes/ssl/
         ```
       refs:
@@ -724,7 +724,8 @@ queries:
         Otherwise unencrypted traffic could be intercepted and sensitive data could be leaked.
       remediation: |-
         Find the kube-apiserver process and check the `--insecure-port` argument. If the argument is set to `0`, then the kube-apiserver is not listening on an insecure HTTP port:
-        ```
+
+        ```bash
         ps aux | grep kube-apiserver
         ```
       refs:
@@ -743,7 +744,8 @@ queries:
       desc: Ensure the kube-apiserver does not allow anonymous authentication.
       remediation: |-
         Find the kube-apiserver process and check the `--anonymous-auth` argument. If the argument is set to `false`, then the kube-apiserver does not allow anonymous authentication:
-        ```
+
+        ```bash
         ps aux | grep kube-apiserver
         ```
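+
+        On kubeadm-based clusters (an assumption; this manifest path is also used elsewhere in this policy), the flag can additionally be read straight from the static Pod manifest:
+
+        ```bash
+        grep -- '--anonymous-auth' /etc/kubernetes/manifests/kube-apiserver.yaml
+        ```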
!= null and any(.[] ; ascii_upcase | test("ALL|NET_RAW")) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` Additionally, a Pod that doesn't define a list of capabilities to drop at all, or that has a non-empty drop list that doesn't drop NET_RAW (or the ALL capability which includes NET_RAW) will implicitly run with NET_RAW. List these Pods with: - ```kubectl get pods -A -o json | jq -r '.items[] | select( .spec.containers[].securityContext.capabilities.drop | . == null or ( any(.[] ; ascii_upcase | test("ALL|NET_RAW")) | not) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get pods -A -o json | jq -r '.items[] | select( .spec.containers[].securityContext.capabilities.drop | . == null or ( any(.[] ; ascii_upcase | test("ALL|NET_RAW")) | not) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` remediation: | For any Pods that explicitly add the NET_RAW or ALL capability, update the Pods (or the Deployments/DaemonSets/CronJobs/etc that produced the Pods) to ensure they do not ask for the NET_RAW or ALL capability: @@ -5153,12 +5159,16 @@ queries: audit: | Check to ensure no DaemonSets have explicitly asked for the NET_RAW capability (or asked for ALL capabilities which includes NET_RAW): - ```kubectl get daemonsets -A -o json | jq -r '.items[] | select(.spec.template.spec.containers[].securityContext.capabilities.add | . != null and any(.[] ; ascii_upcase | test("ALL|NET_RAW")) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get daemonsets -A -o json | jq -r '.items[] | select(.spec.template.spec.containers[].securityContext.capabilities.add | . != null and any(.[] ; ascii_upcase | test("ALL|NET_RAW")) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` Additionally, a DaemonSet that doesn't define a list of capabilities to drop at all, or that has a non-empty drop list that doesn't drop NET_RAW (or the ALL capability which includes NET_RAW) will implicitly run with NET_RAW. List these DaemonSets with: - ```kubectl get daemonsets -A -o json | jq -r '.items[] | select( .spec.template.spec.containers[].securityContext.capabilities.drop | . == null or ( any(.[] ; ascii_upcase | test("ALL|NET_RAW")) | not) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get daemonsets -A -o json | jq -r '.items[] | select( .spec.template.spec.containers[].securityContext.capabilities.drop | . == null or ( any(.[] ; ascii_upcase | test("ALL|NET_RAW")) | not) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` remediation: | For any DaemonSets that explicitly add the NET_RAW or ALL capability, update them to ensure they do not ask for the NET_RAW or ALL capability: @@ -5208,12 +5218,16 @@ queries: audit: | Check to ensure no ReplicaSets have explicitly asked for the NET_RAW capability (or asked for ALL capabilities which includes NET_RAW): - ```kubectl get replicasets -A -o json | jq -r '.items[] | select(.spec.template.spec.containers[].securityContext.capabilities.add | . != null and any(.[] ; ascii_upcase | test("ALL|NET_RAW")) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get replicasets -A -o json | jq -r '.items[] | select(.spec.template.spec.containers[].securityContext.capabilities.add | . 
!= null and any(.[] ; ascii_upcase | test("ALL|NET_RAW")) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` Additionally, a ReplicaSet that doesn't define a list of capabilities to drop at all, or that has a non-empty drop list that doesn't drop NET_RAW (or the ALL capability which includes NET_RAW) will implicitly run with NET_RAW. List these DaemonSets with: - ```kubectl get replicasets -A -o json | jq -r '.items[] | select( .spec.template.spec.containers[].securityContext.capabilities.drop | . == null or ( any(.[] ; ascii_upcase | test("ALL|NET_RAW")) | not) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get replicasets -A -o json | jq -r '.items[] | select( .spec.template.spec.containers[].securityContext.capabilities.drop | . == null or ( any(.[] ; ascii_upcase | test("ALL|NET_RAW")) | not) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` remediation: | For any ReplicaSets that explicitly add the NET_RAW or ALL capability, update them to ensure they do not ask for the NET_RAW or ALL capability: @@ -5262,12 +5276,16 @@ queries: audit: | Check to ensure no Jobs have explicitly asked for the NET_RAW capability (or asked for ALL capabilities which includes NET_RAW): - ```kubectl get jobs -A -o json | jq -r '.items[] | select(.spec.template.spec.containers[].securityContext.capabilities.add | . != null and any(.[] ; ascii_upcase | test("ALL|NET_RAW")) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get jobs -A -o json | jq -r '.items[] | select(.spec.template.spec.containers[].securityContext.capabilities.add | . != null and any(.[] ; ascii_upcase | test("ALL|NET_RAW")) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` Additionally, a Job that doesn't define a list of capabilities to drop at all, or that has a non-empty drop list that doesn't drop NET_RAW (or the ALL capability which includes NET_RAW) will implicitly run with NET_RAW. List these DaemonSets with: - ```kubectl get jobs -A -o json | jq -r '.items[] | select( .spec.template.spec.containers[].securityContext.capabilities.drop | . == null or ( any(.[] ; ascii_upcase | test("ALL|NET_RAW")) | not) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get jobs -A -o json | jq -r '.items[] | select( .spec.template.spec.containers[].securityContext.capabilities.drop | . == null or ( any(.[] ; ascii_upcase | test("ALL|NET_RAW")) | not) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` remediation: | For any Jobs that explicitly add the NET_RAW or ALL capability, update them to ensure they do not ask for the NET_RAW or ALL capability: @@ -5317,12 +5335,16 @@ queries: audit: | Check to ensure no Deployments have explicitly asked for the NET_RAW capability (or asked for ALL capabilities which includes NET_RAW): - ```kubectl get deployments -A -o json | jq -r '.items[] | select(.spec.template.spec.containers[].securityContext.capabilities.add | . != null and any(.[] ; ascii_upcase | test("ALL|NET_RAW")) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get deployments -A -o json | jq -r '.items[] | select(.spec.template.spec.containers[].securityContext.capabilities.add | . 
!= null and any(.[] ; ascii_upcase | test("ALL|NET_RAW")) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` Additionally, a Deployment that doesn't define a list of capabilities to drop at all, or that has a non-empty drop list that doesn't drop NET_RAW (or the ALL capability which includes NET_RAW) will implicitly run with NET_RAW. List these DaemonSets with: - ```kubectl get deployments -A -o json | jq -r '.items[] | select( .spec.template.spec.containers[].securityContext.capabilities.drop | . == null or ( any(.[] ; ascii_upcase | test("ALL|NET_RAW")) | not) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get deployments -A -o json | jq -r '.items[] | select( .spec.template.spec.containers[].securityContext.capabilities.drop | . == null or ( any(.[] ; ascii_upcase | test("ALL|NET_RAW")) | not) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` remediation: | For any Deployments that explicitly add the NET_RAW or ALL capability, update them to ensure they do not ask for the NET_RAW or ALL capability: @@ -5372,12 +5394,16 @@ queries: audit: | Check to ensure no StatefulSets have explicitly asked for the NET_RAW capability (or asked for ALL capabilities which includes NET_RAW): - ```kubectl get statefulsets -A -o json | jq -r '.items[] | select(.spec.template.spec.containers[].securityContext.capabilities.add | . != null and any(.[] ; ascii_upcase | test("ALL|NET_RAW")) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get statefulsets -A -o json | jq -r '.items[] | select(.spec.template.spec.containers[].securityContext.capabilities.add | . != null and any(.[] ; ascii_upcase | test("ALL|NET_RAW")) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` Additionally, a StatefulSet that doesn't define a list of capabilities to drop at all, or that has a non-empty drop list that doesn't drop NET_RAW (or the ALL capability which includes NET_RAW) will implicitly run with NET_RAW. List these DaemonSets with: - ```kubectl get statefulsets -A -o json | jq -r '.items[] | select( .spec.template.spec.containers[].securityContext.capabilities.drop | . == null or ( any(.[] ; ascii_upcase | test("ALL|NET_RAW")) | not) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get statefulsets -A -o json | jq -r '.items[] | select( .spec.template.spec.containers[].securityContext.capabilities.drop | . == null or ( any(.[] ; ascii_upcase | test("ALL|NET_RAW")) | not) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` remediation: | For any StatefulSets that explicitly add the NET_RAW or ALL capability, update them to ensure they do not ask for the NET_RAW or ALL capability: @@ -5427,12 +5453,16 @@ queries: audit: | Check to ensure no CronJobs have explicitly asked for the NET_RAW capability (or asked for ALL capabilities which includes NET_RAW): - ```kubectl get cronjobs -A -o json | jq -r '.items[] | select(.spec.jobTemplate.spec.template.spec.containers[].securityContext.capabilities.add | . != null and any(.[] ; ascii_upcase | test("ALL|NET_RAW")) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get cronjobs -A -o json | jq -r '.items[] | select(.spec.jobTemplate.spec.template.spec.containers[].securityContext.capabilities.add | . 
!= null and any(.[] ; ascii_upcase | test("ALL|NET_RAW")) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` Additionally, a CronJob that doesn't define a list of capabilities to drop at all, or that has a non-empty drop list that doesn't drop NET_RAW (or the ALL capability which includes NET_RAW) will implicitly run with NET_RAW. List these DaemonSets with: - ```kubectl get cronjobs -A -o json | jq -r '.items[] | select( .spec.jobTemplate.spec.template.spec.containers[].securityContext.capabilities.drop | . == null or ( any(.[] ; ascii_upcase | test("ALL|NET_RAW")) | not) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get cronjobs -A -o json | jq -r '.items[] | select( .spec.jobTemplate.spec.template.spec.containers[].securityContext.capabilities.drop | . == null or ( any(.[] ; ascii_upcase | test("ALL|NET_RAW")) | not) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` remediation: | For any CronJobs that explicitly add the NET_RAW or ALL capability, update them to ensure they do not ask for the NET_RAW or ALL capability: @@ -5488,7 +5518,9 @@ queries: audit: | Check to ensure no Pods have explicitly asked for the SYS_ADMIN capability (or asked for ALL capabilities which includes SYS_ADMIN): - ```kubectl get pods -A -o json | jq -r '.items[] | select(.spec.containers[].securityContext.capabilities.add | . != null and any(.[] ; ascii_upcase | test("ALL|SYS_ADMIN")) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get pods -A -o json | jq -r '.items[] | select(.spec.containers[].securityContext.capabilities.add | . != null and any(.[] ; ascii_upcase | test("ALL|SYS_ADMIN")) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` remediation: | For any Pods that explicitly add the SYS_ADMIN or ALL capability, update the Pods (or the Deployments/DaemonSets/CronJobs/etc that produced the Pods) to ensure they do not ask for the SYS_ADMIN or ALL capability: @@ -5521,7 +5553,9 @@ queries: audit: | Check to ensure no DaemonSets have explicitly asked for the SYS_ADMIN capability (or asked for ALL capabilities which includes SYS_ADMIN): - ```kubectl get daemonsets -A -o json | jq -r '.items[] | select(.spec.template.spec.containers[].securityContext.capabilities.add | . != null and any(.[] ; ascii_upcase | test("ALL|SYS_ADMIN")) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get daemonsets -A -o json | jq -r '.items[] | select(.spec.template.spec.containers[].securityContext.capabilities.add | . != null and any(.[] ; ascii_upcase | test("ALL|SYS_ADMIN")) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` remediation: | For any DaemonSets that explicitly add the SYS_ADMIN or ALL capability, update them to ensure they do not ask for the SYS_ADMIN or ALL capability: @@ -5556,7 +5590,9 @@ queries: audit: | Check to ensure no ReplicaSets have explicitly asked for the SYS_ADMIN capability (or asked for ALL capabilities which includes SYS_ADMIN): - ```kubectl get replicasets -A -o json | jq -r '.items[] | select(.spec.template.spec.containers[].securityContext.capabilities.add | . != null and any(.[] ; ascii_upcase | test("ALL|SYS_ADMIN")) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get replicasets -A -o json | jq -r '.items[] | select(.spec.template.spec.containers[].securityContext.capabilities.add | . 
!= null and any(.[] ; ascii_upcase | test("ALL|SYS_ADMIN")) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` remediation: | For any ReplicaSets that explicitly add the SYS_ADMIN or ALL capability, update them to ensure they do not ask for the SYS_ADMIN or ALL capability: @@ -5591,7 +5627,9 @@ queries: audit: | Check to ensure no Jobs have explicitly asked for the SYS_ADMIN capability (or asked for ALL capabilities which includes SYS_ADMIN): - ```kubectl get jobs -A -o json | jq -r '.items[] | select(.spec.template.spec.containers[].securityContext.capabilities.add | . != null and any(.[] ; ascii_upcase | test("ALL|SYS_ADMIN")) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get jobs -A -o json | jq -r '.items[] | select(.spec.template.spec.containers[].securityContext.capabilities.add | . != null and any(.[] ; ascii_upcase | test("ALL|SYS_ADMIN")) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` remediation: | For any Jobs that explicitly add the SYS_ADMIN or ALL capability, update them to ensure they do not ask for the SYS_ADMIN or ALL capability: @@ -5625,7 +5663,9 @@ queries: audit: | Check to ensure no Deployments have explicitly asked for the SYS_ADMIN capability (or asked for ALL capabilities which includes SYS_ADMIN): - ```kubectl get deployments -A -o json | jq -r '.items[] | select(.spec.template.spec.containers[].securityContext.capabilities.add | . != null and any(.[] ; ascii_upcase | test("ALL|SYS_ADMIN")) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get deployments -A -o json | jq -r '.items[] | select(.spec.template.spec.containers[].securityContext.capabilities.add | . != null and any(.[] ; ascii_upcase | test("ALL|SYS_ADMIN")) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` remediation: | For any Deployments that explicitly add the SYS_ADMIN or ALL capability, update them to ensure they do not ask for the SYS_ADMIN or ALL capability: @@ -5660,7 +5700,9 @@ queries: audit: | Check to ensure no StatefulSets have explicitly asked for the SYS_ADMIN capability (or asked for ALL capabilities which includes SYS_ADMIN): - ```kubectl get statefulsets -A -o json | jq -r '.items[] | select(.spec.template.spec.containers[].securityContext.capabilities.add | . != null and any(.[] ; ascii_upcase | test("ALL|SYS_ADMIN")) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get statefulsets -A -o json | jq -r '.items[] | select(.spec.template.spec.containers[].securityContext.capabilities.add | . != null and any(.[] ; ascii_upcase | test("ALL|SYS_ADMIN")) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` remediation: | For any StatefulSets that explicitly add the SYS_ADMIN or ALL capability, update them to ensure they do not ask for the SYS_ADMIN or ALL capability: @@ -5695,7 +5737,9 @@ queries: audit: | Check to ensure no CronJobs have explicitly asked for the SYS_ADMIN capability (or asked for ALL capabilities which includes SYS_ADMIN): - ```kubectl get cronjobs -A -o json | jq -r '.items[] | select(.spec.jobTemplate.spec.template.spec.containers[].securityContext.capabilities.add | . != null and any(.[] ; ascii_upcase | test("ALL|SYS_ADMIN")) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get cronjobs -A -o json | jq -r '.items[] | select(.spec.jobTemplate.spec.template.spec.containers[].securityContext.capabilities.add | . 
!= null and any(.[] ; ascii_upcase | test("ALL|SYS_ADMIN")) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` remediation: | For any CronJobs that explicitly add the SYS_ADMIN or ALL capability, update them to ensure they do not ask for the SYS_ADMIN or ALL capability: @@ -5733,7 +5777,9 @@ queries: audit: | Check to ensure no Pods are binding any of their containers to a host port: - ```kubectl get pods -A -o json | jq -r '.items[] | select( (.spec.containers[].ports | . != null and any(.[].hostPort; . != null) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get pods -A -o json | jq -r '.items[] | select( (.spec.containers[].ports | . != null and any(.[].hostPort; . != null) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` remediation: | For any Pods that bind to a host port, update the Pods (or the Deployments/DaemonSets/CronJobs/etc that produced the Pods) to ensure they do not bind to a host port: @@ -5770,7 +5816,9 @@ queries: audit: | Check to ensure no DaemonSets are binding any of their containers to a host port: - ```kubectl get daemonsets -A -o json | jq -r '.items[] | select( (.spec.template.spec.containers[].ports | . != null and any(.[].hostPort; . != null) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get daemonsets -A -o json | jq -r '.items[] | select( (.spec.template.spec.containers[].ports | . != null and any(.[].hostPort; . != null) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` remediation: | For any DaemonSets that bind to a host port, update the DaemonSets to ensure they do not bind to a host port: @@ -5809,7 +5857,9 @@ queries: audit: | Check to ensure no ReplicaSets are binding any of their containers to a host port: - ```kubectl get replicasets -A -o json | jq -r '.items[] | select( (.spec.template.spec.containers[].ports | . != null and any(.[].hostPort; . != null) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get replicasets -A -o json | jq -r '.items[] | select( (.spec.template.spec.containers[].ports | . != null and any(.[].hostPort; . != null) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` remediation: | For any ReplicaSets that bind to a host port, update the ReplicaSets to ensure they do not bind to a host port: @@ -5848,7 +5898,9 @@ queries: audit: | Check to ensure no Jobs are binding any of their containers to a host port: - ```kubectl get jobs -A -o json | jq -r '.items[] | select( (.spec.template.spec.containers[].ports | . != null and any(.[].hostPort; . != null) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get jobs -A -o json | jq -r '.items[] | select( (.spec.template.spec.containers[].ports | . != null and any(.[].hostPort; . != null) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq + ``` remediation: | For any ReplicaSets that bind to a host port, update the Jobs to ensure they do not bind to a host port: @@ -5887,7 +5939,9 @@ queries: audit: | Check to ensure no Deployments are binding any of their containers to a host port: - ```kubectl get deployments -A -o json | jq -r '.items[] | select( (.spec.template.spec.containers[].ports | . != null and any(.[].hostPort; . != null) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq``` + ```bash + kubectl get deployments -A -o json | jq -r '.items[] | select( (.spec.template.spec.containers[].ports | . != null and any(.[].hostPort; . 
       remediation: |
         For any Pods that bind to a host port, update the Pods (or the Deployments/DaemonSets/CronJobs/etc that produced the Pods) to ensure they do not bind to a host port:
@@ -5770,7 +5816,9 @@ queries:
       audit: |
         Check to ensure no DaemonSets are binding any of their containers to a host port:

-        ```kubectl get daemonsets -A -o json | jq -r '.items[] | select( (.spec.template.spec.containers[].ports | . != null and any(.[].hostPort; . != null) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq```
+        ```bash
+        kubectl get daemonsets -A -o json | jq -r '.items[] | select( (.spec.template.spec.containers[].ports | . != null and any(.[].hostPort; . != null) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq
+        ```
       remediation: |
         For any DaemonSets that bind to a host port, update the DaemonSets to ensure they do not bind to a host port:
@@ -5809,7 +5857,9 @@ queries:
       audit: |
         Check to ensure no ReplicaSets are binding any of their containers to a host port:

-        ```kubectl get replicasets -A -o json | jq -r '.items[] | select( (.spec.template.spec.containers[].ports | . != null and any(.[].hostPort; . != null) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq```
+        ```bash
+        kubectl get replicasets -A -o json | jq -r '.items[] | select( (.spec.template.spec.containers[].ports | . != null and any(.[].hostPort; . != null) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq
+        ```
       remediation: |
         For any ReplicaSets that bind to a host port, update the ReplicaSets to ensure they do not bind to a host port:
@@ -5848,7 +5898,9 @@ queries:
       audit: |
         Check to ensure no Jobs are binding any of their containers to a host port:

-        ```kubectl get jobs -A -o json | jq -r '.items[] | select( (.spec.template.spec.containers[].ports | . != null and any(.[].hostPort; . != null) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq```
+        ```bash
+        kubectl get jobs -A -o json | jq -r '.items[] | select( (.spec.template.spec.containers[].ports | . != null and any(.[].hostPort; . != null) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq
+        ```
       remediation: |
         For any Jobs that bind to a host port, update the Jobs to ensure they do not bind to a host port:
@@ -5887,7 +5939,9 @@ queries:
       audit: |
         Check to ensure no Deployments are binding any of their containers to a host port:

-        ```kubectl get deployments -A -o json | jq -r '.items[] | select( (.spec.template.spec.containers[].ports | . != null and any(.[].hostPort; . != null) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq```
+        ```bash
+        kubectl get deployments -A -o json | jq -r '.items[] | select( (.spec.template.spec.containers[].ports | . != null and any(.[].hostPort; . != null) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq
+        ```
       remediation: |
         For any Deployments that bind to a host port, update the Deployments to ensure they do not bind to a host port:
@@ -5926,7 +5980,9 @@ queries:
       audit: |
         Check to ensure no StatefulSets are binding any of their containers to a host port:

-        ```kubectl get statefulsets -A -o json | jq -r '.items[] | select( (.spec.template.spec.containers[].ports | . != null and any(.[].hostPort; . != null) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq```
+        ```bash
+        kubectl get statefulsets -A -o json | jq -r '.items[] | select( (.spec.template.spec.containers[].ports | . != null and any(.[].hostPort; . != null) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq
+        ```
       remediation: |
         For any StatefulSets that bind to a host port, update the StatefulSets to ensure they do not bind to a host port:
@@ -5965,7 +6021,9 @@ queries:
       audit: |
         Check to ensure no CronJobs are binding any of their containers to a host port:

-        ```kubectl get cronjobs -A -o json | jq -r '.items[] | select( (.spec.jobTemplate.spec.template.spec.containers[].ports | . != null and any(.[].hostPort; . != null) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq```
+        ```bash
+        kubectl get cronjobs -A -o json | jq -r '.items[] | select( (.spec.jobTemplate.spec.template.spec.containers[].ports | . != null and any(.[].hostPort; . != null) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq
+        ```
       remediation: |
         For any CronJobs that bind to a host port, update the CronJobs to ensure they do not bind to a host port:
@@ -6059,7 +6117,9 @@ queries:
       audit: |
         Check to ensure no containers in a Pod are mounting hostPath volumes as read-write:

-        ```kubectl get pods -A -o json | jq -r '.items[] | [.spec.volumes[] | select(.hostPath != null) | .name] as $myVar | select(.spec.containers[].volumeMounts | (. != null and ( .[] | ( [.name] | inside($myVar) ) and .readOnly != true ) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq```
+        ```bash
+        kubectl get pods -A -o json | jq -r '.items[] | [.spec.volumes[] | select(.hostPath != null) | .name] as $myVar | select(.spec.containers[].volumeMounts | (. != null and ( .[] | ( [.name] | inside($myVar) ) and .readOnly != true ) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq
+        ```
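+
+        To list the actual host paths each flagged Pod mounts (a sketch):
+
+        ```bash
+        kubectl get pods -A -o json | jq -r '.items[] | select([.spec.volumes[]? | select(.hostPath != null)] | length > 0) | .metadata.namespace + "/" + .metadata.name + ": " + ([.spec.volumes[] | select(.hostPath != null) | .hostPath.path] | join(", "))'
+        ```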
       remediation: |
         For any Pod containers that mount a hostPath volume as read-write, update them (or the Deployment/StatefulSet/etc that created the Pod):
@@ -6109,7 +6169,9 @@ queries:
       audit: |
         Check to ensure no containers in a DaemonSet are mounting hostPath volumes as read-write:

-        ```kubectl get daemonsets -A -o json | jq -r '.items[] | [.spec.template.spec.volumes[] | select(.hostPath != null) | .name] as $myVar | select(.spec.template.spec.containers[].volumeMounts | (. != null and ( .[] | ( [.name] | inside($myVar) ) and .readOnly != true ) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq```
+        ```bash
+        kubectl get daemonsets -A -o json | jq -r '.items[] | [.spec.template.spec.volumes[] | select(.hostPath != null) | .name] as $myVar | select(.spec.template.spec.containers[].volumeMounts | (. != null and ( .[] | ( [.name] | inside($myVar) ) and .readOnly != true ) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq
+        ```
       remediation: |
         For any DaemonSet containers that mount a hostPath volume as read-write, update them:
@@ -6161,7 +6223,9 @@ queries:
       audit: |
         Check to ensure no containers in a ReplicaSet are mounting hostPath volumes as read-write:

-        ```kubectl get replicasets -A -o json | jq -r '.items[] | [.spec.template.spec.volumes[] | select(.hostPath != null) | .name] as $myVar | select(.spec.template.spec.containers[].volumeMounts | (. != null and ( .[] | ( [.name] | inside($myVar) ) and .readOnly != true ) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq```
+        ```bash
+        kubectl get replicasets -A -o json | jq -r '.items[] | [.spec.template.spec.volumes[] | select(.hostPath != null) | .name] as $myVar | select(.spec.template.spec.containers[].volumeMounts | (. != null and ( .[] | ( [.name] | inside($myVar) ) and .readOnly != true ) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq
+        ```
       remediation: |
         For any ReplicaSet containers that mount a hostPath volume as read-write, update them:
@@ -6213,7 +6277,9 @@ queries:
       audit: |
         Check to ensure no containers in a Job are mounting hostPath volumes as read-write:

-        ```kubectl get jobs -A -o json | jq -r '.items[] | [.spec.template.spec.volumes[] | select(.hostPath != null) | .name] as $myVar | select(.spec.template.spec.containers[].volumeMounts | (. != null and ( .[] | ( [.name] | inside($myVar) ) and .readOnly != true ) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq```
+        ```bash
+        kubectl get jobs -A -o json | jq -r '.items[] | [.spec.template.spec.volumes[] | select(.hostPath != null) | .name] as $myVar | select(.spec.template.spec.containers[].volumeMounts | (. != null and ( .[] | ( [.name] | inside($myVar) ) and .readOnly != true ) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq
+        ```
       remediation: |
         For any Job containers that mount a hostPath volume as read-write, update them:
@@ -6265,7 +6331,9 @@ queries:
       audit: |
         Check to ensure no containers in a Deployment are mounting hostPath volumes as read-write:

-        ```kubectl get deployments -A -o json | jq -r '.items[] | [.spec.template.spec.volumes[] | select(.hostPath != null) | .name] as $myVar | select(.spec.template.spec.containers[].volumeMounts | (. != null and ( .[] | ( [.name] | inside($myVar) ) and .readOnly != true ) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq```
+        ```bash
+        kubectl get deployments -A -o json | jq -r '.items[] | [.spec.template.spec.volumes[] | select(.hostPath != null) | .name] as $myVar | select(.spec.template.spec.containers[].volumeMounts | (. != null and ( .[] | ( [.name] | inside($myVar) ) and .readOnly != true ) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq
+        ```
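+
+        Before editing a flagged Deployment, its mount flags can be reviewed directly (a sketch; `<name>` and `<namespace>` are placeholders):
+
+        ```bash
+        kubectl get deployment <name> -n <namespace> -o jsonpath='{range .spec.template.spec.containers[*]}{.name}{": "}{.volumeMounts}{"\n"}{end}'
+        ```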
       remediation: |
         For any Deployment containers that mount a hostPath volume as read-write, update them:
@@ -6317,7 +6385,9 @@ queries:
       audit: |
         Check to ensure no containers in a StatefulSet are mounting hostPath volumes as read-write:

-        ```kubectl get statefulsets -A -o json | jq -r '.items[] | [.spec.template.spec.volumes[] | select(.hostPath != null) | .name] as $myVar | select(.spec.template.spec.containers[].volumeMounts | (. != null and ( .[] | ( [.name] | inside($myVar) ) and .readOnly != true ) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq```
+        ```bash
+        kubectl get statefulsets -A -o json | jq -r '.items[] | [.spec.template.spec.volumes[] | select(.hostPath != null) | .name] as $myVar | select(.spec.template.spec.containers[].volumeMounts | (. != null and ( .[] | ( [.name] | inside($myVar) ) and .readOnly != true ) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq
+        ```
       remediation: |
         For any StatefulSet containers that mount a hostPath volume as read-write, update them:
@@ -6369,7 +6439,9 @@ queries:
       audit: |
         Check to ensure no containers in a CronJob are mounting hostPath volumes as read-write:

-        ```kubectl get cronjobs -A -o json | jq -r '.items[] | [.spec.jobTemplate.spec.template.spec.volumes[] | select(.hostPath != null) | .name] as $myVar | select(.spec.jobTemplate.spec.template.spec.containers[].volumeMounts | (. != null and ( .[] | ( [.name] | inside($myVar) ) and .readOnly != true ) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq```
+        ```bash
+        kubectl get cronjobs -A -o json | jq -r '.items[] | [.spec.jobTemplate.spec.template.spec.volumes[] | select(.hostPath != null) | .name] as $myVar | select(.spec.jobTemplate.spec.template.spec.containers[].volumeMounts | (. != null and ( .[] | ( [.name] | inside($myVar) ) and .readOnly != true ) ) ) | .metadata.namespace + "/" + .metadata.name' | uniq
+        ```
       remediation: |
         For any CronJob containers that mount a hostPath volume as read-write, update them:
@@ -6402,7 +6474,10 @@ queries:
         Tiller is the in-cluster component for the Helm v2 package manager.
         It communicates directly with the Kubernetes API and therefore has broad RBAC permissions. An attacker can use that to gain cluster-wide access.
       audit: |
         Verify there are no deployments running Tiller:
-        ```kubectl get deployments -A -o=custom-columns="NAME:.metadata.name,IMAGE:.spec.template.spec.containers[*].image"```
+
+        ```bash
+        kubectl get deployments -A -o=custom-columns="NAME:.metadata.name,IMAGE:.spec.template.spec.containers[*].image"
+        ```
       remediation: |
         Delete any deployments that are running Tiller.
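+
+        For example, with a default Helm v2 installation (an assumption about the release name and namespace):
+
+        ```bash
+        kubectl -n kube-system delete deployment tiller-deploy
+        kubectl -n kube-system delete service tiller-deploy
+        ```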
   - uid: mondoo-kubernetes-security-pod-tiller
@@ -6417,7 +6492,10 @@ queries:
         Tiller is the in-cluster component for the Helm v2 package manager.
         It communicates directly with the Kubernetes API and therefore has broad RBAC permissions. An attacker can use that to gain cluster-wide access.
       audit: |
         Verify there are no pods running Tiller:
-        ```kubectl get pods -A -o=custom-columns="NAME:.metadata.name,IMAGE:.spec.template.spec.containers[*].image"```
+
+        ```bash
+        kubectl get pods -A -o=custom-columns="NAME:.metadata.name,IMAGE:.spec.containers[*].image"
+        ```
       remediation: |
         Delete any pods that are running Tiller.
   - uid: mondoo-kubernetes-security-deployment-k8s-dashboard
@@ -6432,7 +6510,10 @@ queries:
         The Kubernetes dashboard allows browsing through cluster resources such as workloads, configmaps and secrets.
         In 2018 Tesla was hacked because their Kubernetes dashboard was publicly exposed. This allowed the attackers to extract credentials and deploy Bitcoin miners on the cluster.
       audit: |
         Verify there are no deployments running Kubernetes dashboard:
-        ```kubectl get deployments -A -o=custom-columns="NAME:.metadata.name,IMAGE:.spec.template.spec.containers[*].image"```
+
+        ```bash
+        kubectl get deployments -A -o=custom-columns="NAME:.metadata.name,IMAGE:.spec.template.spec.containers[*].image"
+        ```
       remediation: |
         Delete any deployments that are running Kubernetes dashboard.
   - uid: mondoo-kubernetes-security-pod-k8s-dashboard
@@ -6449,6 +6530,9 @@ queries:
         The Kubernetes dashboard allows browsing through cluster resources such as workloads, configmaps and secrets.
         In 2018 Tesla was hacked because their Kubernetes dashboard was publicly exposed. This allowed the attackers to extract credentials and deploy Bitcoin miners on the cluster.
       audit: |
         Verify there are no pods running Kubernetes dashboard:
-        ```kubectl get pods -A -o=custom-columns="NAME:.metadata.name,IMAGE:.spec.template.spec.containers[*].image"```
+
+        ```bash
+        kubectl get pods -A -o=custom-columns="NAME:.metadata.name,IMAGE:.spec.containers[*].image"
+        ```
       remediation: |
         Delete any pods that are running Kubernetes dashboard.
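+
+        For example, if the dashboard was installed into its default namespace (an assumption):
+
+        ```bash
+        kubectl -n kubernetes-dashboard delete pods -l k8s-app=kubernetes-dashboard
+        ```
+
+        If a Deployment or similar controller owns the Pods, delete that controller instead, or the Pods will simply be recreated.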