Kubescape-Prometheus Integration

Version: 0.0.0 Type: application AppVersion: v0.0.0

Installing the Kubescape-Prometheus integration Helm chart:

  1. Install the kube-prometheus-stack Helm Chart
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
kubectl create namespace prometheus
helm install -n prometheus kube-prometheus-stack prometheus-community/kube-prometheus-stack --set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false,prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false
  2. Install the kubescape-prometheus-integrator Helm Chart
helm repo add kubescape https://kubescape.github.io/helm-charts/
helm repo update
helm upgrade --install kubescape-prometheus kubescape/kubescape-prometheus-integrator -n kubescape-prometheus --create-namespace
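After both releases are deployed, you can sanity-check the integration. This is a quick sketch; the resource names follow the chart defaults used above:

# The Kubescape pod should be running in its namespace
kubectl -n kubescape-prometheus get pods

# The chart ships a ServiceMonitor (enabled by default, see Values below); Prometheus
# discovers it thanks to the selector settings passed to kube-prometheus-stack in step 1
kubectl -n kubescape-prometheus get servicemonitors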

  • Resource Kind: Deployment
  • Communication: REST API
  • Responsibility: Runs Kubescape to detect misconfigurations in the cluster; this microservice uses the same engine as the Kubescape CLI tool.
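To exercise the REST API outside of Prometheus, you can hit the metrics endpoint yourself; scraping it is what triggers a scan. The service name, port, and path below are assumptions, so verify them against the rendered Service and ServiceMonitor in your cluster:

# Forward the Kubescape service locally (name and port are assumptions; check with
# kubectl -n kubescape-prometheus get svc)
kubectl -n kubescape-prometheus port-forward svc/kubescape 8080:8080

# From another terminal: scraping the endpoint triggers a scan, just as Prometheus would
# (the /v1/metrics path is an assumption; read the actual path from the ServiceMonitor spec)
curl http://localhost:8080/v1/metrics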

Component Diagram

graph TB

subgraph Cluster
    pr(Prometheus)
    ks(Kubescape)
    k8sApi(Kubernetes API)
end

pr -->|Start Scan| ks
ks -->|Collect Cluster Info|k8sApi
ks -->|Scan results| pr

classDef k8s fill:#326ce5,stroke:#fff,stroke-width:1px,color:#fff;
classDef plain fill:#ddd,stroke:#fff,stroke-width:1px,color:#000

class k8sApi k8s
class pr plain

Adjusting Resource Usage for Your Cluster

By default, Kubescape is configured for small- to medium-sized clusters. If you have a larger cluster and you experience slowdowns or see Kubernetes evicting components, please increase the resources allocated to the affected component.

Taking Kubescape as an example, we found that our defaults of 500 MiB of memory and 500m CPU work well for clusters of up to 1250 total resources. If you have more resources than that, or are already experiencing resource pressure, first check how many resources are in your cluster by running the following command:

kubectl get all -A --no-headers | wc -l
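Note that kubectl get all only covers a handful of core resource kinds, so the count above is a rough lower bound. If you want a more complete tally that also includes custom resources, you can enumerate every listable API type; this best-effort sketch is slower but more thorough:

kubectl api-resources --verbs=list -o name \
  | xargs -n1 kubectl get -A --no-headers --ignore-not-found 2>/dev/null \
  | wc -l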

The command should print an approximate count of resources in your cluster. Then, based on the number you see, allocate 0.4 MiB of memory per resource (that is, 100 MiB for every 250 resources), but no less than 128 MiB total. The formula for memory is as follows:

MemoryLimit := max(128, 0.4 * YOUR_AMOUNT_OF_RESOURCES)

For example, if your cluster has 500 resources, a sensible memory limit would be:

kubescape:
  resources:
    limits:
      memory: 200Mi  # max(128, 0.4 * 500) == 200

If your cluster has 50 resources, we still recommend allocating at least 128 MiB of memory.
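If you prefer to compute the limit directly from the live count, a throwaway one-liner works; it simply applies the formula above to the output of the counting command:

RESOURCES=$(kubectl get all -A --no-headers | wc -l)
# max(128, 0.4 * RESOURCES), rounded up to a whole MiB
awk -v n="$RESOURCES" 'BEGIN { m = int(0.4 * n + 0.999); if (m < 128) m = 128; print m "Mi" }'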

When it comes to CPU, the more you allocate, the faster Kubescape will scan your cluster. This is especially true for clusters with a large number of resources. However, we recommend giving Kubescape no less than 500m CPU regardless of cluster size, so that it can scan even a relatively large number of resources quickly.
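For example, to give Kubescape more CPU headroom in a large cluster, you could override its resources in your values file; the numbers below are illustrative rather than tuned recommendations:

kubescape:
  resources:
    requests:
      cpu: 500m      # keep at least the recommended floor
    limits:
      cpu: "1"       # illustrative; scan speed scales with the CPU you allow
      memory: 500Mi  # pair with the memory formula above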

Chart support

Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| kubescape.affinity | object | `{}` | Assign custom affinity rules to the deployment |
| kubescape.downloadArtifacts | bool | `false` | Download policies on every scan. We recommend keeping this `true` in general; set it to `false` in air-gapped environments or when scanning at high frequency (as when running with Prometheus) |
| kubescape.enableHostScan | bool | `false` | Enable the host scanner feature |
| kubescape.enabled | bool | `true` | Enable/disable Kubescape scanning |
| kubescape.image.repository | string | `"quay.io/kubescape/kubescape"` | Source code (public repo) |
| kubescape.nodeSelector | object | `{}` | Node selector |
| kubescape.serviceMonitor.enabled | bool | `true` | Enable/disable the ServiceMonitor for Prometheus (Operator) integration |
| kubescape.volumes | list | `[]` | Additional volumes for Kubescape |
| kubescape.volumeMounts | list | `[]` | Additional volumeMounts for Kubescape |
| kubescapeHostScanner.volumes | list | `[]` | Additional volumes for the host scanner |
| kubescapeHostScanner.volumeMounts | list | `[]` | Additional volumeMounts for the host scanner |
| awsIamRoleArn | string | `nil` | AWS IAM role ARN |
| cloudRegion | string | `nil` | Cloud region |
| cloudProviderEngine | string | `nil` | Cloud provider engine |
| gkeProject | string | `nil` | GKE project |
| gkeServiceAccount | string | `nil` | GKE service account |
| volumes | list | `[]` | Additional volumes for all containers |
| volumeMounts | list | `[]` | Additional volumeMounts for all containers |
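Any of these keys can be overridden at install time with --set, or in a values file. For example, to enable the host scanner feature:

helm upgrade --install kubescape-prometheus kubescape/kubescape-prometheus-integrator \
  -n kubescape-prometheus --create-namespace \
  --set kubescape.enableHostScan=true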