
Sync with upstream #120

Merged · 71 commits · Nov 19, 2024
cb0c1b6
Plugin args: tag arguments with omitempty to reduce the marshalled js…
ingvagabund Aug 6, 2024
33a7470
bump k8s.io libs to v0.31.0
a7i Aug 14, 2024
4e4c5f7
Merge pull request #1496 from a7i/k8s-1.30
k8s-ci-robot Aug 15, 2024
0cf1fc9
descheduler v0.31: update e2e test versions
a7i Aug 29, 2024
9fa48cd
chore: upgrade python EOL and action versions
a7i Aug 29, 2024
a300009
Merge pull request #1505 from a7i/python-eol
k8s-ci-robot Aug 29, 2024
dbe4423
Merge pull request #1504 from a7i/k8s-1.31-e2e
k8s-ci-robot Aug 29, 2024
0b50594
feat(helm): make securityContext conditional in Deployment and CronJob
bendikp Aug 30, 2024
ed6a133
Merge pull request #1507 from bendikp/make-security-context-conditional
k8s-ci-robot Sep 2, 2024
0f1890e
Merge pull request #1480 from ingvagabund/omitempty-for-plugin-args
k8s-ci-robot Sep 2, 2024
fdd6910
modify IMAGE_TAG to fix the version parsing issue
fanhaouu Sep 3, 2024
ab6a3ca
avoid e2e test timeout
fanhaouu Sep 3, 2024
4989cc3
descheduler v0.31: update docs and manifests
a7i Aug 29, 2024
33868c4
chore: replace `github.com/ghodss/yaml` with `sigs.k8s.io/yaml`
Juneezee Sep 8, 2024
73432b7
Merge pull request #1506 from a7i/docs-v0.31
k8s-ci-robot Sep 9, 2024
4d6a0f1
Merge pull request #1508 from fanhaouu/fix-run-e2e-tests-bug
k8s-ci-robot Sep 9, 2024
b35e93e
Merge pull request #1510 from Juneezee/chore/yaml
k8s-ci-robot Sep 9, 2024
2c00560
descheduler v0.31.0: bump helm chart
a7i Sep 9, 2024
f19a297
bump kustomize files
a7i Sep 9, 2024
c9c03ee
Merge pull request #1511 from a7i/bump-kustomize
k8s-ci-robot Sep 9, 2024
3bf40c8
chore: bump golangci-lint to latest
a7i Sep 9, 2024
9f15e02
Merge pull request #1513 from a7i/amir/bump-golangci
k8s-ci-robot Sep 9, 2024
b094acb
Merge pull request #1512 from a7i/bump-helm
k8s-ci-robot Sep 9, 2024
6e30321
fix: github action Release Charts to have write permissions
a7i Sep 9, 2024
8b0744c
Merge pull request #1514 from a7i/amir/gha-perms
k8s-ci-robot Sep 9, 2024
d25cba0
[e2e] abstract common methods
fanhaouu Sep 12, 2024
18ef695
Merge pull request #1517 from fanhaouu/e2e-common-method
k8s-ci-robot Sep 20, 2024
af495e6
e2e: TopologySpreadConstraint: build a descheduler image and run the …
fanhaouu Sep 22, 2024
0ac05f6
e2e: LeaderElection: build a descheduler image and run the deschedule…
fanhaouu Sep 20, 2024
347a08a
add update lease permission
fanhaouu Sep 20, 2024
8b6a675
remove policy_leaderelection yaml file
fanhaouu Sep 22, 2024
05ce561
e2e: FailedPods: build a descheduler image and run the descheduler as…
fanhaouu Sep 20, 2024
e0a8c77
e2e: DuplicatePods: build a descheduler image and run the descheduler…
fanhaouu Sep 20, 2024
2c033a1
Merge pull request #1520 from fanhaouu/e2e-duplicatepods
k8s-ci-robot Sep 30, 2024
042fef7
Merge pull request #1521 from fanhaouu/e2e-failedpods
k8s-ci-robot Sep 30, 2024
8e762d2
Merge pull request #1523 from fanhaouu/e2e-topologyspreadconstraint
k8s-ci-robot Sep 30, 2024
e1e537d
Merge pull request #1522 from fanhaouu/e2e-leaderelection
k8s-ci-robot Oct 1, 2024
3e61666
test: construct e2e deployments through buildTestDeployment
ingvagabund Oct 1, 2024
22d9230
Make sure dry runs sees all the resources a normal run would do (#1526)
john7doe Oct 4, 2024
b07be07
Merge pull request #1527 from ingvagabund/e2e-buildTestDeployment
k8s-ci-robot Oct 8, 2024
e0ff750
Move default LNU threshold setting under setDefaultForLNUThresholds
ingvagabund Oct 11, 2024
e3c41d6
lnu: move static code from Balance under plugin constructor
ingvagabund Oct 11, 2024
89bd188
hnu: move static code from Balance under plugin constructor
ingvagabund Oct 11, 2024
7696f00
Merge pull request #1532 from ingvagabund/node-utilization-refactoring
k8s-ci-robot Oct 14, 2024
ef0c2c1
add ignorePodsWithoutPDB option (#1529)
john7doe Oct 15, 2024
0c552b6
Update Dockerfile - GoLang v 1.22.7 FIX - CVE-2024-34156
sagar-18 Oct 31, 2024
a18425a
Merge pull request #1539 from sagar-18/patch-1
k8s-ci-robot Nov 5, 2024
7eeb07d
Update nodes sorting function to respect available resources
ingvagabund Nov 8, 2024
269f16c
DeschedulerServer: new Apply function for applying configuration
ingvagabund Nov 12, 2024
fb4b874
Move RunE code under Run
ingvagabund Nov 12, 2024
1e48cfe
Merge pull request #1541 from ingvagabund/sortNodesByUsage-dont-hardc…
k8s-ci-robot Nov 13, 2024
da52983
Merge pull request #1542 from ingvagabund/descheduler-server-apply
k8s-ci-robot Nov 13, 2024
e655a7e
nodeutilization: NodeUtilization: make pod utilization extraction con…
ingvagabund Nov 13, 2024
e9f4385
nodeutilization: iterate through existing resources
ingvagabund Nov 13, 2024
67d3d52
sortNodesByUsage: drop extended resources as they are already counted in
ingvagabund Nov 13, 2024
d419816
Merge pull request #1546 from ingvagabund/sortNodesByUsage-extended
k8s-ci-robot Nov 13, 2024
5ba11e0
Merge pull request #1543 from ingvagabund/node-utilization-refactoring-I
k8s-ci-robot Nov 13, 2024
af8a744
Merge pull request #1544 from ingvagabund/node-utilization-refactorin…
k8s-ci-robot Nov 13, 2024
9950b8a
nodeutilization: usage2KeysAndValues for constructing a key:value lis…
ingvagabund Nov 14, 2024
cd408dd
bump(golangci-lint)=v1.62.0
ingvagabund Nov 14, 2024
23a6d26
Merge pull request #1549 from ingvagabund/usageKeysAndValues
k8s-ci-robot Nov 14, 2024
7b1178b
Merge pull request #1551 from ingvagabund/bump-golangci-lint
k8s-ci-robot Nov 14, 2024
d1c64c4
nodeutilization: separate code responsible for requested resource ext…
ingvagabund Nov 13, 2024
343ebb9
Merge pull request #1545 from ingvagabund/node-utilization-refactorin…
k8s-ci-robot Nov 15, 2024
2049f87
Sync with upstream
ingvagabund Nov 16, 2024
74d965f
UPSTREAM: 1466: Define EvictionsInBackground feature gate
ingvagabund Aug 12, 2024
d4aaf5d
UPSTREAM: 1466: Introduce RequestEviction feature for evicting pods i…
ingvagabund Aug 30, 2024
31c6b4c
UPSTREAM: 1555: [nodeutilization]: actual usage client through kubern…
ingvagabund Nov 7, 2024
b43f005
UPSTREAM: 1555: go mod tidy/vendor k8s.io/metrics
ingvagabund Nov 7, 2024
5ddb1ad
UPSTREAM: 1533: [nodeutilization]: prometheus usage client through pr…
ingvagabund Nov 7, 2024
5814784
UPSTREAM: 1533: Update vendor for prometheus deps
ingvagabund Nov 7, 2024
16 changes: 8 additions & 8 deletions .github/workflows/helm.yaml
@@ -20,27 +20,27 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0

- name: Set up Helm
uses: azure/setup-helm@v2.1
uses: azure/setup-helm@v4.2.0
with:
version: v3.9.2
version: v3.15.1

- uses: actions/setup-python@v3.1.2
- uses: actions/setup-python@v5.1.1
with:
python-version: 3.7
python-version: 3.12

- uses: actions/setup-go@v3
- uses: actions/setup-go@v5
with:
go-version-file: 'go.mod'

- name: Set up chart-testing
uses: helm/chart-testing-action@v2.2.1
uses: helm/chart-testing-action@v2.6.1
with:
version: v3.7.0
version: v3.11.0

- name: Install Helm Unit Test Plugin
run: |
8 changes: 4 additions & 4 deletions .github/workflows/manifests.yaml
@@ -7,16 +7,16 @@ jobs:
deploy:
strategy:
matrix:
k8s-version: ["v1.30.0"]
descheduler-version: ["v0.30.0"]
k8s-version: ["v1.31.0"]
descheduler-version: ["v0.31.0"]
descheduler-api: ["v1alpha2"]
manifest: ["deployment"]
runs-on: ubuntu-latest
steps:
- name: Checkout Repo
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Create kind cluster
uses: helm/kind-action@v1.5.0
uses: helm/kind-action@v1.10.0
with:
node_image: kindest/node:${{ matrix.k8s-version }}
kubectl_version: ${{ matrix.k8s-version }}
9 changes: 6 additions & 3 deletions .github/workflows/release.yaml
@@ -5,6 +5,9 @@ on:
branches:
- release-*

permissions:
contents: write # allow actions to update gh-pages branch

jobs:
release:
runs-on: ubuntu-latest
@@ -20,12 +20,12 @@ jobs:
git config user.email "[email protected]"

- name: Install Helm
uses: azure/setup-helm@v1
uses: azure/setup-helm@v4.2.0
with:
version: v3.7.0
version: v3.15.1

- name: Run chart-releaser
uses: helm/chart-releaser-action@v1.1.0
uses: helm/chart-releaser-action@v1.6.0
env:
CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
CR_RELEASE_NAME_TEMPLATE: "descheduler-helm-chart-{{ .Version }}"
2 changes: 1 addition & 1 deletion .github/workflows/security.yaml
@@ -22,7 +22,7 @@ jobs:
fail-fast: false
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0

2 changes: 1 addition & 1 deletion Dockerfile
@@ -11,7 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM golang:1.22.5
FROM golang:1.22.7

WORKDIR /go/src/sigs.k8s.io/descheduler
COPY . .
76 changes: 54 additions & 22 deletions README.md
@@ -33,18 +33,15 @@ but relies on the default scheduler for that.
## ⚠️ Documentation Versions by Release

If you are using a published release of Descheduler (such as
`registry.k8s.io/descheduler/descheduler:v0.26.1`), follow the documentation in
`registry.k8s.io/descheduler/descheduler:v0.31.0`), follow the documentation in
that version's release branch, as listed below:

|Descheduler Version|Docs link|
|---|---|
|v0.31.x|[`release-1.31`](https://github.com/kubernetes-sigs/descheduler/blob/release-1.31/README.md)|
|v0.30.x|[`release-1.30`](https://github.com/kubernetes-sigs/descheduler/blob/release-1.30/README.md)|
|v0.29.x|[`release-1.29`](https://github.com/kubernetes-sigs/descheduler/blob/release-1.29/README.md)|
|v0.28.x|[`release-1.28`](https://github.com/kubernetes-sigs/descheduler/blob/release-1.28/README.md)|
|v0.27.x|[`release-1.27`](https://github.com/kubernetes-sigs/descheduler/blob/release-1.27/README.md)|
|v0.26.x|[`release-1.26`](https://github.com/kubernetes-sigs/descheduler/blob/release-1.26/README.md)|
|v0.25.x|[`release-1.25`](https://github.com/kubernetes-sigs/descheduler/blob/release-1.25/README.md)|
|v0.24.x|[`release-1.24`](https://github.com/kubernetes-sigs/descheduler/blob/release-1.24/README.md)|

The
[`master`](https://github.com/kubernetes-sigs/descheduler/blob/master/README.md)
@@ -96,17 +96,17 @@ See the [resources | Kustomize](https://kubectl.docs.kubernetes.io/references/ku

Run As A Job
```
kustomize build 'github.com/kubernetes-sigs/descheduler/kubernetes/job?ref=v0.26.1' | kubectl apply -f -
kustomize build 'github.com/kubernetes-sigs/descheduler/kubernetes/job?ref=release-1.31' | kubectl apply -f -
```

Run As A CronJob
```
kustomize build 'github.com/kubernetes-sigs/descheduler/kubernetes/cronjob?ref=v0.26.1' | kubectl apply -f -
kustomize build 'github.com/kubernetes-sigs/descheduler/kubernetes/cronjob?ref=release-1.31' | kubectl apply -f -
```

Run As A Deployment
```
kustomize build 'github.com/kubernetes-sigs/descheduler/kubernetes/deployment?ref=v0.26.1' | kubectl apply -f -
kustomize build 'github.com/kubernetes-sigs/descheduler/kubernetes/deployment?ref=release-1.31' | kubectl apply -f -
```

## User Guide
@@ -127,23 +124,34 @@ These are top level keys in the Descheduler Policy that you can use to configure
| `maxNoOfPodsToEvictPerNode` |`int`| `nil` | maximum number of pods evicted from each node (summed through all strategies) |
| `maxNoOfPodsToEvictPerNamespace` |`int`| `nil` | maximum number of pods evicted from each namespace (summed through all strategies) |
| `maxNoOfPodsToEvictTotal` |`int`| `nil` | maximum number of pods evicted per rescheduling cycle (summed through all strategies) |
| `metricsCollector` |`object`| `nil` | configures collection of metrics for actual resource utilization |
| `metricsCollector.enabled` |`bool`| `false` | enables kubernetes [metrics server](https://kubernetes-sigs.github.io/metrics-server/) collection |
| `prometheus` |`object`| `nil` | configures collection of Prometheus metrics for actual resource utilization |
| `prometheus.url` |`string`| `nil` | points to a Prometheus server url |
| `prometheus.insecureSkipVerify` |`bool`| `nil` | disables server certificate chain and host name verification |
| `prometheus.authToken` |`object`| `nil` | sets Prometheus server authentication token |
| `prometheus.authToken.raw` |`string`| `nil` | set the authentication token as a raw string (takes precedence over secretReference) |
| `prometheus.authToken.secretReference` |`object`| `nil` | read the authentication token from a kubernetes secret (the secret is expected to contain the token under `prometheusAuthToken` data key) |
| `prometheus.authToken.secretReference.namespace` |`string`| `nil` | authentication token kubernetes secret namespace (the current RBAC allows reading secrets from the kube-system namespace) |
| `prometheus.authToken.secretReference.name` |`string`| `nil` | authentication token kubernetes secret name |

### Evictor Plugin configuration (Default Evictor)

The Default Evictor plugin is used by default for filtering pods before processing them in a strategy plugin, and for applying a PreEvictionFilter to pods before eviction. You can also create your own Evictor plugin or use the default one provided by Descheduler. The Evictor plugin can also sort, filter, validate, or group pods by different criteria, which is why this is handled by a plugin rather than configured in the top-level config.

| Name |type| Default Value | Description |
|------|----|---------------|-------------|
| `nodeSelector` |`string`| `nil` | limiting the nodes which are processed |
| `evictLocalStoragePods` |`bool`| `false` | allows eviction of pods with local storage |
| Name |type| Default Value | Description |
|---------------------------|----|---------------|-----------------------------------------------------------------------------------------------------------------------------|
| `nodeSelector` |`string`| `nil` | limiting the nodes which are processed |
| `evictLocalStoragePods` |`bool`| `false` | allows eviction of pods with local storage |
| `evictSystemCriticalPods` |`bool`| `false` | [Warning: Will evict Kubernetes system pods] allows eviction of pods with any priority, including system pods like kube-dns |
| `ignorePvcPods` |`bool`| `false` | set whether PVC pods should be evicted or ignored |
| `evictFailedBarePods` |`bool`| `false` | allow eviction of pods without owner references and in failed phase |
|`labelSelector`|`metav1.LabelSelector`||(see [label filtering](#label-filtering))|
|`priorityThreshold`|`priorityThreshold`||(see [priority filtering](#priority-filtering))|
|`nodeFit`|`bool`|`false`|(see [node fit filtering](#node-fit-filtering))|
|`minReplicas`|`uint`|`0`| ignore eviction of pods where owner (e.g. `ReplicaSet`) replicas is below this threshold |
|`minPodAge`|`metav1.Duration`|`0`| ignore eviction of pods with a creation time within this threshold |
| `ignorePvcPods` |`bool`| `false` | set whether PVC pods should be evicted or ignored |
| `evictFailedBarePods` |`bool`| `false` | allow eviction of pods without owner references and in failed phase |
| `labelSelector` |`metav1.LabelSelector`|| (see [label filtering](#label-filtering)) |
| `priorityThreshold` |`priorityThreshold`|| (see [priority filtering](#priority-filtering)) |
| `nodeFit` |`bool`|`false`| (see [node fit filtering](#node-fit-filtering)) |
| `minReplicas` |`uint`|`0`| ignore eviction of pods where owner (e.g. `ReplicaSet`) replicas is below this threshold |
| `minPodAge` |`metav1.Duration`|`0`| ignore eviction of pods with a creation time within this threshold |
| `ignorePodsWithoutPDB` |`bool`|`false`| set whether pods without PodDisruptionBudget should be evicted or ignored |
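
For illustration, a minimal policy sketch wiring several of the options above into the Default Evictor (the profile name and the concrete values are hypothetical examples, not recommendations):

```yaml
apiVersion: "descheduler/v1alpha2"
kind: "DeschedulerPolicy"
profiles:
  - name: ProfileName
    pluginConfig:
      - name: "DefaultEvictor"
        args:
          evictLocalStoragePods: true   # allow eviction of pods with local storage
          nodeFit: true                 # only evict pods that fit on another node
          minReplicas: 2                # skip pods whose owner has fewer replicas
          ignorePodsWithoutPDB: true    # skip pods not covered by a PodDisruptionBudget
```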

### Example policy

@@ -160,6 +168,15 @@
maxNoOfPodsToEvictPerNode: 5000 # you don't need to set this, unlimited if not set
maxNoOfPodsToEvictPerNamespace: 5000 # you don't need to set this, unlimited if not set
maxNoOfPodsToEvictTotal: 5000 # you don't need to set this, unlimited if not set
metricsCollector:
enabled: true # you don't need to set this, metrics are not collected if not set
prometheus: # you don't need to set this, prometheus client will not get created if not set
url: http://prometheus-kube-prometheus-prometheus.prom.svc.cluster.local
insecureSkipVerify: true
authToken:
secretReference:
namespace: "kube-system"
name: "authtoken"
profiles:
- name: ProfileName
pluginConfig:
@@ -279,11 +296,18 @@ If that parameter is set to `true`, the thresholds are considered as percentage
`thresholds` will be deducted from the mean among all nodes and `targetThresholds` will be added to the mean.
A resource consumption above (resp. below) this window is considered as overutilization (resp. underutilization).

**NOTE:** Node resource consumption is determined by the requests and limits of pods, not actual usage.
**NOTE:** By default node resource consumption is determined by the requests and limits of pods, not actual usage.
This approach is chosen in order to maintain consistency with the kube-scheduler, which follows the same
design for scheduling pods onto nodes. This means that resource usage as reported by Kubelet (or commands
like `kubectl top`) may differ from the calculated consumption, due to these components reporting
actual usage metrics. Implementing metrics-based descheduling is currently TODO for the project.
actual usage metrics. Metrics-based descheduling can be enabled by setting the `metricsUtilization.metricsServer` field.
For the plugin to consume these metrics, the metrics collector needs to be configured as well.
Alternatively, it is possible to create a Prometheus client and configure a Prometheus query to consume
metrics outside of the Kubernetes metrics server. The query is expected to return a vector with one value
per node, each value a real number in the [0, 1] interval. At most one pod is evicted from each
overutilized node per descheduling cycle; evicting more is currently unsupported. The Kubernetes
metrics server takes precedence over Prometheus.
See the `metricsCollector` field in [Top Level configuration](#top-level-configuration) for available options.

**Parameters:**

@@ -294,6 +318,10 @@ actual usage metrics. Implementing metrics-based descheduling is currently TODO
|`targetThresholds`|map(string:int)|
|`numberOfNodes`|int|
|`evictableNamespaces`|(see [namespace filtering](#namespace-filtering))|
|`metricsUtilization`|object|
|`metricsUtilization.metricsServer`|bool|
|`metricsUtilization.prometheus.query`|string|


**Example:**

@@ -313,6 +341,10 @@
"cpu" : 50
"memory": 50
"pods": 50
metricsUtilization:
metricsServer: true
# prometheus:
# query: instance:node_cpu:rate:sum
plugins:
balance:
enabled:
@@ -861,7 +893,7 @@ does not exist, descheduler won't create it and will throw an error.

### Label filtering

The following strategies can configure a [standard kubernetes labelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#labelselector-v1-meta)
The following strategies can configure a [standard kubernetes labelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#labelselector-v1-meta)
to filter pods by their labels:

* `PodLifeTime`
4 changes: 2 additions & 2 deletions charts/descheduler/Chart.yaml
@@ -1,7 +1,7 @@
apiVersion: v1
name: descheduler
version: 0.30.1
appVersion: 0.30.1
version: 0.31.0
appVersion: 0.31.0
description: Descheduler for Kubernetes is used to rebalance clusters by evicting pods that can potentially be scheduled on better nodes. In the current implementation, descheduler does not schedule replacement of evicted pods but relies on the default scheduler for that.
keywords:
- kubernetes
2 changes: 2 additions & 0 deletions charts/descheduler/templates/cronjob.yaml
@@ -91,8 +91,10 @@ spec:
{{- toYaml .Values.livenessProbe | nindent 16 }}
resources:
{{- toYaml .Values.resources | nindent 16 }}
{{- if .Values.securityContext }}
securityContext:
{{- toYaml .Values.securityContext | nindent 16 }}
{{- end }}
volumeMounts:
- mountPath: /policy-dir
name: policy-volume
2 changes: 2 additions & 0 deletions charts/descheduler/templates/deployment.yaml
@@ -67,8 +67,10 @@ spec:
{{- toYaml .Values.livenessProbe | nindent 12 }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- if .Values.securityContext }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
{{- end }}
volumeMounts:
- mountPath: /policy-dir
name: policy-volume
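
With this conditional, leaving `securityContext` empty in the chart values now renders no `securityContext:` key at all, instead of an empty one. A sketch of the corresponding `values.yaml` usage (the concrete settings below are illustrative assumptions, not chart defaults):

```yaml
# Render a securityContext in the Deployment/CronJob pod template:
securityContext:
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000

# Or leave it empty — the template then emits no securityContext block:
# securityContext: {}
```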
41 changes: 36 additions & 5 deletions cmd/descheduler/app/options/options.go
@@ -21,14 +21,21 @@ import (
"strings"
"time"

promapi "github.com/prometheus/client_golang/api"
"github.com/spf13/pflag"

metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
apiserver "k8s.io/apiserver/pkg/server"
apiserveroptions "k8s.io/apiserver/pkg/server/options"
clientset "k8s.io/client-go/kubernetes"

restclient "k8s.io/client-go/rest"
cliflag "k8s.io/component-base/cli/flag"
componentbaseconfig "k8s.io/component-base/config"
componentbaseoptions "k8s.io/component-base/config/options"
"k8s.io/component-base/featuregate"
"k8s.io/klog/v2"
metricsclient "k8s.io/metrics/pkg/client/clientset/versioned"

"sigs.k8s.io/descheduler/pkg/apis/componentconfig"
"sigs.k8s.io/descheduler/pkg/apis/componentconfig/v1alpha1"
@@ -45,11 +52,14 @@ const (
type DeschedulerServer struct {
componentconfig.DeschedulerConfiguration

Client clientset.Interface
EventClient clientset.Interface
SecureServing *apiserveroptions.SecureServingOptionsWithLoopback
DisableMetrics bool
EnableHTTP2 bool
Client clientset.Interface
EventClient clientset.Interface
MetricsClient metricsclient.Interface
PrometheusClient promapi.Client
SecureServing *apiserveroptions.SecureServingOptionsWithLoopback
SecureServingInfo *apiserver.SecureServingInfo
DisableMetrics bool
EnableHTTP2 bool
// FeatureGates enabled by the user
FeatureGates map[string]bool
// DefaultFeatureGates for internal accessing so unit tests can enable/disable specific features
@@ -118,3 +128,24 @@

rs.SecureServing.AddFlags(fs)
}

func (rs *DeschedulerServer) Apply() error {
err := features.DefaultMutableFeatureGate.SetFromMap(rs.FeatureGates)
if err != nil {
return err
}
rs.DefaultFeatureGates = features.DefaultMutableFeatureGate

// loopbackClientConfig is a config for a privileged loopback connection
var loopbackClientConfig *restclient.Config
var secureServing *apiserver.SecureServingInfo
if err := rs.SecureServing.ApplyTo(&secureServing, &loopbackClientConfig); err != nil {
klog.ErrorS(err, "failed to apply secure server configuration")
return err
}

secureServing.DisableHTTP2 = !rs.EnableHTTP2
rs.SecureServingInfo = secureServing

return nil
}