I want to import my existing autoscaler configuration into Terraform, so I started working with kubernetes_horizontal_pod_autoscaler_v2 to replicate a manifest I built some time ago without Terraform. However, I'm not able to create this manifest with it (the manifest is valid, already applied to my Kubernetes cluster, and working as intended):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sidekiq
  namespace: test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Pods
    pods:
      metric:
        name: sidekiq_queue_latency # How long a job has been waiting in the queue
      target:
        type: Value
        averageValue: "20" # Keep it under 20 seconds
  - type: Pods
    pods:
      metric:
        name: sidekiq_jobs_waiting_count # How many jobs are waiting to be processed
      target:
        type: Value
        averageValue: "10" # Keep it under 10 jobs
Terraform Version, Provider Version and Kubernetes Version
Terraform v1.9.6
on darwin_arm64
+ provider registry.terraform.io/hashicorp/kubernetes v2.35.0
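Terraform Configuration Files
The exact hpa.tf is not reproduced here; the sketch below is a minimal one-to-one translation of the manifest above (the resource label "hpa" is taken from the error output, and the block names follow the provider schema visible in the plan output further down):

# Sketch: mirrors the autoscaling/v2 manifest above; not the reporter's exact file.
# The namespace and deployment resources it references are omitted.
resource "kubernetes_horizontal_pod_autoscaler_v2" "hpa" {
  metadata {
    name      = "sidekiq"
    namespace = "test"
  }

  spec {
    min_replicas = 1
    max_replicas = 3

    scale_target_ref {
      api_version = "apps/v1"
      kind        = "Deployment"
      name        = "test"
    }

    # How long a job has been waiting in the queue; keep it under 20 seconds
    metric {
      type = "Pods"
      pods {
        metric {
          name = "sidekiq_queue_latency"
        }
        target {
          type          = "Value"
          average_value = "20"
        }
      }
    }

    # How many jobs are waiting to be processed; keep it under 10 jobs
    metric {
      type = "Pods"
      pods {
        metric {
          name = "sidekiq_jobs_waiting_count"
        }
        target {
          type          = "Value"
          average_value = "10"
        }
      }
    }
  }
}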
Steps to Reproduce
terraform apply -target kubernetes_horizontal_pod_autoscaler_v2.hpa
Expected Behavior
It should have created a namespace, a deployment and an HPA configuration similar to the manifest above.
Actual Behavior
The namespace and deployment are created, but HPA creation fails with the error below. It seems that averageValue is not passed.
╷
│ Error: HorizontalPodAutoscaler.autoscaling "test-hpa" is invalid: spec.metrics[0].pods.target.averageValue: Required value: must specify a positive target averageValue
│
│ with kubernetes_horizontal_pod_autoscaler_v2.hpa,
│ on hpa.tf line 46, in resource "kubernetes_horizontal_pod_autoscaler_v2" "hpa":
│ 46: resource "kubernetes_horizontal_pod_autoscaler_v2" "hpa" {
│
╵
Error on import
If I try to import my existing manifest (the one at the beginning of this issue), something strange happens:
terraform apply -target kubernetes_deployment_v1.deployment -auto-approve
kubectl apply -f hpa.yaml # Namespace must exist
terraform plan -target kubernetes_horizontal_pod_autoscaler_v2.sidekiq # to see the diff
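The plan output below reports "Preparing import", so the import is declared in configuration rather than run through the CLI. Assuming the Terraform 1.5+ import block syntax, it amounts to:

import {
  # ID taken from the plan output below ("Preparing import... [id=test/sidekiq]")
  to = kubernetes_horizontal_pod_autoscaler_v2.sidekiq
  id = "test/sidekiq"
}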
I expect this plan to show no diffs; however, it shows two strange diffs:
kubernetes_namespace_v1.test: Refreshing state... [id=test]
kubernetes_deployment_v1.deployment: Refreshing state... [id=test/test]
kubernetes_horizontal_pod_autoscaler_v2.sidekiq: Preparing import... [id=test/sidekiq]
kubernetes_horizontal_pod_autoscaler_v2.sidekiq: Refreshing state... [id=test/sidekiq]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# kubernetes_horizontal_pod_autoscaler_v2.sidekiq will be updated in-place
# (imported from "test/sidekiq")
~ resource "kubernetes_horizontal_pod_autoscaler_v2" "sidekiq" {
id = "test/sidekiq"
metadata {
annotations = {}
generate_name = null
generation = 0
labels = {}
name = "sidekiq"
namespace = "test"
resource_version = "597615"
uid = "4b4de789-3ba9-4056-9bd2-0531fb69ffcf"
}
~ spec {
max_replicas = 3
min_replicas = 1
target_cpu_utilization_percentage = 0
~ metric {
type = "Pods"
~ pods {
metric {
name = "sidekiq_queue_latency"
}
~ target {
average_utilization = 0
+ average_value = "20"
type = "Value"
- value = "<nil>" -> null
}
}
}
~ metric {
type = "Pods"
~ pods {
metric {
name = "sidekiq_jobs_waiting_count"
}
~ target {
average_utilization = 0
+ average_value = "10"
type = "Value"
- value = "<nil>" -> null
}
}
}
scale_target_ref {
api_version = "apps/v1"
kind = "Deployment"
name = "test"
}
}
}
Plan: 1 to import, 0 to add, 1 to change, 0 to destroy.
╷
│ Warning: Resource targeting is in effect
│
│ You are creating a plan with the -target option, which means that the result of this plan may not represent all of the changes requested by the current configuration.
│
│ The -target option is not for routine use, and is provided only for exceptional situations such as recovering from errors or mistakes, or when Terraform specifically suggests to use it as part of an error message.
╵
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
If I try to apply, I get the same error as in the previous apply:
╷
│ Error: Failed to update horizontal pod autoscaler: HorizontalPodAutoscaler.autoscaling "sidekiq" is invalid: [spec.metrics[0].pods.target.averageValue: Required value: must specify a positive target averageValue, spec.metrics[1].pods.target.averageValue: Required value: must specify a positive target averageValue]
│
│ with kubernetes_horizontal_pod_autoscaler_v2.sidekiq,
│ on hpa.tf line 81, in resource "kubernetes_horizontal_pod_autoscaler_v2" "sidekiq":
│ 81: resource "kubernetes_horizontal_pod_autoscaler_v2" "sidekiq" {
│
╵
Thank you for reporting this issue. For some metric types, averageValue should be set regardless of the target type. We haven't taken this into account in the provider logic. I think we should be able to fix this soon.