
Error: Failed to update proposed state from prior state #2487

Open
giorgisturuagit opened this issue May 9, 2024 · 10 comments
@giorgisturuagit

I was using version 2.29.0, and after updating to version 2.30.0 I got the following error without changing anything except the provider version:

Error: Failed to update proposed state from prior state

with kubernetes_manifest.kibana,
on main.tf line 87, in resource "kubernetes_manifest" "kibana":
87: resource "kubernetes_manifest" "kibana" {

AttributeName("podTemplate"): can't use
tftypes.Object["metadata":tftypes.Object["creationTimestamp":tftypes.DynamicPseudoType],
"spec":tftypes.Object["containers":tftypes.Tuple[tftypes.Object["env":tftypes.Tuple[tftypes.Object["name":tftypes.String,
"value":tftypes.String]], "name":tftypes.String,
"readinessProbe":tftypes.Object["httpGet":tftypes.Object["path":tftypes.String,
"port":tftypes.Number, "scheme":tftypes.String]],

@cgaolei

cgaolei commented May 10, 2024

Having exactly the same issue today. Nothing has changed except the provider version, upgraded to 2.30.0.

@bufadu

bufadu commented May 14, 2024

Same here, looks related to f83d63a.

The trace is:

Stack trace from the terraform-provider-kubernetes_v2.30.0_x5 plugin:

panic: ElementKeyInt(0): can't use tftypes.Object["attach_metadata":tftypes.Object["node":tftypes.Bool], "authorization":tftypes.Object["credentials":tftypes.Object["key":tftypes.String, "name":tftypes.String, "optional":tftypes.Bool], "credentialsFile":tftypes.String, "type":tftypes.String], "basicAuth":tftypes.Object["password":tftypes.Object["key":tftypes.String, "name":tftypes.String, "optional":tftypes.Bool], "password_file":tftypes.String, "username":tftypes.Object["key":tftypes.String, "name":tftypes.String, "optional":tftypes.Bool]], "bearerTokenFile":tftypes.String, "bearerTokenSecret":tftypes.Object["key":tftypes.String, "name":tftypes.String, "optional":tftypes.Bool], blablabla

goroutine 113 [running]:
github.com/hashicorp/terraform-plugin-go/tftypes.NewValue(...)
	github.com/hashicorp/[email protected]/tftypes/value.go:278
github.com/hashicorp/terraform-provider-kubernetes/manifest/morph.DeepUnknown({0x3122888, 0xc005843950?}, {{0x3122888?, 0xc005b99440?}, {0x297c820?, 0xc0087cd968?}}, 0xc0087cddd0)
	github.com/hashicorp/terraform-provider-kubernetes/manifest/morph/scaffold.go:86 +0x19ae
github.com/hashicorp/terraform-provider-kubernetes/manifest/morph.DeepUnknown({0x3122620, 0xc0059a9ad0?}, {{0x3122620?, 0xc005b9e330?}, {0x2aa8b60?, 0xc005b6ff80?}}, 0xc0087cdc50)
	github.com/hashicorp/terraform-provider-kubernetes/manifest/morph/scaffold.go:33 +0x1cb4
github.com/hashicorp/terraform-provider-kubernetes/manifest/morph.DeepUnknown({0x3122620, 0xc005b6f530?}, {{0x3122620?, 0xc005b9f320?}, {0x2aa8b60?, 0xc005b6f7a0?}}, 0xc0087cd9e0)
	github.com/hashicorp/terraform-provider-kubernetes/manifest/morph/scaffold.go:33 +0x1cb4
github.com/hashicorp/terraform-provider-kubernetes/manifest/provider.(*RawProviderServer).PlanResourceChange(0xc0000ae600, {0x311b4b8, 0xc001080f90}, 0xc0000ac5a0)
	github.com/hashicorp/terraform-provider-kubernetes/manifest/provider/plan.go:369 +0x3173
github.com/hashicorp/terraform-plugin-mux/tf5muxserver.(*muxServer).PlanResourceChange(0xc000164b60, {0x311b4b8?, 0xc001080c90?}, 0xc0000ac5a0)
	github.com/hashicorp/[email protected]/tf5muxserver/mux_server_PlanResourceChange.go:73 +0x2ad
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).PlanResourceChange(0xc000498320, {0x311b4b8?, 0xc001080270?}, 0xc000270000)
	github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:811 +0x3d0
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_PlanResourceChange_Handler({0x2d1a880?, 0xc000498320}, {0x311b4b8, 0xc001080270}, 0xc0010ac000, 0x0)
	github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:500 +0x169
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0001c2e00, {0x311b4b8, 0xc0010801e0}, {0x3123a60, 0xc000e8ed00}, 0xc0011a4000, 0xc000e76de0, 0x41c7480, 0x0)
	google.golang.org/[email protected]/server.go:1386 +0xe23
google.golang.org/grpc.(*Server).handleStream(0xc0001c2e00, {0x3123a60, 0xc000e8ed00}, 0xc0011a4000)
	google.golang.org/[email protected]/server.go:1797 +0x100c
google.golang.org/grpc.(*Server).serveStreams.func2.1()
	google.golang.org/[email protected]/server.go:1027 +0x8b
created by google.golang.org/grpc.(*Server).serveStreams.func2 in goroutine 49
	google.golang.org/[email protected]/server.go:1038 +0x135

Error: The terraform-provider-kubernetes_v2.30.0_x5 plugin crashed!

for this kubernetes_manifest :

resource "kubernetes_manifest" "redacted" {

  manifest = {
    "apiVersion" = "operator.victoriametrics.com/v1beta1"
    "kind"       = "VMServiceScrape"
    "metadata" = {
      "name"      = "redacted"
      "namespace" = "redacted"
    }
    "spec" = {
      "discoveryRole" = "endpoints"
      "endpoints" = [
        {
          "port" = "metrics"
          "relabelConfigs" = [
            {
              "action" = "keep"
              "regex"  = "true"
              "sourceLabels" = [
                "__meta_kubernetes_service_annotation_prometheus_io_scrape",
              ]
            },
            {
              "action" = "replace"
              "regex"  = "(https?)"
              "sourceLabels" = [
                "__meta_kubernetes_service_annotation_prometheus_io_scheme",
              ]
              "targetLabel" = "__scheme__"
            },
            {
              "action" = "replace"
              "regex"  = "(.+)"
              "sourceLabels" = [
                "__meta_kubernetes_service_annotation_prometheus_io_path",
              ]
              "targetLabel" = "__metrics_path__"
            },
            {
              "action"      = "replace"
              "regex"       = "([^:]+)(?::\\d+)?;(\\d+)"
              "replacement" = "$1:$2"
              "sourceLabels" = [
                "__address__",
                "__meta_kubernetes_service_annotation_prometheus_io_port",
              ]
              "targetLabel" = "__address__"
            },
          ]
        },
      ]
      "jobLabel" = "app.kubernetes.io/name"
      "namespaceSelector" = {
        "any" = true
      }
      "selector" = {}
    }
  }
}

@cgaolei

cgaolei commented Jun 9, 2024

Temporary workaround is using the previous provider version. (verified working)

  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
      version = "2.29.0"
    }
  }
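
For completeness, that `required_providers` pin sits inside a top-level `terraform` block; a minimal sketch:

```hcl
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
      # Pin to the last version before the regression was reported
      version = "2.29.0"
    }
  }
}
```

After editing the pin, run `terraform init -upgrade` so the dependency lock file picks up the downgraded provider version.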

@Dr-Octavius

Dr-Octavius commented Jul 13, 2024

Hi, I would like to add on to this issue since it references Kibana!

I too faced the issue:

kubernetes_manifest.my_kibana_dev: Modifying...
╷
│ Error: Provider produced inconsistent result after apply
│ 
│ When applying changes to kubernetes_manifest.my_kibana_dev, provider
│ "provider[\"registry.terraform.io/hashicorp/kubernetes\"]" produced an
│ unexpected new value: .object: wrong final value type: incorrect object
│ attributes.
│ 
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.

Operation failed: failed running terraform apply (exit 1)

but with a different setup (using ECK operator)

TL;DR: kubernetes_manifest is likely to fail simply because of how Kubernetes itself updates StatefulSet resources. SPOILER: it doesn't update them in place, and this has been an ongoing issue in the Kubernetes community for about three years.

The Workaround does not seem to work when using the ECK (Elastic Cloud on Kubernetes) Operator for Elastic Search & Kibana Deployments.

This is somewhat documented (I think) for Elasticsearch: the ECK Operator will try to apply some form of a "rollingStrategy"-esque update, doing its best to apply patches one pod at a time (I may be reading the documentation wrongly, but that is what it seems to imply), provided there is more than one (Elastic) node available via the maxSurge and minSurge values.

And this is because the kubernetes_manifest actually gets translated into a StatefulSet resource.

My guess is that the ECK Operator is probably doing the same for all the resources it manages as well.

Honestly, I have no real workaround for this. The only thing you can do is add a manual destruction step for any StatefulSet resource before using kubernetes_manifest to make your updates.

Adding the following block

field_manager {
  name            = "terraform"
  force_conflicts = true
}

in the resource block (alongside the manifest = {...} block) works in populating changes to Elasticsearch, but I think all this does is remove the existing StatefulSet and rewrite it.
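
A minimal sketch of where that block sits (the resource name and manifest contents are placeholders):

```hcl
resource "kubernetes_manifest" "kibana" {
  manifest = {
    # ... your Kibana manifest ...
  }

  # field_manager sits next to manifest, not nested inside it
  field_manager {
    name            = "terraform"
    force_conflicts = true
  }
}
```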

Either way, the problem may not simply be a provider issue, but a limitation of how Kubernetes functions in the first place. Not sure how the maintainers can actually do anything about this, though.

@norman-zon

norman-zon commented Jul 17, 2024

I am seeing the same error when trying to update FluxCD HelmRelease CR from v1beta2 to v2 for which the changelog specifies:

Deprecated fields have been removed from the HelmRelease v2 API:

.spec.chart.spec.valuesFile replaced by .spec.chart.spec.valuesFiles
.spec.postRenderers.kustomize.patchesJson6902 replaced by .spec.postRenderers.kustomize.patches
.spec.postRenderers.kustomize.patchesStrategicMerge replaced by .spec.postRenderers.kustomize.patches
.status.lastAppliedRevision replaced by .status.history.chartVersion
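
For reference, the first of those renames looks roughly like this in a kubernetes_manifest block (a sketch only; the metadata, chart name, and file name are illustrative placeholders, not from the actual module):

```hcl
resource "kubernetes_manifest" "helmrelease" {
  manifest = {
    apiVersion = "helm.toolkit.fluxcd.io/v2"
    kind       = "HelmRelease"
    metadata = {
      name      = "fluentd"
      namespace = "flux-system"
    }
    spec = {
      chart = {
        spec = {
          chart = "fluentd"
          # v2: list-valued replacement for the removed valuesFile
          valuesFiles = ["values.yaml"]
        }
      }
    }
  }
}
```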

The error thrown is:

Error: Failed to update proposed state from prior state

  with module.flux_fluentd[0].kubernetes_manifest.helmrelease[0],
  on .terraform/modules/flux_fluentd/flux_application/application.tf line 91, in resource "kubernetes_manifest" "helmrelease":
  91: resource "kubernetes_manifest" "helmrelease" {

AttributeName("postRenderers"): can't use
tftypes.Tuple[tftypes.Object["kustomize":tftypes.Object["images":tftypes.List[tftypes.Object["digest":tftypes.String,
"name":tftypes.String, "newName":tftypes.String, "newTag":tftypes.String]],
"patches":tftypes.List[tftypes.Object["patch":tftypes.String,
"target":tftypes.Object["annotationSelector":tftypes.String,
"group":tftypes.String, "kind":tftypes.String,
"labelSelector":tftypes.String, "name":tftypes.String,
"namespace":tftypes.String, "version":tftypes.String]]],
"patchesJson6902":tftypes.Tuple[tftypes.Object["patch":tftypes.Tuple[tftypes.Object["from":tftypes.String,
"op":tftypes.String, "path":tftypes.String,
"value":tftypes.DynamicPseudoType]],
"target":tftypes.Object["annotationSelector":tftypes.String,
"group":tftypes.String, "kind":tftypes.String,
"labelSelector":tftypes.String, "name":tftypes.String,
"namespace":tftypes.String, "version":tftypes.String]]],
"patchesStrategicMerge":tftypes.Tuple[tftypes.DynamicPseudoType]]]] as
tftypes.List[tftypes.Object["kustomize":tftypes.Object["images":tftypes.List[tftypes.Object["digest":tftypes.String,
"name":tftypes.String, "newName":tftypes.String, "newTag":tftypes.String]],
"patches":tftypes.List[tftypes.Object["patch":tftypes.String,
"target":tftypes.Object["annotationSelector":tftypes.String,
"group":tftypes.String, "kind":tftypes.String,
"labelSelector":tftypes.String, "name":tftypes.String,
"namespace":tftypes.String, "version":tftypes.String]]]]]]

EDIT: happens with v2.31 as well as with v2.29.0

@Dr-Octavius

Dr-Octavius commented Jul 23, 2024

Hi guys!

OK, I managed to fix this after looking at the docs more closely.

Pay attention to this part of the docs regarding computed fields

TL;DR:

UPDATE: Subsequent runs still continue to face errors... this really seems like a provider issue.

AttributeName("spec"): can't use tftypes.Object["config":tftypes.DynamicPseudoType, "count":tftypes.Number, "elasticsearchRef":tftypes.Object["name":tftypes.String, "namespace":tftypes.String, "secretName":tftypes.String, "serviceName":tftypes.String], "enterpriseSearchRef":tftypes.Object["name":tftypes.String, "namespace":tftypes.String, "secretName":tftypes.String, "serviceName":tftypes.String], "http":tftypes.Object["service":tftypes.Object["metadata":tftypes.Object["annotations":tftypes.Map[tftypes.String], "finalizers":tftypes.List[tftypes.String], "labels":tftypes.Map[tftypes.String], "name":tftypes.String, "namespace":tftypes.String], "spec":tftypes.Object["allocateLoadBalancerNodePorts":tftypes.Bool, "clusterIP":tftypes.String, "clusterIPs":tftypes.List[tftypes.String], "externalIPs":tftypes.List[tftypes.String], "externalName":tftypes.String, "externalTrafficPolicy":tftypes.String, "healthCheckNodePort":tftypes.Number, "internalTrafficPolicy":tftypes.String, "ipFamilies":tftypes.List[tftypes.String], "ipFamilyPolicy":tftypes.String, "loadBalancerClass":tftypes.String, "loadBalancerIP":tftypes.String, "loadBalancerSourceRanges":tftypes.List[tftypes.String], "ports":tftypes.List[tftypes.Object["appProtocol":tftypes.String, "name":tftypes.String, "nodePort":tftypes.Number, "port":tftypes.Number, "protocol":tftypes.String, "targetPort":tftypes.String]], "publishNotReadyAddresses":tftypes.Bool, "selector":tftypes.Map[tftypes.String], "sessionAffinity":tftypes.String, "sessionAffinityConfig":tftypes.Object["clientIP":tftypes.Object["timeoutSeconds":tftypes.Number]], "type":tftypes.String]], "tls":tftypes.Object["certificate":tftypes.Object["secretName":tftypes.String], "selfSignedCertificate":tftypes.Object["disabled":tftypes.Bool, "subjectAltNames":tftypes.List[tftypes.Object["dns":tftypes.String, "ip":tftypes.String]]]]], "image":tftypes.String, 
"monitoring":tftypes.Object["logs":tftypes.Object["elasticsearchRefs":tftypes.List[tftypes.Object["name":tftypes.String, "namespace":tftypes.String, "secretName":tftypes.String, "serviceName":tftypes.String]]], "metrics":tftypes.Object["elasticsearchRefs":tftypes.List[tftypes.Object["name":tftypes.String, "namespace":tftypes.String, "secretName":tftypes.String, "serviceName":tftypes.String]]]], "podTemplate":tftypes.Object["metadata":tftypes.Object["creationTimestamp":tftypes.DynamicPseudoType], "spec":tftypes.Object["automountServiceAccountToken":tftypes.Bool, "containers":tftypes.Tuple[tftypes.Object["name":tftypes.String, "resources":tftypes.Object["limits":tftypes.Object["cpu":tftypes.String, "memory":tftypes.String]]]], "nodeSelector":tftypes.Object["nodepool":tftypes.String]]], "revisionHistoryLimit":tftypes.Number, "secureSettings":tftypes.List[tftypes.Object["entries":tftypes.List[tftypes.Object["key":tftypes.String, "path":tftypes.String]], "secretName":tftypes.String]], "serviceAccountName":tftypes.String, "version":tftypes.String] as tftypes.Object["config":tftypes.DynamicPseudoType, "count":tftypes.Number, "elasticsearchRef":tftypes.Object["name":tftypes.String, "namespace":tftypes.String, "secretName":tftypes.String, "serviceName":tftypes.String], "enterpriseSearchRef":tftypes.Object["name":tftypes.String, "namespace":tftypes.String, "secretName":tftypes.String, "serviceName":tftypes.String], "http":tftypes.Object["service":tftypes.Object["metadata":tftypes.Object["annotations":tftypes.Map[tftypes.String], "finalizers":tftypes.List[tftypes.String], "labels":tftypes.Map[tftypes.String], "name":tftypes.String, "namespace":tftypes.String], "spec":tftypes.Object["allocateLoadBalancerNodePorts":tftypes.Bool, "clusterIP":tftypes.String, "clusterIPs":tftypes.List[tftypes.String], "externalIPs":tftypes.List[tftypes.String], "externalName":tftypes.String, "externalTrafficPolicy":tftypes.String, "healthCheckNodePort":tftypes.Number, 
"internalTrafficPolicy":tftypes.String, "ipFamilies":tftypes.List[tftypes.String], "ipFamilyPolicy":tftypes.String, "loadBalancerClass":tftypes.String, "loadBalancerIP":tftypes.String, "loadBalancerSourceRanges":tftypes.List[tftypes.String], "ports":tftypes.List[tftypes.Object["appProtocol":tftypes.String, "name":tftypes.String, "nodePort":tftypes.Number, "port":tftypes.Number, "protocol":tftypes.String, "targetPort":tftypes.String]], "publishNotReadyAddresses":tftypes.Bool, "selector":tftypes.Map[tftypes.String], "sessionAffinity":tftypes.String, "sessionAffinityConfig":tftypes.Object["clientIP":tftypes.Object["timeoutSeconds":tftypes.Number]], "type":tftypes.String]], "tls":tftypes.Object["certificate":tftypes.Object["secretName":tftypes.String], "selfSignedCertificate":tftypes.Object["disabled":tftypes.Bool, "subjectAltNames":tftypes.List[tftypes.Object["dns":tftypes.String, "ip":tftypes.String]]]]], "image":tftypes.String, "monitoring":tftypes.Object["logs":tftypes.Object["elasticsearchRefs":tftypes.List[tftypes.Object["name":tftypes.String, "namespace":tftypes.String, "secretName":tftypes.String, "serviceName":tftypes.String]]], "metrics":tftypes.Object["elasticsearchRefs":tftypes.List[tftypes.Object["name":tftypes.String, "namespace":tftypes.String, "secretName":tftypes.String, "serviceName":tftypes.String]]]], "podTemplate":tftypes.Object["spec":tftypes.Object["automountServiceAccountToken":tftypes.Bool, "containers":tftypes.Tuple[tftypes.Object["name":tftypes.String, "resources":tftypes.Object["limits":tftypes.Object["cpu":tftypes.String, "memory":tftypes.String]]]], "nodeSelector":tftypes.Object["nodepool":tftypes.String]]], "revisionHistoryLimit":tftypes.Number, "secureSettings":tftypes.List[tftypes.Object["entries":tftypes.List[tftypes.Object["key":tftypes.String, "path":tftypes.String]], "secretName":tftypes.String]], "serviceAccountName":tftypes.String, "version":tftypes.String]

Process

Upon looking at the docs, I first figured it would probably be a good idea to look at the resulting .yaml from this apply. Take a look below at my own manifest, produced by inspecting with a kubectl get operation:

apiVersion: redacted
kind: Kibana
metadata:
  annotations:
    association.k8s.elastic.co/es-conf: '{redacted}'
  creationTimestamp: redacted
  generation: redacted
  name: redacted
  namespace: redacted
  resourceVersion: redacted
  uid: redacted
spec:
  count: redacted
  elasticsearchRef:
    name: redacted
  enterpriseSearchRef: {redacted}
  http:
    service:
      metadata: {redacted}
      spec: {redacted}
    tls:
      certificate: {redacted}
  monitoring:
    logs: {redacted}
    metrics: {redacted}
  podTemplate:
    metadata:
      creationTimestamp: redacted
    spec:
      automountServiceAccountToken: redacted
      containers:
      - name: kibana
        resources:
          limits:
            cpu: redacted
            memory: redacted
      nodeSelector:
        nodepool: redacted
  version: redacted
status:
  associationStatus: redacted
  count: redacted
  elasticsearchAssociationStatus: redacted
  health: redacted
  observedGeneration: redacted
  selector: redacted
  version: redacted

I found the above really weird because some fields were missing from my original kubernetes_manifest block:

resource "kubernetes_manifest" "redacted" {
  manifest = {
    apiVersion = "redacted"
    kind = "Kibana"
    metadata = {
      name = redacted
      namespace = redacted
    }
    spec = {
      version = redacted
      count = redacted
      elasticsearchRef = {
        name = redacted
      }
      podTemplate = {
        spec = {
          automountServiceAccountToken = redacted
          containers = [
            {
              name = "kibana"
              resources = {
                limits = {
                  memory = redacted
                  cpu = redacted
                }
              }
            }
          ]
          nodeSelector = {
            nodepool = redacted
          }
        }
      }
    }
  }
  field_manager {
    ...
  }
  depends_on = [redacted]
}

I thought perhaps the fields that ended up being changed were not in my .tf, and that I should add them under the computed_fields block, as such:

resource "kubernetes_manifest" "redacted" {
  manifest = {
    ...
  }
  # Added this block
  computed_fields = [
    "metadata.annotations",
    "metadata.creationTimestamp",
    "metadata.generation",
    "metadata.resourceVersion",
    "metadata.uid",
    "spec.enterpriseSearchRef",
    "spec.http",
    "spec.monitoring",
    "spec.podTemplate.metadata",
    "status"
  ]
  field_manager {
    ...
  }
  depends_on = [redacted]
}

I thought that by adding the fields I did not mutate in my manifest block to the computed_fields block, so as to match the eventual .yaml, it would solve the problem. WRONG! I still faced the same error:

kubernetes_manifest.lome_kibana_sgp1_dev: Modifying...
╷
│ Error: Provider produced inconsistent result after apply
│ 
│ When applying changes to kubernetes_manifest.lome_kibana_sgp1_dev, provider
│ "provider[\"registry.terraform.io/hashicorp/kubernetes\"]" produced an
│ unexpected new value: .object: wrong final value type: incorrect object
│ attributes.
│ 
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.

Operation failed: failed running terraform apply (exit 1)

Which got me thinking that perhaps there were some fields in the .yaml that were attributes eventually translated from manifest to object which I didn't know about. With this in mind, I decided to change my configuration to the below:

resource "kubernetes_manifest" "redacted" {
  manifest = {
    ...
  }
  # Changed this block
  computed_fields = [
    "metadata",
    "spec",
    "status"
  ]
  field_manager {
    ...
  }
  depends_on = [redacted]
}

Which did the trick for me.

Final Thoughts

  • Some updates may result in inconsistent output due to how computed_fields works
  • This does not seem to be too much of a concern, because if the Kubernetes API is not changing and defaulting to some other value, your Terraform configuration is probably wrong anyway
  • Certain edge cases that have yet to be discovered may be affected. Perhaps the maintainers can take note :)

Hope this helps all of y'all!
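
Putting the two workarounds from this thread together, a minimal sketch of the resource shape that worked for me (the resource name and manifest contents are placeholders):

```hcl
resource "kubernetes_manifest" "kibana" {
  manifest = {
    # ... your manifest ...
  }

  # Treat whole top-level sections as computed so server-side
  # defaults don't conflict with the planned state
  computed_fields = [
    "metadata",
    "spec",
    "status",
  ]

  field_manager {
    name            = "terraform"
    force_conflicts = true
  }
}
```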

@vfouqueron

I have the same issue as described above; neither setting field_manager to terraform nor putting everything in computed_fields works for me. Version 2.29.0 has the same issue.

I always end up with:

│ Error: Provider produced inconsistent result after apply
│ 
│ When applying changes to
│ module.infrastructure.module.arangodb.kubernetes_manifest.arango_deployment,
│ provider "provider[\"registry.terraform.io/hashicorp/kubernetes\"]"
│ produced an unexpected new value: .object: wrong final value type:
│ incorrect object attributes.
│ 
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker

My Terraform file:

resource "kubernetes_manifest" "arango_deployment" {
  field_manager {
    force_conflicts = true
    name            = "arangodb_operator"
  }
  manifest = {
    apiVersion = "database.arangodb.com/v1"
    kind       = "ArangoDeployment"
    metadata = {
      name      = "arangodb"
      namespace = module.k8s_namespace.namespace
    }
    spec = {
      agents = {
        volumeClaimTemplate = {
          spec = {
            accessModes = ["ReadWriteOnce"]
            resources = {
              requests = {
                storage = "2Gi"
              }
            }
            storageClassName = var.storage_classname

          }
        }
      }
      bootstrap = {
        passwordSecretNames = {
          root = one(kubernetes_secret_v1.arango_root_password.metadata[*].name)
        }
      }
      dbservers = {
        volumeClaimTemplate = {
          spec = {
            accessModes = ["ReadWriteOnce"]
            resources = {
              requests = {
                storage = "50Gi"
              }
            }
            storageClassName = var.storage_classname
          }
        }
      }
      mode     = "Cluster"
      timezone = "Europe/Paris"
    }
  }

  wait {
    condition {
      type   = "Ready"
      status = "True"
    }
  }

  depends_on = [helm_release.arangodb]
}

I only have the issue with the wait section, but I think that is because without it Terraform thinks the resource is deployed and resolves before encountering the error. Here are the full trace logs: https://gist.github.com/685f48c73cd814eb7fe5375fe85ca48e.git

@maur1th

maur1th commented Dec 3, 2024

Just encountered this issue on v2.34.0.

@Nuru

Nuru commented Dec 20, 2024

Just encountered this issue on v2.35.0

@tempoivo

Same issue encountered on v2.35.1 with Elasticsearch + Kibana.
The only workaround to apply some changes is terraform apply -refresh=false.
