
Error: error cloning virtual machine: A specified parameter was not correct: spec.disk.backing.crypto #2241

Open
4 tasks done
stackthecat opened this issue Jul 26, 2024 · 2 comments
Labels
bug Type: Bug needs-triage Status: Issue Needs Triage

Comments

@stackthecat

Community Guidelines

  • I have read and agree to the HashiCorp Community Guidelines.
  • Vote on this issue by adding a 👍 reaction to the initial issue description to help the maintainers prioritize.
  • Do not leave "+1" or other comments that do not add relevant information or questions.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.

Terraform

v1.9.3

Terraform Provider

v2.8.2

VMware vSphere

7.0U3 Build 22357613

Description

When applying the Terraform plan, storage_policy_id is correctly retrieved from the data source.

It is successfully applied at the VM folder level, but not at the disk level. (When I remove storage_policy_id from the disk block, the VM Encryption Policy is applied at the VM folder level and virtual machine provisioning succeeds.)
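For comparison, the variant that provisions successfully (per the observation above) keeps the policy at the VM level only — a minimal sketch of the disk block, reusing the data source names from the full configuration in this issue:

```hcl
# VM-level policy is kept; this part works in both variants.
storage_policy_id = data.vsphere_storage_policy.snc_encrypted.id

disk {
  label            = data.vsphere_virtual_machine.tpzf_template.disks.0.label
  size             = data.vsphere_virtual_machine.tpzf_template.disks.0.size
  thin_provisioned = data.vsphere_virtual_machine.tpzf_template.disks.0.thin_provisioned
  # storage_policy_id is omitted here; setting it on the cloned disk is
  # what triggers "A specified parameter was not correct: spec.disk.backing.crypto".
}
```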

Affected Resources or Data Sources

resource/vsphere_virtual_machine

Terraform Configuration

data "vsphere_virtual_machine" "tpzf_template" {
  name          = "${var.template_folder}/${var.base_template_name}"
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

resource "vsphere_virtual_machine" "vms_ul" {
  count                = length(var.ul_vms)
  name                 = var.ul_vms[count.index].vm_ul_hostname
  datastore_id         = data.vsphere_datastore.vms_datastores[count.index % length(data.vsphere_datastore.vms_datastores)].id
  host_system_id       = data.vsphere_host.hosts[count.index % length(data.vsphere_host.hosts)].id
  resource_pool_id     = data.vsphere_resource_pool.default.id
  num_cpus             = var.ul_vms[count.index].cpus 
  memory               = var.ul_vms[count.index].memory
  guest_id             = data.vsphere_virtual_machine.tpzf_template.guest_id
  scsi_type            = data.vsphere_virtual_machine.tpzf_template.scsi_type
  firmware             = data.vsphere_virtual_machine.tpzf_template.firmware
  storage_policy_id    = data.vsphere_storage_policy.snc_encrypted.id
  cpu_hot_add_enabled  = true
  cpu_hot_remove_enabled = true
  memory_hot_add_enabled = true
  enable_disk_uuid       = true
  clone {
    template_uuid = data.vsphere_virtual_machine.tpzf_template.id
    linked_clone  = true
    customize {
      timeout = 30
      network_interface {
        ipv4_address = var.ul_vms[count.index].primary_ip
        ipv4_netmask = var.ul_vms[count.index].netmask
      }
      ipv4_gateway    = var.ul_vms[count.index].primary_gw_ip
      dns_server_list = ["8.8.8.8", "8.8.4.4"]
      linux_options {
        host_name = var.ul_vms[count.index].vm_ul_hostname
        domain    = ""
        time_zone = "utc"
      }
    }
  }
  network_interface {
    network_id = data.vsphere_network.Internal-SNC-GW.id
  }
  disk {
    label             = data.vsphere_virtual_machine.tpzf_template.disks.0.label
    size              = data.vsphere_virtual_machine.tpzf_template.disks.0.size
    thin_provisioned  = data.vsphere_virtual_machine.tpzf_template.disks.0.thin_provisioned
    storage_policy_id = data.vsphere_storage_policy.snc_encrypted.id
  }
}

Debug Output

2024-07-26T17:37:57.347+0200 [TRACE] provider.terraform-provider-vsphere_v2.8.2_x5: Received downstream response: tf_proto_version=5.6 tf_provider_addr=provider tf_rpc=ApplyResourceChange @caller=github.com/hashicorp/[email protected]/tfprotov5/internal/tf5serverlogging/downstream_request.go:42 @module=sdk.proto diagnostic_error_count=1 diagnostic_warning_count=0 tf_req_duration_ms=4615 tf_req_id=a7e93494-afa3-329b-6482-04e75586fbec tf_resource_type=vsphere_virtual_machine timestamp="2024-07-26T17:37:57.347+0200"
2024-07-26T17:37:57.347+0200 [ERROR] provider.terraform-provider-vsphere_v2.8.2_x5: Response contains error diagnostic: tf_proto_version=5.6 tf_rpc=ApplyResourceChange @caller=github.com/hashicorp/[email protected]/tfprotov5/internal/diag/diagnostics.go:58 @module=sdk.proto tf_provider_addr=provider tf_req_id=a7e93494-afa3-329b-6482-04e75586fbec tf_resource_type=vsphere_virtual_machine diagnostic_detail="" diagnostic_severity=ERROR diagnostic_summary="error cloning virtual machine: A specified parameter was not correct: spec.disk.backing.crypto" timestamp="2024-07-26T17:37:57.347+0200"
2024-07-26T17:37:57.348+0200 [TRACE] provider.terraform-provider-vsphere_v2.8.2_x5: Served request: tf_proto_version=5.6 tf_req_id=a7e93494-afa3-329b-6482-04e75586fbec tf_resource_type=vsphere_virtual_machine @caller=github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:878 @module=sdk.proto tf_provider_addr=provider tf_rpc=ApplyResourceChange timestamp="2024-07-26T17:37:57.347+0200"
2024-07-26T17:37:57.348+0200 [TRACE] maybeTainted: vsphere_virtual_machine.vms_ul[1] encountered an error during creation, so it is now marked as tainted
2024-07-26T17:37:57.352+0200 [ERROR] vertex "vsphere_virtual_machine.vms_ul[1]" error: error cloning virtual machine: A specified parameter was not correct: spec.disk.backing.crypto

Panic Output

No response

Expected Behavior

Correct provisioning of a vSphere virtual machine from a template, with the Encryption Policy applied to both the virtual disk and the VM folder.

Actual Behavior

Error: error cloning virtual machine: A specified parameter was not correct: spec.disk.backing.crypto
│ 
│   with vsphere_virtual_machine.vms_ul[1],
│   on main.tf line 6, in resource "vsphere_virtual_machine" "vms_ul":
│    6: resource "vsphere_virtual_machine" "vms_ul" {

Steps to Reproduce

terraform apply

Environment Details


# vsphere_virtual_machine.vms_ul[5] will be created
  + resource "vsphere_virtual_machine" "vms_ul" {
      + annotation                              = (known after apply)
      + boot_retry_delay                        = 10000
      + change_version                          = (known after apply)
      + cpu_hot_add_enabled                     = true
      + cpu_hot_remove_enabled                  = true
      + cpu_limit                               = -1
      + cpu_share_count                         = (known after apply)
      + cpu_share_level                         = "normal"
      + datastore_id                            = "datastore-1071"
      + default_ip_address                      = (known after apply)
      + enable_disk_uuid                        = true
      + ept_rvi_mode                            = (known after apply)
      + extra_config_reboot_required            = true
      + firmware                                = "efi"
      + force_power_off                         = true
      + guest_id                                = "otherLinux64Guest"
      + guest_ip_addresses                      = (known after apply)
      + hardware_version                        = (known after apply)
      + host_system_id                          = "host-1047"
      + hv_mode                                 = (known after apply)
      + id                                      = (known after apply)
      + ide_controller_count                    = 2
      + imported                                = (known after apply)
      + latency_sensitivity                     = "normal"
      + memory                                  = 8192
      + memory_hot_add_enabled                  = true
      + memory_limit                            = -1
      + memory_share_count                      = (known after apply)
      + memory_share_level                      = "normal"
      + migrate_wait_timeout                    = 30
      + moid                                    = (known after apply)
      + name                                    = "kwrk03"
      + num_cores_per_socket                    = 1
      + num_cpus                                = 4
      + power_state                             = (known after apply)
      + poweron_timeout                         = 300
      + reboot_required                         = (known after apply)
      + resource_pool_id                        = "resgroup-1007"
      + run_tools_scripts_after_power_on        = true
      + run_tools_scripts_after_resume          = true
      + run_tools_scripts_before_guest_shutdown = true
      + run_tools_scripts_before_guest_standby  = true
      + sata_controller_count                   = 0
      + scsi_bus_sharing                        = "noSharing"
      + scsi_controller_count                   = 1
      + scsi_type                               = "pvscsi"
      + shutdown_wait_timeout                   = 3
      + storage_policy_id                       = "4d5f673c-536f-11e6-beb8-9e71128cae77"
      + swap_placement_policy                   = "inherit"
      + sync_time_with_host                     = true
      + tools_upgrade_policy                    = "manual"
      + uuid                                    = (known after apply)
      + vapp_transport                          = (known after apply)
      + vmware_tools_status                     = (known after apply)
      + vmx_path                                = (known after apply)
      + wait_for_guest_ip_timeout               = 0
      + wait_for_guest_net_routable             = true
      + wait_for_guest_net_timeout              = 5

      + clone {
          + linked_clone  = true
          + template_uuid = "421fd8a7-b455-644f-8a2d-ecb417a32463"
          + timeout       = 30

          + customize {
              + dns_server_list = [
                  + "X.X.X.X",
                  + "X.X.X.X",
                ]
              + ipv4_gateway    = "X.X.X.X"
              + timeout         = 30

              + linux_options {
                  + host_name    = "kwrk03"
                  + hw_clock_utc = true
                  + time_zone    = "utc"
                    # (1 unchanged attribute hidden)
                }

              + network_interface {
                  + ipv4_address = "X.X.X.X"
                  + ipv4_netmask = 24
                }
            }
        }

      + disk {
          + attach            = false
          + controller_type   = "scsi"
          + datastore_id      = "<computed>"
          + device_address    = (known after apply)
          + disk_mode         = "persistent"
          + disk_sharing      = "sharingNone"
          + eagerly_scrub     = false
          + io_limit          = -1
          + io_reservation    = 0
          + io_share_count    = 0
          + io_share_level    = "normal"
          + keep_on_remove    = false
          + key               = 0
          + label             = "Hard disk 1"
          + path              = (known after apply)
          + size              = 50
          + storage_policy_id = "4d5f673c-536f-11e6-beb8-9e71128cae77"
          + thin_provisioned  = true
          + unit_number       = 0
          + uuid              = (known after apply)
          + write_through     = false
        }

      + network_interface {
          + adapter_type          = "vmxnet3"
          + bandwidth_limit       = -1
          + bandwidth_reservation = 0
          + bandwidth_share_count = (known after apply)
          + bandwidth_share_level = "normal"
          + device_address        = (known after apply)
          + key                   = (known after apply)
          + mac_address           = (known after apply)
          + network_id            = "dvportgroup-1027"
        }
    }

Screenshots

No response

References

No response

stackthecat added the bug Type: Bug and needs-triage Status: Issue Needs Triage labels on Jul 26, 2024

Hello, stackthecat! 🖐

Thank you for submitting an issue for this provider. The issue will now enter into the issue lifecycle.

If you want to contribute to this project, please review the contributing guidelines and information on submitting pull requests.

@stackthecat (Author)

Any news on this issue?
