fix(1334 test case): change test case description
Signed-off-by: Jack Yu <[email protected]>
Yu-Jack authored and khushboo-rancher committed Mar 18, 2024
1 parent fe75d11 commit 5af15dd
Showing 6 changed files with 29 additions and 13 deletions.
2 changes: 1 addition & 1 deletion .gitignore
@@ -21,5 +21,5 @@ scripts/**/*.sh
.tox
.report.json
test_result.json

public
*.hugo_build.lock
37 changes: 25 additions & 12 deletions docs/content/manual/volumes/1334-evict-disks-check-vms.md
@@ -2,20 +2,33 @@
title: Verify that VMs stay up when disks are evicted
---

* Related issues: [#1334](https://github.com/harvester/harvester/issues/1334) Volumes fail with Scheduling Failure after evicting disc on multi-disc node
* Related issues
- [#1334](https://github.com/harvester/harvester/issues/1334) Volumes fail with Scheduling Failure after evicting disc on multi-disc node
- [#5307](https://github.com/harvester/harvester/issues/5307) Replicas should be evicted and rescheduled to other disks before removing extra disk

## Verification Steps

1. Created 3 node Harvester setup with ipxe example in KVM/libvirt
1. Added formatted disk to node0 VM
1. Created three VMs on node0
1. Created large files on three VMs to see where they were located with dd if=/dev/urandom of=file1.txt count=5192 bs=1M
1. Checked Longhorn to be sure that some VMs were on new disk
1. Deleted disk from Harvester
1. Checked Longhorn to be sure that disk was marked for eviction
1. Verified that VMs were still available while evicting replicas by running commands from serial console/SSH
1. Verified that disk was removed from Longhorn and VMS were still up.
1. Created a 3-node Harvester cluster.
1. Added a formatted disk (called disk A) to the node0 VM in the Harvester node page.
1. Added the disk tag `test` to the following disks in the Longhorn page:
1. disk A of node0
1. root disk of node1
1. root disk of node2
1. Created a storage class with the disk tag `test` (see the YAML sketch after these steps).
1. Created a volume (called volume B) with the previous storage class, then checked its scheduling status in the Longhorn page.
1. Created a VM with volume B attached as an extra volume (not the boot volume).
1. The node scheduling status should look like this: {{< image "images/volumes/1334-image-01.png" >}}
1. Attempted to delete disk A in the Harvester node page; an error message should be displayed.
1. Added the disk tag `test` to the **root disk of node0**.
1. Requested eviction and disabled scheduling on disk A in the Longhorn dashboard.
Before:
{{< image "images/volumes/1334-image-02.png" >}}

After:
{{< image "images/volumes/1334-image-03.png" >}}

1. Removed disk A in the Harvester node page; this time it should be removed successfully.
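
A minimal sketch of the storage class and extra volume used above, assuming `kubectl` access to the Harvester cluster; the names `longhorn-test-disk` and `volume-b` are illustrative, and the same objects are normally created from the Harvester/Longhorn UI as the steps describe:

```shell
# Hedged sketch only: object names are illustrative, sizes are arbitrary.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-test-disk            # illustrative name
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "30"
  diskSelector: "test"                # only disks tagged "test" may hold replicas
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume-b                      # the extra (non-boot) volume attached to the VM
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn-test-disk
  resources:
    requests:
      storage: 10Gi
EOF
```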

## Expected Results
1. Disk is removed from Longhorn
1. VMs stay up
1. Disk A is removed from the Harvester node page.
1. The VM remains running throughout all of the steps (an optional CLI cross-check is sketched below).
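
An optional CLI cross-check, assuming `kubectl` access to the cluster and an illustrative node name of `node0`: disk A should no longer be listed under `spec.disks` of node0's Longhorn node resource, and the volume's replicas should all be scheduled on the remaining tagged disks.

```shell
# Longhorn exposes nodes and replicas as CRDs in the longhorn-system namespace.
kubectl -n longhorn-system get nodes.longhorn.io node0 -o yaml   # disk A should be absent from spec.disks
kubectl -n longhorn-system get replicas.longhorn.io              # replicas rescheduled and running
```
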
3 changes: 3 additions & 0 deletions docs/layouts/shortcodes/image.html
@@ -0,0 +1,3 @@
{{ $baseURL := .Page.Site.BaseURL }}
{{ $image := .Get 0 }}
<img src="{{ $baseURL }}{{ $image }}" alt="{{ $image }}">
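
The new shortcode reads its first positional argument as an image path, prefixes it with the site base URL, and renders a plain `<img>` tag, so the test case markdown can embed the screenshots with calls such as `{{< image "images/volumes/1334-image-01.png" >}}` (files under `docs/static/` are served from the site root).
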
Binary file added docs/static/images/volumes/1334-image-01.png
Binary file added docs/static/images/volumes/1334-image-02.png
Binary file added docs/static/images/volumes/1334-image-03.png
