
controller: improve the mechanism of disk removing (backport #102) #106

Merged
merged 6 commits into from
May 23, 2024

Conversation

@mergify mergify[bot] commented May 21, 2024

Problem:
When a disk is gone, we cannot remove it from the Harvester UI.
This is a regression introduced along with the inactive/corrupted disk concept.

Solution:
Improve the disk-removal mechanism.

Related Issue:
harvester/harvester#5726

Test plan:

  1. Create a 3-node Harvester cluster.
  2. Add an extra disk and provision it as a Longhorn disk.
  3. Remove the above disk (physically). (If you use KVM, try the virsh attach-device/detach-device commands.)
  4. Remove the provisioned disk on the Harvester UI.
  5. Make sure the disk is removed from both the Harvester and Longhorn UIs.

Nice to Have:

@mergify mergify bot added the conflicts label May 21, 2024
mergify[bot] (Author) commented May 21, 2024

introduce the executor to v0.6.x for this backport.

    - so that we can drop some dependencies

Signed-off-by: Vicente Cheng <[email protected]>
(cherry picked from commit caaaf6b)
Signed-off-by: Vicente Cheng <[email protected]>
(cherry picked from commit ab2c939)
    - We should not skip inactive/corrupted devices in the removal op

    - Improve the order of onChange. Now, we handle the removal
      first and the addition/update later, which keeps the logic clear.

Signed-off-by: Vicente Cheng <[email protected]>
(cherry picked from commit d699409)
Signed-off-by: Vicente Cheng <[email protected]>
(cherry picked from commit 3a9b364)
    - we need to remove the duplicated disk after testing

Signed-off-by: Vicente Cheng <[email protected]>
(cherry picked from commit 61307c8)
Signed-off-by: Vicente Cheng <[email protected]>
(cherry picked from commit 969ccdc)
@tserong tserong (Contributor) left a comment


LGTM.

Question (because I was diffing branches and playing with git cherry -v to review this): should #100 also be backported to v0.6.x? AFAICT the two commits in that PR will be the only ones in master that aren't in v0.6.x once this PR is merged.

@Vicente-Cheng Vicente-Cheng (Collaborator) replied:
> LGTM.
>
> Question (because I was diffing branches and playing with git cherry -v to review this): should #100 also be backported to v0.6.x? AFAICT the two commits in that PR will be the only ones in master that aren't in v0.6.x once this PR is merged.

Yeah, we should backport the PR you mentioned.
Here is the new backport: #108

These two PRs are not related; I prefer to split them. Thanks!

@bk201 bk201 merged commit 810e526 into v0.6.x May 23, 2024
6 checks passed
@mergify mergify bot deleted the mergify/bp/v0.6.x/pr-102 branch May 23, 2024 08:07
4 participants