Improve logical volume discovery #87
Conversation
I appreciate you taking the time to dive in and file a PR for this (https://projects.torsion.org/borgmatic-collective/borgmatic/issues/962). However, I'm not sure that this particular change will solve the issue without breaking some of the other hooks that use this code. For instance, the released version of this code intentionally mutates the candidate source directories so that successive calls to […]. So I think another approach is probably needed here.

Interestingly, both the ZFS and Btrfs hooks already support nested "volumes" as per your use case. (In ZFS they're called "datasets" and in Btrfs, "subvolumes".) The way they do that is by sorting the "volume" mount point paths from longest to shortest and then running […].

I hope some of that makes sense. Let me know your thoughts! (Also, if you'd like to work on this more, please have a look at how to run automated tests in borgmatic: https://torsion.org/borgmatic/docs/how-to/develop-on-borgmatic/#automated-tests)
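The longest-to-shortest mount point sorting described for the ZFS and Btrfs hooks could be sketched roughly like this (a hypothetical Python illustration; the function name is invented and this is not borgmatic's actual code):

```python
# Sketch: sort volume mount points from longest path to shortest, so that
# nested volumes (e.g. /home nested under /) get processed innermost-first.
# Hypothetical illustration only, not borgmatic's real implementation.

def sort_mount_points_deepest_first(mount_points):
    """Return mount points ordered by path length, longest first."""
    return sorted(mount_points, key=len, reverse=True)

mounts = ["/", "/home", "/home/user/data", "/var"]
print(sort_mount_points_deepest_first(mounts))
# ['/home/user/data', '/home', '/var', '/']
```

Processing the deepest mount points first means an inner volume's snapshot is handled before the outer volume that contains it.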
Will https://projects.torsion.org/borgmatic-collective/borgmatic/pulls/964 fix this issue? I'm hitting the same problem as the OP. My server uses LVM for its root and other volumes, and this makes the borgmatic LVM integration always try to snapshot the root volume instead of the proper volume backing the source.
No, but it should be a relatively straightforward follow-on change, either as part of this PR (if @emilsenan is interested in taking that on) or as a separate PR. The ZFS and Btrfs hooks already support nested mounts, for instance.
Actually, it will still not be enough for my use case. I have one LVM setup as a PV on top of LUKS and another where LUKS is on top of a PV. I guess I have to live with custom hooks for the time being. But this also allows me to execute other things around the snapshot operations (xfs_freeze and start/stop services), which are super important to reduce downtime.

One question, though. Is it possible to map source dirs to something else inside the backup? Because when I do snapshotting in the before hook, I also need to use other mount points where the snapshots are mounted. So it would be nice if I could set up the mapping manually.
If you're interested in pursuing this, would you mind hopping into #962 and explaining why your PV/LUKS use case won't be solved by nested logical volume support in the LVM hook? Thanks!
Yeah, that makes sense. I'm assuming you're referring to your use case mentioned on #790.
The only mapping I know of that Borg supports is via the slashdot hack, which is exactly what borgmatic's filesystem hooks use under the hood.
I'm not sure #962 has anything to do with LUKS support, so let me write here. I already wrote in a few places; sorry if I didn't hit the right ones.

When there is a partition -> LUKS -> LVM Physical Volume -> LVM Volume Group -> LVM Logical Volume, everything is simple: you snapshot the LV and mount the snapshot.

When there is a partition -> LVM Physical Volume -> LVM Volume Group -> LVM Logical Volume -> LUKS (the LV itself is encrypted, not the whole volume group), then first it's harder to detect which underlying LV has to be snapshotted. And when it is snapshotted, one has to […].

Now I guess you'll ask why I would encrypt the LV instead of the PV. The thing is that I split the large HDDs into 1TB partitions because I was not sure how space would eventually be allocated and whether having all the storage under LVM would work for my future use cases. It's more like just-in-case thinking and maybe not a major use case, although at some point it helped me use the large HDD for also hosting the OS for the backup of the backup server, where 2 partitions need to be outside LVM and I've put the OS under a separate volume group altogether. And the backup space is a dedicated volume group with a single encrypted LV. Basically, it's something that good planning could prevent; on the other hand, I'm experimenting, so I can't do precise planning.
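For the LV-under-LUKS layering, detecting which LV to snapshot amounts to walking the block-device stack (the kind of parent chain `lsblk --inverse` reports) from the mounted crypt device down until an LVM device is found. A rough Python sketch with a hard-coded device tree (all device names and the parent map here are illustrative, not real borgmatic code):

```python
# Walk from a mounted crypt device down to the LVM logical volume that
# backs it, using a parent map of the kind `lsblk --inverse` would show.
# Device names and the parent map below are purely illustrative.

# device -> (parent device, device type)
parents = {
    "datastore_crypt": ("DataVG-luks_lv", "crypt"),  # LUKS mapper over an LV
    "DataVG-luks_lv": ("sda2", "lvm"),               # the LV to snapshot
    "sda2": ("sda", "part"),
    "sda": (None, "disk"),
}

def find_backing_lv(device, parent_map):
    """Follow parent links until a device of type 'lvm' is found."""
    while device is not None:
        parent, dev_type = parent_map.get(device, (None, None))
        if dev_type == "lvm":
            return device
        device = parent
    return None

print(find_backing_lv("datastore_crypt", parents))  # DataVG-luks_lv
```

In a real implementation the parent map would come from parsing `lsblk` output rather than being hard-coded.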
Exactly. It would be really neat if borgmatic supported LUKS and XFS […]. I'll paste my hooks here for completeness:

```yaml
before_backup:
  - echo Starting apps backup.
  - "systemctl --user -M [email protected] stop immich-server.service immich-machine-learning.service seafile.service mariadb.service immich-pgsql.service"
  - xfs_freeze -f /media/dbstore/
  - lvcreate --snapshot --size 200GB --name containers_data_borgmatic /dev/mapper/fedora_myhost-containers_data ; exitcode=$? ; xfs_freeze -u /media/dbstore/ ; exit $exitcode
  - xfs_freeze -f /media/datastore/
  - lvcreate --snapshot --size 47GB --name luks_borgmatic /dev/DataVG/luks_lv ; exitcode=$? ; xfs_freeze -u /media/datastore/ ; exit $exitcode
  - "systemctl --user -M [email protected] start immich-server.service immich-machine-learning.service seafile.service mariadb.service immich-pgsql.service"
  - cryptsetup open --key-file /etc/luks-key /dev/DataVG/luks_borgmatic datastore_borgmatic
  - mount -o nouuid /dev/mapper/fedora_myhost-containers_data_borgmatic /media/borg/dbstore
  - mount -o nouuid /dev/mapper/datastore_borgmatic /media/borg/datastore

after_backup:
  - echo Finished apps backup and clean-up.
  - umount /media/borg/dbstore
  - umount /media/borg/datastore
  - cryptsetup close /dev/mapper/datastore_borgmatic
  - lvremove -y /dev/mapper/fedora_myhost-containers_data_borgmatic
  - lvremove -y /dev/mapper/DataVG-luks_borgmatic
```
Thank you so much! Super helpful. I'll have to adjust the mount points to get it working, but first I think I'll watch how the backup works for a while.
Got it, thanks for explaining! Yeah, I think that pre- and post-snapshot command hooks as per #790 would probably be the place to start. That should allow you to use the built-in LVM hook while also doing LUKS / XFS / service manipulation right before and after. The one thing I'm not sure about is that it appears your […].

Just FYI, you may want to copy some of this onto #790 so that it's not lost on this GitHub PR.
Hi guys, unfortunately I haven't yet had time to think about a proper implementation to fix the issue with "nested" logical volumes. As mentioned in https://projects.torsion.org/borgmatic-collective/borgmatic/issues/962#issuecomment-9376, I was able to work around the problem by running two separate backups.
Okay, no worries. I'll close this PR for now and have a look at the other approach when I get the chance. I'm glad you have a workaround for the time being. Thanks!
The logical volume discovery algorithm is not yet capable of detecting a setup where the mount point of one logical volume is located "under" the mount point of another logical volume (e.g. '/' and '/home').
config.yaml
lsblk output
current get_logical_volumes output
expected get_logical_volumes output
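The discovery fix being discussed amounts to matching each source directory against the deepest volume mount point that contains it, rather than the first prefix match (which is how '/' ends up shadowing '/home'). A hypothetical Python sketch of that matching rule (names invented; this is not the actual `get_logical_volumes` code):

```python
# Pick, for each source directory, the volume whose mount point is the
# deepest ancestor of that directory. With volumes mounted at '/' and
# '/home', the source dir '/home/user' should map to '/home', not '/'.
# Hypothetical sketch, not borgmatic's actual get_logical_volumes().

def deepest_containing_volume(source_dir, mount_points):
    """Return the longest mount point that is an ancestor of source_dir."""
    candidates = [
        mount for mount in mount_points
        if source_dir == mount or source_dir.startswith(mount.rstrip("/") + "/")
    ]
    return max(candidates, key=len, default=None)

volumes = ["/", "/home"]
print(deepest_containing_volume("/home/user", volumes))  # /home
print(deepest_containing_volume("/etc", volumes))        # /
```

With this rule, a root volume at '/' no longer shadows a nested volume at '/home' during discovery.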