
[user@dom0 ~]$ sudo vgdisplay
  --- Volume group ---
  VG Name               qubes_dom0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  27203
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                118
  Open LV               20
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               237.47 GiB
  PE Size               4.00 MiB
  Total PE              60792
  Alloc PE / Size       60792 / 237.47 GiB
  Free PE / Size        0 / 0
The line
Free PE / Size 0 / 0
is what I'm talking about. You need that to be at least 40 GB.
When you delete the VMs, you're leaving the LVs sitting there, eating up your logical space. A VG is made up of PVs, and LVs are carved out of the VG. Let me know what lvdisplay says and which of them should have been deleted, and I can help you remove them.
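If it helps, you can inspect each layer of the stack with the standard LVM tools (stock commands, nothing Qubes-specific):

  sudo pvs    # physical volumes: the disks/partitions backing the VG
  sudo vgs    # volume groups: total size vs. free space
  sudo lvs    # logical volumes carved out of the VG, with sizes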
reply
Interesting. It seems lame that deleting a VM doesn't free up the logical space.
lvdisplay shows many logical volumes; for instance, this one is associated with an already-deleted VM:
  --- Logical volume ---
  LV Path                /dev/qubes_dom0/vm-whonix-gw-15-root.tick
  LV Name                vm-whonix-gw-15-root.tick
  VG Name                qubes_dom0
  LV UUID                o2ftaB-vIuC-FVoc-C6XS-1U24-2rJQ-2ZkENz
  LV Write Access        read only
  LV Creation host, time dom0, 2020-06-05 13:27:13 -0400
  LV Pool name           pool00
  LV Status              available
  # open                 0
  LV Size                10.00 GiB
  Mapped size            23.48%
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:40
reply
Also, is there any documentation explaining how this stuff works?
How do I free up a logical volume?
reply
Assuming that is the volume you want to remove, and the partition isn't mounted or being used by a domU, execute this command for each one that doesn't belong.
lvremove /dev/qubes_dom0/vm-whonix-gw-15-root.tick
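If several of them are stale, you can loop over them too. Just a sketch, and the glob is only an example, so adjust it to match the names in your own lvdisplay output:

  for lv in /dev/qubes_dom0/vm-whonix-gw-15-*; do
      sudo lvremove "$lv"    # asks for confirmation before each removal
  done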
reply
Thank you!
I removed 6 logical volumes (using the sudo lvremove command; volumes from already-deleted VMs/domains). However, sudo vgdisplay still shows 0 Free PE.
reply
I removed a couple of other VMs but the Free PE is still 0.
$ sudo vgdisplay
  --- Volume group ---
  VG Name               qubes_dom0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  27755
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                82
  Open LV               14
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               237.47 GiB
  PE Size               4.00 MiB
  Total PE              60792
  Alloc PE / Size       60792 / 237.47 GiB
  Free PE / Size        0 / 0
  VG UUID               007hBk-o2Kx-OdMy-970q-g5v7-mWxc-lMi29y
In the Qubes UI (top right corner), clicking the disk icon shows 46.8% disk usage (it was over 50% before I removed these VMs).
reply
I used sudo lvremove to delete 6 logical volumes (from already-deleted VMs), but sudo vgdisplay still says 0 Free PE. Any more ideas? I'm gonna back up my data and delete more VMs in the meantime...
reply
I removed one more large VM; now the UI says 34.4% disk usage, but vgdisplay still says 0 Free PE, so something else is going on.
reply
It doesn't make any sense that you run lvremove and vgdisplay still shows zero free. How are you deleting the logical volumes? Do you have multiple PVs? Try pvdisplay and lvs.
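One other thing worth checking: your lvdisplay output above shows LV Pool name pool00, i.e. these are thin volumes. On a thin-provisioned setup, the whole VG is typically handed to the pool up front, so vgdisplay can legitimately report 0 Free PE forever, and the space freed by lvremove shows up inside the pool instead. A sketch of how to verify:

  sudo pvdisplay                # confirm there's really only one PV
  sudo lvs qubes_dom0/pool00    # the Data% column is the actual usage inside the thin pool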
reply
Here is vgdisplay output again:
$ sudo vgdisplay
  --- Volume group ---
  VG Name               qubes_dom0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  28134
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                95
  Open LV               29
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               237.47 GiB
  PE Size               4.00 MiB
  Total PE              60792
  Alloc PE / Size       60792 / 237.47 GiB
  Free PE / Size        0 / 0
  VG UUID               007hBk-o2Kx-OdMy-970q-g5v7-mWxc-lMi29y
sudo lvs no longer shows the deleted LVs/VMs. The UI disk usage (top right corner) shows a mostly empty disk as well (the space got released).
I deleted the LVs using the command you advised (sudo lvremove /dev/qubes_dom0/vm-whonix-gw-15-root.tick, etc.).
Xen uses partitions for its domUs, just like Linux needs partitions on your drive. When you delete a domU, you are only 'killing' the machine. LVM lets you dynamically create and delete arbitrary partitions without needing to rearrange your disk's partition table. When you run the domU creation script, it creates a partition (aka volume) for you, sized according to how large you told it to make the VM drive. When you "delete" a domU, you aren't really deleting anything, only killing the process. The volume stays there, and you could mount it manually from dom0 in case there's something on it you need.
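As a rough sketch of that lifecycle with plain LVM commands (vm-test is a made-up name here; in Qubes the tooling drives all of this for you, with thin volumes carved out of pool00):

  sudo lvcreate -V 10G -T qubes_dom0/pool00 -n vm-test-private    # allocate a 10 GiB thin volume
  sudo mkfs.ext4 /dev/qubes_dom0/vm-test-private                  # put a filesystem on it
  sudo mount /dev/qubes_dom0/vm-test-private /mnt                 # use it like any block device
  sudo umount /mnt
  sudo lvremove qubes_dom0/vm-test-private                        # only this step actually frees the space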
reply
I think I actually need this. One large VM (~30GB) doesn't boot (I can't start a terminal or anything), but before I delete it, I want to back up some stuff, so it would be great if I could mount that volume from dom0 manually, right?
$ sudo mount /dev/qubes_dom0/vm-crowphale-private-snap crow-mount
mount: wrong fs type, bad option, bad superblock on /dev/mapper/qubes_dom0-vm--crowphale--private--snap,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so.
Any pointers on how to do this?
reply
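For anyone hitting the same mount error: a sketch of what to try, assuming the base volume is named vm-crowphale-private and holds a plain ext4 filesystem (the usual case for a Qubes private volume). It targets the base volume rather than the -snap one, which is a runtime snapshot and may not contain a clean filesystem, and it mounts read-only to be safe:

  sudo lvchange -ay qubes_dom0/vm-crowphale-private                  # make sure the base volume is active
  mkdir crow-mount
  sudo mount -o ro /dev/qubes_dom0/vm-crowphale-private crow-mount

If it still fails, dmesg | tail (as the error message suggests) should say why.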