ZFS Maintenance

ZFS Pool Re-creation

You made a ZFS pool on your Proxmox Backup Server. Now you want to change the RAID level (e.g., from RAID-10 to RAID-Z1, ZFS's RAID-5 equivalent) to gain more usable storage. Or perhaps you want to swap your 2TB drives for 4TB ones without spending two days letting them resilver one at a time.

:1: Attach a USB backup drive to each node, large enough to hold one full copy of all the VMs on that node.

For details on this see the Making a USB backup pool page.
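
If you only need the short version, a minimal sketch looks like this: create a single-disk pool on the USB drive and register it as a datastore. The device name /dev/sdx and the pool name usbbackup are assumptions; substitute your own, and see the page above for the full walkthrough.

# assumption: the USB drive is /dev/sdx and the pool is called usbbackup
zpool create -f usbbackup /dev/sdx
zfs set mountpoint=/mnt/datastore/usbbackup usbbackup
proxmox-backup-manager datastore create usbbackup /mnt/datastore/usbbackup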

:2: Make a backup of all VMs to the USB drives.
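
On the Proxmox VE side this can be done from the GUI, or in one shot with vzdump. A sketch, assuming the USB datastore has been added to PVE as a storage named usbbackup:

# run on each PVE node; "usbbackup" is an assumed storage name
vzdump --all --storage usbbackup --mode snapshot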

:3: Once you have a complete set of backups (to cover the remote chance that something goes wrong with a node while you are doing all of this), destroy the current ZFS pool. Log in to the PBS server's command line, then:

# proxmox-backup-manager datastore list
┌──────────┬─────────────────────────┬──────────────────────┐
│ name     │ path                    │ comment              │
╞══════════╪═════════════════════════╪══════════════════════╡
│ backraid │ /mnt/datastore/backraid │ 2TB x 4 HDDs RAID-10 │
└──────────┴─────────────────────────┴──────────────────────┘

# proxmox-backup-manager datastore show <name of datastore>
# proxmox-backup-manager datastore remove <name of datastore>
# proxmox-backup-manager datastore list
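
With the datastore name from the listing above, that works out as below. Note that remove only deletes the datastore entry from the PBS configuration; the data on disk stays put until you destroy the pool in the next step.

# proxmox-backup-manager datastore show backraid
# proxmox-backup-manager datastore remove backraid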

:4: The datastore should now be gone from the list. Next, destroy the ZFS pool itself on the PBS server (we will clear the datastore's directory in step 6):

# zpool destroy <pool name>
# zpool list

You should now see that the pool is gone. If you look in the PBS GUI, it should likewise be gone.

:5: Each of your drives still has a ZFS signature and partitions, so write a new GPT partition table on each drive. For example, in my case my server's boot drive was nvme0n1 and my data drives were sda, sdb, sdc, and sdd, so I did this:

# fdisk /dev/sda
Command (m for help): p
Disk /dev/sda: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: TOSHIBA HDWE140
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 9F6161BD-1B49-984A-A1D6-BBAB0B3FB99D

Device          Start        End    Sectors  Size Type
/dev/sda1        2048 7814019071 7814017024  3.6T Solaris /usr & Apple ZFS
/dev/sda9  7814019072 7814035455      16384    8M Solaris reserved 1

Command (m for help): g
Created a new GPT disklabel (GUID: 15E14DE8-C96B-D44C-A430-25A0D36069EC).

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

# fdisk /dev/sdb
... (repeat steps for each drive) ...
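
If you would rather not step through fdisk four times, here is a non-interactive sketch that does the same job. It assumes the data drives are sda through sdd; verify the list with lsblk first, because this wipes their partition tables.

# assumption: data drives are sda-sdd (check with lsblk before running!)
for d in sda sdb sdc sdd; do
    wipefs -a /dev/$d      # clear the old ZFS and partition-table signatures
    sgdisk -o /dev/$d      # write a fresh, empty GPT label
done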

:6: Remove the old datastore directory, or PBS will not let you re-create the datastore.

rmdir /mnt/datastore/backraid/
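
rmdir only removes an empty directory, so if it refuses, something is still mounted or left behind there. A quick check, assuming the same path:

findmnt /mnt/datastore/backraid
ls -la /mnt/datastore/backraid/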

:7: Now create the datastore from scratch in the PBS GUI.
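
If you prefer to stay on the command line instead, here is a sketch of the equivalent, assuming the new pool is again called backraid, the drives are sda through sdd, and you want RAID-Z1 (the ZFS analogue of RAID-5):

# assumptions: pool name backraid, drives sda-sdd, RAID-Z1 layout
zpool create backraid raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
zfs set mountpoint=/mnt/datastore/backraid backraid
proxmox-backup-manager datastore create backraid /mnt/datastore/backraid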

ZFS Datastore Removal

In my situation I had two ZFS pools on a Proxmox Backup Server version 3: one 12TB, the other 4TB. I ended up storing everything on the 12TB pool, and now I want to remove the 4TB pool (either to replace it with larger drives or simply to get rid of it and reduce power consumption).

:1: Log in to the backup server via SSH.

:2: Use the proxmox-backup-manager command to remove the datastore that sits on the pool. Here are commands to show the help, list the datastores, and remove one.

proxmox-backup-manager datastore
proxmox-backup-manager datastore list
proxmox-backup-manager datastore remove <name-of-pool>

:3: Once the datastore has been removed from PBS, you can destroy the ZFS pool like so:

umount /mnt/datastore/<name-of-pool>
zpool destroy <name-of-pool>
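
A quick verification that both the datastore and the pool are really gone:

proxmox-backup-manager datastore list
zpool list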