First off, I know ultimately I’m the only person who can decide if it’s worth it. But I was hoping for some input from your collective experience.
I have a server I built currently running Ubuntu 22.04. I’m using KVM/QEMU to host VMs and have recently started exploring the exciting world of Docker, with a VM dedicated to Portainer. I manage the VMs with a mix of virt-manager via xRDP, CLI tools, and (if I’m feeling extra lazy) Cockpit. Disks are spindles currently in software RAID 10 (md), and I use LVM to assign volumes to the KVM VMs. Backups are via a script I wrote to snapshot the LVM volume and back it up to B2 via restic.
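The script boils down to something like this (the volume group, LV, and bucket names below are placeholders, not my real ones; leave DRY_RUN="echo" to just print the commands):

```shell
#!/bin/sh
# Sketch of my LVM-snapshot + restic-to-B2 backup flow.
# Names are placeholders; set DRY_RUN="" to actually run the commands.
DRY_RUN="echo"
VG="vg0"
LV="vm-101-disk-0"
SNAP="${LV}-backupsnap"
REPO="b2:my-bucket:restic-repo"   # restic's native B2 backend

# 1. Throwaway LVM snapshot so the image is crash-consistent:
$DRY_RUN lvcreate --snapshot --name "$SNAP" --size 5G "/dev/$VG/$LV"
# 2. Stream the snapshot straight into restic, no local staging file:
$DRY_RUN sh -c "dd if=/dev/$VG/$SNAP bs=4M | restic -r $REPO backup --stdin --stdin-filename $LV.img"
# 3. Drop the snapshot:
$DRY_RUN lvremove -f "/dev/$VG/$SNAP"
```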
It all works. Rather smoothly except when it doesn’t 😀.
I’ve been planning an HD upgrade and was considering using that as an excuse to start over. My thoughts are to either install Debian and continue with my status quo, or to give Proxmox a try. I’ve been reading a lot of positive comments about it here, and I have longed for one unified web interface to manage my VMs.
My main concerns are:
- Backups. I want to be able to back up to B2, but from what I’ve read I don’t see a way to do that. I don’t mean back up to a local repository and then sync that to B2. I’m talking direct to B2.
- Performance. People rave about ZFS, but I have no experience. Will I get at least equivalent performance out of ZFS and how much RAM will that cost me? Do I even need ZFS or can I just continue to store VMs the way I do today?
Having never used Proxmox to compare I’m really on the fence about this one. I’d appreciate any input. Thanks.
I’ll add my voice to the chorus and recommend Proxmox. I’ve never tried xcp-ng; it looks nice and I’m interested, but Proxmox has worked well for me.
I did a little research (on xcp-ng) since reading @housepanther@lemmy.goblackcat.com’s post. Seems like it has a lot going for it. My main concern, right now, is that it’s built on top of CentOS.
You’ve gotten incorrect information on that front. Proxmox is actually built on top of Debian.
No. I just forgot to put xcp-ng anywhere in my reply to you. 😀
While you’re in your planning stage, I would advocate for Proxmox. I really like it. Another contender would be xcp-ng.
xcp-ng
Not gonna lie, I haven’t looked at Xen in years. xcp-ng looks interesting. I’ll have to dig into that more.
Another vote for Proxmox.
Backups: Proxmox Backup Server (yes, it can run in a Proxmox VM) is pretty great. You can use something like Duplicati to back up the PBS datastore to B2.
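If you’d rather drive it from cron than Duplicati’s GUI, an rclone sync of the datastore directory works too. Something like this sketch, with the remote name, bucket, and datastore path all made up:

```shell
#!/bin/sh
# Sketch: push a PBS datastore to B2 with rclone instead of Duplicati.
# Paths and remote name are placeholders; set DRY_RUN="" to run for real.
DRY_RUN="echo"
DATASTORE="/mnt/datastore/backups"   # PBS datastore directory
REMOTE="b2remote:my-bucket/pbs"      # an rclone remote configured for B2

# PBS stores immutable chunk files, so repeated syncs only upload new chunks:
$DRY_RUN rclone sync "$DATASTORE" "$REMOTE" --transfers 8
```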
Performance: You can use ZFS in Proxmox, or not. ZFS gets you things like snapshots and raidz, but you will want to make sure you have a good amount of RAM available and that you leave about 20% of available drive space free. This is a good resource on ZFS in Proxmox.
Performance-wise, I have clusters with drives running ZFS and EXT4, and I haven’t really noticed much of a difference. But I’m also running low-powered SFF servers, so I’m not doing anything that requires a lot of heavy duty work.
Does Proxmox still sit at the top of the stack if I’m not clustering?
I would say it’s at the “bottom” of the stack - Debian is the base layer, then Proxmox, then your VMs.
Clustering just lets the different nodes share resources (more options with ZFS) and allows management of all nodes in the cluster from the same GUI.
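For reference, joining nodes into a cluster is just a couple of pvecm commands (the cluster name and IP here are examples):

```shell
#!/bin/sh
# Proxmox cluster setup sketch; name and IP are examples, DRY_RUN="" to run.
DRY_RUN="echo"
# On the first node:
$DRY_RUN pvecm create my-cluster
# On each additional node, pointing at the first node's IP:
$DRY_RUN pvecm add 192.168.1.10
# Verify membership and quorum afterwards:
$DRY_RUN pvecm status
```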
I run Proxmox in a cluster and TrueNAS in a VM on one of the nodes. It’s been really convenient. My nodes run a mix of LXC containers for different things, plus Docker or regular VMs for other software.
That was one of the reasons I was thinking of getting bigger disks. I want to retire the qnap I have and spin up a TrueNAS VM.
How are you passing the drives to the TrueNAS VM?
I haven’t done it myself, but I have looked into the process in the past. I believe you do it just like passing any other drive through to a Proxmox VM.
It’s fairly simple - you can either pass the entire drive itself through to the VM, or if you have a controller card the drive is attached to, you can pass that entire PCIe device through to the VM and the drive will just “come with it”.
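On the CLI that looks something like this (the VMID and disk serial are placeholders, not from a real system):

```shell
#!/bin/sh
# Whole-disk passthrough sketch using the stable /dev/disk/by-id path.
# VMID and disk id are placeholders; set DRY_RUN="" to actually run it.
DRY_RUN="echo"
VMID=100
DISK="/dev/disk/by-id/ata-EXAMPLE_MODEL_EXAMPLE_SERIAL"

# Attach the raw disk to the VM as an extra SCSI device:
$DRY_RUN qm set "$VMID" -scsi1 "$DISK"
# PCIe alternative: pass a whole HBA through instead, e.g.
#   qm set 100 -hostpci0 01:00.0
```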
Proxmox won’t make backups to B2 easier, but since it is basically a web interface and API for Debian and KVM/QEMU, you might be able to use your current backup strategy with very little modification.
As for ZFS, you can expect to use about a GB of RAM for each TB in a ZFS pool. I (only) run 2x 4TB drives in ZFS mirror and it results in about 4-5 GB of RAM overhead.
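If the ARC overhead worries you, you can cap it. Here’s a sketch of the arithmetic plus the modprobe knob, with the pool size as an example rather than a recommendation:

```shell
#!/bin/sh
# Budgeting ARC with the rough "1 GB RAM per TB of pool space" rule of thumb,
# then the modprobe line that caps ZFS's ARC there. Pool size is an example.
POOL_TB=8                                  # e.g. 4x 4TB in striped mirrors
ARC_GB=$POOL_TB                            # rule of thumb: ~1 GB per TB
ARC_BYTES=$((ARC_GB * 1024 * 1024 * 1024))

echo "ARC cap: ${ARC_GB} GB = ${ARC_BYTES} bytes"
# This line goes in /etc/modprobe.d/zfs.conf (then update-initramfs -u, reboot):
echo "options zfs zfs_arc_max=${ARC_BYTES}"
```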
Another point you might want to consider is automation and the ability to use infrastructure as code. You can use the Proxmox Packer builder and Terraform provider to automate building machine images and cloning virtual machines. If you’re into the learning experience it’s definitely a consideration. I went from backing up entire VM disks to backing up only application data, making it faster and cheaper. It also enabled a lot of automated testing. For a homelab it’s a bit much, the learning experience is the biggest part. It’s an entire rabbit hole.
If you want to see what the automation looks like, check out my example infrastructure repo and the matching tutorial. Also check out my Alpine machine image repo, which includes automated tests for image cloning, disk resizing, and a CI pipeline.
Proxmox won’t make backups to B2 easier, but since it is basically a web interface and API for Debian and KVM/QEMU you might be able to use your current backup strategy with very little modification.
I found this which leads me to believe I may be able to pipe zfs send to restic to replicate my current disk backup strategy. Presumably I could fire up a VM and build a ZFS storage pool in it to test that theory out.
As for ZFS, you can expect to use about a GB of RAM for each TB in a ZFS pool. I (only) run 2x 4TB drives in ZFS mirror and it results in about 4-5 GB of RAM overhead.
So if I were to put 4x 4TB in a RAID10-equivalent pool I’d be looking at ~8GB, not 16. Whew.
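The pipe I have in mind would look roughly like this (dataset and repo names invented; DRY_RUN just prints the commands so I can sanity-check before testing in that VM):

```shell
#!/bin/sh
# Theory to test: replicate my LVM dump-to-restic flow with zfs send.
# Dataset and repo names are made up; set DRY_RUN="" to run for real.
DRY_RUN="echo"
DATASET="tank/vm-101-disk-0"
SNAP="${DATASET}@restic"
REPO="b2:my-bucket:restic-repo"

$DRY_RUN zfs snapshot "$SNAP"
# restic's --stdin mode stores whatever it reads as a single file in the repo:
$DRY_RUN sh -c "zfs send $SNAP | restic -r $REPO backup --stdin --stdin-filename vm-101.zfs"
$DRY_RUN zfs destroy "$SNAP"
```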
For a homelab it’s a bit much, the learning experience is the biggest part. It’s an entire rabbit hole.
The rabbit hole is where all the fun is. Templating was something I never really got around to in my current setup. I do have an ansible playbook and set of roles that will take a brand new Ubuntu VM and configure it just how I like it.
Thanks for all the info. I’ll be sure to check out your repo.
My ZFS cache (ARC) for 6x 4TB drives in RAIDZ2 is about 10GB of RAM.
I found this which leads me to believe I may be able to pipe zfs send to restic to replicate my current disk backup strategy. Presumably I could fire up a VM and build a zfs storage pool in it to test that theory out.
Replying to myself, but I think this is a square peg, round hole situation.
If I’m starting over with proxmox I likely need to rethink my entire backup strategy.
Please give Proxmox a try! It was such a huge quality of life improvement when I migrated to it. I can’t speak to your backup needs or to the performance of ZFS, since I don’t use either of those. I just think that Proxmox took a lot of the pain out of my homelab management experience without taking away my capabilities to customize it. Highly recommend!
I started with Proxmox and I’ll continue to use it because it’s very nice to use. For backups I use an rclone mount that is shared via NFS (everything inside a container), and I set that NFS share as backup storage in Proxmox. I think it is a bit convoluted, but it works fine enough for now.
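Stripped down, the container does something like this (remote name, bucket, mount point, and subnet are placeholders):

```shell
#!/bin/sh
# Sketch: mount B2 through rclone, then export that mount over NFS so
# Proxmox can use it as backup storage. Names are placeholders.
DRY_RUN="echo"
# Mount the bucket (write caching smooths out backup uploads):
$DRY_RUN rclone mount b2remote:my-bucket /mnt/b2 --daemon --vfs-cache-mode writes
# /etc/exports entry so the Proxmox host can mount the share:
echo "/mnt/b2 192.168.1.0/24(rw,no_subtree_check)"
```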
Convoluted just means you built it with care. ❤️
I just swapped from Ubuntu to Debian but I don’t use VMs - only containers. I back my files up directly to B2 using autorestic, also running in a container that is scheduled by… another container (chadburn).
No need for any VMs in my house. I honestly can’t see the point of them when containers exist.
Just an FYI to OP: If you’re looking to run docker containers, you should know that Proxmox specifically does NOT support running docker in an LXC, as there is a very good chance that stuff will break when you upgrade. You should really only run docker containers in VMs with Proxmox.
Just for completeness’ sake - We don’t recommend running docker inside of a container (precisely because it causes issues upon upgrades of Kernel, LXC, Storage packages) - I would install docker inside of a Qemu VM, as this has fewer interactions with the host system and is known to run far more stably.
You most likely don’t need Proxmox and its pseudo-open-source bullshit. My suggestion is to simply go with Debian 12 + LXD/LXC; it runs VMs and containers very well.
pseudo-open-source bullshit
What do you mean by this?
As far as I’m aware, everything in Proxmox is open source.
I think some people get annoyed by the Red Hat style paid support model, though. There is a separate repo for paying customers, but the non-subscription repo is just fine, and the official forums are a great place to get support, including from Proxmox employees.
Gotcha. So long as they’re not breaking the GPL or holding back security updates for non-paying users, I couldn’t care less. Thanks.
As I said, they have separate repositories, annoying messages asking you for a license all the time, etc. At some point you’ll find out that their solution doesn’t offer anything of particular value that you can’t get with other, less company-dependent solutions like I described before. You may explore the LXD native GUI… or heck, even Cockpit or Webmin might be decent options.