I'm personally leaning toward a small DIY rack stuffed with commodity HDDs off eBay, with an LVM volume group spanning a bunch of RAID1 pairs. I don't want any complex architectural solution, since my homelab's scale always equals 1. As far as I can tell this has few obvious drawbacks. What do you think?
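For concreteness, that layout is just a couple of mdadm mirrors glued together by LVM; a minimal sketch, with the device names as placeholders:

```sh
# Two RAID1 mirrors from four disks (device names are examples)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd /dev/sde

# One volume group spanning both mirrors, then a single big LV on top
pvcreate /dev/md0 /dev/md1
vgcreate tank /dev/md0 /dev/md1
lvcreate -n data -l 100%FREE tank
mkfs.ext4 /dev/tank/data
```

One caveat: with a plain linear span, losing a whole mirror pair takes the spanned LV down with it, so the RAID1s protect against single-disk failure, not pair failure.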


Just btrfs.
Ha, I went down the whole Ceph and Longhorn path as well, then ended up with hostPath and btrfs. Glad I'm not the only one who found the former two too much of a headache after fully evaluating them.
Why btrfs and not ZFS? In my info bubble, btrfs has a reputation as an unstable FS that has left people with unrecoverable data.
Btrfs used to be easier to install because it's in the mainline kernel, while ZFS required out-of-tree module shenanigans, though I think that has improved now.
Btrfs also just works with whatever mismatched drive sizes you throw at it, and adding more later is easy. This used to be impossible with ZFS pools, but I think it's a feature now?
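Roughly like this, with /mnt/pool and the device names as placeholders:

```sh
# btrfs raid1 mirrors at the chunk level, so mismatched sizes are fine
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/pool

# Growing later: add a disk, then rebalance data onto it
btrfs device add /dev/sdd /mnt/pool
btrfs balance start /mnt/pool
```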
It's just that the raid5/6 modes are shit. And there's its weird willingness to boot a failed raid without letting you know a drive is borked.
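If you do run btrfs raid, it's worth polling the per-device error counters yourself rather than trusting a clean boot; a small sketch for a cron job, assuming a reasonably recent btrfs-progs (which has a --check flag) and /mnt/pool as a placeholder mount point:

```sh
# Per-device read/write/corruption error counters
btrfs device stats /mnt/pool

# --check returns non-zero if any counter is non-zero
btrfs device stats --check /mnt/pool || logger "btrfs: device errors on /mnt/pool"
```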
That is apparently no longer the case, but ZFS is certainly more feature-rich and more battle-tested.
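For what it's worth, the mirror-pair layout from the top of the thread is a one-liner in ZFS, and growing it is too; a sketch with placeholder device names:

```sh
# Pool of two mirror vdevs, i.e. the mdadm+LVM layout in one command
zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

# Growing later: add another mirror pair
zpool add tank mirror /dev/sdf /dev/sdg

# Periodic integrity check
zpool scrub tank
```

(Expanding an existing raidz vdev one disk at a time is the newer feature, which landed in OpenZFS 2.3 via `zpool attach`.)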