I'm personally thinking of a small DIY rack stuffed with commodity HDDs off eBay, with LVM spanning a bunch of RAID1 arrays. I don't want any complex architectural solutions, since my homelab's scale always equals 1. As far as I can tell, this has few obvious drawbacks. What do you think?
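
For concreteness, a minimal sketch of that layout, assuming mdadm for the mirrors, hypothetical disks /dev/sdb through /dev/sde, and placeholder VG/LV names:

    # Two RAID1 mirrors from four disks (device names are placeholders)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd /dev/sde

    # One volume group spanning both mirrors, then a single logical volume on top
    pvcreate /dev/md0 /dev/md1
    vgcreate homelab /dev/md0 /dev/md1
    lvcreate -n bulk -l 100%FREE homelab
    mkfs.ext4 /dev/homelab/bulk

    # Growing later: add another mirror as a new PV and extend
    # vgextend homelab /dev/md2
    # lvextend -r -l +100%FREE /dev/homelab/bulk

The -r flag on lvextend resizes the filesystem along with the logical volume, so growth is a two-command operation per added mirror.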

    • melfie@lemy.lol · 12 hours ago

      Ha, I went down the whole Ceph and Longhorn path as well, then ended up with hostPath and btrfs. Glad I’m not the only one who considers the former options too much of a headache after fully evaluating them.

    • MrModest@lemmy.world · edited · 1 day ago

      Why btrfs and not ZFS? In my info bubble, btrfs has a reputation as an unstable FS, and people have ended up with unrecoverable data.

      • unit327@lemmy.zip · 12 hours ago

        Btrfs used to be easier to install because it's part of the mainline kernel, while ZFS required out-of-tree module shenanigans, though I think that has improved now.

        Btrfs also just works with whatever mismatched drive sizes you throw at it, and adding more drives later is easy. This used to be impossible with ZFS pools, but I think it's a feature now?
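
        A rough sketch of what that looks like on btrfs, assuming two hypothetical drives /dev/sdb and /dev/sdc of different sizes, a placeholder mount point /mnt/pool, and a third drive added later:

            # btrfs raid1 mirrors at the chunk level, so mismatched drive sizes are fine
            mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
            mount /dev/sdb /mnt/pool

            # Add a new device later and rebalance existing data across the pool
            btrfs device add /dev/sdd /mnt/pool
            btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool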

      • ikidd@lemmy.world · 19 hours ago

        Just the RAID 5/6 modes are shit. And there's its weird willingness to let you boot off a failed RAID without letting you know a drive is borked.
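
        One way to catch that manually, assuming the filesystem is mounted at a hypothetical /mnt/pool, is to check the per-device error counters and array membership yourself:

            # Per-device write/read/flush/corruption/generation error counters
            btrfs device stats /mnt/pool

            # Lists the pool's devices and flags any that are missing
            btrfs filesystem show /mnt/pool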

      • non_burglar@lemmy.world · 20 hours ago

        That is apparently not the case anymore, but ZFS is certainly richer in features and more battle-tested.