Admin: lemux
Issues and Updates: !server_news
Find me:
mastodon: @minnix@upallnight.minnix.dev
matrix: @minnix:minnix.dev
peertube: @minnix@nightshift.minnix.dev
funkwhale: @minnix@allnightlong.minnix.dev
writefreely: @minnix@tech.minnix.dev
According to the device reviews, there are lots of potential problems.
I was asking them to post their setup so I can evaluate their experience with regard to Proxmox and disk usage.
There is no way to get acceptable IOPS out of HDDs within Proxmox. Your IO delay will be insane. You could at best stripe a ton of HDDs, but even then one enterprise-grade SSD will smoke it as far as performance goes. Post screenshots of your current Proxmox HDD/SSD disk setup with your ZFS pool, services, and IO delay, and then we can talk. The difference enterprise gives you is night and day.
Yes, you don’t need Proxmox for what you’re doing.
ZFS absolutely does not require them in any way.
Who said it does? Also regarding Proxmox:
https://forum.proxmox.com/threads/consumer-grade-ssds.141190/post-632197
Looking back at your original post, why are you using Proxmox to begin with for NAS storage??
For ZFS what you want is PLP (power-loss protection) and high endurance (DWPD/TBW). This is what enterprise SSDs provide. Everything you’ve mentioned so far points to you not needing ZFS, so there’s nothing to worry about.
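To put rough numbers on DWPD vs. TBW: the two ratings are related by drive capacity and warranty length, so you can convert between them. The specs below are made-up example figures, not any particular drive:

```shell
#!/bin/sh
# Convert a drive's rated TBW (terabytes written) into DWPD
# (drive writes per day) over its warranty period.
# Usage: dwpd <tbw_in_TB> <capacity_in_TB> <warranty_years>
dwpd() {
  tbw=$1; cap=$2; years=$3
  # DWPD = TBW / (capacity * 365 days * warranty years)
  awk -v t="$tbw" -v c="$cap" -v y="$years" \
    'BEGIN { printf "%.2f\n", t / (c * 365 * y) }'
}

# Hypothetical 1.92 TB enterprise drive rated 3500 TBW over 5 years:
dwpd 3500 1.92 5    # roughly one full drive write per day

# Hypothetical 1 TB consumer drive rated 600 TBW over 5 years:
dwpd 600 1 5        # roughly a third of a drive write per day
```

That gap in sustained write endurance is a big part of why consumer drives get chewed up under ZFS workloads.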
Yes, I’m specifically referring to your ZFS pool containing your VMs/LXCs. Enterprise SSDs for that; get them on eBay. Just do a search on the Proxmox forums for enterprise vs. consumer SSDs to see the problem with consumer hardware for ZFS. For Proxmox itself you want something like an NVMe drive with DRAM, underprovisioned so the drive controller has a buffer of unused space to use for wear leveling.
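Underprovisioning is usually just leaving part of the drive unpartitioned. A sketch of the idea (the device name and the 80% figure are illustrative assumptions, not a recommendation for a specific drive):

```shell
#!/bin/sh
# Work out how much of a drive to partition so the rest stays
# unpartitioned for the controller's wear leveling.
# Usage: usable_bytes <total_bytes> <percent_to_use>
usable_bytes() {
  total=$1; pct=$2
  echo $(( total * pct / 100 ))
}

# Example: partition only 80% of a 512 GB (512000000000-byte) drive
usable_bytes 512000000000 80    # -> 409600000000

# On the actual device you'd then create a partition covering only
# that much; parted accepts percentages directly, e.g.:
#   parted -s /dev/nvme0n1 mklabel gpt
#   parted -s /dev/nvme0n1 mkpart primary 0% 80%
# (/dev/nvme0n1 is an assumed device name - double-check yours first)
```

The controller treats anything never written as spare area, so no special tooling is needed beyond just not allocating the space.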
ZFS is great, but to take advantage of its positives you need the right drives. Consumer drives get eaten alive, as @scrubbles@poptalk.scrubbles.tech mentioned, and your IO delay will be unbearable. I use Intel enterprise SSDs and have no issues.
The GBA SP is still my favorite form factor to this day.
Check out their Matrix rooms: https://pine64.org/community/
I read long ago that you had to get malware onto the air-gapped machine first, and even then it’s only accessible within a few meters and can’t be accessed through walls. That was years ago though, maybe it’s changed since.
If it’s the same, then after installing Docker, creating a vaultwarden user, adding that user to the docker group, and creating your vaultwarden directories, all that’s left is to curl the install script and answer the questions it asks.
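For reference, those steps look roughly like this. The user name, paths, and the script URL are placeholders, not from the actual setup, since the original post doesn’t name the script:

```shell
#!/bin/sh
# Rough sketch of the setup steps described above; run as root.
# All names and paths here are illustrative assumptions.

useradd -m vaultwarden             # dedicated service user
usermod -aG docker vaultwarden     # let it manage docker containers
mkdir -p /opt/vaultwarden/data     # persistent data directory
chown -R vaultwarden: /opt/vaultwarden

# Then fetch the install script (placeholder URL - substitute the
# project's real one) and answer its prompts:
#   curl -fsSLo install.sh https://example.com/install-script
#   chmod +x install.sh && ./install.sh
```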
I use Bitwarden, and the setup was fairly standard with the helper script. I use my own isolated proxy for all my services, so that was already built. I haven’t used Vaultwarden, but if anyone who has used both can tell me the differences, I could maybe help out.
It is interesting, but why is it in the Android community?
One of our co-hosts on the Lugcast got one and gave a little review of it. Star Labs responded in the comments https://youtu.be/0MG8c5HJew4?si=UnGhLtcWBkJG2D4M
I would say that’s not the best way to keep/restore backups, as you’re missing the integrity-checking features of a true backup system. But honestly, what really matters is how important the data is to you.
I did something similar when migrating to 8. Consumer SSDs suck with Proxmox, so I bought 2 enterprise SSDs on eBay before the migration and decided to do everything at once. I didn’t have all the moving parts you did, though. If you have an issue, you will more than likely not be able to pop the old SSDs back in and expect everything to work as normal. I’m not sure what you’re using to create backups, but if you’re not already, I would recommend PBS. That way, if there is an issue, restoring your VMs is trivial. As long as PBS is up and running correctly (make sure to restore a backup before making any changes, to confirm it works as intended) it should be OK. I have 2 PBS instances, one on-site and one off-site.
PBS will keep the correct IPs of your VMs so reconnecting NFS shares shouldn’t be an issue either.
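A quick way to sanity-check the PBS side before touching anything: list what’s in the datastore and do a throwaway test restore. The repository string, snapshot name, and target path below are placeholders (VM-level restores you’d normally do through the PVE GUI; the file-level client just confirms the datastore is healthy):

```shell
#!/bin/sh
# Placeholder repository: user@realm@host:datastore
export PBS_REPOSITORY='backup@pbs@pbs.example.lan:datastore1'

# Confirm backups are actually landing in the datastore:
proxmox-backup-client list

# Test-restore one archive somewhere harmless (snapshot name and
# target path are placeholders - pick a real snapshot from the list):
proxmox-backup-client restore \
  host/pve1/2024-01-01T00:00:00Z root.pxar /tmp/restore-test
```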
My mistake, I thought it was this one