You talk like there is no in-between with containers and VMs. You can use both.
What exactly are you referring to? ZIL? ARC? L2ARC? And what docs? I haven't found that called out in the official docs.
I've been using a consumer SSD for caching on ZFS for over 2 years now and haven't had any issues with it. I have a 54 TB pool with tons of reads and writes and no problems.
SMART reports 14% used.
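If anyone wants to try it, adding an SSD as an L2ARC read cache is a one-liner; rough sketch, pool name and device path below are placeholders for your own setup:

```sh
# attach the SSD as an L2ARC (read cache) vdev to an existing pool
zpool add tank cache /dev/disk/by-id/ata-EXAMPLE_SSD
# check how the cache device is doing later
zpool iostat -v tank
```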
You're recalling that wrong. ECC is recommended for any server system, but it's not necessary.
So one of the people complaining argued that they should have implemented the feature he needed instead of posting a tweet that took 20 seconds to write?
This person’s block was well deserved.
Who says that it is no longer maintained? https://github.com/containers/podman-compose Looks fine to me?
Surprised Transmission has issues seeding that many; I thought Transmission 4.x made improvements in that area. How much RAM does your system have? Maybe at some point you just need more system resources to handle the load.
PS - For what it’s worth, you can still stick with Transmission and/or other torrent clients and just spread the torrents among multiple client instances, e.g. run multiple Transmission instances, each seeding 1,000 torrents or whatever amount works for you.
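Rough sketch of what that could look like with Docker Compose, assuming the linuxserver.io Transmission image; ports, paths, and the PEERPORT values are just examples you'd adjust:

```yaml
services:
  transmission-1:
    image: lscr.io/linuxserver/transmission:latest
    environment:
      - PEERPORT=51413
    ports:
      - "9091:9091"            # web UI, instance 1
      - "51413:51413"
      - "51413:51413/udp"
    volumes:
      - ./instance1/config:/config
      - ./instance1/downloads:/downloads

  transmission-2:
    image: lscr.io/linuxserver/transmission:latest
    environment:
      - PEERPORT=51414         # different peer port so the two instances don't clash
    ports:
      - "9092:9091"            # web UI, instance 2 on a different host port
      - "51414:51414"
      - "51414:51414/udp"
    volumes:
      - ./instance2/config:/config
      - ./instance2/downloads:/downloads
```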
Those are duct tape solutions. Why use them when there is a good solution?
There are enough private trackers that don't require using a VPN.
There are tunnel protocols like 6to4, 6RD, and so on that let you get an IPv6 connection tunneled to you. Various routers support them.
Another option is to ask your ISP whether they will supply an IPv6 subnet to you.
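If the router can't do it, a plain Linux box can terminate a 6in4 tunnel itself. Very rough sketch with placeholder addresses; the real remote endpoint and prefix come from your tunnel broker:

```sh
# 6in4 (protocol 41) tunnel to a broker endpoint; all addresses are placeholders
ip tunnel add tun6 mode sit remote 203.0.113.1 local 192.0.2.10 ttl 255
ip link set tun6 up
ip addr add 2001:db8:1234::2/64 dev tun6   # your side of the tunnel
ip route add ::/0 dev tun6                 # send all IPv6 traffic through it
```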
Yep. They also claimed “it affects all GNU/Linux” while it really only affects CUPS and so on.
Full disclosure alone is a shit thing to do, never mind the part where it was supposedly intended as a responsible disclosure.
Qualcomm worked closely with Microsoft and the vendors before the launch to create those devices.
Linux device vendors probably did not get the same treatment, so give it time. Also, why not buy a Windows laptop and put Linux on it?
You can disable the web updater in the config, which is the default when deploying via Docker. The only time I had a mismatch was when I migrated from a native Debian installation to a Docker one and fucked up some permissions, and that was during tinkering while migrating. It's been solid for me ever since.
Again, there is no official Nextcloud auto-updater. OP chose to use an auto-updater, which bricked OP's setup (a plugin got disabled).
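For reference, the config flag I mean should be this one (name from memory, check config.sample.php for your version):

```php
// config/config.php – hide the web-based updater entirely
'upgrade.disable-web' => true,
```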
Docker is kind of a giant mess in my experience. The trick to it is creating backup plans to recover your data when it fails.
That's the trick for any production service, especially when you do an update.
They’re releasing a new version every two months or so and dropping them from support rapidly; pinning it with a tag means that in 12 months the install would be exploitable.
The lifecycle can be found with a single online search: https://github.com/nextcloud/server/wiki/Maintenance-and-Release-Schedule
Releases are maintained for roughly a year.
Set yourself a reminder if you'd otherwise forget.
What are you talking about? If you do not manually (or via something like Watchtower) pull the newest image, it will not update by itself.
I have never seen an auto-update feature in Nextcloud itself; can you please link to it?
The Docker image automatically updated the install to Nextcloud 30, but the Forms app requires Nextcloud 29 or lower.
Lol. Do not blame others for your incompetence. If you have automatic updates enabled, then it is your fault when they break things. Just pin the major version with a tag like nextcloud:29 or something. Upgrading major versions automatically in production is a terrible decision.
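A minimal compose sketch of what pinning looks like (volume path etc. are just placeholders):

```yaml
services:
  nextcloud:
    image: nextcloud:29        # pinned major version; bump to :30 deliberately, not automatically
    restart: unless-stopped
    volumes:
      - ./nextcloud:/var/www/html
```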
That brings me to what’s available. I almost pulled the trigger on a Synology DS423+. It looks reasonably powerful, and I can put in 4 SATA SSDs and 2 M.2 drives… or so I thought. It turns out it’s not possible to use M.2 as storage with anything but Synology’s own overpriced drives, which aren’t even available in my country.
You can use a script to make them available. Still a pain.
Since you only need 2 TB, why even bother with the M.2 slots?
Why do you think you need M.2 in the first place? I guess you are hung up on “SATA bad because M.2 new” (M.2 is btw only the connector, not the interface; there are SATA M.2 drives as well).
SATA can handle 6 Gbps. That’s 6 times more than most home network connections can even handle. Since you have not mentioned once how many Ethernet ports the systems have or how fast they are, I figure you only have a 1 Gbps LAN.
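(Quick math: 6 Gbps SATA is roughly 550 MB/s of usable bandwidth after encoding overhead, while 1 Gbps Ethernet tops out around 110 MB/s, so a single SATA SSD already saturates that network several times over.)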
Yes, NVMe SSDs are somewhat cheaper these days, but not so much that I would bother with it. We are only talking about 2 × 2 TB.
Thanks, my bad. OP was talking about Ethernet in some of his comments, so I somehow thought it was about a USB-connected NIC.
I agree; all this attention grabbing sounds to me as if this is actually not a big deal. But we will see, I guess.
No, that would make no sense and is obviously not what I meant.
But you could separate the arr stack from things like Pi-hole with a VM. For example, you could pin one thread to that VM so you will not bottleneck your DNS when you are putting heavy load on the rest of the system. This is just one example of what can be done.
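A rough sketch of what that pinning could look like on a libvirt/KVM host (the VM name "dns" and the core numbers are made up for the example):

```sh
# pin vCPU 0 of the "dns" VM to host core 3 so DNS stays responsive under load elsewhere
virsh vcpupin dns 0 3 --live
# persist the pinning across VM restarts
virsh vcpupin dns 0 3 --config
```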
Just because you do not see a benefit does not mean there is none.
Also, VMs are not “heavy”: thanks to virtualization technology built into modern hardware, VMs are quite light on the system. Yes, they still have overhead, but it's not like you are giving up big percentages of your potential performance, depending on the setup.