With all the backlash surrounding Reddit’s unpopular decision to restrict its API, effectively killing most third-party apps in the process, many users started wondering whether Reddit should remain their “front page of the Internet” and began exploring alternatives.
Background

People who have been on the internet for a while might be experiencing a bad case of déjà vu. For those who weren’t around back then, Reddit’s biggest user influx happened when the core users of Digg (a link aggregator popular in the late 2000s) started protesting changes their platform of choice had recently introduced.
Have you been able to load balance with multiple containers? I’m not really familiar with k8s.
Load balancing between pods is automatic thanks to Services: https://kubernetes.io/docs/concepts/services-networking/
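For anyone unfamiliar with how that works, here is a minimal sketch, assuming a Lemmy backend deployment (the names, image tag, and port below are assumptions, not manifests from this thread): the Service matches pods by label and spreads connections across every healthy replica, so adding replicas is enough to spread the load.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: lemmy-backend
spec:
  selector:
    app: lemmy-backend        # every Ready pod with this label becomes an endpoint
  ports:
    - port: 80
      targetPort: 8536        # assumed container port for the Lemmy backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lemmy-backend
spec:
  replicas: 3                 # scaling this up adds endpoints behind the Service automatically
  selector:
    matchLabels:
      app: lemmy-backend
  template:
    metadata:
      labels:
        app: lemmy-backend
    spec:
      containers:
        - name: lemmy
          image: dessalines/lemmy:0.18.0   # assumed image tag
          ports:
            - containerPort: 8536
```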
I also use Kubernetes to run my Lemmy instance. Sadly, pictrs uses its own “database” file, which can only be opened by a single pod: it refuses to run if the “database” lock is already held by another pod, making it impossible to scale up the number of pods. I wish it used Postgres instead of inventing its own database. I suspect this is one of the reasons why the large Lemmy instances have difficulty scaling up their servers.

You mean pictrs can’t scale, or the other pods can’t either? I separated lemmy-ui, the backend, and pictrs into different pods. I haven’t tried scaling anything yet, but I did notice the database issue with pictrs during a rolling restart and had to switch the deployment strategy to Recreate.
Only pictrs can’t scale. lemmy-ui and the backend seem to be stateless.
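A hedged sketch of the workaround described above (names, image tag, and mount path are assumptions): pin pictrs to a single replica and use the Recreate strategy so the old pod releases its database lock before the new one starts, instead of the default rolling update where both briefly run at once.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pictrs
spec:
  replicas: 1                  # the embedded database file allows only one running instance
  strategy:
    type: Recreate             # stop the old pod first so the lock is released before the new pod starts
  selector:
    matchLabels:
      app: pictrs
  template:
    metadata:
      labels:
        app: pictrs
    spec:
      containers:
        - name: pictrs
          image: asonix/pictrs:0.4       # assumed image tag
          volumeMounts:
            - name: pictrs-data
              mountPath: /mnt            # assumed data directory holding the lock file
      volumes:
        - name: pictrs-data
          persistentVolumeClaim:
            claimName: pictrs-data
```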
Great to hear; that will make it much easier if I start allowing users on my instance.
This is a really interesting observation. I’m curious whether the devs are aware that this breaks simple scaling efforts.
I saw that the Lemmy container has scheduled jobs. How did you handle that? That’s why I’m not sure whether Lemmy is really “stateless”.
https://lemmy.world/post/920294
Right, that’s a good point.
So far it’s working quite well, though for a micro-sized instance that’s no surprise. Worst case, I can do the same thing the lemmy.world admins did: create a dedicated scheduling pod using the same Docker image as the normal ones, but exclude it from the Service’s targets so it won’t receive any incoming traffic.
The rest of the pods can then be dedicated to serving traffic with their scheduling functionality disabled.
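As a rough sketch of that split (the label scheme and the environment variable for disabling scheduled tasks are assumptions, not confirmed Lemmy configuration): give the scheduler pod labels the Service selector does not match, and run the serving pods with scheduling switched off.

```yaml
# Serving deployment: matched by the Service, scheduled tasks disabled.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lemmy-serve
spec:
  replicas: 3
  selector:
    matchLabels:
      app: lemmy
      role: serve
  template:
    metadata:
      labels:
        app: lemmy
        role: serve                  # the Service selects role=serve only
    spec:
      containers:
        - name: lemmy
          image: dessalines/lemmy:0.18.0          # assumed image tag
          env:
            - name: LEMMY_DISABLE_SCHEDULED_TASKS # hypothetical switch; check the docs for the real one
              value: "true"
---
# Scheduler deployment: same image, single replica, excluded from the Service
# because role=schedule does not match the selector.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lemmy-scheduler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lemmy
      role: schedule
  template:
    metadata:
      labels:
        app: lemmy
        role: schedule
    spec:
      containers:
        - name: lemmy
          image: dessalines/lemmy:0.18.0
---
apiVersion: v1
kind: Service
metadata:
  name: lemmy-backend
spec:
  selector:
    app: lemmy
    role: serve                      # only the serving pods receive incoming traffic
  ports:
    - port: 80
      targetPort: 8536
```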
Do they have a write-up on their setup?