If you’re wondering why it has been looking quiet here, we stumbled upon an issue that caused lemdro.id to stop federating out its community content.
It should now be resolved but please let us know if there are any issues!
What turned out to be the problem?
Basically, the Lemmy backend service for some reason marked every instance we federated with as inactive, which caused it to stop outbound federation with just about everyone. I have a few working theories on why, but I'm not fully sure yet.
TL;DR lemmy bug, required manual database intervention to fix
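To illustrate the failure mode, here's a rough sketch (not Lemmy's actual code; the struct and field names are made up) of how an activity window like this can silently kill outbound federation: instances that haven't shown activity within the window get skipped, so if whatever refreshes the activity timestamp never runs, every instance eventually falls outside the window.

```rust
use chrono::{DateTime, Duration, Utc};

// Hypothetical instance record; the real schema and field names differ.
struct Instance {
    domain: String,
    // Last time any activity was seen from this instance.
    last_seen: DateTime<Utc>,
}

// Outbound federation only targets instances seen within the window.
// If the job that refreshes `last_seen` never runs, this eventually
// returns false for everyone and federation quietly stops.
fn is_active(instance: &Instance, window_days: i64) -> bool {
    instance.last_seen > Utc::now() - Duration::days(window_days)
}
```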
This was a stressful start to a vacation!
For a more detailed working theory…
I’ve been doing a lot of infrastructure upgrades lately. Lemdro.id runs on a ton of containerized services that scale horizontally for each part of the stack, globally and according to load. It’s pretty cool. But my theory is that, since the backend schedules the inactivity check for 24 hours after it starts, it was simply being restarted (upgraded) more often than that and never got a chance to check activity until it was too late.
theory:
scheduled task checks instances every 24 hours
I updated (restarted) it more often than every 24 hours
it never had a chance to run the check
???
failure
This isn’t really a great design for inactivity checking, and I’ll be submitting a pull request to fix it.
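For the curious, here's a minimal sketch of the two scheduling patterns (again, not Lemmy's actual scheduler code, and the function names are mine): delaying the first check a full interval after startup versus running it immediately and then on the interval. With frequent redeploys, only the second version ever gets to run the check.

```rust
use std::time::Duration;
use tokio::time::{interval, interval_at, Instant};

const DAY: Duration = Duration::from_secs(24 * 60 * 60);

// The fragile pattern: the first activity check only fires 24 hours
// after startup, so a process redeployed more often than that never
// runs it at all.
async fn fragile_schedule(run_check: impl Fn()) {
    let mut ticks = interval_at(Instant::now() + DAY, DAY);
    loop {
        ticks.tick().await;
        run_check();
    }
}

// The safer pattern: run the check once at startup (tokio's interval
// ticks immediately the first time), then keep the 24 hour cadence.
// Restarts then refresh activity instead of postponing the check.
async fn robust_schedule(run_check: impl Fn()) {
    let mut ticks = interval(DAY);
    loop {
        ticks.tick().await;
        run_check();
    }
}
```

Another option would be persisting the time of the last successful check, so a restart doesn't reset the clock; either way, the check stops depending on process uptime.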
Thanks for all your hard work!
I suspect the readers are a very technical crowd. I would love to know the details.
For anyone who wants to follow the GitHub issue:
https://github.com/LemmyNet/lemmy/issues/4039#issuecomment-1858728555
fyi, I replied in this thread!