• 12 Posts
  • 359 Comments
Joined 2 years ago
Cake day: August 10th, 2023

  • The cloud, and any form of managed database, inverts this. User accounts are extremely easy to manage, since both they and the database itself are provisioned automatically, with secrets you can easily rotate. There is also less to worry about with user rights, since you can dedicate one “instance” of a database to a certain type of data, instead of having more than one database within one instance.

    On top of that, traffic is commonly going to be routed through untrusted networks, hence the desire for encryption in transit.
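
    For what it’s worth, this is roughly what that looks like with plain PostgreSQL; the host and role names below are made-up placeholders:

        # Hypothetical example: one dedicated role per service, with a secret that is easy
        # to rotate, and TLS required on the connection (the encryption-in-transit part).
        psql "host=db.example.internal dbname=appdb sslmode=require" \
          -c "CREATE ROLE app_user LOGIN PASSWORD 'initial-secret';" \
          -c "GRANT CONNECT ON DATABASE appdb TO app_user;"

        # Rotating the secret later is a single statement:
        psql "host=db.example.internal dbname=appdb sslmode=require" \
          -c "ALTER ROLE app_user PASSWORD 'new-secret';"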


  • There are a few apps that I think fit this use case really well.

    LanguageTool is a spelling and grammar checker with a server-client model. LibreOffice now has built-in LanguageTool integration, where it can access a server of your choosing. I point it at the server I run locally, since Arch Linux packages LanguageTool.
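
    In case it’s useful, this is roughly how to run it; the jar path below is a guess at where a distro package might put it, so adjust it to your install:

        # Start the LanguageTool HTTP server locally (jar path is an assumption).
        java -cp /usr/share/java/languagetool/languagetool-server.jar \
          org.languagetool.server.HTTPServer --port 8081 --allow-origin "*"
        # Then point LibreOffice at it: Tools -> Options -> Language Settings ->
        # LanguageTool Server (location varies by version), base URL http://localhost:8081/v2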

    Another is Stirling-PDF. It is a really good PDF manipulation tool that people like, and it ships as a server with a web interface.
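
    If anyone wants to try it, it runs as a single container; the image name below is from memory and may have changed, so check their repo first:

        # Quick trial run of Stirling-PDF (image name/tag from memory; verify upstream).
        docker run -d \
          --name stirling-pdf \
          -p 8080:8080 \
          frooodle/s-pdf:latest
        # The web UI is then at http://localhost:8080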




  • I’ve seen three cases where the docker socket gets exposed to the container (perhaps there are more, but these are the ones I’ve come across):

    1. Watchtower, which does auto updates and/or notifies people

    2. Nextcloud AIO, which uses a management container that controls the docker socket to deploy the rest of the stuff Nextcloud wants.

    3. Traefik, which reads the docker socket to automatically reverse proxy services. (The snippet right after this list shows the socket mount all three rely on.)
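
    For context, “exposing the docker socket” concretely means a bind mount like the one below (shown with Watchtower); Traefik and the Nextcloud AIO master container mount the same path:

        # Mounting the host's docker socket into the container gives it control of the
        # docker daemon; this is the pattern all three cases above share.
        docker run -d \
          --name watchtower \
          -v /var/run/docker.sock:/var/run/docker.sock \
          containrrr/watchtower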

    Nextcloud does the AIO because Nextcloud is a complex service, and it grows even more complex if you want more features or performance. The AIO handles deploying all the tertiary services for you; something like this is how you would do it yourself: https://github.com/pimylifeup/compose/blob/main/nextcloud/signed/compose.yaml . Note that this example docker compose does not include other services, like Collabora Office, the web-based Google Docs/Sheets/Slides alternative.

    Compare this to the Kubernetes deployment, which, yes, may look intimidating at first. But many of the complexities that the Docker deploy of Nextcloud has are automated away. Enabling Collabora Office is just collabora.enabled: true in its configuration. Tertiary services like Redis and the database are included in the Kubernetes package as well. Instead of making you configure the containers themselves, it lets you configure the database parameters via YAML, along with other nice things.
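
    Roughly what that looks like with the upstream Nextcloud Helm chart; the value names here are from memory of its values.yaml, so double check them before relying on this:

        # Add the chart repo, then enable Collabora plus the bundled database and Redis.
        helm repo add nextcloud https://nextcloud.github.io/helm/
        helm install nextcloud nextcloud/nextcloud \
          --set collabora.enabled=true \
          --set postgresql.enabled=true \
          --set redis.enabled=true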

    For case 3, Kubernetes has a feature called an “Ingress”, which is essentially a standardized configuration for a reverse proxy: you can either manage one separately, or one is provided as part of the packages. For example, the Nextcloud Kubernetes package I linked above has a way to handle ingresses in its config.
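
    For anyone who hasn’t seen one, a bare Ingress object is just a small manifest like the one below (hostname and service name are placeholders); a chart-provided ingress templates the same kind of object for you:

        # Write a minimal Ingress manifest and apply it.
        printf '%s\n' \
          'apiVersion: networking.k8s.io/v1' \
          'kind: Ingress' \
          'metadata:' \
          '  name: nextcloud' \
          'spec:' \
          '  rules:' \
          '    - host: nextcloud.example.com' \
          '      http:' \
          '        paths:' \
          '          - path: /' \
          '            pathType: Prefix' \
          '            backend:' \
          '              service:' \
          '                name: nextcloud' \
          '                port:' \
          '                  number: 8080' > ingress.yaml
        kubectl apply -f ingress.yaml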

    Kubernetes handles these things pretty well, and it’s part of why I switched. I do auto upgrade, but only within a service’s supported stable release, where upgrades are compatible and won’t break anything. That way I get automatic security updates for a period of time, before having to do a manual and potentially breaking upgrade.

    TLDR: You are asking questions that Kubernetes has answers to.







  • At just a glance, I have some theories:

    1. The project was named after the creator. Maybe they wanted it to seem more community-organized.

    2. Food reference. FOSS developers often name stuff after food; I don’t know why. Maybe because they like mangos (I do too).

    3. Dodging copyright or trademark issues. Certain things, like town names (Wayland is a town in the US), are essentially impossible to copyright or trademark, so by naming your project after one you eliminate a whole host of potential legal issues.

    These are just theories though. No reason is actually given, at least not that I could find based on 30s of searching.


  • Many Helm charts, like authentik’s or forgejo’s, integrate Bitnami Helm charts for their databases. That’s why this is concerning to me.

    But I was planning to switch to operators like CloudNativePG for my databases anyway and disable the built-in Bitnami images. When using the built-in Bitnami images, automatic migration between major releases is not supported; you have to do it manually, which disappointed me.
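
    For reference, the operator route looks roughly like this; names and sizes are placeholders, and pointing a chart like authentik’s at the external cluster is then a matter of disabling its bundled database in the values and supplying the new host and credentials:

        # Minimal CloudNativePG cluster (placeholder name/size); the operator then handles
        # provisioning, replication, and failover for you.
        printf '%s\n' \
          'apiVersion: postgresql.cnpg.io/v1' \
          'kind: Cluster' \
          'metadata:' \
          '  name: authentik-db' \
          'spec:' \
          '  instances: 2' \
          '  storage:' \
          '    size: 10Gi' > cluster.yaml
        kubectl apply -f cluster.yaml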


  • I’m on my phone rn and can’t write a longer post. This comment is to remind me to write an essay later. I’ve been using authentik heavily for my cybersecurity club and have a LOT of thoughts about it.

    The tl;dr about authentik’s risk of enshittification is that authentik follows a pattern I call “supportware”: extremely (intentionally/accidentally) complex software that (intentionally/accidentally) leaves edge cases out of its docs, because you are supposed to pay for support.

    I think this is a sustainable business model, and I think Keycloak (and other Red Hat software) shows some similar patterns.

    The tl;dr about authentik itself is that it has a lot of features, but not all of them are relevant to your use case, or worth the complexity. I picked up authentik for invites (which afaik are a rare feature; also, the official docs about setting up invites were wrong, see supportware), but invites may not be something you care about.

    Anyway, longer essay/rant later. Despite my problems, I still think authentik is the best fit for my use case (cybersecurity club); the other options I’ve looked at, like Zitadel (which seems more developer-focused) or LDAP + an SSO service (no invites, afaik), are worse fits.

    Sidenote: Microsoft Entra offers features similar to what I want from authentik, but I wanted to self-host everything.



  • So instead you decided to go with Canonical’s snap and its proprietary backend, a non-standard deployment tool that was forced on the community.

    Do you avoid all containers as well, because they weren’t the standard way of deploying software for “decades”? (I know people who actually do that, though.) And many of my issues with developers and vendoring, which I mentioned in the other thread I linked earlier, apply to containers as well.

    In fact, they apply to snap as well, and to any custom packages distributed by the developer. Arch packages are little more than shell scripts; deb packages have pre/post hooks which run arbitrary bash or Python code; rpm is similar. These “hooks” are almost always used for install-time tasks. It’s hypocritical to be against curl | bash but in favor of any form of packages distributed by the developers themselves, because all of the issues with curl | bash apply to any non-distro-distributed package, including snaps.
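
    To make the hook point concrete, a hypothetical deb maintainer script looks like this; dpkg runs it as root during installation:

        #!/bin/sh
        # Hypothetical DEBIAN/postinst: dpkg executes this as root right after unpacking,
        # which is exactly the arbitrary-code step described above.
        set -e
        systemctl daemon-reload
        systemctl enable --now example.service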

    You are willing to criticize bash because you can’t immediately know what it does to your machine, and I recognize those problems, but guess what snap is doing under the hood to install software: a bash script. Did you read that bash script before installing the microk8s snap? Did you read the tens of others in the repos, used for tertiary tasks, that the snap installer also calls?

    # Try to symlink /var/lib/calico so that the Calico CNI plugin picks up the mtu configuration.

    The bash script used for installation doesn’t seem to be sandboxed, either, and it runs as root. I struggle to see any difference between this and a generic bash script used to install software.

    Admittedly, almost all package managers, except for Nix/Guix, make heavy use of pre/during/post install hooks, so it’s not really a valid criticism to put, say, deb on a pedestal while dogging on other package managers for using arbitrary bash (or Python) hooks.

    But back on topic: on top of all this, you can’t even verify that the bash script in the repo is the one you’re actually getting, because the snap backend is proprietary. Snap is literally a bash installer, but worse in every way.



  • In my opinion, you are starting too big. It’s better to start smaller. Many locations have a “Linux User Group” or “hackerspace” or a “Computing Club”. (Those are exact keywords you can try searching for).

    And oftentimes, those organizations host their own small set of services for their members. For example, when I was searching for help setting something up with Kubernetes, I came across a blog where the author hosts services for their “Chaos Computing Club”, like Proxmox, Nextcloud (which has a calendar app), Matrix, and Forgejo.

    Instead of trying to spin up a set of services for the whole “FOSS community”, start smaller and just host for your local groups. Maybe your local hackerspace already hosts these services.

    To find local meetups, I checked out https://meetup.com/, which has a lot.

    As for me personally, I am trying to put together services for the cybersecurity club at my school. Right now I have centralized identity and virtual machine hosting for members to access and play with, but I also want to host extra services like the stuff you mentioned, because your reasons for wanting them are good.

    On my blog, I discuss my plans and steps: https://moonpiedumplings.github.io/projects/build-server-6/

    I think creating a “FOSS hub” overall is a really, really big challenge, because the groups that make up the FOSS world have a heterogeneous set of interests, and an even more heterogeneous set of users.

    A simple example is the language barrier. Fun fact: there exist alternatives to the usual English-first apps that put another language first, centered around the communities that speak it. For example, the opendesk docs are in German first. Of course, there are English docs too, but the problem shows up with engagement.

    For something like a FOSS hub, user engagement is critical, and one of the best ways to get engaged users is dogfooding, where users contribute back to the software they use. But with software that treats one language as a first-class citizen and others as secondary, there is a bump when users want to dogfood.

    The other problem is that the users themselves have different needs and wants. One user or set of users hates email and never wants to touch it. Another wants to use plain email exclusively for everything, including as an alternative to code forges, discussion platforms, and scheduling systems. One set of users prefers Discord, another prefers IRC. They could meet in the middle on Matrix, but yet another set of users hates Matrix for being VC-funded, and it’s just a clusterfuck.

    You cannot make both groups of users happy. When you try to please everybody, you end up pleasing nobody.

    What you can do, however, is catch the needs of your local groups and slowly expand from there. I think a FOSS hub is possible, but trying to start it as a FOSS hub is bound for failure, because the scope is too large.

    I think the closest thing right now is Disroot, which hosts a lot of services, but again, Disroot uses XMPP whereas some people may prefer Matrix for this use case, and there are plenty of other nitpicks.



  • I’ve tried snap, juju, and Canonical’s suite. They were uniquely frustrating and I’m not interested in interacting with them again.

    The future of installing system components like k3s on generic distros is probably systemd sysexts, which are extension images that can be overlaid onto a base system. The mechanism is designed for immutable distros, but it can be used on any standard-enough distro.
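
    The workflow, once you have an extension image, is roughly this (k3s.raw is a placeholder name):

        # Drop the extension image where systemd-sysext looks for extensions, then merge it
        # into the live system; "merge" overlays the image's /usr onto the host's.
        sudo mkdir -p /var/lib/extensions
        sudo cp k3s.raw /var/lib/extensions/
        sudo systemd-sysext merge
        systemd-sysext list    # shows which extensions are currently active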

    There is a k3s sysext, but it’s still in the “bakery”. Plus, sysext isn’t in stable-release distros yet anyway.

    Until it’s out and stable, I’ll stick to the one-time bash script to install SUSE’s k3s.
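
    For reference, that script is the upstream installer; downloading and reading it before running it sidesteps the curl-pipe-bash complaint from the other thread:

        # Fetch the upstream k3s installer, read it, then run it.
        curl -sfL https://get.k3s.io -o install-k3s.sh
        less install-k3s.sh
        sudo sh install-k3s.sh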