PM_ME_VINTAGE_30S [he/him]

Anarchist, autistic, engineer, and Certified Professional Life-Regretter. If you got a brick of text, don’t be alarmed; that’s normal.

No, I’m not interested in voting for your candidate.

  • 1 Post
  • 53 Comments
Joined 1 year ago
Cake day: July 9th, 2023

  • Honest question: Why does it matter if he’s a transphobe when choosing which Fediverse software to use?

    1. Because some people have actually financially supported him. I’m not trans, but I would be devastated to know that my money went to feed someone who wants to destroy me.
    2. I already have trouble convincing transgender people in my social circle that Lemmy, as software, is safe for them to use now and will stay safe and inclusive in the future, even with the variety of trans-inclusive servers like yours.

    A great example of (2) is the fate of PolyMC. Thankfully, the other developers forked it into Prism, but transphobia put that whole project in jeopardy for a bit.

    > The software is FOSS and anyone can make their own instance.

    IMO that’s why I’m not immediately dropping my account and running for the hills, but it’s still not good. Most people don’t have the technical skills or the interest in learning them to run their own instance.

    > I really want to understand what I might be missing.

    IMO it’s that even though he does not personally control how Lemmy instances are run, and even though FOSS gives us a good degree of robustness against transphobia, it is still both morally and technically ill-advised to have a transphobe at the helm of an open-source project.

  • I believe it can use ChatGPT, or you could run a local GPT or one of several other LLM architectures.

    GPTs are trained to predict the next word (more casually, they’re a “spicy autocomplete”), whereas BERTs are trained to “fill in the blanks”, i.e., recover masked words anywhere in the text. So it might be worth looking into other LLM architectures if you’re not in the market for an autocomplete.
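    To make the “autocomplete vs. fill-in-the-blank” distinction concrete, here’s a toy sketch (my own illustration, not a real LLM — real GPTs and BERTs are transformers, not word counts): the same tiny bigram model used two ways, once GPT-style to guess the next word, once BERT-style to score a word against both of its neighbors.

    ```python
    # Toy illustration of the two training objectives, using word-pair
    # counts instead of a neural network. NOT how real LLMs work inside,
    # just the shape of the two tasks.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # bigrams[w] counts which words follow w in the corpus.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def autocomplete(prev_word):
        """GPT-style: predict the most likely next word from what came before."""
        return bigrams[prev_word].most_common(1)[0][0]

    def fill_blank(left, right, vocab=frozenset(corpus)):
        """BERT-style: pick the word that best fits between BOTH neighbors."""
        return max(vocab, key=lambda w: bigrams[left][w] + bigrams[w][right])

    print(autocomplete("the"))      # -> cat ("cat" follows "the" most often)
    print(fill_blank("cat", "on"))  # -> sat ("the cat ___ on the mat")
    ```

    The point of the toy: the autocomplete only ever looks leftward, while the blank-filler gets to use context on both sides, which is why BERT-style models tend to suit classification and search-ranking tasks better than text generation.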

    Personally, I’m going to look into this; it would also be a good excuse to learn about Docker and how SearXNG works.


  • LLMs are not necessarily evil. This project seems to be free and open source, and it allows you to run everything locally. Obviously this doesn’t solve everything (e.g., the environmental impact of training, systemic bias learned from datasets, usually the weights themselves are derived from questionably collected datasets), but it seems like it’s worth keeping an eye on.

    > Google using AI, everyone hates it

    Because Google has a long history of immediately doing the worst shit imaginable with new technology. Google (and other corporations) must be viewed with extra suspicion compared to any other group or individual, because they have repeatedly proven themselves the most likely to abuse it.

    If Google does literally anything, it sucks by default, and it’s going to take a lot more proof to convince me otherwise for a given Google product. The same goes for Meta, Apple, and any other corporation.