• 10 Posts
  • 21 Comments
Joined 5 years ago
Cake day: June 8th, 2019

  • They just allocate according to a different logic than the mainstream American FOSS ideology. For instance, hackerbros (and you seem to say the same) will tell you that resources should be centralized into the biggest project in each category so that more and more features can be added to it. Regardless of co-optation by the private sector, this is generally a bad idea: it leads to a monoculture, and monoculture leads to critical bugs that impact enormous numbers of users. It is also predicated on the idea that there should be only a single way to fulfill a specific use case, and that this way is the same throughout the world, erasing cultural, economic, social, biological and political differences. Optimization requires standardization; standardization requires the erasure and suppression of minoritarian voices, and it is therefore oppressive. Maximizing it is not a good idea, for technical, political and ethical reasons alike.

    Seeding new projects that better fit local contexts, or that simply provide diverse alternatives, increases diversity, and diversity in turn raises the resilience of the software ecosystem as a whole.

  • This paper describes a taxonomy of harms created by LLMs: https://dl.acm.org/doi/pdf/10.1145/3531146.3533088

    OpenAI released ChatGPT without systems to prevent or compensate for these harms, despite being fully aware of the consequences, since this kind of research has been going on for several years. Since then they have put paper-thin countermeasures in place for some of these problems, but they are still pretty much a shit-show in terms of accountability. Most likely they will get sued into oblivion before regulators outlaw LLMs with dialogical interfaces. This won't do much about the harm that open-source LLMs will create, but it will at least limit large-scale harm to the general population.


  • chobeat@lemmy.mlOPtoTechnology@lemmy.mlAI panic is a marketing strategy
    1 year ago

    It's not from me but from AlgorithmWatch, one of the most famous and respected NGOs in the field of algorithmic accountability. They have published plenty of material on these topics and on the human-rights threats posed by these companies.

    Also, this is an ecosystem analysis of political positioning. These companies and think tanks are going to newspapers, under their own names, to say we should panic about AI. It's not a secret: just open Google News and a simple search will turn up a landslide of stories on these topics sponsored by these companies.

  • chobeat@lemmy.mlOPtoTechnology@lemmy.mlAI panic is a marketing strategy
    1 year ago

    They published a deliberately harmful tool against the advice of civil society, experts and competitors. They are not only reckless; they have been tasked since their foundation with the mission of creating chaos. Don't forget that the original idea behind OpenAI was to undercut the advantage Google and Facebook had in AI by releasing machine-learning technology as open source. They definitely did that, and now they are expanding their goals. They are not in it for the money (ChatGPT will never be profitable); they are playing a bigger game.

    Pushing the AI panic is not just a marketing strategy but a way to build power. The more dangerous they are considered, the more regulations will be passed that impact the whole sector. https://fortune.com/2023/05/30/sam-altman-ai-risk-of-extinction-pandemics-nuclear-warfare/