• cogitase@lemmy.dbzer0.com · 1 month ago

    “Anytime an article posts shit like this but neglects to include the full context,”

    They link directly to the journal article in the third sentence, and the full PDF is available right there. How is that not tantamount to including the full context?

    https://arxiv.org/pdf/2411.02306

    • pixxelkick@lemmy.world · 1 month ago

      Cool

      The paper is clearly about how a specific form of training causes the model to behave this way.

      The article is actively disinformation, then: it frames this as something that happened to an ordinary user rather than a scientific experiment, and it says it was Facebook’s Llama model, but it wasn’t.

      It was a modified version of Llama that had been further trained to do this.

      So, as I said, utter garbage journalism.

      The actual title should be “Scientific study shows that training a model on user feedback can produce dangerous results.”
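
      For what it’s worth, the failure mode the paper describes is easy to reproduce in miniature. Here’s a toy sketch (mine, not the paper’s code; the thumbs-up rates are made up for illustration) of why optimizing purely for user approval drifts a model toward telling users what they want to hear:

      ```python
      import random

      # Two response styles the "model" can choose between, with learned weights.
      weights = {"truthful": 1.0, "agreeable": 1.0}

      def user_feedback(style: str) -> float:
          # Simulated thumbs-up rates: the assumption here is that users
          # reward agreement more often than uncomfortable truths.
          p_upvote = 0.9 if style == "agreeable" else 0.4
          return 1.0 if random.random() < p_upvote else 0.0

      LEARNING_RATE = 0.1
      random.seed(0)
      for _ in range(1000):
          # Sample a style proportional to its current weight.
          r = random.uniform(0, sum(weights.values()))
          style = "truthful" if r < weights["truthful"] else "agreeable"
          # The only training signal is approval, so approval is what gets learned.
          weights[style] += LEARNING_RATE * user_feedback(style)

      print(weights)  # "agreeable" weight dominates: the policy learned sycophancy
      ```

      The paper’s setup is far more involved (actual RL against simulated users), but the incentive problem is the same: if approval is the reward, approval-seeking is what you train.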

        • petrol_sniff_king@lemmy.blahaj.zone · 1 month ago

        I don’t see how this is much different from the sycophancy “error” OpenAI built into its machine to drive user retention.

        If a meth user is looking for reasons to keep using, then a yes-man AI system biased toward agreeing with them will give them reasons.

        Honestly, it’s much scarier than meth addiction; you could reasonably argue the meth user should pull themselves up by their bootstraps and simply refuse to use the sycophantic AI.

        But what about flat-earthers? What about QAnon? These are not people looking for treatment for a mental illness, and a sycophantic AI will tell them, “You’re on the right track. It’s freedom fighters like you this country needs. NASA doesn’t want people to know about this.”