• Sterile_Technique@lemmy.world · 2 days ago

    The bullshit generators we call ‘AI’ don’t assume, and aren’t frantic: they just regurgitate an output based on as much bullshit input as we can stuff into them.

    The output can be more or less recognizable as bullshit, but the computer doesn’t distinguish between the two.

    • Lvxferre [he/him]@mander.xyz · 2 days ago

      Yup, pretty much. And the field is full of red-herring terms that can mislead you into believing otherwise: “hallucination”, “semantic” supplementation, “reasoning” models, large “language” model…

      • BradleyUffner@lemmy.world · 2 days ago

        Those “reasoning models” are my favorite. It’s basically the equivalent of adding another pass through the generator with the additional prompt “now sprinkle in some text that makes it look like you are thinking about each part of your answer”.
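
        As a toy sketch of that idea (`generate` and `answer_with_reasoning` here are made-up placeholders, not any real API):

        ```python
        # Toy sketch: "reasoning" as a second pass through the same generator.
        # `generate` is a hypothetical placeholder, not a real library call.

        def generate(prompt: str) -> str:
            # Stand-in for an LLM completion; returns canned text so the sketch runs.
            return f"[model output for: {prompt[:40]}...]"

        def answer_with_reasoning(question: str) -> str:
            draft = generate(question)
            # Second pass: same generator, now prompted to dress the draft up
            # with step-by-step-looking text. The underlying token prediction
            # is unchanged.
            return generate(
                "Rewrite the answer below so it looks like you thought about "
                "each part step by step.\n"
                f"Question: {question}\nAnswer: {draft}"
            )

        print(answer_with_reasoning("Why is the sky blue?"))
        ```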

        • Lvxferre [he/him]@mander.xyz · 2 days ago

          Do you want my guess? The current “fight” will go on until the AI bubble bursts. None of the current large token models will survive; they’ll simply be ditched as “unprofitable”. Instead you’ll see a bunch of smaller models popping up for more focused tasks, advertised as something other than AI (perhaps as a “neural network solution” or similar).

          So Grok, Gemini, GPT, they’re all going the way of the dodo.

          That’s just my guess though. It could be wrong.

          • snooggums@piefed.world · 2 days ago

            Small, focused learning models and other forms of AI have been in use for decades.

            The current bubble is just an attempt to make LLMs do literally everything, including accurately answering questions, even though their core design includes randomization to make them seem more human.
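
            A minimal sketch of that randomization, with invented tokens and scores (this illustrates temperature sampling in general, not any specific model's internals):

            ```python
            # Minimal sketch of sampled next-token choice with a temperature.
            # The tokens and logits below are invented for illustration.
            import math
            import random

            def sample_token(logits: dict[str, float], temperature: float = 0.8) -> str:
                # Softmax over temperature-scaled scores: higher temperature
                # flattens the distribution, so unlikely tokens win more often.
                weights = [math.exp(score / temperature) for score in logits.values()]
                return random.choices(list(logits), weights=weights, k=1)[0]

            # Same prompt, different runs, possibly different answers; by design.
            for _ in range(3):
                print(sample_token({"Paris": 3.0, "London": 1.5, "Berlin": 1.0}))
            ```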

            • Lvxferre [he/him]@mander.xyz · 2 days ago

              Yes, but I think the ones you’ll see after the bubble bursts will be a bit different. For example, they might incorporate current attempts at natural language processing, even if in a simplified way.

      • BananaIsABerry@lemmy.zip · 2 days ago

        LLM creators: *feeds an algorithm millions of lines of text*

        Some dude on the internet: “language”

        • Lvxferre [he/him]@mander.xyz · 2 days ago

          I use those quotation marks because IMO they’re better described as large token models than large language models. They have rather good morphology and syntax, but once you look at the higher layers (semantics and especially pragmatics), they drop the ball really hard, even though those layers are way more important than the lower ones.

          For a rough analogy, it’s like a taxidermised cat: some layers (the skin and fur) are practically identical to the real thing, but it’s missing what makes a cat a cat, you know? It’s still useful if you want some creepy decor, but don’t expect the taxidermised critter to ruin your furniture or use your belly as a sleeping pad.