Another AI fail. Letting AI write code and modify your file system without sandboxing or backups. What could go wrong?

  • megopie@beehaw.org · edited · 1 day ago

    Exactly. They’re just probabilistic models. LLMs just output something that statistically could come next. But that statistical process doesn’t capture any real meaning or conceptualization, just vague associations of which words are likely to show up, and in what order.
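
    As a toy illustration of “statistically likely continuation” (a sketch with made-up probabilities, not how any real model is implemented), you could sample each next word from a table of how often words tend to follow each other:

    ```python
    import random

    # Invented bigram table: given the previous word, how likely each
    # candidate next word is. Real LLMs learn something far richer, but
    # the core step is still "pick a continuation in proportion to its
    # probability".
    next_word_probs = {
        "the": {"cat": 0.4, "dog": 0.4, "idea": 0.2},
        "cat": {"sat": 0.5, "ran": 0.3, "meowed": 0.2},
        "dog": {"barked": 0.6, "sat": 0.4},
    }

    def continue_text(word, steps=3):
        out = [word]
        for _ in range(steps):
            options = next_word_probs.get(out[-1])
            if not options:
                break  # no statistics for this word, so stop
            words, probs = zip(*options.items())
            out.append(random.choices(words, weights=probs, k=1)[0])
        return " ".join(out)

    print(continue_text("the"))  # e.g. "the cat sat" -- plausible, but nothing here "understands" cats
    ```

    The output can read like a sensible phrase even though the program has no concept of what a cat is; it is only following the statistics in the table.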

    What people call hallucinations are just the system’s actual functional behavior diverging from people’s expectations of what it is doing: expecting it to think and understand, when all it is doing is outputting a statistically likely continuation.