Exactly. They’re just probabilistic models. LLMs output something that statistically could be what comes next. But that statistical process doesn’t capture any real meaning or conceptualization, just associations of which words are likely to show up, and in what order they’re likely to show up.
What people call hallucinations are just the system’s actual behavior diverging from the user’s expectation of what it is doing: expecting it to think and understand, when all it is doing is outputting a statistically likely continuation.
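To make concrete what “statistically likely continuation” means, here’s a toy sketch: a word-pair (bigram) counter that generates text purely from co-occurrence counts. This is not how LLMs actually work internally (they use neural networks over long contexts, not simple counts), and all names here are made up for illustration; the point is just that plausible-looking continuations can fall out of pure statistics with no understanding behind them.

```python
import random
from collections import defaultdict, Counter

# Toy "training data" -- a few sentences standing in for a real corpus.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count which word tends to follow which word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Sample a continuation in proportion to how often it followed `word`."""
    words, counts = zip(*follows[word].items())
    return random.choices(words, weights=counts, k=1)[0]

# Generate a "statistically likely continuation" -- no meaning, just counts.
out = ["the"]
for _ in range(8):
    out.append(next_word(out[-1]))
print(" ".join(out))
```

Run it a few times and it produces grammatical-looking strings like “the cat sat on the rug . the dog” that nothing in the program understands.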