None of this would happen if people recognized that, at best, AI has the intelligence level of a child. It has a lot of knowledge (some of which is hallucinated, but that’s beside the point) but none of the responsibility that you’d hope an adult would have. It’s also not capable of learning from its own mistakes or being careful.
There’s a whole market for child safety stuff: corner foam, child-proof cabinet locks, power plug covers, etc. You want all of that in your system if you let the AI run loose.
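To make the child-proofing idea concrete, here is a minimal sketch of one such “cabinet lock”: a wrapper that only lets an AI agent run commands from a short allowlist and asks a human before anything else. The allowlist and all the names here are purely illustrative assumptions, not any real agent framework.

```python
# A minimal sketch of "child-proofing" a system before letting an AI agent loose in it.
# Everything here (the allowlist, the confirmation prompt) is illustrative, not a real framework.
import shlex
import subprocess

ALLOWED_COMMANDS = {"ls", "cat", "git"}  # the "cabinet lock": a short allowlist

def run_agent_command(command: str) -> str:
    """Run a command proposed by the agent only if it passes the guardrails."""
    parts = shlex.split(command)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        # Anything outside the allowlist needs an adult in the room.
        answer = input(f"Agent wants to run {command!r}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return "blocked by guardrail"
    result = subprocess.run(parts, capture_output=True, text=True, timeout=30)
    return result.stdout
```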
AI does not hallucinate, since it has no consciousness or thinking.
I genuinely considered writing “confabulated” instead of “hallucinated” but decided to stick with the latter because everyone knows what it means by now. It also seems that ‘hallucination’ is the term of art for this: https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
So while I appreciate pedantry and practice it myself, I do stand by my original phrasing in this case.
It isn’t pedantry in the case I’m making. I’m making more of a moral/ethical point: it’s unfair, and probably ableist, to compare people who actually do hallucinate with something that doesn’t actually do that.
It is robbing the word of any value or meaning and kind of making fun of them in the process, downplaying what they go through.
I see, that’s different from how I interpreted it. Thanks for clarifying.
I don’t really see it that way. To me it’s not downplaying anything. AI ‘hallucinations’ are often disastrous, and they can and do cause real harm. The use of the term in no way makes human hallucinations sound any less serious.
As a bit of a tangent: unless you experience hallucinations yourself, neither you nor I know how the people who do experience them feel about the use of this term. If life has taught me anything, it’s that they won’t all have the same opinion or reaction anyway. Some would be opposed to the term being used this way, some would think it’s a perfect fit and should continue. At some point, changing language to accommodate a minority viewpoint just isn’t realistic.
I don’t mean this as a blanket statement though; there are definitely cases where I think a certain term is bad for whatever reason and agree it should change. It’s a case-by-case thing. The change from `master` to `main` as the default branch name in git springs to mind. In that case I actually think the term `master` is minimally offensive, but literally no meaning is lost by switching to `main`, and that one is definitely not offensive, so I support the switch. For ‘hallucination’ it’s just too good of a fit, and is also IMO not offensive. Confabulation isn’t quite as good.
Exactly, they’re just probabilistic models. LLMs are just outputting something that statistically could be what comes next. But that statistical process does not capture any real meaning or conceptualization, just vague associations of when words are likely to show up, and what order they’re likely to show up in.
What people call hallucinations are just the system’s functional capability diverging from their expectation of what it is doing: expecting it to think and understand, when all it is doing is outputting a statistically likely continuation.
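To illustrate what “a statistically likely continuation” means in the simplest possible terms, here is a toy bigram sketch (obviously nowhere near a real LLM): it picks the next word purely from co-occurrence counts in a tiny corpus, with no meaning or conceptualization involved. The corpus and function names are made up for the example.

```python
# Toy illustration of "outputting a statistically likely continuation":
# a bigram model whose only "knowledge" is counts of which word follows which.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count word-to-next-word transitions in the tiny corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continuation(word, length=5):
    out = [word]
    for _ in range(length):
        counts = follows.get(out[-1])
        if not counts:
            break
        words, weights = zip(*counts.items())
        # Sample the next word in proportion to how often it followed the previous one.
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(continuation("the"))  # e.g. "the cat sat on the rug" -- plausible-sounding, not "understood"
```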
A child, on acid and meth. You should never let it run loose, no matter how many safeguards. Not if your code is business critical.
I am never quite sure if the I in AI stands for intelligence or ignorance.