As an Artificial Intelligence proponent, I want to see the field succeed and go on to do great things. That is precisely why the current exaggerated publicity…
Intelligence has been defined in many ways: the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving.
LLMs are pretty capable of abstraction and understanding.
Though they obviously use logic in the sense that they are built out of it, they are not capable of genuine logical analysis, only of emulating it.
They can’t really exhibit any of the other attributes of intelligence at all, beyond emulating them somewhere between decently and poorly.
The problem with these definitions is that they are verbal. Some could argue that ChatGPT is capable of understanding, while others could argue the opposite. I don’t even believe it is capable of abstraction.
The Turing test was novel in that it let us test the intelligence of AIs without actually defining intelligence. And it’s still useful, because researchers probably can’t agree on a rigorous definition of intelligence.
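Since Turing’s setup is a protocol rather than a definition, a toy sketch makes the point concrete. This is a minimal, purely illustrative sketch; canned_human, canned_machine, and naive_judge are hypothetical stand-ins I’m inventing here, not anything from Turing’s paper:

```python
import random

# Sketch of the imitation game: a judge interrogates two hidden
# respondents and only has to guess which one is the machine, so no
# definition of intelligence is ever needed. All participants below
# are hypothetical stand-ins.

def canned_human(prompt: str) -> str:
    return "Honestly, I'd have to think about that one."

def canned_machine(prompt: str) -> str:
    return "As a language model, I find that question fascinating."

def naive_judge(transcripts: dict) -> str:
    # Guess the label whose replies sound machine-like; else guess randomly.
    for label, replies in transcripts.items():
        if any("language model" in r for r in replies):
            return label
    return random.choice(list(transcripts))

def imitation_game(questions: list) -> bool:
    """Run one game; return True if the machine fooled the judge."""
    players = [("human", canned_human), ("machine", canned_machine)]
    random.shuffle(players)  # hide which label is which from the judge
    transcripts = {label: [fn(q) for q in questions]
                   for label, (_, fn) in zip("AB", players)}
    machine_label = "A" if players[0][0] == "machine" else "B"
    return naive_judge(transcripts) != machine_label

if __name__ == "__main__":
    questions = ["What did you eat for breakfast?", "Tell me a joke."]
    print("Machine passed:", imitation_game(questions))
```

Notice that the judge’s verdict is the whole test: everything contentious about “intelligence” is pushed into behavior, which is exactly why the test survives our definitional disagreements.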
What’s funny is that we complain about the terminology of AI, but nobody can actually define intelligence.
https://en.m.wikipedia.org/wiki/Intelligence