By now, you have probably heard of OpenAI’s ChatGPT, or one of its alternatives: GPT-3, GPT-4, Microsoft’s Bing Chat, Meta’s LLaMA, or Google’s Bard. These are artificial intelligence programs that can hold a conversation. Impressively capable, they can easily be mistaken for humans, and they are skilled at a wide variety of tasks, from writing a dissertation to building a website.
How can a computer hold such a conversation?
Agreed, smartness is about what a system can do, not how it works. As an analogy, if a chess bot could explore the entire game tree hundreds of moves ahead, it would be pretty damn smart (easily the best in the world, probably strong enough to solve chess outright) despite being nothing more than dumb minimax plus absurd amounts of computing power.
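To make that concrete, here is a toy illustration of the minimax idea (in its negamax form) on a tiny made-up game: a Nim variant where players alternately take one or two stones and whoever takes the last stone wins. The game is my own invention for illustration; a real chess engine applies the same exhaust-the-tree principle to an astronomically larger game:

```python
def best_value(stones: int) -> int:
    """Value of the position for the player to move: +1 = win, -1 = loss."""
    if stones == 0:
        # The previous player took the last stone, so the player to move lost.
        return -1
    # Try every legal move; the opponent's value is negated (the negamax
    # form of minimax, valid for two-player zero-sum games).
    return max(-best_value(stones - take) for take in (1, 2) if take <= stones)

def best_move(stones: int) -> int:
    """Pick the move (take 1 or 2 stones) with the highest minimax value."""
    moves = [take for take in (1, 2) if take <= stones]
    return max(moves, key=lambda take: -best_value(stones - take))

if __name__ == "__main__":
    for pile in range(1, 10):
        print(f"pile={pile}: take {best_move(pile)}, value {best_value(pile)}")
```

There is zero understanding of the game in there, just brute-force search, and yet it plays this little game perfectly; that is the sense in which "how it works" can be dumb while "what it can do" is not.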
The fact that ChatGPT works by predicting the most likely next word isn’t relevant to its smartness, except insofar as that mechanism limits its outputs. And predicting the most likely next word has proven far less limiting than I expected, so even though I can think of plenty of reasons why it will never scale to true intelligence, how could I be confident that those are real limits and not just me being mistaken yet again?
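For intuition about what “predicting the most likely next word” means mechanically, here is a toy sketch. A hand-written bigram table stands in for the neural network (the table and words are made up for illustration), and generation is just a loop that repeatedly appends the most probable continuation. Real models work over subword tokens with learned probabilities, but the generation loop has the same shape:

```python
# Toy next-word predictor: a hand-made bigram table stands in for the
# neural network; generation repeatedly appends the most likely next word.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"on": 0.9, "down": 0.1},
    "on":  {"the": 0.8, "a": 0.2},
    "dog": {"ran": 1.0},
}

def next_word(word: str) -> str | None:
    """Greedy decoding: pick the highest-probability continuation."""
    options = BIGRAMS.get(word)
    if not options:
        return None  # no known continuation; stop generating
    return max(options, key=options.get)

def generate(prompt: str, max_words: int = 10) -> str:
    words = prompt.split()
    while len(words) < max_words:
        nxt = next_word(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # -> "the cat sat on the cat sat on the cat"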