Mimicry Is Not Mastery: Why Fooling Humans Doesn't Make Machines Intelligent
As a self-confessed sci-fi aficionado, nothing excites me more than a mind-bending movie that challenges our understanding of Artificial Intelligence. However, it's vital that we keep a clear line between imagination and reality. In other words, we shouldn't project our fears or aspirations onto the lines of code we interact with via our browsers. I can't help but notice the growing chatter about whether the next iteration of GPT or Google's bot could be, in fact, "truly" intelligent. Could we really be on the cusp of "artificial general intelligence," given the astonishingly human-like capabilities of these generative AI systems?
My outlook on AI is rather hopeful, viewing it as an incredible tool, an extension of our cognitive faculties such as thinking, planning, decision-making, and memory. Yet, I see a misconception taking root, propagated by certain media personalities, influencers, and tech visionaries: the notion that an AI, once it convincingly behaves like a human, can be dubbed intelligent.
This needs some course correction. The yardstick in question, the "Turing Test," or as its creator more fondly called it, the "Imitation Game," was introduced by AI pioneer Alan Turing, who posited back in 1950:
I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. … I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.
(Turing, A.M. (1950). Computing Machinery and Intelligence. Mind, Volume LIX, Issue 236, October 1950, Pages 433–460, https://doi.org/10.1093/mind/LIX.236.433)
In essence, if you can't distinguish a bot from a human during a conversation, then voilà, it's truly intelligent! But there are a few issues with this premise.
First off, Turing's test essentially measures a machine's ability to imitate human behavior, not its intelligence per se. Emulating human-like responses doesn't necessarily demonstrate understanding, reasoning, or consciousness: the cornerstone traits of human intelligence. An AI might ace the Turing Test by generating human-like responses, but it may not grasp or interpret information the same way we humans do.
Take language translation as an example. An AI might flawlessly translate a text from English to French, yet it doesn't comprehend the underlying semantics, context, or cultural nuances that weave the text together. It's merely executing algorithms and leveraging vast data sets to spot patterns and generate the most likely translation. By contrast, in human intelligence, every word carries meaning, emotion, and a network of related concepts, all causally connected to the world. I, like you, grew up adapting to this world, and my cognitive abilities developed over time through physical and social interaction. That is a world apart from a machine learning to imitate intelligence.
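To make the point concrete, here is a deliberately tiny sketch of pattern-based translation. The phrase table and probabilities are entirely invented for illustration; no real translation system is this simple, but the principle is the same: the program only ranks strings by learned statistics, and nowhere does it represent what a cat or a mat actually is.

```python
# Toy phrase-based "translator": pure pattern lookup, no understanding.
# The phrase table and its probabilities are invented for illustration.
PHRASE_TABLE = {
    "the cat": [("le chat", 0.9), ("la chatte", 0.1)],
    "sat on": [("s'est assis sur", 0.8), ("était sur", 0.2)],
    "the mat": [("le tapis", 0.7), ("la natte", 0.3)],
}

def translate(sentence: str) -> str:
    """Greedily pick the most probable French phrase for each English phrase.

    The function never models meaning; it only ranks candidate strings by
    co-occurrence statistics, which is precisely the point of the example.
    """
    words = sentence.lower().split()
    output, i = [], 0
    while i < len(words):
        # Try the longest phrase first (two words here), then fall back.
        for span in (2, 1):
            phrase = " ".join(words[i:i + span])
            if phrase in PHRASE_TABLE:
                best, _prob = max(PHRASE_TABLE[phrase], key=lambda t: t[1])
                output.append(best)
                i += span
                break
        else:
            output.append(words[i])  # unknown word passes through untouched
            i += 1
    return " ".join(output)

print(translate("The cat sat on the mat"))
```

Running this prints a perfectly serviceable French sentence, yet every step was a lookup: swap the probabilities in the table and the "translation" changes, with no notion of correctness beyond the numbers.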
Furthermore, the Turing Test is undeniably anthropocentric, using human-like intelligence as the gold standard. This lens limits our view, glossing over the potential for AI to manifest intelligence in ways starkly different from, and possibly even exceeding, human capabilities. AI could outshine us at tasks we find impossible, such as processing colossal volumes of data in the blink of an eye or spotting intricate patterns in weather or stock markets. These are forms of intelligence too, even if they don't fit the human-likeness criterion Turing defined.
In their insightful work, Floridi and Chiriatti remind us that being a founding father of AI doesn't make one infallible. They write:
Newton studied alchemy, possibly trying to discover the philosopher’s stone. Turing believed in true Artificial Intelligence, the kind you see in Star Wars. Even geniuses make mistakes. Turing’s prediction was wrong.
(Floridi, L., & Chiriatti, M. (2020). GPT-3: Its Nature, Scope, Limits, and Consequences. Minds & Machines, 30, 681–694. https://doi.org/10.1007/s11023-020-09548-1)
While Turing's contributions to AI remain revolutionary and foundational, it's apparent that his criteria might not do justice to the richness and complexity of machine intelligence. As we traverse the thrilling terrain of AI, it's crucial we shape a more rounded, multifaceted understanding of intelligence—one that embraces comprehension, ethical accountability, and the unique prowess that machine intelligence could bring to the table.