Much attention has been given lately to the success of Artificial Intelligence. The abilities of ChatGPT and DALL-E 2 are impressive. But apart from the fact that they sound like droids from Star Wars, there is little to suggest that they are the harbingers of any fundamental advance towards creating a general artificial intelligence. We are no closer to building an artificial human-like intelligence than we have been for the past five millennia, and no such progress is being made anywhere. We are no closer to being taken over by robots than the day The Terminator or The Matrix debuted in cinemas across the world. There is no impending doom from Skynet or Agent Smith turning us into serfs or mere generators of electricity.
This is not to belittle the advances of Artificial Intelligence or the potential impact it could have on the world. Indeed, I have great admiration and respect for the ability of these technologies to generate impressive prose, visuals, or even recipes for drinks. But they are just technologies, and like all technologies, such as nuclear power, dynamite, and genetically modified plants, they have a potential utility and a risk profile. What they do not have is a path towards Artificial Intelligence as understood by most people: intelligence with human-like features.
To see clearly why that is the case, we have to go back to Charles Darwin’s observation from The Origin of Species: “Intelligence is based on how efficient a species became at doing the things they need to survive” (Darwin, The Origin of Species, 1872). ChatGPT is doing nothing to survive, and neither is DALL-E 2. The problem for Artificial Intelligence research and development is an impoverished concept of intelligence. Darwin got it right. The error is that AI research looks at only one side of the coin: problem-solving. Problem-solving, however, is only superficially the most important part of intelligence.
As I have argued in a recent paper, “The Possibility of Artificially Intelligent Systems and the Concept of Intelligence”, the hard problem of AI is how computers find and prioritize problems to solve. As Darwin observed, species need not only to solve whatever problems they face but also to identify and prioritize the right problems to solve.
A closer analysis shows that to become a problem-finding system, the other side of the coin of intelligence, the system needs to exhibit five properties:
- Unity – to be an integrated system of components and processes
- Boundaries – to have a well-defined boundary between inside and outside the system
- Knowledge representation – to have a way to represent knowledge of the external world
- Interaction – to be able to interact with the external world
- Self-sustaining – to be self-sustaining in its interaction with the world
Contemporary AI systems all exhibit the first three properties. It is rarer that they exhibit the fourth, interaction, although this happens in real-world robotics and industrial control systems. None, however, exhibit the fifth property: being self-sustaining.
This is what Darwin understood as a natural part of intelligence. Species are self-sustaining insofar as they manage to survive. This is what intelligence is. And no artificial intelligence yet exhibits anything close to this property. The question of achieving general artificial intelligence thus becomes entangled with artificial life, because only living systems can exhibit true intelligence.
Instead of worrying about a takeover by superintelligent machines and about losing our jobs to robots, we should sit back and think of how this new type of hammer can help us hit nails more efficiently, while being sure to put proper guardrails around it. We should marvel at the capabilities of our technology and understand how it can best help us. But there is nothing new under the sun and no prospect of a Frankenstein moment anytime soon.