Much of the current concern revolves around when superhuman intelligence will arrive and what will happen to humanity once it does. As I have argued in my article The Hard Problem of AI and the Possibility of Social Robots, we are not presently making progress on AGI because we have not yet begun to address the hard problem of AI.
When we talk about superhuman intelligence, something else keeps bothering me about the idea as such. It seems clear enough that what is meant is artificial (human) intelligence, only somehow super. But as Amos Tversky, Daniel Kahneman, and other behavioral psychologists have shown, human intelligence is made up of a plethora of heuristics that exhibit biases. Examples include the availability heuristic, the anchoring heuristic, and confirmation bias. These are not aberrations; they are the very fabric of human intelligence.
Surely a superhuman intelligence, or any computer system for that matter, would eliminate those biases and heuristic shortcuts. But would that not also eliminate the way of thinking that makes us human? The human aspect of superhuman intelligence is quickly dispensed with once all the perceived “flaws” and biases of human thinking are eliminated. An artificial HUMAN-like intelligence may therefore be a contradiction in terms.
In many ways superhuman intelligence is already here. A computer can already remember, calculate, and perceive better than any human. It is not subject to human flaws and errors in evaluating probabilities. But calling it superhuman misses the point, because it is not human.
The imagination around what this superhuman intelligence would be like also seems tenuous, because it selectively exhibits certain human traits such as greed and domination. It apparently wants to be rich and wield power, and in many renditions it wants, for some reason, to destroy humanity. It seems odd that it would be able to eliminate some perceived flaws of human intelligence but not others, such as greed and dominance.
From the philosophical perspective of embodied cognition, human intelligence is bound to the human experience. Our way of thinking and feeling is tied to how we experience the environment around us; it is not just a symbol-processing, problem-solving network inside the skull.
This pervades our language and how we think. Abstract concepts and mental representations are grounded in, or derived from, our sensorimotor experiences. For example, concepts like “up,” “down,” and “in/out,” which are fundamental to all linguistic communication and to thought itself, are held to be based on our bodily experience of the world, as philosophers such as Mark Johnson, George Lakoff, and Gilles Fauconnier have argued.
We also use the environment as an integrated part of our thinking, as Andy Clark and David Chalmers argue in their 1998 paper “The Extended Mind.” Simple markings or a notebook serve to extend our memory, and a device such as a calculator helps us solve problems.
The enactivist position, popularized by Francisco Varela, Evan Thompson, and Eleanor Rosch, holds that cognition arises in our engagement with the world. They emphasize that the mind is not a world-modeler but a world-maker. Cognition is an enaction (a bringing forth) of meaning through the living organism’s autonomous, self-organizing activity (autopoiesis) and its history of structural coupling with the environment.
Talking about a computer or an Artificial General Intelligence being superhuman is therefore either a contradiction in terms or an ill-conceived idea based on an incomplete understanding of human intelligence.
That is why it is better to frame the question of the future of computational intelligence as one of what kind of computational intelligence we can expect, rather than if or when we will have human-like intelligence. We want to understand how we will interact with these artificial intelligences, because they will not be human-like.

We would probably have to learn to relate to these computational intelligences in much the same way that we relate to other intelligences in the natural world. We are already surrounded by a world of biological intelligence: every day we relate to plants, viruses, fungi, and animals in an ecosystem that we to a large extent direct.

Rather than speculating on when this mythical superhuman intelligence will magically materialise, it would be more helpful to think about what kinds of intelligence we would invite into our human/digital ecosystems, and how we would relate to and interact with them. This also widens the scope beyond artificial intelligence to digital intelligence in general, and to how we will mold the future human-digital environment.
