Meditation #5: The Hard Problem of AI is the Problem

Much energy and many resources are currently being poured into Artificial Intelligence, which is expected to approach and eventually eclipse human intelligence. According to a poll conducted by Nick Bostrom, this is expected to happen anywhere between a few decades and one hundred years from now, with the median estimate around mid-century. We are making great strides, and much fear accompanies this development. However, at the heart of AI lies a conceptual problem with real practical consequences, one that may call into question the very possibility of an Artificial General Intelligence under the current approach.

In the interest of conceptual clarity, let us first define a few terms used to distinguish different flavors of AI. The term Artificial Intelligence (AI) covers all of them.

Artificial Narrow Intelligence (ANI) – applications of AI that solve narrow problems, such as product recommendation, image recognition, or text-to-speech systems.

Artificial General Intelligence (AGI) – an intelligence on a par with human intelligence and in all respects indistinguishable from it.

Artificial Super Intelligence (ASI) – an intelligence similar to AGI but superior to it, particularly with respect to speed.

One obvious yet central aspect that is rarely the object of reflection is the concept of intelligence itself. In the context of AI it is treated as strangely trivial and self-evident, whereas in psychology intelligence has been the subject of intense debate for more than a century, and even in contemporary psychology it can hardly be characterized as an area of consensus. The purpose here is not to go into a detailed debate about what is and is not intelligence, as this will always be a point of contention, and it is more a definitional than a substantive problem. After all, anyone is free to define a concept as they prefer, as long as the definition is precise and consistent. Rather, I would like to take as my point of departure the standard concept of intelligence as understood in the context of AI.

What, then, is intelligence according to AI research? According to the Wikipedia article on artificial intelligence, the following are important traits of intelligence:

  • Reason – the use of strategy and the ability to solve puzzles
  • Representing and using knowledge – such as common-sense inference
  • Planning – structuring actions toward a goal
  • Learning – acquiring new skills
  • Communication in natural language – speaking in a way humans will understand

These focus, with good reason, on general abilities that are part of intelligence. They are also represented, in one way or another, in most psychological theories of intelligence. These abilities can, however, be tricky to measure, and if we cannot measure them, it is difficult to know whether an AI possesses them. Another approach has therefore been to start from tests that would determine whether an AI exhibits such abilities.

The earliest and most famous is the Turing test, developed as a thought experiment by Alan Turing in 1950. In this test a game is played in which the purpose is deceit. If the computer is statistically as successful at deceiving as its human counterpart, it is considered to have passed the Turing test and thus to have exhibited intelligence at the same level as a human. One cannot help but speculate that Turing’s wartime occupation as a code breaker in the Second World War might have influenced this conceptualization of intelligence, but that is another matter.

Another, more contemporary account is Steve Wozniak’s coffee test, in which a machine is required to go into any ordinary American home and brew a cup of coffee. This is a somewhat more practical conception, and one could speculate that it, too, was inspired by the preoccupations of its author.

Ben Goertzel, an AI researcher, has proposed the so-called robot college student test, in which an AI must enroll in a university on the same terms as a human and obtain a degree in order to pass.

While one could discuss whether these tests really test AGI rather than merely ANI, they reveal one core observation about intelligence: it is conceptualized entirely in the context of problem solving. The tests may focus on different problems to solve (how to deceive, how to brew coffee, how to get a degree), but they all start from a problem that is already given.

The same can be said of the abilities usually associated with AI that were mentioned above.

Reason is problem solving with respect to finding the best solution to a predefined problem, such as “how to solve this puzzle” or, in the more dystopically inclined accounts, “how to take over the world”.

Representing and using knowledge is problem solving with respect to ad hoc problems arising from who knows where.

Planning is problem solving with regard to structuring a temporal sequence of actions toward a pre-given goal.

Learning is problem solving with regard to adapting to a problem and solving it. Learning IS basically problem solving, or at least the optimization of how problems are solved.

Communication in natural language is problem solving with respect to conveying information between two or more communicators.

Stepping aside for a moment to the philosophy of mind, we find a similar situation. In the 1990s David Chalmers identified the hard problem of consciousness: why and how we have conscious experience. Compared to this, the problems of physically explaining how we process and integrate information were argued to be “easy” problems, because all they require is that we specify the mechanisms performing these functions. They are considered easy not because they are trivial, but because even when all cognitive functions have been explained, the problem of why and how we have conscious experience remains. To understand the distinction and how it relates to our problem, it is worth quoting Chalmers at length:

“Why are the easy problems easy, and why is the hard problem hard? The easy problems are easy precisely because they concern the explanation of cognitive abilities and functions. To explain a cognitive function, we need only specify a mechanism that can perform the function. The methods of cognitive science are well-suited for this sort of explanation, and so are well-suited to the easy problems of consciousness. By contrast, the hard problem is hard precisely because it is not a problem about the performance of functions. The problem persists even when the performance of all the relevant functions is explained. (Here “function” is not used in the narrow teleological sense of something that a system is designed to do, but in the broader sense of any causal role in the production of behavior that a system might perform.)”

And further: 

“The easy problems of consciousness are those that seem directly susceptible to the standard methods of cognitive science, whereby a phenomenon is explained in terms of computational or neural mechanisms. The hard problems are those that seem to resist those methods.”

Something analogous is the case in AI. Here, too, we can discern easy problems and hard problems. As we saw above, the concept of intelligence is focused entirely on problem SOLVING. In fact, the different kinds of problem solving we have just reviewed are the easy problems of AI. Even if we solve all of them, we will still not have a human-like intelligence. We will still be missing the flip side of the coin of problem solving: problem FINDING. The hard problem of AI is therefore how an AI finds the right problems to solve.

As Chalmers postulated for the philosophy of mind, we can solve all the easy problems of AI and obtain a perfect problem-solving machine without having a true AGI or ASI. We face a homunculus problem: the problems the AI is solving derive ultimately from a human, since a human will at some point have created it and set the parameters for the problems it will solve. Even if it morphs and starts creating other AIs itself, the root problem or problems will have been set by the human who created the first system, or seed AI as it is sometimes called. The root of the AI, even if it is indistinguishable from or superior to a human in its problem-solving abilities, is human, and it is therefore not an AGI or ASI.
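The point can be made concrete with a toy sketch. The following hypothetical Python example (plain tabular Q-learning on a five-cell corridor, not any particular system’s API; all names in it are illustrative assumptions) shows where the homunculus hides in a standard machine learning setup: the entire “problem” is the reward function, and that function is written by a human before learning begins. The agent can become arbitrarily good at solving this problem, but nothing in the loop allows it to question the goal or formulate a new one.

import random

# Hypothetical toy example: tabular Q-learning on a 5-cell corridor.
STATES = range(5)        # positions 0..4 in a tiny corridor
ACTIONS = (-1, +1)       # step left or step right
GOAL = 4                 # chosen in advance by the human designer

def reward(state):
    # The entire "problem" lives here, fixed by a human before
    # learning starts. The agent never chooses or revises it.
    return 1.0 if state == GOAL else 0.0

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy choice between exploring and exploiting
        if random.random() < 0.2:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), GOAL)
        # standard Q-learning update (learning rate 0.5, discount 0.9)
        best_next = max(q[(s_next, act)] for act in ACTIONS)
        q[(s, a)] += 0.5 * (reward(s_next) + 0.9 * best_next - q[(s, a)])
        s = s_next

# The learned behavior is a pure reflection of reward(): edit that one
# human-written function and the same machinery pursues a different goal.
print({s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in STATES})

Everything a system like this ever does is downstream of that single human-specified function; scaling up the state space or swapping in a neural network changes the sophistication of the solving, not the provenance of the problem.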

Commonly, the proposed solution is to assert that the AI comes into the world with a motivation to achieve a goal, and that from this it somehow finds the problems to solve. Even setting aside the question of how exactly the problems are found, this seems a stretch if the result is supposed to match human intelligence. For humans, having one goal and pursuing it seems to be the norm in only one realm: the coaching and self-help industry. In actual human life it is the exception rather than the rule that a person has one goal.

A simple example: humans typically do not know what they want to be when they grow up. They end up becoming management consultants, despair at the latest around 40, and decide to become independent quilting artisans, only to switch back to corporate life as CFOs, retire to a monastery, and return with the goal of providing the world with poetry. This entails many different, competing, and changing motivations over the span of a lifetime, the dynamics of which are poorly understood. It also entails many different problems to identify along the way.

Not until an AI has the ability to identify and formulate such shifting problems can it be called an Artificial General Intelligence. Until then it is an Artificial Narrow Intelligence whose purpose is to solve problems pre-set by humans. Consequently, until we solve the hard problem of AI, it will remain a mere tool of humans: the intelligence we see is not truly general and independent in a human-like way, but a mere reflection of human intelligence, and hence not truly artificial.

This does not mean that the doomsday scenarios on which Tegmark, Bostrom, and the public spend a great deal of time go away. It does, however, change how we should view them. Currently the consensus is that AI poses a fundamentally different kind of problem. That does not seem to be the case. Ever since late industrialization and the arrival of advanced technologies such as nuclear power plants and chemical factories, we have been living with the threat of high-risk technologies. Charles Perrow has treated these with great clarity, and AI falls squarely within his treatment.

This analysis also suggests that such scenarios are probably exaggerated in both severity and timing, since we have not even started tackling the hard problem of AI. As long as we have not begun to understand how problems are found in an environment, and how they change dynamically with the interactions between agent and environment, it is hard to see how human-like intelligence can develop anytime soon.

Rather than fearing or dreaming about artificial general intelligence, we might gain more from thinking about how AI as a technology can serve humans rather than take their place. We might also start thinking about the hard problem as a way to improve AI. Thinking more about how problems are found could be an avenue toward making AI more human-like, or at least more biological, since all biological species show this fundamental ability. Today most AI takes a brute-force approach to solving problems and needs many orders of magnitude more learning cycles than humans in order to learn anything. Perhaps a deeper understanding of problem finding would lead to a more efficient, more “biological” ability to learn, one that does not depend on endless amounts of data and learning cycles.

Until we start tackling the hard problem of AI, progress will, for better or worse, stall and scale only with the underlying progress in processing power, which does not advance the goal of more human-like AI.

