AI is Easy – Life is Hard

Artificial Intelligence is easy. Life is hard. This simple insight should temper our collective expectations. When we look at Artificial Intelligence and the amazing results it has already produced, it is clear that none of it came easily. The most iconic victories are these:

  • Deep Blue beats Garry Kasparov at chess
  • Watson beats champions Brad Rutter and Ken Jennings in Jeopardy!
  • Google DeepMind's AlphaGo beats world champion Lee Sedol in Go

Today Autonomous Vehicles (AVs) manage to stay on the road and go where they are expected to go. Add to this the many implementations of face recognition, speech recognition, translation and so on. What more could we ask for as proof that it is just a matter of time before AI becomes truly human in its abilities? Does this not reflect the diversity of human intelligence, and show that technology has truly mastered it?

Actually, no, I don’t think so. From a superficial point of view it may look like it, but deep down all of these problems are, if not easy, then hard in an easy way: there is a clear path to solving them.

AI Is Easy 

The one thing that holds true for all of these applications is that the goals are very clear. Chess, Jeopardy! and Go: you either win or you don’t. Facial, speech and any other kind of recognition: you recognize something or you don’t. Driving an autonomous vehicle: it either drives acceptably according to the traffic rules or it doesn’t. If only human life were so simple.

Did you know from birth what you wanted to do for a living? Did you know the precise attributes of the man or woman you were looking for? Did you ever change your mind? Did you ever want two or more mutually exclusive things (like eating cake for breakfast and living a healthy life)?

Humans are so used to constantly evaluating trade-offs, with unclear and frequently changing goals, that we don’t even think about it.

An AI Thought Experiment 

Let me reframe this in terms of a tangible, existing AI problem: Autonomous Vehicles (AVs). Suppose they become very good, or even perfect, at staying within the traffic rules. How do they behave when conditions are not as clear, or in situations where the rules conflict?

Here is a thought experiment: a self-driving car is driving through the streets of New York on a sunny spring afternoon. It is a good day and the car is keeping a good pace. On its right is a sidewalk full of pedestrians (as is common in New York); on its left is a lane of traffic going the opposite direction, as on any two-way street (rarer in New York, but not altogether absent). Suddenly a child runs into the road in front of the car, and it is impossible to brake in time. The autonomous vehicle has to make a choice: run over the child, make an evasive maneuver to the right and hit pedestrians, or swerve left into the oncoming cars.

How do we prepare the AI to make that decision? Now the goals are not as clear as in a game of Jeopardy!. Is it more important not to endanger children? Let’s say, for the sake of argument, that this is the key moral heuristic. The AI would then have to work out how many children are on the sidewalk and in each car on the opposite side of the road; swerving might kill two children on the sidewalk or in another car instead of one in the road. What if there were two children in the autonomous vehicle itself? Does age factor into the decision? Is it better to hit old people than young ones? What about medical conditions? Would it not be better to hit a terminal cancer patient than a healthy young mother?
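
To make this concrete, here is a minimal sketch in Python of what hard-coding such a heuristic might look like. It is not how any real AV decides anything; the function, the scenario and every weight in it are invented for illustration.

```python
# A deliberately crude sketch of an evasion decision. Every constant
# below is a moral judgment someone had to hard-code; none of these
# numbers come from any real AV system.

CHILD_WEIGHT = 10.0  # assumption: harming a child counts ten times as much
ADULT_WEIGHT = 1.0

def harm_score(children, adults):
    """Weighted count of the people a maneuver endangers."""
    return children * CHILD_WEIGHT + adults * ADULT_WEIGHT

def choose_maneuver(options):
    """Pick the maneuver with the lowest weighted harm.

    options maps a maneuver name to a (children, adults) pair:
    how many people that maneuver endangers.
    """
    return min(options, key=lambda name: harm_score(*options[name]))

# The scenario from the thought experiment, with invented numbers:
scenario = {
    "brake_straight": (1, 0),  # the child in the road
    "swerve_right":   (2, 3),  # pedestrians on the sidewalk
    "swerve_left":    (0, 2),  # occupants of the oncoming cars
}
print(choose_maneuver(scenario))  # -> swerve_left, under these weights
```

Notice that the “right” answer flips the moment someone edits CHILD_WEIGHT, which is exactly the point: the optimum is whatever the author of the weights says it is.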

The point of this thought experiment is simply to highlight that even if the AI could make an optimal decision, it is not simple to say what optimal means. It may well differ from person to person, that is, among the regular human beings who would be judging the decision. There are hundreds of thousands of similar situations where, almost by definition, there is no one right solution, and consequently no clear goal for the AI to optimize towards. What if we had an AI as the next president? Would we trust it to make the right decisions in all cases? Probably not; politics is about sentiment, subjectivity and hard choices. Would we entrust rulings and sentencing to an AI that could go through all previous court cases, statistics and political objectives to rule fairly? No way, although it probably could.

Inside the AI Is the Programmer 

As this shows, the intelligence in an AI has to be supplied by another intelligence. We would still have to instill the heuristics and the trade-offs in the AI, which leads back to whoever programs it. This means that technology corporations and programmers will suddenly be making key moral decisions in the wild. They will be the intelligence inside the Artificial Intelligence.

In many ways this is already the case. A more peaceful case in point is online dating: through the matching algorithm and the inputs it uses, a programmer has essentially decided who should find love and who shouldn’t. Inside the AI is the programmer, making decisions no one ever agreed they should make. Real Artificial Intelligence is as elusive as ever, no matter how many resources we throw at it. Life will throw us the same problems it always has, and at the end of the day the intelligence will be human anyway.

