AI is Easy – Life is Hard

Artificial Intelligence is easy. Life is hard. This simple insight should temper our collective expectations. When we look at Artificial Intelligence and the amazing results it has already produced, it is clear that none of it came easily. The most iconic victories are these:

  • IBM’s Deep Blue beats world champion Garry Kasparov in chess
  • IBM’s Watson beats champions Brad Rutter and Ken Jennings in Jeopardy!
  • Google DeepMind’s AlphaGo beats world champion Lee Sedol in Go

Today autonomous vehicles (AVs) manage to stay on the road and go where they are expected to. Add to this the many implementations of face recognition, speech recognition, translation and so on. What more could we ask for as proof that it is just a matter of time before AI becomes truly human in its abilities? Does this not reflect the diversity of human intelligence, and show that it has been truly mastered by technology?

Actually, no, I don’t think so. From a superficial point of view it could look like it, but deep down all these problems are, if not easy, then hard in an easy way: there is a clear path to solving them.

 

AI Is Easy 

The one thing that holds true for all of these applications is that the goals are very clear. Chess, Jeopardy! and Go: you either win or you don’t. Facial, speech and any other kind of recognition: you recognize something or you don’t. Driving an autonomous vehicle: it either drives acceptably according to the traffic rules or it doesn’t. If only human life were so simple.

Did you know from birth what you wanted to do for a living? Did you know the precise attributes of the man or woman you were looking for? Did you ever change your mind? Did you ever want to do two or more mutually exclusive things (like eat cake for breakfast and live a healthy life)?

Humans are so used to constantly evaluating trade-offs against unclear and frequently changing goals that we don’t even think about it.

 

An AI Thought Experiment 

Let me reframe this as a tangible, existing AI problem: autonomous vehicles. Now that they are very good, or even perfect, at staying within the traffic rules, how do they behave when conditions are not as clear, or in situations where the rules conflict?

Here is a thought experiment: a self-driving car is driving through the streets of New York on a sunny spring afternoon. It is a good day and the car is able to keep a good pace. On its right is a sidewalk with a lot of pedestrians (as is common in New York); on its left is a traffic lane going the opposite direction, as on two-way streets (which are rarer but not altogether absent). Suddenly a child runs into the road in front of the car, and it is impossible to brake in time. The autonomous vehicle needs to make a choice: does it run over the child, make an evasive maneuver to the right and hit the pedestrians, or swerve left and hit the cars going the other direction?

How do we prepare the AI to make that decision? Here the goals are not as clear as in a game of Jeopardy!. Is it more important not to endanger children? Let us say, for the sake of argument, that this was the key moral heuristic. The AI would then have to calculate how many children were on the sidewalk and in any given car on the opposite side of the road; evading one child might kill two children on the sidewalk or in another car. What if there were two children in the autonomous vehicle itself? Does age factor into the decision? Is it better to kill old people than young ones? What about medical conditions? Would it not be better to hit a terminal cancer patient than a healthy young mother?
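
To make the difficulty concrete, here is a minimal sketch, in Python, of what encoding such a moral heuristic might look like. Everything in it is hypothetical: the weights, the attributes and the very idea of scoring outcomes are assumptions some programmer would have to hard-code, which is precisely the problem.

```python
# Hypothetical sketch only: every number below is a moral judgment that a
# programmer would have to hard-code. None of these weights exist in any
# real AV system; that is exactly the point.

from dataclasses import dataclass

@dataclass
class Person:
    age: int
    terminally_ill: bool = False

def harm_score(victims: list[Person]) -> float:
    """Lower is 'better' -- and deciding that is the whole problem."""
    score = 0.0
    for p in victims:
        weight = 2.0 if p.age < 18 else 1.0  # children count double?
        if p.terminally_ill:
            weight *= 0.5                    # ill lives count half??
        score += weight
    return score

# The car 'chooses' the maneuver with the lowest score:
options = {
    "straight": [Person(age=7)],                         # the child in the road
    "right":    [Person(age=34), Person(age=6)],         # pedestrians on the sidewalk
    "left":     [Person(age=70, terminally_ill=True)],   # oncoming driver
}
print(min(options, key=lambda o: harm_score(options[o])))  # -> 'left'
```

Whatever constants we pick, the sketch makes one decision look computable while hiding that the constants themselves are contested moral claims.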

The point of this thought experiment is simply to highlight that even if the AI could make an optimal decision, it is not simple to say what optimal means. It may indeed differ across people, that is, across the regular human beings who would be judging the decision. There are hundreds of thousands of similar situations where, by definition, there is no one right solution, and consequently no clear goal for the AI to optimize towards. What if we had an AI as the next president? Would we trust it to make the right decisions in all cases? Probably not: politics is about sentiment, subjectivity and hard choices. Would we entrust rulings and sentencing to an AI that could go through all previous court cases, statistics and political objectives to make fair decisions? No way, although it probably could.

 

Inside the AI Is the Programmer 

As can be seen from this, the intelligence in an AI must be supplied by another intelligence. We would still have to instill the heuristics and the trade-offs in the AI, which leads back to who programs the AI. This means that suddenly we will have technology corporations and programmers making key moral decisions in the wild. They will be the intelligence inside the Artificial Intelligence.

In many ways this is already the case. A more peaceful case in point is online dating: through the matching algorithm and the input it uses, a programmer has essentially decided who should find love and who shouldn’t. Inside the AI is the programmer, making decisions no one ever agreed they should make. Real Artificial Intelligence is as elusive as ever, no matter how many resources we throw at it. Life will throw us the same problems it always has, and at the end of the day the intelligence will be human anyway.
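
To see how concretely the programmer sits inside the AI, consider a purely illustrative matching score. No real dating service publishes its scoring, and these weights are invented; yet some set of constants like them sits inside every matching algorithm, chosen by a programmer rather than by the people being matched.

```python
# Invented weights for illustration: each constant below is a decision
# about who gets shown to whom, made by whoever wrote the code.

def match_score(a: dict, b: dict) -> float:
    shared_interests = len(set(a["interests"]) & set(b["interests"]))
    age_gap_penalty = 0.5 * abs(a["age"] - b["age"])          # why 0.5? someone decided
    same_city_bonus = 3.0 if a["city"] == b["city"] else 0.0  # why 3.0?
    return 2.0 * shared_interests + same_city_bonus - age_gap_penalty

alice = {"age": 31, "city": "NYC", "interests": ["jazz", "climbing"]}
bob = {"age": 29, "city": "NYC", "interests": ["jazz", "chess"]}
print(match_score(alice, bob))  # 2.0 + 3.0 - 1.0 = 4.0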

AI and the City

Artificial Intelligence is currently being touted as the solution to most problems. Most if not all energy is put into conjuring up new and ever more exotic machine learning models and ways of optimizing them. However, the primary boundary for AI is currently not technical, as it used to be; it is ecological. By that I do not mean the developer ecosystem, but the ecosystem of humans who have to live with the consequences of AI and interact with the machines and systems driven by it. AI lends itself beautifully to the concept of smart cities, but this is also one of the avenues where the tension will play out most clearly, because the humans who stand to benefit from, and potentially suffer, the consequences of AI are also voters. Voters vote for politicians, and politicians decide whether to fund AI for smart cities.

How Smart Is AI in a Smart City Context?

At a recent conference I had an interesting discussion about what AI could be used for. Someone suggested that machine learning and AI could be used for smart cities. Working for a city and having worked with AI for a number of years, my question was “for what?” One suggestion was regulating traffic.

So, let us think this through. In New York City we have, on occasion, a lot of traffic. Let us say that we are able to construct a machine learning system that could indeed optimize traffic flow through the city. This will not be simple or easy, but it is not outside the realm of the possible. Let us say that all intersections are connected to a central AI algorithm that provides the city as a whole with optimal traffic conditions. The algorithm works on sensor input that counts the number of cars at different intersections based on existing cameras. This will probably not mean that traffic always flows perfectly, but on average it will certainly do better.
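
To make this tangible, here is a minimal sketch of the kind of control rule such a system might apply at a single intersection. The camera feed, the 60-second cycle and the proportional rule are all illustrative assumptions, not a description of any real deployment; a citywide deep learning system would be far more complex, and far harder to explain.

```python
# Illustrative assumption: a camera feed yields per-approach car counts.
# Green time is split proportionally to queue length, with a safety floor
# so that no approach is ever starved.

CYCLE_SECONDS = 60
MIN_GREEN = 10

def green_times(car_counts: dict) -> dict:
    """Split one signal cycle among approaches, proportional to demand."""
    total = sum(car_counts.values())
    if total == 0:
        # No demand observed: fall back to an even split.
        even = CYCLE_SECONDS / len(car_counts)
        return {approach: even for approach in car_counts}
    budget = CYCLE_SECONDS - MIN_GREEN * len(car_counts)
    return {
        approach: MIN_GREEN + budget * count / total
        for approach, count in car_counts.items()
    }

print(green_times({"north-south": 18, "east-west": 6}))
# -> {'north-south': 40.0, 'east-west': 20.0}
```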

Now imagine that during one of these periods of congestion a fire erupts in downtown Manhattan, and fire trucks are delayed in the traffic; 50 people die. The media then find out that the traffic lights are controlled by an artificial intelligence algorithm. They ask the commissioner of transportation why 50 people had to die because of the algorithm. This is not a completely fair question, but media have been known to ask such questions. He tries to explain that the algorithm optimizes the overall flow of traffic. The media are skeptical and ask him to explain how it works. This is where it gets complicated. Since this is in part a deep learning algorithm, no one can really tell how it works or why there was congestion delaying the fire trucks at that particular time. The outrage is palpable, and headlines read “City has surrendered to deadly AI” and “Incomprehensible algorithm leads to incomprehensible fatalities”.

Contrast this with a simple algorithm based on clear and simple rules, one that is not as effective overall but works along the lines of 30 seconds one way, 30 seconds the other. Who would blame the commissioner of transportation for congestion in that case?
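
For comparison, that entire “dumb but defensible” controller fits in a few lines, and every line of it can be read aloud at a press conference (again a sketch; the direction names and the 30-second default are just placeholders):

```python
# A fixed-cycle signal: nothing to explain, nothing hidden, and it is
# equally unfair to everyone.

import itertools

def fixed_cycle(green_seconds: int = 30):
    """Yield (direction, seconds) forever: 30 one way, 30 the other."""
    for direction in itertools.cycle(["north-south", "east-west"]):
        yield direction, green_seconds

signal = fixed_cycle()
print(next(signal))  # ('north-south', 30)
print(next(signal))  # ('east-west', 30)
```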

Politics and Chaos

Media aside, there could be other limiting factors. Let us stay with our idea of an AI system controlling the traffic lights in New York City, and let us further assume that the AI system gets continuous input about traffic flow in the city. Based on this feed it can adapt the signals to optimize the flow. This is great, but because we have now coupled the system to thousands of feedback loops, it enters the realm of complex or chaotic systems and will start to exhibit the properties associated with such systems: erratic behavior, path dependency, and limited possibility of prediction.
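
A toy illustration of why prediction breaks down in such systems, using the logistic map, a textbook chaotic feedback loop (this is not a traffic model): two starting states that differ by one part in a million disagree completely within a few dozen iterations.

```python
# The logistic map x -> r*x*(1-x) with r = 4.0 is a textbook chaotic
# feedback loop. Two initial states differing by 0.000001 diverge
# completely within a few dozen steps: limited predictability in one line.

def step(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

a, b = 0.400000, 0.400001
for i in range(1, 51):
    a, b = step(a), step(b)
    if i % 10 == 0:
        print(f"step {i:2d}: a={a:.6f} b={b:.6f} gap={abs(a - b):.6f}")
```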

Massively scalable AI cannot easily counteract these effects, and even if it could, the true system dynamics would not be known until the system goes live. We would not know how many cars would run red lights, or speed up or slow down compared to today. Possibly the system could be trimmed and made to behave, but then we run into basic politics. Which responsible leader would want to experiment with the daily lives of more than 10 million people? Who would want to face these people and explain that the reason they are late for work, or for their son’s basketball game, is the trimming of an AI algorithm?

The Limits of AI

So, the limits to AI may not be primarily of a technical nature. They may have just as much to do with how the world behaves and what other, non-data-scientist humans will accept. Even if losing 50 people in a Manhattan fire once every ten years while reducing the number of traffic deaths by 100 every year is the better trade-off, the stories written will be about the one tragic event, not about the general trend. Voters remember the big media stories and will never notice a smaller trend. Consequently, regardless of the technical utility and precision of AI, there will be cases where the human factor constrains the solutions more than any code or infrastructure.

Based on this thought experiment, I think the most important limits to the adoption of AI solutions at city scale are the following:

  • Unclear benefits – what are the benefits of leveraging AI for smart cities? We can surely think up a few use cases, but it is harder than you might think. Traffic was one, but even here the benefits can be elusive.
  • Algorithmic transparency – if we are ready to let our lives be governed by AI in any important area, citizens who vote will want to understand precisely how the algorithms work. Many classes of AI algorithms are incomprehensible by nature and constantly changing. How can we prove that no one tampered with them in order to gain an unfair advantage? Real people who are late for work or denied bail will want to know, and sometimes the Department of Investigation will want to know as well.
  • Accountability – whatever an algorithm is doing, people will want to hold a person accountable if something goes wrong. Who is accountable for malfunctioning AI? Or even for well-functioning AI with unwanted side effects? The buck stops with the person responsible at the top, the elected or appointed official.
  • Unacceptable implementation costs – real-world AI in a city context can rarely be tested adequately in advance, as we are used to doing for enterprise applications. Implementing and trimming a real-world system may have too many adverse effects before it starts to be beneficial. No matter how much we prepare, we can never know exactly how the human part of the system will behave at scale until we release it into the wild.

Artificial Intelligence is a great technological opportunity for cities, but we have to develop the human side of AI in order to arrive at something that is truly beneficial at scale.