Infinity War and People Focused AI

In a recent post on LinkedIn, I saw that former NATO Secretary General Anders Fogh Rasmussen had talked about AI. He said:

“For once I agree with Putin, whoever becomes the AI leader will lead the world. We must ensure the winner is the world’s democracies, led by the United States”.

At first, I thought, “my god, he is mixing up reality with the plot of Infinity War,” where Putin is Thanos and NATO is the Avengers, led by Donald Trump as Captain America, in a battle to acquire all the Infinity Stones. I am still not completely convinced that is not the case, but he did seem able to distinguish the Marvel universe from ours just fine in the rest of the post.

A day later I read that Margrethe Vestager had stated that “the benefits of using AI [have] no limits”. That got me wondering whether they were drinking the same Kool-Aid. I have tremendous respect for both but fear that they may have an inadequate understanding of what AI is and how we can wield its power. While politicians’ calls to action in the shape of research and investment in technology are never something I would argue against, I think we should take a deep breath first and look at the situation from above.

Investments in China, the US and Europe

If we look at Rasmussen and Vestager’s underlying concern, that Europe is behind China and the US, it is interesting to dig a little deeper to understand the context of AI investments. China is a technocratic, authoritarian surveillance state that optimizes AI for controlling people. Whatever it comes up with, however superior and fancy, I have a hard time seeing it find a market outside of other authoritarian surveillance states. In the US, AI is made to optimize corporate profit and pay dividends to investors by using the people. This works in a free-market economy where people are left to fend for themselves, but it is increasingly facing a headwind, as Google and Apple will tell you.

That leaves us with Europe’s path. We now have a window of opportunity to nurture the growth of an AI that is optimized for people: not for corporate profit and not for the state. This is Europe’s unique opportunity. In order to seize it, we need a more holistic view of AI than the US and China have. In my forthcoming book “Demystifying Smart Cities” (out from Apress in December, ideal as a Christmas present!) I talk about applying AI in a city context. This is a good example of people-focused AI, since the success of AI in a city context critically depends on the residents who live and vote there in free democracies.

The 7 primary forces of AI 

While AI is not like the six Infinity Stones, there are, I believe, seven distinct forces that impact the success of AI-based technologies: human nature, transparency, political realities, ethical choices, technical possibilities, ecology and system dynamics. These forces determine the success or failure of AI. Let me briefly explain what I mean by each of them.

(Figure: the seven forces of AI)

Human Nature

How we respond to and interact with artificially intelligent systems is not straightforward and logical. For example, you might expect that better, faster and more efficient solutions are always good. Indeed, that seems a reasonable assumption given the history of technological innovation until now. However, that is not the case.

Elsewhere I have called this the optimization paradox: sometimes making a system more efficient does not make it better. This happens, for example, when the solution violates our stochastic liberty, that is, when we want the freedom of chance rather than the tyranny of efficiency. Traffic monitoring systems are a case in point: they could be incredibly efficient and fine everyone for even the slightest speeding. Another example is knowing whether or not you have a genetic disease. Some people don’t want that information forced upon them. They want the freedom of chance.

But this is just one example of how human nature intersects with AI systems; the wider field of behavioral economics has unearthed a plethora of surprising biases and behavior patterns that will also interact with AI technology. If we don’t understand how human nature will interact with AI, we will not have the full picture.


Transparency

Especially for AI systems that make decisions, understanding how a decision is reached is critical for the perceived fairness and values of the system.

For example, let’s say we develop the perfect system for deciding whether you can borrow money. You input your request because you need a loan for a new house: your wife is pregnant, and the studio apartment will be too small when the baby arrives. The system responds “no”. Now imagine the same for a system that calculates prison sentences: one person gets 3 years, another 2 months for the same crime. In these cases, the efficiency of the system becomes completely irrelevant to its overall success. If we don’t know why it arrived at a decision, we don’t want to use it (incidentally, after I wrote this, the story of the Apple credit card surfaced, where Apple is accused of exactly that: not being transparent in how customers’ credit limits are decided). This force, of course, carries little weight in contexts where there has never been any transparency to begin with, as is the case in China.

Political Realities

There is always the odd chance that political realities interfere with an AI solution. This could happen, for example, when a solution has resulted in an injury, or simply because it was highlighted in a media story. It could also be that political priorities shifted. Political reality runs on its own logic, and its dynamics are never straightforward. A current case in point relating to technology is the 5G network. Although Huawei holds the most patents and can deliver the cheapest solution, it is barred from implementing the system in parts of the world. This is due to political realities, not technical ones. Exactly the same is and will be the case for AI solutions.

Ethical Choices

As technological systems powered by AI become agents out in the real world, they will also assume ethical responsibilities, like the humans they substitute. Already this is a topic of intense debate; for example, there is a focus on eliminating bias in AI systems.

Another aspect is that when a system acts, it must always do so according to rules or patterns. Where do those patterns come from? Who decides how an autonomous vehicle should act when a child runs out in front of it? Should it swerve to the right, potentially killing a couple of senior citizens on the sidewalk? Swerve left into the other lane, potentially killing a mother and child in an approaching vehicle along with its own passengers? Or continue and hit the child that ran out in front of it?

There is no right answer to this, and what’s more, we will never agree on the decision. The example highlights another aspect of the human condition: we are very rarely in agreement on what the correct ethical choice is.

Technical Possibilities

The technical possibilities of AI systems are another factor of great importance. Great advances have already been made and continue to be made, in hardware as well as software. Universities and research institutions dedicate significant resources to discovering new algorithms and optimizing existing ones. Ever deeper and faster neural networks are produced.

Private companies invest heavily in implementing AI in their solutions and in developing new ones. The technical boundaries are pushed every day, making AI faster and more precise. This is a pattern we see with any new technology. Think of Moore’s law in relation to the CPU: it observes that the number of transistors on a chip, and with it processing power, doubles roughly every two years, and a similar curve has held for memory capacity. We should expect the same optimizations to occur for AI.
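The arithmetic behind such a doubling curve is worth making concrete. A minimal sketch (my own illustration, not from Moore’s original formulation; the function name and default period are assumptions):

```python
def doublings(years: float, period: float = 2.0) -> float:
    """Growth factor after `years` under a fixed doubling period,
    Moore's-law style: capacity multiplies by 2^(years / period)."""
    return 2 ** (years / period)

# After a decade at a two-year doubling period, capacity grows 32-fold.
print(doublings(10))  # 32.0
```

The striking part is how quickly the factor compounds: a decade gives 32x, two decades over 1000x, which is why we should expect AI hardware and software to keep improving rapidly even without conceptual breakthroughs.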


Ecology

Today, most technology systems don’t exist in isolation. At the very least they need to be supported, upgraded and maintained by someone. This means there will be one or more organizations invested in, and related to, the functioning of the system. It will also often have technical interfaces to other systems, allowing inputs and outputs. These systems interact with other systems.

This means that there is an ecology. If we have a solution using AI, we also need to understand who can implement it and who can support and upgrade it. Some vendors may have a rich partner network while others may not. We also need to understand how we can connect it to other systems.

For example, we might have a decision optimization system that needs data and therefore needs a technical interface for it. If there is no one to help implement and upgrade the system, or if we cannot connect it to other systems, the utility of the system is limited. The point here is that the ecosystem is just as important for the functioning and value of an AI system as the technical properties of the system itself.

System Dynamics

Most AI systems are complex systems, and these are notoriously difficult to handle; just think of nuclear power plants or space flight. Complex systems are characterized by non-linear, and therefore often unpredictable, behavior.

Most systems we are used to interacting with are linear: if we turn up the heat on the cooker, the temperature increases; if we turn it down, it decreases. If we push the accelerator in our car it speeds up, and pushing the brakes slows it down immediately.

AI does not necessarily work like that. Imagine we have an intelligent traffic control system. It may create congestion at times and places where there was none before. AI may also introduce completely new system dynamics, as was the case with the “Flash Crash” of 2010, where algorithmic trading systems decided to sell, briefly wiping out around a trillion dollars of market value. This has become so normal that we now talk about “quant quakes” as a recurring phenomenon.
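The non-linear sensitivity described above can be illustrated with a toy model (my own sketch, not a traffic or trading system): the logistic map, a textbook example of chaos in which two almost identical starting points end up in completely different places.

```python
def logistic_map(x0: float, r: float = 4.0, steps: int = 30) -> float:
    """Iterate the logistic map x -> r * x * (1 - x).
    For r = 4 the map is chaotic: tiny input differences grow exponentially."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic_map(0.200000)
b = logistic_map(0.200001)  # a one-in-a-million change in the starting point
print(abs(a - b))  # the two trajectories end up far apart
```

This is the character of the unpredictability in complex systems: it is not noise that averages out, but sensitivity that amplifies, which is why an "intelligent" intervention in one place can create surprises somewhere else entirely.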

Building Holistic AI Focused on People

There are 7 primary forces determining the success of AI, of which the technical possibilities are just one. When we talk about how far ahead the US and China are, we are really only talking about that one force: the technical possibilities.

I see no evidence, for example, that the Chinese are experts in how human nature relates to AI technologies. It is also not clear that the US is breaking new ground in the ethical choices of AI technologies. To put it bluntly, China and the US are oblivious to 6 out of the 7 forces of AI success. Europe, on the other hand, has an advantage in all of the other 6 forces that affect AI adoption.

While it would be a great idea to invest in AI, we have to be wise and not just run after the ball like first graders in a schoolyard football game. The “leaders” in AI are not paying attention to most of the forces that shape AI adoption. The opportunity, therefore, is to spark a research agenda that takes this into consideration.

Maybe we want political scientists to study policy frameworks and give their recommendations. Biologists could pitch in with general insights into ecosystems. Physicists and engineers are experts in understanding system dynamics and could help frame AI in this context. Even the good old permanently unemployed philosophers may enlighten us with ethical perspectives, while psychologists should provide insights into how we interact with AI. If we do this in an interactive fashion with the business environment and government, we might have an explosive mix that will get Europe ahead in the AI game.

The race is far from over. It is just getting started, and Europe has a unique opportunity to win it by not thinking like our global competitors. We should spark a more holistic investment that makes sure AI is focused on people rather than the state or corporate profits. We need to build a research, policy and investment agenda that differs from China’s and the US’s, and we need to build it on the values and competences that are our strengths: political science, psychology, anthropology, philosophy, democracy, inclusion and protection of the individual. By building AI solutions that work for the people, we automatically further democracy and liberal Western values.