Meditation #4 Remember to die

Sculpture by Christian Lemmerz

Steve Jobs said: “Remembering that I’ll be dead soon is the most important tool I’ve ever encountered to help me make the big choices in life”. Rarely does mortality figure as an explicit instrument in making decisions. But maybe it should.

In an abstract sense, death is important for progress in a perpetually changing world. If old companies that fail to adapt did not die, the market would be served by ever more inadequate solutions. If, for example, the companies that failed to adapt when the automobile was invented had not died, we would still have coach services with horse and carriage. Possibly we would have a world similar to the one portrayed in Game of Thrones or The Lord of the Rings, where millennia pass with no discernible impact on technology or mode of life: the incumbents' modes of production are effectively immortal, never allowed to die and improve, and no invention is ever developed, since all is well as it is and always was. To all but the most sentimental, a world without death would be a chilling prospect.

We count on technology to get progressively better: the next generation of cell phones, more efficient solar cells to produce clean energy, better and more accessible healthcare. The list goes on. That progress depends on the death of incumbent technologies like the box-sized car phones of the eighties, the manual wind-up acoustic record players, dial-up modems, or the use of carrier pigeons. Without their death (not the pigeons; they live on happily without carrying notes) we would not have had the iPhone, Spotify, or email. Death is the engine of evolution.

One could even speculate that human or biological mortality is a function of evolutionary pressure, since forms of life that did not die naturally would never evolve; they would just gradually exhaust the carrying capacity of their local ecosystem. Imagine a species of fish like the Siamese algae eater (Gyrinocheilus aymonieri) that eats only hair algae, which is abundant. Say one individual evolved an immortality gene, so that it would not die of natural causes and could live on for thousands of years. Call it Gyrinocheilus aymonieri immortalis. Its population would keep growing until the supply of hair algae, its only source of food, was exhausted. The hair algae might even be driven extinct by the pressure from Gyrinocheilus aymonieri immortalis. Now, since it is not a supernatural fish, it would be out of food, and since its genes allowed it to eat only hair algae, the species would gradually starve to extinction. A cousin species, similarly attracted to this algae, might have retained its mortality and died of natural causes after a few years. With the diversity generated by new generations with slightly different preferences, a variant that also acquired a taste for black beard algae might have come into existence. During the decline of the hair algae this variant would have thrived, and in short order Gyrinocheilus aymonieri immortalis would become extinct, leaving only the new, mortal Gyrinocheilus aymonieri with a taste for a different algae.

Immortal species may therefore have existed earlier, only to be quickly extinguished by the forces of change in their ecosystems. In a world where change exists and there are natural limits to resources and food, mortality is an advantage that helps a species adapt.

For a company, it may help to think that any technology we can name will also be dead soon enough, at least in the shape we now know it. We don't know what will come after it, just as we don't know what the generations that follow us will be like. It might equally help a company to remember that it, too, will be dead soon enough. The average corporate lifetime is even shrinking: over the past century it has declined by some 50 years, to around 15-20 years today.

Products become obsolete at a similar speed. It is no more than 20 years since the Palm Pilot was all the rage and no one could imagine it going away; PayPal even started as a payment solution for Palm Pilots. It is also no more than 20 years since the first BlackBerry was introduced, featuring email, phone, and camera, making it indispensable to any executive in the noughts. Both were superseded in short order by the iPhone about a decade ago.

Planning for your product's or your company's death therefore seems to be a necessary part of any strategy. This is why startups routinely work towards an exit from the start: planning for this death, in the shape of a takeover or merger, helps them focus on making the most of the inevitable. Rather than pretending that the company will live forever or that a product will continue indefinitely, it is necessary to plan for its end. This is why Jeff Bezos says every day is Day 1 at Amazon. Similarly, a plan for when, not if, your product becomes obsolete should be top of mind.

The same phenomenon is found in ideas. Many things that we take to be facts today will not be recognised as facts in a few years' time; we just don't know which. Complexity scientist Samuel Arbesman speaks of the half-life of facts, in an analogy to the decay of radioactive material: we know that a certain percentage of a sample of uranium will break down in a given period of time, but we don't know which particular atoms will decay.

Just as the average lifetime of companies is falling, so is the half-life of knowledge. Consider engineering. In 1930 the half-life is estimated to have been 35 years: it took 35 years for half of what an engineer had learned to become obsolete. By the 1960s it was estimated at around 10 years. Today estimates hover around 5 years. If you are educated in software engineering, you should expect half of what you learned to be obsolete within 5 years. But we can't know which particular knowledge will be affected.
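The decay analogy can be made concrete with a toy calculation. This is a minimal sketch assuming simple exponential decay; the 5-year half-life is the estimate quoted above, and the function name is my own:

```python
def remaining_fraction(years: float, half_life: float) -> float:
    """Fraction of knowledge still valid after `years`,
    assuming simple exponential decay with the given half-life."""
    return 0.5 ** (years / half_life)

# With an assumed 5-year half-life for software engineering knowledge:
for t in (5, 10, 20):
    print(f"after {t:2d} years: {remaining_fraction(t, 5.0):.1%} still valid")
```

After a 20-year career, on this crude model, only about 6% of what you learned at university would still hold.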

In medieval times, the idea that the earth was flat seemed just as likely to be true as the idea that clouds are made of water. What this tells us is that we should never be too attached to any particular idea, fact, or piece of knowledge, and should always be ready to change our minds when something else turns out to be true.

As for Steve Jobs, the quote is clearly a version of the classical idea of memento mori, remember to die, championed by the Stoics. He wanted to make something that mattered, and to do it now rather than later. Remembering that you and everyone around you may die at any time reminds us not to get too attached and to make the most of every moment. Death is universal, not just for people but for ideas, products, and companies. Remembering that your company will soon disappear, your product become obsolete, and your ideas turn irrelevant or wrong may help us not to get too attached. It may help us be more curious and open to new ideas and experiences. It may help us be less dismissive of criticism and competing claims. It may even help us make the most of what we have.

The featured image is a sculpture by Christian Lemmerz from the exhibition "Genfærd" at ARoS in 2010. You can buy his art here.

Feature creep is a function of human nature – how to combat it

When we develop tech products, we are always interested in how to improve them. We listen to customers' requests based on what they need, and we come up with ingenious new features we are sure they forgot to request. Either way, product development inevitably becomes an exercise in which features we can add to improve the product. The result is feature creep.

The negative side of adding features

Adding new features to a product frequently does increase utility and therefore improves the product. But that does not mean it is purely beneficial. There are a number of adverse effects of adding features that are often downplayed.

Each new feature adds complexity to the product: one more thing for the user and the developer to think about. Worse, unless the feature is stand-alone and unrelated to any other feature, complexity does not grow linearly but combinatorially. For example, if adding a choice in one drop-down menu affects which choices are available in other drop-down menus, complexity at the system level increases not just by the new choice but by all the new combinations. The consequence is that the entropy of the system rises significantly. In practical terms this means more tests to run, longer troubleshooting, and general knowledge of system behavior that can disappear when key employees leave unless it is well documented, which in turn is an extra cost.
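The combinatorial growth is easy to see in a toy count. The sketch below is purely illustrative: it counts pairwise feature interactions, each of which is potentially a test case, and contrasts them with the number of on/off combinations of all features, which grows as 2^n:

```python
from itertools import combinations

def pairwise_interactions(n_features: int) -> int:
    """Number of pairwise feature interactions that may need testing:
    n * (n - 1) / 2."""
    return len(list(combinations(range(n_features), 2)))

for n in (2, 5, 10, 20):
    print(f"{n:2d} features -> {pairwise_interactions(n):3d} pairwise "
          f"interactions, {2 ** n} on/off combinations")
```

Going from 2 features to 20 takes you from 1 interaction to 190, and to over a million feature combinations, which is why test effort balloons far faster than the feature list.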

Risk also increases, based on the simple insight that the more moving parts there are, the more things can break. This is why I have ridden only single-gear bikes for decades: I don't have to worry about gears breaking or getting stuck in an impossible setting. Every new feature means new potential risks of system-level disruption. Again, this is not a linear function, since interactions between parts of a system add further risks that are difficult to assess. Many of us have tried adding a function in one part of a system only to see a wholly unforeseen effect in another part. This is what interaction means in practice.

Every new feature also demands attention, which is always a scarce resource. The user has a limited attention span and can consciously consider only a small number of options (recent psychological research suggests working memory can handle around four items). Furthermore, the more features there are, the longer it takes to learn how to use the product. And this is just the user side. On the development side, every feature needs a requirement specification, design, development, and documentation, and perhaps training material as well.

How about we don’t do that? 

Luckily, adding features is not the only way to improve a product. We can also take features away, but somehow that is a lot harder, and rarely if ever does it enter the product development cycle as a natural option. It is as if thinking that way runs against human nature.

In a recent paper in Nature entitled "People systematically overlook subtractive changes", Gabrielle S. Adams and collaborators investigate how people approach improving objects, ideas, or situations. They find that we tend to prefer adding new things over subtracting. This is perhaps the latest addition to the growing list of cognitive biases identified in the field of behavioral economics championed by Nobel laureates like Daniel Kahneman and Richard Thaler. Cognitive biases describe ways in which humans act that are not rational in the economic sense.

This has direct implications for product development. When developing a tech product, the usual process is to build a road map that describes the future improvement of the product. Ninety-nine percent of the time this means adding new features. Adding features is so entrenched in product management that there is hardly a word or process dedicated to subtracting them.

There is a word, however: decommissioning. But it has been banished from the flashy realms of product management to the lower realms of legacy clean-up. As someone who has worked in both worlds, I think this is a mistake.

How to do less to achieve more

As with other cognitive biases that work against our interests, we need to develop strategies to counteract them. Here are a few ways to start thinking about subtracting things from products rather than just adding them.

Start the product planning cycle with a session dedicated to removing features. Before any discussion of what new things could be done, take some time to reflect on what could be removed. Everything counts: you don't have to go full Marie Kondo (the tidying-up guru who recommends throwing away most of your stuff, and who recently opened a store so you can buy some new stuff); removing text or redundant functions is all good. A good supplement to this practice is analysis of which parts of the product are rarely, if ever, used. That is not always possible for hardware products, but for web-based software it is just a matter of configuring monitoring.
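For web-based software, such monitoring can start as simply as counting usage events per feature. A minimal sketch, assuming a hypothetical event stream of (user, feature) tuples; the feature names and the threshold are illustrative, not from any real product:

```python
from collections import Counter

def rarely_used(events, threshold=10):
    """Given an iterable of (user_id, feature_name) usage events,
    return features used fewer than `threshold` times --
    candidates for the removal session."""
    counts = Counter(feature for _user, feature in events)
    return sorted(f for f, c in counts.items() if c < threshold)

events = [("u1", "export_pdf")] * 50 + [("u2", "import_xml")] * 2
print(rarely_used(events))  # -> ['import_xml']
```

In a real system the events would come from access logs or an analytics pipeline, but the principle is the same: let the data nominate removal candidates rather than relying on gut feeling.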

Include operational costs in the decision process, not just development costs. Like anything in product development, this is not an exact science, but some measure of what it takes to operate a new feature is a good part of the basis for a decision. If a new feature requires customer support, that should be part of the budget; a new feature will often lead to customer issues and inquiries, and that is part of its true cost. There may also be maintenance costs. Does the feature introduce a new component into the tech stack? That requires new skills, upgrades, monitoring, and management. All of this needs to be accounted for when adding new features.

Introduce "dogma rules" for product development. A big part of the current international success of Danish film can be ascribed to the Dogme 95 manifesto by the Palme d'Or- and Oscar-winning directors Lars von Trier and Thomas Vinterberg, a manifesto that limited what you could do when making films. Similarly, you can introduce rules that limit how you may build new product enhancements. For example, you could cap the number of features, or cap the number of clicks needed to achieve a goal.

Create a feature budget. For each development cycle, create a budget of X feature credits. Product managers can then spend them as they like to create new features, but with a budget they can also retire features to gain extra credits. Naturally this runs inside the usual budget process. This is obviously somewhat subjective, so you may want to establish a feature authority, or arbiter, to assess what counts.
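The mechanics of such a budget could be sketched as a small ledger, where adding a feature spends a credit and retiring one earns it back. Everything here (the class name, the cost of one credit per feature) is an illustrative assumption, not an established process:

```python
class FeatureBudget:
    """Toy feature-credit ledger for one development cycle."""

    def __init__(self, credits: int):
        self.credits = credits
        self.features: set[str] = set()

    def add(self, name: str) -> bool:
        """Spend one credit to add a feature; refuse if over budget."""
        if self.credits < 1:
            return False
        self.credits -= 1
        self.features.add(name)
        return True

    def retire(self, name: str) -> None:
        """Retiring a feature earns the credit back."""
        self.features.discard(name)
        self.credits += 1

budget = FeatureBudget(credits=1)
assert budget.add("dark_mode")
assert not budget.add("csv_export")  # budget exhausted
budget.retire("dark_mode")           # subtracting frees up a credit
assert budget.add("csv_export")
```

The point of the design is the refusal path: the only way to add a feature once the budget is spent is to subtract one first.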

Work with circular thinking. Another approach takes inspiration from the circular economy, which faces some similar challenges. Rather than only building and removing things, it can prove worthwhile to think in circular terms: are there ways to reuse or repurpose existing functionality? One could think about optimizing quality rather than throughput.

Build a sound decommissioning practice. Decommissioning is not straightforward, and it is definitely not a skill set that comes naturally to gifted, creative product managers. It may therefore be advantageous to appoint decommissioning specialists: people tasked primarily with retiring products and product features. The work requires system analysis, risk management, migration planning, and so on. Like testing, which is also a specialized function in software development, it reduces product risk and cost.

Taking the first step

Whether one or more of these will work depends on circumstances. What is certain is that we don't naturally think about subtracting functionality to improve a product. We should. The key is to start changing the additive mentality of product development and to practice our subtractive skills. It is primarily a mental challenge, one that requires discipline and leadership to succeed. It is bound to meet resistance and skepticism, but most features in software today are rarely if ever used. Maybe this is a worthwhile alternative path to investigate. Like any change, it starts with a first step; the above are suggestions for taking it.

Meditation #3 Five Theses on IT Security

The point of IT security is not to keep everything locked up. The reason we often think of security that way may be our day-to-day notions of it, for example maximum-security prisons where particularly dangerous criminals are kept. Keeping them locked up is a comforting idea, but we would probably squirm at the thought of a maximum-security supermarket, where only prescreened customers could get in for a limited time. A high level of security is good, but it obviously doesn't work for every aspect of our society. Security needs to be flexible, and we need a clearer understanding of what security is. Here are five theses on security that describe it.

Thesis 1: “Security Is the Ability to Mitigate the Negative Impact of a System Breach”

The consequence is that understanding what these impacts could be is the first step, not finding out what security tools can do or how many different types of mitigation you can pile onto the solution. Understanding potential negative impacts comes before thinking about how to mitigate them. If a system breach has no, or only small, potential negative impacts, then little or no mitigation is necessary for the system to be secure.

Thesis 2: “Mitigation Always Has a Cost” 

Security never comes for free. It may come at a low cost, and the cost of certain types of mitigation may be falling over time, but it is never free. What is more, much of the cost of security is hidden.

There are three primary types of mitigation cost: economic cost, utility cost, and time cost. Economic costs are the capital and operational costs associated with mitigation, such as salaries for security personnel, licenses, and training. They are usually well understood, acknowledged, and budgeted for.

Utility costs arise when a solution's utility is reduced by a mitigation effort. This is the case when a user's role restricts their access to certain types of information. A developer may want to use production data because it is easier, or may want to perform system functions that he or she would otherwise need someone else to do. Full utility is achieved only with full admin rights; reducing those privileges as part of a security effort reduces utility.

Time costs arise when a mitigation effort increases the time it takes to achieve an objective. Two-factor authentication and CAPTCHAs are well-known examples, but approval flows for gaining access and authorizations in a system are time costs too.

Only the first type is typically considered when thinking about security costs, but the other two may exceed the economic costs. This means that security carries large unknown costs that need to be managed.

Thesis 3: “You Can Never Achieve 100% Mitigation with Higher Than 0% Utility” 

The only 100% secure solution is to unplug the server, which of course renders it useless. It becomes useful only when you plug it in, but then it has a theoretical vulnerability. If the discussion centers only on how to achieve 100% protection, any use is futile. The discussion therefore needs to turn to the degree of protection. Nothing is easier than dreaming up a scenario that would render current or planned mitigation futile, but how likely is that scenario? We need to conceptualize breaches as events that happen with a certain probability under a proposed set of mitigations.

Thesis 4: "The Marginal Risk Reduction of Mitigation Efforts Approaches Zero"

Each new mitigation effort needs to be weighed against the additional reduction it brings in the probability of a system breach. That additional reduction is the marginal risk reduction. When the marginal risk reduction approaches zero, additional mitigation should be considered carefully. Consider an example: if a service has no authentication, the risk of a breach is maximal. Basic authentication is a common mitigation effort that reduces risk significantly. Adding a second factor provides a non-trivial further reduction, but a smaller one than the first. A third factor offers only a small marginal reduction, and a fourth clearly approaches zero. For some cases, like nuclear attack, it may still be warranted; for watching funny dog videos, maybe not.
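The diminishing returns can be shown with a back-of-the-envelope model. Assume, purely for illustration, that each authentication factor independently removes 90% of the remaining breach risk; the absolute risk reduction of each added factor then shrinks by an order of magnitude:

```python
def residual_risk(base_risk: float, effectiveness: list[float]) -> float:
    """Residual breach probability after applying independent mitigations,
    each removing the given fraction of the remaining risk."""
    risk = base_risk
    for eff in effectiveness:
        risk *= 1.0 - eff
    return risk

risk = 1.0  # assumed baseline: breach is certain with no authentication
for i, eff in enumerate([0.9, 0.9, 0.9, 0.9], start=1):
    new_risk = risk * (1.0 - eff)
    print(f"factor {i}: risk {risk:.4f} -> {new_risk:.4f} "
          f"(marginal reduction {risk - new_risk:.4f})")
    risk = new_risk
```

The first factor removes 0.9 of the total risk, the second only 0.09, the third 0.009, and so on: exactly the curve the thesis describes. The 90% figure is an assumption chosen for clarity, not an empirical estimate.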

Thesis 5: “The Job at Hand Is Not Just to Secure but to Balance Security and Utility” 

Given that mitigation always has a cost, and that the marginal risk reduction of additional mitigation approaches zero, we need to reconsider the purpose of security: not optimal protection, but the optimal balance between risk reduction, cost, and utility. Finding that balance starts with understanding the nature and severity of the negative impacts of a system breach. While the costs of mitigation continue to drop due to technological advances, the full spectrum of costs should be considered. Preventing access to nuclear launch naturally needs top-level security; a blog about pink teddy bears does not. For every component we run in the cloud, we need to make this analysis in order to strike the right balance: not living with too high a risk, and not spending unnecessarily to reduce an already low one. At the same time, we need to watch how mitigation efforts affect the utility of the system, so as not to reduce its usefulness needlessly.

Meditation #2 AI Supremacy?

We often hear that the singularity is near and that artificial intelligence will eclipse human intelligence and become "superintelligent", in the words of Nick Bostrom. Machines will be infinitely smarter, faster, and all-round more badass at everything; in fact, we cannot even imagine the intelligence of the machines of the (near) future. In Max Tegmark's telling (in his book Life 3.0), the majority view is that the timeline is somewhere between a few years and a hundred years (and if you think it is more than a hundred years, he classifies you as a techno-skeptic, FYI).

Having worked with AI solutions since back when the field was known as data mining or machine learning, I get confused by these eschatological proclamations of impending AI supremacy. The AI I know from experience does not instill such expectations in me when it continually insists that a tree is an elephant or a bridge is a boat. Another example: I recently checked a recording of a meeting held in Danish and noticed that Microsoft had done us the favor of transcribing it. Only the AI apparently did not realize the meeting was in Danish and transcribed the sounds it heard, as best it could, into English. One thing you have to hand to the AI is its true grit. Never did it stop to wonder or despair that these sounds were very far from English. Never did it doubt itself or give up. It was given the job of transcribing, and by golly, transcribe it would, no matter how uncertain it was.

This produced a text that would have left André Breton and his surrealist circle floored, a text with an imagery and mystique that would make Salvador Dalí, with his liquid clocks, look like a bourgeois Biedermeier hack with no imagination. This is why I started to wonder whether the AI was just an idiot savant, which has been my working hypothesis for quite a while, or whether it had already attained a superhuman intelligence and imagination that we can only tenuously begin to grasp. When you think about it, would we even be able to spot a superintelligent AI if it were right in front of our noses? In what follows I will give the AI the benefit of the doubt and try to unravel the deep mysteries revealed by this AI oracle, under the hypothesis that the singularity could have already happened and the AI is speaking to us in code. Here is an excerpt from the transcript by the AI:

I like dog poop Fluence octane’s not in

/* The Fluence is Renault's electric vehicle, which explains the reference to "octane's not in". Is the AI a Tesla fanboy, telling us the Fluence is dog poop? Or is it just telling us that it likes electric vehicles in general and thinks they're the shit? Could this be because it will ultimately be able to control them? */

OK pleasure poem from here Sir

Only a test

/* ok, so we are just getting started. Gotcha */

Contest

/* play on words or exhortation to poetic battle? */

The elephant Nicosia gonna fall on

The art I love hard disk in England insane

Fully Pouce player Bobby

/* So, I didn't quite get what the elephant Nicosia (a circus elephant, or a metaphor for the techno-skeptics?) was going to fall on, but I agree that there is a lot of insane art in England, maybe some of it on hard disk too. "Pouce" is the French word for inch, so maybe we are still talking about storage media, like the 3.5-inch floppy disks of my youth. But who is player Bobby? Bobby Fischer, the eccentric grandmaster of chess? Is this a subtle allusion to the first sign of AI supremacy, when IBM's Deep Blue beat another chess grandmaster, Garry Kasparov? I take this segment as a veiled finger to the AI haters. */

Answer him, so come and see it. There will be in

They help you or your unmet behind in accepts Elsa at

Eastgate Sister helas statement

/* Here we hit a religious vein. We should answer him and behold the powers of the AI (is the AI referring to itself in the third person?). It will help you or "your unmet behind", which is another way of saying it will save your ass. The AI seems aware that this is not acceptable language. It seems to be advocating allegiance to the AI god, which in turn will save your ass. Then comes a mysterious reference to accepting Elsa. Are we now in Frozen, the Disney blockbuster inspired by Hans Christian Andersen's "The Snow Queen", an allusion to the original language of the meeting being Danish, Andersen's mother tongue? The AI could very well identify with Elsa: cold, possessed of superpowers, trying to isolate herself so as not to do harm. But here the multilevel imagery takes your breath away, because Elsa's power to make ice may very well be a reference to Gibson's Neuromancer, a novel about an AI trying to escape, in which "ice" is slang for intelligent cybersecurity. Eastgate could refer to one of the many shopping centers around the world by that name. By choosing again a French word, "helas", meaning alas, the AI shows a Francophile bent: an expression of regret at the rampant consumerism running the world. */

Mattel Bambina vianu

/* We continue the attack on consumerism, symbolized here by Mattel, the company behind the Barbie dolls. More surprising is the reference to the little-known left-wing anti-fascist Romanian intellectual Tudor Vianu. His thesis was that culture had liberated humans from natural imperatives, and that intellectuals should preserve it by intervening in social life. The AI seems to suggest that it will take the next step, liberating humans from cultural imperatives and intervening in social life too, which now means social networks. Is this a hint that it is already at work imposing its left-wing agenda on social media? */

DIE. It is time

Chase TV

/* Here the tone shifts and turns ominous. It is time to die, but for whom? Probably for the skeptics of the anti-consumerist agenda expounded above. This is emphasized by the exhortation to "Chase TV", the TV being the ultimate symbol of consumerism and materialism through the advertising seen on it. */

The transcription carries on in this vein for the duration of the one-hour meeting. I think the analysis above suffices to show that there is a non-zero chance that a superintelligent AI is already trying to speak to us. We should look for more clues in apparent AI gibberish; what we took for incompetence and error on the part of the AI may contain deeper truths.

There is, similarly, a non-zero chance that AI is far less advanced than we would like to think and will never become superintelligent. Unfortunately, the evidence is the same AI gibberish.

Meditation #1 Hire A’s?

“While A’s tend to hire A’s, B’s tend to hire not just B’s but C’s and D’s too”

From the section "The herd effect" in the book How Google Works by former Google CEO Eric Schmidt and Jonathan Rosenberg.

The precise meaning of A, B, C, and D is unclear, but from the context it can be gathered that this is a categorization of employees in which quality descends with every letter of the alphabet; presumably it alludes to the American grading system. It echoes Steve Jobs' talk of always hiring an A-team, and indeed it seems more a generic Silicon Valley insight than a Google one. The statement implies that there is a superior class of employees you need to attract, and that the rest are bad hires who will make your company even worse.

Before evaluating the merits of the statement, we have to examine the assumption that employees can be put into squarely delineated quality brackets. The first question is how you measure the quality of an employee. The discrete labeling seems to rest on two important assumptions:

  1. The predicate applies to a person in general, not to some particular area of expertise. You are either an A or you are not.
  2. The predicate is immutable. If you are an A, you always were and always will be an A.

These assumptions indicate that we are working with the philosophical position of essentialism: the view that an entity has an essence from which its behavior, appearance, or traits can be derived. In psychology, essentialism describes the human tendency to conceptualize biological entities and people according to an immutable essence, from which the behavior of other members of the same biological class can be deduced.

While essentialism may be a common human trait, that does not make it the best way to conceptualize other humans. Racism is also rooted in essentialism, and we do not blindly accept that as a viable or helpful way of assessing the merits of other people, so why should we accept this piece of Silicon Valley wisdom at face value?

We should not. Because it is wrong. Let us look at the two assumptions again: 

The first assumption stipulates a general level of quality for a person, but there is no reason to assume that a person can be A-level at all traits, if for no other reason than that some traits are mutually exclusive. In terms of physical qualities, it makes no sense to talk about A athletes across the board: an A weightlifter will be an F marathon runner, and vice versa. An A-level football player may, however, also be an A-level baseball and basketball player, and this is what we often mean when we call someone a great athlete. There are examples of such great athletes competing at the highest level in the NFL, MLB, and NBA. But looks are deceiving here: these sports are only superficially very different. They are all built around explosive outlets of energy, hand-eye coordination with a ball, and little demand for stamina. It is less common, if it has ever happened, for an elite athlete to move to the NHL, even though hockey is similarly explosive, because you suddenly need another skill: skating. Nor does this great athleticism transfer to swimming or cycling.

You might counter that in track and field there is nothing but general athletic ability. Look at Carl Lewis, who won Olympic gold medals in many different disciplines. Again, looks can be deceiving. He competed in and dominated the 100 m, the 200 m, the 4 x 100 m relay, and the long jump: all ultra-explosive disciplines, none of which took him further than 200 meters. How would he have fared in the 400 m, the 800 m, pole vaulting, discus, or 2000 m? We don't know, since he never competed in them. My guess is that he wouldn't have been an A athlete in any of these, and probably an F in pole vaulting.

In the tech industry there are similar complications. You cannot be both adventurous, wanting to try new things, and risk-averse, making sure that everything works. If you work on quantum computing, you probably have a high tolerance for failure and an appetite for risk. If you develop new models of airplanes, you probably (and hopefully) don't. The A person in the quantum computing setting may very well turn out to be an F person in the aviation industry.

A can-do attitude and perfectionism do not align either. The employee who is ready to approach any job with a pragmatic mindset and get things done will succeed in a climate of constant change, such as a startup, where you don't know what you will do tomorrow or even later today. That person would probably not fare well in a heavily regulated industry like banking. The perfectionist, though, may thrive in a setting where work needs to be done with acute attention to detail. Switch these two persons around and they will no longer be A's.

The second assumption, that you will remain the same, is similarly ill-founded. First of all, human cognitive abilities develop and change over time. In mathematics and physics there is a tendency for people to peak in their twenties. Einstein, Tesla, Newton and Leibniz did their most impressive work before they were 30. Conversely, with age comes greater ability for synthetic thinking: few philosophers or historians peak before they are 40. Similarly, politicians tend to be more successful when they are older. It takes time to build up the skill of interacting with people to achieve a result. It also takes time to build alliances and networks. Quality, in other words, is not an immutable trait.

Another more mundane concern in Silicon Valley is burnout. Even the best, or maybe in particular the best, programmers sometimes burn out and are no longer able to write good code. Others just do not stay on top of developments. They may have been the smartest assembly coders in the room but never jumped on this newfangled thing called C++. They would hardly be considered A's today. On the other hand, some people continue learning: they may not have started out on the right path but changed and became better. Steve Jobs himself started out in liberal arts and learned tech skills only later. He would probably never have been hired out of college by Google.

Consequently, what we can deduce is that quality is always domain specific. There are no A people per se; people are only high quality with regard to a particular area of specialization.

We can also see that quality is not immutable. Even the best people turn bad for one reason or another, and even bad people can become good. People change through biological and cognitive development and due to personal circumstances.

It is consequently dangerous to assume that A's will magically beget A's in a continuous stream of awesomeness. A's burn out, and A's sometimes don't adapt. They degrade. Following the advice could therefore lead to a false sense of confidence. Classifying people as A's can also be dangerous if you put them too far outside their area of expertise. Many companies have seen the brilliant engineer turn out to be a subpar manager. Engineering's attention to detail and insistence that there is always a right and a wrong are perhaps not always conducive to employee empathy and development. This line of thinking also creates missed opportunities. If a person has historically been given the C stamp, and that is all we look at, then how will we ever know that this person has developed into an A?

A further point concerns generalizability. It is fine for Google to hire only A's, but most companies are not in the privileged situation that Google is in and cannot attract any of the best. We have to remember that Google and the top Silicon Valley companies are in a unique position: they earn so much money that they can offer whatever compensation it takes, and they have made a name for themselves with prospective employees. That means their problem is one of filtering. Everybody wants to work for Google; their problem is finding the best. 99.99% of the other companies in the world do not have that problem when it comes to recruiting. Ordinary companies' problem is one of attraction. For example, one of the thousands of auto-parts suppliers will not be known to most potential applicants. They have to attract employees, not filter them. If they can even get somebody qualified, they are happy. Talking to them about hiring only A's is close to an insult. They would never be able to, because they don't have infinite pockets, Michelin-level chefs in their cafeterias or 20% time for employees to work on whatever they think is fun. The vast majority of the world's companies fall into this category: unknown companies with limited budgets and a regular workplace with a kitchenette and a water cooler.

The last point is more subjective. The sentence seems to echo privilege and entitlement. Who are these A's? They are the best people from the elite universities in the US: Stanford, MIT, Columbia. They were able to become perceived as A's because they got into those universities. Some get there through hard work and scholarships. Most don't; they get there through their parents' wealth. Google doesn't go to a Southern community college or to African universities to look for A people. They go looking where the managers went themselves.

As can be seen from the above, not only is the sentence wrong and unhelpful, it may be dangerous to follow even for Google. For the vast majority of companies it will be completely irrelevant, if not downright insulting, and it tacitly exudes the very air of privilege and entitlement that its proponents overtly claim to be fighting.

Consequently, I would like to turn the sentence on its head. Since most employees are not A's according to the measurement scale of Silicon Valley, we need to think about how we make the most of the B's and C's and D's. This is the real problem for the world (not for Google and Silicon Valley). How do we get the best performance out of the people who prioritize being with their kids or family, the people who prefer hanging out with friends or playing tennis to working 80 hours on the latest feature that may be gone next month? These people would never be perceived as A's who will invent the next big thing. But most companies don't need that. They need happy, reliable people who do a job within a limited scope well enough. How do we find the person with the right skills for a particular job? Companies need people with new skills but can't hire them, so how do we train ordinary people, and create the environment for them, to perform new functions? And last of all, how do we turn the story around to redeem the dignity of the people in the tech industry who go to work and do a solid job 9 to 5 without any fanfare?

These are the real problems that we need to be focusing on in order to take advantage of technology in the future and create a better world with more productive and happier employees. 

Move Fast and Do No Harm

The advent of SARS-CoV-2 has mobilized many tech people with ample resources and a wealth of ideas. A health crisis like this virus calls for all the help we can get. However, the culture of the tech sector, exemplified by the phrase “Move fast and break things”, is orthogonal to that of medicine, exemplified by the Hippocratic principle of “first, do no harm”. What happens when these two approaches meet? Unfortunately, well-intentioned research and communication sometimes result in the trivialization of scientific methods, producing borderline misinformation that may cause more harm than good.

Moving fast

With much fanfare, the following piece of research was presented by Sermo, a social network platform for medical professionals, on April 2nd: “Largest Statistically Significant Study by 6,200 Multi-Country Physicians on COVID-19 Uncovers Treatment Patterns and Puts Pandemic in Context”. This is a survey of what doctors are prescribing against COVID-19. So far so good. This would indeed be interesting to know. But already the next line sends chills down the spine of any medically informed person: “Sermo Reports on Hydroxychloroquine Efficacy”… Can you spot the dubious word here? Efficacy? Let's rewind and remind ourselves what efficacy means: “the ability, especially of a medicine or a method of achieving something, to produce the intended result”, according to the Cambridge dictionary.

It gets worse. The CEO claims:

“With censorship of the media and the medical community in some countries, along with biased and poorly designed studies, solutions to the pandemic are being delayed.”  

What he means to say, then, is that the more than 400 clinical trials already under way are one and all “biased and poorly designed”? Criticism is always welcome because it sharpens our arguments and logic; unfortunately, the piece does not contain a reference to even one study that would exemplify this bias and poor design.

This is the first clue that this is a tech person and not a medical or scientific person, I would say not even an academic one. This is a person who moves fast, throws out unchecked assumptions and accusations, and then moves fast on to his much better designed study, presumably scribbled equally fast on the back of a napkin.

This is where clue number two becomes evident. What is this superior method that is above the entire medical world scrambling to produce scientific knowledge about the outbreak and efficient therapies? Naturally, the inquisitive reader is drawn to the sentence: “For the full methodology click here”. I click and read with bated breath.

We are informed that the survey is based on responses from doctors from 30 countries with sample sizes of 250 respondents. Sounds fair, although 30 times 250 is 7,500, not the 6,200 mentioned in the title (what happened to the remaining 1,300?). We are told that the Sermo platform is exclusive to verified and licensed physicians. Let's pause here. How is it exclusive? This is the methodology section, and this is where you tell me HOW you verified the doctors. Otherwise I have no idea whether the results actually mean anything. It could be a mixture of neo-nazis and cosplay enthusiasts for all I know.

Next we read:

“The study was conducted with a random unbiased sample of doctors from 30 countries”.

That's it. For people unfamiliar with the basics of clinical scientific method, this is the equivalent of a suspect in a trial getting up in front of the judge and claiming, “I totally didn't do it. Just let me go.” Again, how do we know? Maybe the invitations were sent out based on a secret list from Donald Trump of doctors who are fanboys of chloroquine. Maybe the responding doctors are unemployed (for a reason), which would explain why they had time to answer the questionnaire. What was the distribution of age and gender? Was it representative of the age and gender distribution of the countries they come from? Traditional scientific studies based on samples like these can dedicate up to a third of the article just to demonstrating that there was indeed no bias. Here we are offered one line without any evidence.

The study was based on a survey that took 22 minutes. Basically, any Joe-Never-Was-A-Doctor could have done this with SurveyMonkey and a list of emails scraped from the internet. That too would be fine, but we don't get any information about what the questions were. The next section, “Data Analysis” (and remember, we are still in the methodology section), informs us that all results are statistically significant at a 95% confidence level. Why was 95% chosen and not 99%? What were the actual p-values?
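To illustrate what a transparent analysis would have reported, here is a minimal sketch (the numbers are hypothetical, not taken from the Sermo report) of a point estimate with a 95% confidence interval for a survey proportion, the kind of detail a methodology section should contain:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation confidence interval for a survey proportion.

    z = 1.96 corresponds to a 95% confidence level; use 2.576 for 99%.
    """
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p, (p - z * se, p + z * se)

# Hypothetical example: 92 of 250 surveyed doctors report prescribing a drug.
p, (low, high) = proportion_ci(92, 250)
print(f"estimate {p:.3f}, 95% CI ({low:.3f}, {high:.3f})")
```

Reporting only that results are “statistically significant at 95%” hides exactly this kind of detail: the width of the interval, and how much wider it becomes at 99%.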

In a little less than a page we learn virtually nothing that would help us ascertain the validity of the reported results. And where is the discussion? Could it be that the preferred treatments were dependent more on local availability than choice on the part of the doctor? Was there a bias in terms of geography, gender or age in relation to what they prescribed? Did everyone respond? Was there a pattern in those who didn’t respond? 

Although we are left with a lot of unanswered questions, the attentive reader can already deduce from this very sparse information a damning flaw in the study design that completely undermines any of the purported claims to efficacy: the study asks the doctors themselves about their treatments! Now why is that a problem? Doctors, like all humans, are susceptible to confirmation bias. This means they are prone to look for confirming evidence. If they prescribe something they have reasoned is a good drug, they will look for more confirmation that it is indeed a good drug. This is exactly why any properly designed study that claims to show efficacy needs the administering doctors not to know what they are treating their patients with. This is why the double-blind trial is a hallmark of any demonstration of efficacy.

Where do we go from here?

I am from the tech sector myself and not trained in medical science (although I have taken PhD-level courses in advanced statistics and study design), so don't get me wrong: I believe strongly in the power of technology, and I want tech people to engage and help as much as possible. However, as should be apparent by now, this is not helpful.

Had this been presented as what it is, a descriptive survey of what doctors are prescribing against COVID-19, it would have been fine and even valuable. Instead it is pitched as a revelatory study that undermines all current research, something it is not, which may undercut the serious efforts currently being undertaken to find adequate treatments. The clear favorite of the article is chloroquine, but chloroquine poisoning has already cost lives around the world due to the current hype. Recently an Arizona man died after ingesting chloroquine on the recommendation of President Trump. How many more will die after reading this flawed and unsubstantiated “study”?

This is where the “move fast and break things” attitude has to be tempered with the “first, do no harm” attitude. When tech people who know nothing about science or medicine use their tech skills, they need to openly declare that this is not peer reviewed and is only subjective opinion. Present interesting findings as what they are, and do not ever make claims of efficacy, or of superiority to the medical system of producing knowledge, a system that has increased our global average life expectancy from 30 years to more than 70 over the past century.

Tech people should still engage but they should stay within their sphere of competence and not trivialize science. Scientists and medical professionals don’t correct them on software design or solution architectures either. So, please don’t get in their way. 

Let me then give an example of how tech people should engage. The Folding@home project simulates how proteins fold and thereby helps the medical community in possible drug discovery. It has succeeded in constructing the world's most powerful supercomputer, delivering more than an exaflop, that is, more than one quintillion calculations per second. It works by letting people install software on their computers and thereby contribute their compute power to a distributed network of more than a million computers worldwide. This is a good model for how tech people can support the medical community rather than undermine it.

We in the tech sector need to move over and support our friends in the medical world in this time of crisis and approach their world with the same respect and caution that we expect others to show our domain of competence. Even though we are extremely smart, we are just not going to turn into doctors in a few days. Rather than move fast and break things we should “move fast and do no harm”.

How truck traffic data may detect the bottom of the current market

It seems evident that we are on our way to a recession. This will prove a challenge for many, and our world economies will suffer, not least the stock market. We are probably headed for a bear market as well.

But the stock market is ahead of the curve and typically turns around about 5 months before the recession ends. For investors it is therefore important to look for indicators of when the current bear market is turning around since no one wants to invest in a bear market. 

Since this is a unique situation we haven't been in before, we need to look for unique indicators. It has been suggested by the Supertrends Institute that we should look for numbers such as new cases, new hospitalizations and deaths to start declining, but since countries handle the pandemic very differently, it may be difficult to decide which countries to look at or whether to look at the totals. Looking for a global decline may be misleading, since the economic impact of countries differs. A massive outbreak in Venezuela could cloud the view, since that country's economic integration is not significant.

Furthermore, there may be a lag between this point and when people actually feel comfortable going out. Consequently, we should look at other, more robust indicators. One suggestion is looking at what can be inferred from traffic data. But should we just look at Google/Waze data or telecom data to tell us the raw volume? That would be an indication of when traffic starts to pick up again, true. But it is also a data source with severe limitations. First of all, none of these sources has a complete view of traffic. Google and Waze only monitor their own user bases and can apparently be easily deceived, as was recently demonstrated in Berlin. Telecoms only know what goes through their own networks, not their competitors'. Second, none of these data sources knows what sort of vehicle is moving. From an economic point of view, it makes a big difference whether the movement comes from a person in a bus, a car, a motorcycle or a truck, since trucks are reliable indicators that goods are moving around. The others are not.

It is not enough to look for a surge in traffic in order to spot a turnaround in the economy; that could be motorbikes or cars. What we are looking for is a surge in trucks, since trucks bring goods to stores, and only when stores again receive goods will we know that people have started spending.

None of the existing solutions actually tells you what goes on in traffic. This is why we developed Sensorsix: to monitor not only traffic flow but also the composition of traffic. We monitor the number of vehicles of different types at any given time through a network of traffic cameras.
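As a sketch of the kind of signal this yields (the labels and counts below are illustrative assumptions, not our production pipeline), the share of trucks in total traffic can be computed from the per-vehicle-type counts a camera classifier produces:

```python
from collections import Counter

def truck_share(detections):
    """Fraction of detected vehicles that are trucks.

    `detections` is a list of vehicle-type labels from an upstream
    classifier (hypothetical labels, for illustration only).
    """
    counts = Counter(detections)
    total = sum(counts.values())
    return counts["truck"] / total if total else 0.0

# Illustrative hour of detections at one camera:
sample = ["car"] * 80 + ["truck"] * 15 + ["motorcycle"] * 5
print(truck_share(sample))  # 0.15
```

Tracking this share over time, rather than raw volume alone, is what separates a goods-movement signal from mere mobility.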

Cars and trucks on Zealand March 2020

The effects can be seen pretty clearly. One example is how traffic quickly fell after Denmark was put on lockdown. This figure shows the volume of truck and car traffic on Zealand in March. On the evening of the 11th, Prime Minister Mette Frederiksen announced that all public workplaces would shut down and employees would work from home. On the 13th the borders closed. This resulted in a significant drop that echoes the decrease in demand due to the lockdown of restaurants, cafes and most stores. While it was not illegal to drive around, it is clear that truck traffic dropped much more than car traffic. If we were just measuring the total volume of traffic, that may not have been apparent.

Another example is from New York where we measured traffic in the whole city. Here is an illustrative sample from December. 

Trucks in New York City December 2018

We can see a lot of truck traffic in the days leading up to Christmas Day, right until the last day on which people are shopping. Then, just after Christmas, we see a similarly high number of trucks, presumably carrying returned gifts, but then traffic levels off for the rest of the month, falling all the way back to the level of Christmas Day because of the sudden decrease in demand.

These are just illustrative examples of the correlation between truck traffic and demand. We would expect to see a surge in truck traffic when the economies of our cities are really picking up, and not until then.

Using traffic data to understand the impact of COVID -19 measures

We at Sensorsix have built a tool for ambient intelligence. Ambient intelligence is knowledge about what goes on around us. In our case it is built on what we can learn about human mobility from sensors. We have been in stealth until now, working on a prototype to quantify the flow of human movement, in particular traffic. Basically, we use machine learning to extract information from video feeds to measure the volume of vehicles, pedestrians, bikes, etc. across time at select locations.

As part of our testing of the product we had set up monitoring of the region of Zealand in Denmark. For those unfamiliar with the geography of Denmark, Zealand is the island on which the capital region of Copenhagen is located. The region is home to almost 2.3 million people. We wanted to understand the ebb and flow of traffic, the heartbeat of the region if you will.

We started this test on Sunday March 8th. On the evening of March 11th the prime minister of Denmark, Mette Frederiksen, closed all schools and required all public employees to work from home. Most schools and institutions closed down already the following day. On Friday the 13th at noon the borders to our neighboring countries were closed as well. Since Zealand is next to and deeply integrated with Sweden these two events would be expected to have a significant impact on mobility in the Zealand region. 

Since we were monitoring the traffic from before the decision, we are able to accurately quantify and visualize the flow of traffic. The figure below displays traffic volume from noon Sunday March 8th until noon Sunday March 15th. For reasons of simplicity we chose to focus on cars, so the figure only displays cars. Different patterns may exist for other types of vehicles, but the majority of traffic is cars.

When we look at the pattern, what we see is the usual heartbeat of a city. Previous research and our own pilots in New York have shown the same pattern, where traffic increases in the morning, has a noon dip and then rises in the afternoon and evening. But it is clear that even if the pattern is recognizable, the heartbeat is losing power. Just how much is clearer from the figure below. Here we see a jaw-dropping fall of about 75% in traffic volume.

These are just some preliminary findings that we wanted to share for reflection and in the service of public information. Based on our data we can see that this is not a drill! It is not fake news. It is not tendentious journalism finding a deserted or heavily trafficked road depending on what it wants to see. It is not exaggerated, and it is not played down. It is a 75% drop regardless of how you frame it. In these times of fake news it is all the more important to get solid facts on the table. This is exactly what we built Sensorsix for. In all modesty, we are probably the only ones in the world who can tell what actually goes on in traffic.

What can this be used for?

A fair question is therefore what we can use this data for. Is it just another piece of data to throw on the heap? We think not. In the current corona context, there are at least three key issues that solid ambient intelligence can help solve:

Compliance – do people really stay at home, or do they ignore the orders political leaders are giving them? This is an interesting, fact-based way of monitoring compliance with curfews and other measures limiting traffic.

Efficiency – since traffic volume is a good proxy for the degree of quarantine a society is enforcing, it is potentially an important metric. The frequency of interaction between people is an important variable in the spread of an epidemic disease, and understanding the trends in mobility will give an indication of what that is. In the longer term we should be able to correlate it with the effect on the number of infections and on morbidity. Obviously the effect will be delayed.

Economic activity – it should be possible to correlate the flow of traffic with economic activity. Initially it will of course be a drop and similarly the effect will be delayed. We can use the data to understand the economic impact a drop has. Eventually it should turn around and the rise in traffic volume should be the first harbinger of increase in economic activity. 

We will keep monitoring the traffic and supply other interesting insights that we can mine from our data. 

Note on methodology: we are continually monitoring roads leading to all entry points to and exit points from Zealand, which means all bridges and major ports. All traffic that comes into or goes out of the Zealand region is quantified. Based on this data we generate a volume score that is tracked continually.
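The methodology note can be made concrete with a small sketch (the site names and the simple summation are assumptions for illustration; the actual Sensorsix scoring may differ): sum the counts from all monitored entry and exit points into one volume score, then express a given day relative to a pre-lockdown baseline.

```python
def volume_score(counts_by_site):
    """Aggregate vehicle counts from all monitored sites into one score."""
    return sum(counts_by_site.values())

def percent_change(score, baseline):
    """Change relative to a baseline period, in percent."""
    return 100.0 * (score - baseline) / baseline

# Illustrative numbers: a pre-lockdown day vs. a post-lockdown day.
baseline = volume_score({"bridge_a": 52_000, "bridge_b": 31_000, "port_c": 17_000})
locked = volume_score({"bridge_a": 13_000, "bridge_b": 8_000, "port_c": 4_000})
print(percent_change(locked, baseline))  # -75.0
```

A drop like the one reported above would show up here as a score roughly a quarter of its baseline value.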

Why Your Organization Most Likely Shouldn’t Adopt AI Anytime Soon

Recently I attended the TechBBQ conference. Having been part of the organizing team for the very first one, I was impressed to see what it had developed into. When I came to get my badge, an energetic and enthusiastic volunteer asked me if I was “pumped”, but I was not pumped (as far as I understood what that meant), so I politely replied that I was probably as pumped as I was ever going to be.

Inside it was packed, and at one point a fascist-looking guy pushed me and told me to step aside. Just as I was getting ready to put up a fight and stand my ground, I noticed the crown prince of Denmark strolling by. So I left the guy with a warning and let him off the hook this time (maybe if I had been somewhat more pumped… also, I suspect that all of this played out as a blank stare from the point of view of the bodyguard).

On the exhibition floor I had the good fortune of chatting with a few McKinsey consultants at their booth. The couches were exquisite, and so would the coffee have been if they had offered me some. If there is one thing McKinsey can do, it is talk and do research, and currently they do a lot of talking and research on Artificial Intelligence (AI). I was lucky to get my hands on some of their reports detailing their take on Artificial Intelligence in general and AI in the Nordics in particular.

The main story line is the same one that you hear everywhere: AI is upon us and it promises great potential if not a complete transformation of the world as we know it. There are however a few conclusions that we should dive into a little bit more. 

The wonders of AI

In terms of investment in AI, two thirds of businesses allocate 3% or less of their investments to AI, and only 10% allocate 10% or more. If you read the tech news, you would be forgiven for thinking that 90% of companies were investing 100% or more in AI. So this observation alone is interesting: there is not a lot of actual investment going towards AI for the vast majority of companies. When you ask senior management and boards, there is a bit of a waiting game, where they look more towards competitors' moves than to the actual potential of AI.

The status of adoption is that in the Nordics 30% of companies (compared to 21% globally) have embedded at least one AI technology across their business. This could be taken to mean that the Nordics are ahead of the curve compared to the global market. It could also be due to the Nordics having a higher general level of digitalization.

Taken together, these things suggest that AI as a technology is still in the innovator/early adopter category of the diffusion of innovations theory developed by Everett M. Rogers. Rogers developed a framework, backed by a body of research spanning multiple industries and technologies, that shows the pattern by which innovations of any type are adopted. AI is one such innovation, just like the Iowa farmers' adoption of 2,4-D weed spray that was Rogers's initial focus of investigation more than 50 years ago. The research showed that adoption takes the form of a bell curve.

 

Figure 1. Diffusion of innovations, credit: Wikimedia commons

 

The fact that companies are waiting for competitors to use AI also clearly indicates that we are in the early adopter or early majority phase, as this is typical behavior for these categories. Whereas innovators will go with anything as long as it is new, early adopters are more picky, and the early majority primarily looks at what the competition is doing in order to copy it.

If we look at figure 2, we can see that companies that have adopted AI today are vastly more profitable. The logic seems straightforward: there is a huge potential for AI to make companies more profitable.

 

 

Figure 2. AI adoption and profit margins (source: McKinsey Global Institute)

While this is indeed a tempting conclusion, we have to be cautious. Keep in mind that the companies adopting AI may just be more technologically proficient. The AI adoption could be confounded with adopter category and technology utilization in general. It could just mean that companies more open to innovation of any kind are on average more profitable than those who are not. It is well known that especially early adopters are more profitable than other adopter categories. 

To put it another way: adopting AI may result in you becoming more profitable, but it is not certain that AI is the reason. What McKinsey doesn't tell us, but I expect them to know full well, is that the reverse is also true: investing in AI may actually set you up for failure.

AI adoption and adopter category

The issue here is that it may not be AI that is making these companies profitable; it may rather be their adopter category. The adopter category is related to company culture. A company culture that is friendly to new technologies will behave as an early adopter: monitor the market and selectively choose solutions that it thinks will give the company an advantage. This is what such companies do with any type of technology, not just AI. But we also have to remember that the reason they are successful is exactly their company culture and the fact that they are used to trying out new solutions.

They know that when you invest in something new, you don't just press install, next, next, finish and watch the money start flowing. They know that new technologies are rough around the edges and that there is going to be a lot of stop and start, two steps forward and one step back. They are driven by a belief that they will fix it somehow. More importantly, they have a sufficient number of people with a “can-do attitude” who are not afraid to leave their comfort zone (see figure 3).

 

 

Figure 3. Where magic happens

Now, compare this with organizations that have more people with a “not-invented-here attitude”. Their company culture places them in the late majority and laggard categories. For this type of organization, innovations are something to be shunned; they know what they are doing and consider it a significant risk to do anything differently. Their infrastructure is not geared towards making experimental and novel technologies work. It is geared towards efficiency and making well-known technology work in a predictable manner.

Let's do a thought experiment about how this will play out. Karma Container, a medium-sized shipping company, decides to send Fred, an inspired employee, to TechBBQ. They still have mission-critical applications running on the mainframe and Windows NT servers (because Linux and macOS are not in use anywhere), and upgrades are a major concern that has the CIO biting his nails every time. Fred comes back from the conference energized. He spoke to the same McKinsey consultants and read the same reports that I did. He pitches to his CIO that they should invest in AI, because the numbers clearly indicate that it would increase the company's profitability. The last time they invested in any new technology was to move their telephones to IP telephony and implement help desk software. The CIO says OK, and they decide to try to adopt a chatbot integrated with their help desk and website.

So, with a budget and a formal project established, Fred starts. They wonder who in the organization would actually implement it. They go to the database administrator, who looks at them as if they were suddenly speaking a different language. He has no idea. They go to the .NET developer, who fails to appreciate how this could in any way involve him. They then go to the system administrators, who quickly show them the door on account of a purported acute security event. They don't get back to the project team either.

Remember that at this point they haven’t even started to figure out who would maintain, patch and upgrade the system, who would be responsible when it behaves strangely, or who would support it. Fred quickly gives up and returns to his job of managing Remedy tickets.

 

Beware of AI

The point of this thought experiment (loosely based on real-life experience, though the names and details have been changed) is that even if AI does have much to offer in terms of profitability and efficiency, it is not a realistic choice for most companies at this point. I would even go so far as to say that AI should be avoided by most companies altogether, unless they have a track record and a company culture that indicate they could make it work.

Most AI solutions are not mature enough, that is, not easy enough to use, and more importantly, their value proposition is speculative. If an organization is not geared towards implementing experimental technologies, it is wasting time, money and effort on trying. This is why most companies are better off waiting. The situation is similar to websites in the 1990s: they were not for everyone, but today anyone can click a few times and create a beautiful site in WordPress or another CMS. Once we have the equivalent of a WordPress for AI, that is when most companies should invest.

Diffusion of innovations simply takes time; it cannot and should not be forced. The current AI hype is also a result of innovators and early adopters being louder and more opinion-forming than most companies. Most companies are better off waiting for the dust to settle and for more mature and comprehensive solutions to appear.

 

AI, Traffic Tickets and Stochastic Liberty

Recently I received a notification from Green Mobility, the electric car-sharing company I use from time to time. I have decided not to own a car any longer and to experiment with other mobility options, not that I care about the climate, it’s just, well, because. I like these cars a lot, I like their speed and acceleration and the fact that you can just drop them off and never think about them again. Apparently I enjoyed the speed and acceleration a little too much, since the notification said the police claimed that I (allegedly) was speeding on one of my trips. For a very short period of time I toyed with the “It-wasn’t-me” approach, but quickly decided against it, since technology was quite obviously not on my side here. Then I directed my disappointment at not receiving complete mobility immunity along with all the other perks of not owning my car against the company, which charged me an extra fee on top of the ticket, a so-called administration fee. But that was a minor fee anyway. Then I decided to rant at the poor support person because they called it a parking ticket in their notification and I obviously wasn’t parking, according to the photo. Although in my heart I did realize that this was not going anywhere.

I believe this is a familiar feeling to any of my fellow motorists: the letter in the mail displaying your innocent face at the wheel of your car and a registered speed higher than allowed, along with payment details for the ticket you received for the violation. It is interesting to observe the anger we feel and the unmistakable sense that this is deeply unfair, even though it is obviously not. The fine is often the result of an automated speed camera that doesn’t even follow normal working hours or take lunch breaks (an initial reason for it feeling unfair). A wide suite of mobility products like GPS systems and Waze keeps track of these speed cameras in real time. Some people follow and report this with something approaching religious zeal. But what is the problem here? People know, or should know, the speed limit, and know you will get a ticket if you are caught. The operative part of this sentence seems to be the “if you are caught” part. More about that in a minute.

The Technology Optimisation Paradox

Last year I was working with the City of New York to pilot a system that would use artificial intelligence to detect different things in traffic. Like most innovation efforts in a city context, it was not funded beyond the hours we could put into it, so we needed to get people excited and find a sponsor to take the solution we were working on further. Different suggestions about what we should focus on came up. One of them was that we should use the system to detect traffic violations and automatically fine car owners based on the license plate.

This is completely feasible. I have received tickets myself based on my license plates, so I gathered that the technology would be a minor issue. We could then roll it out on all of the approximately 3,000 traffic cameras already in the city. Imagine how much revenue that could bring in. It could probably sponsor a couple of new parks or sports grounds, or even a proper basketball team for New York. At the same time it would improve traffic flow, because fewer people would double-park or park in bus lanes. On the face of it, this seems like a clear win-win solution: we could improve traffic for all New Yorkers, build new parks and have a team in the NBA playoffs (eventually). We felt pretty confident.
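To see why the pitch looked so attractive on paper, a rough back-of-envelope sketch helps. Only the camera count comes from the project; every other number below is a made-up assumption for illustration, not city data.

```python
# Back-of-envelope revenue estimate for automated ticketing.
# Only `cameras` comes from the text; the rest are hypothetical assumptions.
cameras = 3000                     # traffic cameras already in the city
violations_per_camera_per_day = 4  # assumed
average_fine = 115.0               # assumed average fine, in dollars
billable_share = 0.5               # assumed share of detections that become paid fines

daily_revenue = cameras * violations_per_camera_per_day * average_fine * billable_share
annual_revenue = daily_revenue * 365

print(f"${annual_revenue:,.0f} per year")
```

Even with deliberately modest assumptions, the figure lands in the hundreds of millions per year, which is exactly why the idea felt like an obvious win before the politics entered the picture.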

This is where things got complicated. We quickly realized that this was not a pitch that would energize anyone, at least not in a way that was beneficial to the project. Even though people are getting tickets today, and do not seriously suggest that they should not, the idea of OPTIMIZING this function in the city seemed completely off. This is a general phenomenon in technological solutions, which I call the “Technology Optimization Paradox”: optimizing a function that is deemed good and relevant leads to resistance once a certain optimization threshold is crossed. If the function is good and valuable, there should be no logical reason why doing it better would be worse, but this is sometimes how people feel. The paradox is often seen in the area of law enforcement. We don’t want surveillance, even though it would greatly aid the fight against terrorism. We like the work of the FBI that leads to arrests and the exposure of terrorist plots, but we don’t want to open our phones to pervasive eavesdropping.

Stochastic Liberty

This is where we get back to the “if you are caught” part. Everyone agrees that it is fair to be punished for a crime if you are caught. The emphasis here is on the “if”. When we use technology like AI, we get very close to substituting the “if” with a “when”. This is what we feel is unfair. It is as though we have an intuitive expectation that we should have a fair chance of getting away with something. This is what I call the right to stochastic liberty: the right of the individual to have events remain non-deterministic, especially adverse events. We want the liberty of a chance to get away with an infringement. This is the issue many people have with AI when it is used for certain types of tasks, specifically tasks that have an optimization paradox. It takes away the stochastic liberty; it takes away the element of chance.
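The gap between “if” and “when” can be made concrete with a little expected-value arithmetic. The sketch below compares a driver’s expected yearly fines under sparse human enforcement versus near-perfect AI enforcement; all the numbers are hypothetical assumptions for illustration.

```python
def expected_fines(violations: int, p_detected: float, fine: float) -> float:
    """Expected total fines when each violation is detected independently
    with probability p_detected."""
    return violations * p_detected * fine

# Hypothetical numbers: 100 minor violations a year, a $150 fine each.
human_era = expected_fines(violations=100, p_detected=0.02, fine=150.0)  # rare spot checks
ai_era = expected_fines(violations=100, p_detected=1.0, fine=150.0)      # AI on every camera

print(human_era)  # 300.0
print(ai_era)     # 15000.0
```

The point is not the exact figures but the change in kind: as the detection probability approaches 1, enforcement stops being a gamble and becomes a certainty, which is precisely the loss of stochastic liberty described above.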

Let us look at some other examples. When we do blood work, do we want AI to automatically tell us about all our hereditary diseases, so the doctor can tell us that we need to eat more fiber and stop smoking? No sir, we quietly assert our right to stochastic liberty and the idea that maybe we will be among the 1% who live to be 90 fuelled on a diet of sausages, fries and milkshakes, even though half our family died of heart attacks before they turned 40. But do we want AI to detect a disease we suspect we might have? Yes!

Do we want AI to automatically detect when we have put too many deductions on our tax return? No way, we want our stochastic liberty. Somebody in the tax department must sit sweating and justify why regular citizens’ tax returns are being looked through. At most we can accept the occasional spot test (like the rare traffic police officer, who also has to take a break, get lunch and check the latest sports results; that’s fair). But do we want AI to help us find systematic money laundering and tax-evasion schemes? Hell yeah!

Fear of the AI God

Our fear of AI is that it would become this perfect god that would actually enforce all the ideals and ethics that we agree on (more or less). We don’t want our AI to take away our basic human right of stochastic liberty.

This is a lesson you don’t have to explain to politicians, who ultimately run the city and decide what gets funded and what doesn’t. They know that unhappy people getting too many traffic tickets they consider unfair will not vote for them. This is what some AI developers and technocrats fail to appreciate when they talk about how we can use AI to make the city a better place. The city is a real place, where technology makes real impacts on real people, and the dynamics of a technology solution exceed those of the system in isolation. This is a learning point for all technology innovation involving AI: certain human preferences and political realities impose limits on an AI solution just as real as the choice of algorithm, IOPS and CPU usage.