Why Your Organization Most Likely Shouldn’t Adopt AI Anytime Soon

Recently I attended the TechBBQ conference. Having been part of the organizing team for the very first one, I was impressed to see what it had developed into. When I came to get my badge, the energetic and enthusiastic volunteer asked me if I was “pumped”. I was not pumped (as far as I understood the term), so I politely replied that I was probably as pumped as I was ever going to be.

Inside was packed, and at one point a fascist-looking guy pushed me and told me to step aside. Just as I was getting ready to put up a fight and stand my ground, I noticed the crown prince of Denmark strolling by. So I left him with a warning and let him off the hook this time (maybe if I had been a bit more pumped… I also suspect that all of this played out as a blank stare from the bodyguard's point of view).

On the exhibition floor I had the good fortune of chatting with a few McKinsey consultants at their booth. The couches were exquisite, and so would the coffee have been if they had offered me some. If there is one thing McKinsey can do, it is talk and do research, and currently they do a lot of talk and research on Artificial Intelligence (AI). I was lucky to get my hands on some of their reports that detail their views on Artificial Intelligence in general and AI in the Nordics in particular.

The main storyline is the same one you hear everywhere: AI is upon us, and it promises great potential, if not a complete transformation of the world as we know it. There are, however, a few conclusions that we should dive into a little more.

The wonders of AI

In terms of investment in AI, two thirds of businesses allocate 3% or less of their investments to AI, and only 10% allocate 10% or more. If you read the tech news you would be forgiven for thinking that 90% of companies were investing 100% or more in AI. So this observation alone is interesting: for the vast majority of companies, not much actual investment is going towards AI. When you ask senior management and boards, there is a bit of a waiting game, where they look more towards competitors' moves than to the actual potential of AI.

The status of adoption is that in the Nordics 30% of companies (compared to 21% globally) have embedded at least one AI technology across their business. This could be taken to mean that the Nordics are ahead of the curve compared to the global market. It could also be due to the Nordics having a higher general level of digitalization.

Taken together, these observations suggest that AI as a technology is still in the innovator/early adopter category of the diffusion-of-innovations theory developed by Everett M. Rogers. Rogers developed a framework, backed by research across multiple industries and technologies, that describes the patterns by which innovations of any type are adopted. AI is one such innovation, just like the Iowa farmers' adoption of 2,4-D weed spray that was Rogers' initial focus of investigation more than 50 years ago. The research showed that adoption takes the form of a bell curve.

 

Figure 1. Diffusion of innovations, credit: Wikimedia commons
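Rogers' categories are defined by standard deviations from the mean adoption time on this bell curve. A minimal sketch (a standard illustration of the model, not McKinsey's data) recovers the familiar percentages:

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal CDF via the error function (handles +/- infinity)."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Rogers' category boundaries, in standard deviations from the mean
# adoption time: innovators lie beyond -2 sd, laggards beyond +1 sd.
boundaries = {
    "innovators":     (float("-inf"), -2.0),
    "early adopters": (-2.0, -1.0),
    "early majority": (-1.0, 0.0),
    "late majority":  (0.0, 1.0),
    "laggards":       (1.0, float("inf")),
}

for name, (lo, hi) in boundaries.items():
    share = normal_cdf(hi) - normal_cdf(lo)
    print(f"{name:15s} {share:5.1%}")
```

This yields roughly 2.3% / 13.6% / 34.1% / 34.1% / 15.9%, which Rogers rounds to the familiar 2.5 / 13.5 / 34 / 34 / 16 split shown in figure 1.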

 

The fact that companies are waiting for their competitors to use AI also clearly indicates that we are in the early adopter or early majority category. Whereas innovators will go with anything as long as it is new, early adopters are more picky. The early majority primarily look at what the competition is doing in order to copy them.

If we look at figure 2, we can see that companies that have adopted AI today are vastly more profitable. The logic seems straightforward: there is a huge potential for AI to make companies more profitable.

 

 

Figure 2. AI adoption and profit margins (source: McKinsey Global Institute)

While this is indeed a tempting conclusion, we have to be cautious. Keep in mind that the companies adopting AI may just be more technologically proficient. AI adoption could be confounded with adopter category and technology utilization in general. It could just mean that companies more open to innovation of any kind are on average more profitable than those that are not. It is well known that early adopters in particular are more profitable than the other adopter categories.

To put it another way: adopting AI may result in you becoming more profitable, but it is not certain that AI is the reason. What McKinsey doesn't tell us, but I expect them to know full well, is that the reverse is also true: investing in AI may actually set you up for failure.

AI adoption and adopter category

The issue here is that it may not be AI that is making these companies profitable; it may rather be their adopter category. The adopter category is related to company culture. A company culture that is friendly to new technologies will behave as an early adopter: monitoring the market and selectively choosing solutions they think will give them an advantage. This is what they do with any type of technology, not just AI. But we also have to remember that the reason they are successful is precisely their company culture and the fact that they are used to trying out new solutions.

They know that when they invest in something new, you don't just press install, next, next, finish and the money starts flowing. They know that new technologies are rough around the edges and that there is going to be a lot of stop and start, two steps forward and one step back. They are driven by a belief that they will fix it somehow. More importantly, they have a sufficient number of people with a “can-do attitude” who are not afraid to leave their comfort zone (see figure 3).

 

 

Figure 3. Where the magic happens

Now, compare this with organizations that have more people with a “not-invented-here attitude”. Their company culture places them in the late majority and laggard categories. For this type of organization, innovations are something to be shunned; they know what they are doing and consider it a significant risk to do anything differently. Their infrastructure is not geared towards making experimental and novel technologies work. It is geared towards efficiency and making well-known technology work in a predictable manner.

Let's do a thought experiment about how this will play out. Karma Container, a medium-sized shipping company, decides to send Fred, an inspired employee, to TechBBQ. They still have mission-critical applications running on the mainframe and Windows NT servers (because Linux and MacOS are not in use anywhere), and upgrades are a major concern that has the CIO biting his nails every time. Fred comes back from the conference energized. He spoke to the same McKinsey consultants and read the same reports that I did. He pitches to his CIO that they should invest in AI because the numbers clearly indicate it would increase the company's profitability. The last time they invested in any new technology was to move their telephones to IP telephony and implement help desk software. The CIO says OK, and they decide to try to adopt a chatbot integrated with their help desk and website.

So, with a budget and a formal project established, Fred starts. They wonder who in the organization would actually implement it. They go to the database administrator, who looks at them as if they were suddenly speaking a different language. He has no idea. They go to the .NET developer, who fails to appreciate how this could in any way involve him. They then go to the system administrators, who quickly show them the door on account of a purported acute security event. They never get back to the project team either.

Remember that at this point they haven't even started to figure out who would maintain, patch and upgrade the system, who would be responsible when it behaves strangely, or who would support it. Fred quickly gives up and returns to his job of managing Remedy tickets.

 

Beware of AI

The point of this thought experiment (vaguely based on real-life experience, even though the names and details have been changed) is that even if AI does have much to offer in terms of profitability and efficiency, it is not a realistic choice for most companies at this point. I would even go so far as to say that AI should be avoided by most companies unless they have a track record and a company culture indicating they could make it work.

Most AI solutions are not mature enough, that is, easy enough to use, and more importantly the value proposition is speculative. If an organization is not geared towards implementing experimental technologies, it is wasting time, money and effort on trying. This is why most companies are better off waiting. The situation is similar to websites in the 1990s. They were not for everyone, but today anyone can click a few times and create a beautiful site in WordPress or another CMS. Once we have the equivalent of a WordPress for AI, that is when most companies should invest.

Diffusion of innovations simply takes time; it cannot and should not be forced. The current AI hype is also a result of innovators and early adopters being louder and more opinion-forming than most companies. Most companies are better off waiting for the dust to settle and for more mature and comprehensive solutions to appear.

 

AI, Traffic Tickets and Stochastic Liberty

Recently I received a notification from Green Mobility, the electric car-sharing company I sometimes use. I have decided not to own a car any longer and to experiment with other mobility options, not that I care about the climate, it's just, well, because. I like these cars a lot. I like their speed and acceleration, and that you can just drop them off and never think about them again. Apparently I enjoyed the speed and acceleration a little too much, since the notification said the police claimed I (allegedly) was speeding on one of my trips. For a very short period I toyed with the “it-wasn't-me” approach, but quickly decided against it, since technology was quite obviously not on my side here. Then I directed my disappointment at not receiving complete mobility immunity, along with all the other perks of not owning my car, against the company charging me an extra fee on top of the ticket, a so-called administration fee. But that was a minor fee anyway. Then I decided to rant at the poor support person because they had called it a parking ticket in their notification, and I obviously wasn't parking according to the photo. Although in my heart I did realize that this was not going anywhere.

I believe this is a familiar feeling to my fellow motorists: the letter in the mail displaying your innocent face at the wheel of your car and a registered speed higher than allowed, along with payment details for the ticket you received for the violation. It is interesting to observe the anger we feel and the unmistakable sense that this is deeply unfair, even though it obviously is not. The fine is often the result of an automated speed camera that doesn't even take normal working hours or lunch breaks (an initial reason for it being unfair). A wide suite of mobility products like GPS systems and Waze keeps track of these speed cameras in real time. Some people follow and report them with something approaching religious zeal. But what is the problem here? People know, or should know, the speed limit, and know you will get a ticket if you are caught. The operative part of that sentence seems to be the “if you are caught” part. More about that in a minute.

The Technology Optimisation Paradox

Last year I was working for the City of New York to pilot a system that would use artificial intelligence to detect different things in traffic. Like most innovation efforts in a city context, it was not funded beyond the hours we could put into it, so we needed to get people excited and find a sponsor to take the solution further. Different suggestions about what we should focus on came up. One of them was that we should use the system to detect traffic violations and automatically fine car owners based on their license plates.

This is completely feasible. I have received tickets myself based on my license plates, so I gathered that the technology would be a minor issue. We could then roll it out on all of the approximately 3,000 traffic cameras already in the city. Imagine how much revenue that could bring in. It could probably sponsor a couple of new parks or sports grounds, or even a proper basketball team for New York. At the same time it would improve traffic flow, because fewer people would double-park or park in bus lanes. When you look at it, it seems like a clear win-win solution. We could improve traffic for all New Yorkers, build new parks and have a team in the NBA playoffs (eventually). We felt pretty confident.

This is where things got complicated. We quickly realized that this was not a pitch that would energize anyone, at least not in a way that was beneficial to the project. Even though people are getting tickets today and do not seriously suggest that they should not, the idea of OPTIMIZING this function in the city seemed completely off. This is a general phenomenon in technological solutions, which I call the “Technology Optimization Paradox”: optimizing a function that is deemed good and relevant leads to resistance at a certain optimization threshold. If the function is good and valuable, there should be no logical reason why doing it better should be worse, but this is sometimes how people feel. It is often seen in the area of law enforcement. We don't want surveillance, even though that would greatly help the fight against terrorism. We like the FBI work that leads to arrests and the exposure of terrorist plots, but we don't want to open our phones to pervasive eavesdropping.

Stochastic Liberty

This is where we get back to the “if you are caught” part. Everyone agrees that it is fair to be punished for a crime if you are caught. The emphasis here is on the “if”. When we use technology like AI, we get very, very close to substituting the “if” with a “when”. This is what we feel is unfair. It is as though we have an intuitive expectation that we should have a fair chance of getting away with something. This is what I call the right to stochastic liberty: the right of the individual to have events remain non-deterministic, especially adverse events. We want the liberty of having a chance to get away with an infringement. This is the issue many people have with AI when it is used for certain types of tasks, specifically tasks that have an optimization paradox. It takes away the stochastic liberty; it takes away the element of chance.
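The difference between “if” and “when” is just a detection probability. A toy calculation (the rates are invented for illustration) shows how quickly occasional spot checks and always-on AI enforcement diverge:

```python
def p_caught(violations, detection_rate):
    """Chance of being caught at least once across independent violations."""
    return 1 - (1 - detection_rate) ** violations

# The occasional human spot check vs. a camera on every corner.
for label, rate in [("spot checks", 0.02), ("AI enforcement", 0.98)]:
    print(f"{label}: {p_caught(10, rate):.0%} chance of at least one ticket")
```

At a 2% detection rate, ten infringements still leave you a better-than-80% chance of getting away clean; at 98%, the “if” has effectively become a “when”.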

Let us look at some other examples. When we do blood work, do we want AI to automatically tell us about all our hereditary diseases, so the doctor can tell us that we need to eat more fiber and stop smoking? No sir, we quietly assert our right to stochastic liberty and the idea that maybe we will be the 1% who live to be 90 fuelled on a diet of sausages, fries and milkshakes, even though half our family died of heart attacks before they turned 40. But do we want AI to detect a disease that we suspect we might have? Yes!

Do we want AI to automatically detect when we have put too many deductions on our tax return? No way, we want our stochastic liberty. Somebody in the tax department must sit sweating and justify why regular citizens' tax returns are being looked through. At most we can accept the occasional spot test (like the rare traffic police officer, who also has to take a break, get lunch and check the latest sports results; that's fair). But do we want AI to help us find systematic money laundering and tax-evasion schemes? Hell yeah!

Fear of the AI God

Our fear of AI is that it would become this perfect god that would actually enforce all the ideals and ethics that we agree on (more or less). We don’t want our AI to take away our basic human right of stochastic liberty.

This is a lesson you don't have to explain to politicians, who ultimately run the city and decide what gets funded and what doesn't. They know that unhappy people getting too many traffic tickets they think are unfair will not vote for them. This is what some AI developers and technocrats fail to appreciate when they talk about how we can use AI to make the city a better place. The city is a real place where technology makes real impacts on real people, and the dynamics of technology solutions exceed those of the system in isolation. This is a lesson for all technology innovation involving AI: certain human preferences and political realities impose the same kinds of limits on an AI solution as the type of algorithm, IOPS and CPU usage.

 

The Challenges of Implementing AI Solutions – A Personal Journey

For about a decade I have been involved in various system development efforts that involved Artificial Intelligence. They have all been challenging, but in different ways. Today AI is rightfully considered a game changer in many industries and areas of society, but it makes sense to reflect on the challenges I have encountered in order to assess the viability of AI solutions.

10 years of AI

About 10 years ago I designed my first AI solution, or Machine Learning as we typically called it back then. I was working in the retail industry at the time, trying to find the optimal way of targeting individual customers with the right offers at the right time. Lots of thought went into it, and I worked with an awesome university professor (Rune Møller Jensen) to identify and design the best algorithm for our problem. This was challenging but not completely impossible. This was before TensorFlow or any other comprehensive ML libraries had been developed. Nevertheless, everything died in protracted discussions about how to implement our design in SQL (which of course is not practical: how do you do a k-means clustering algorithm in SQL?), since that was the only language known to the BI team responsible for the solution.
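For context, the kind of clustering we were trying to shoehorn into SQL is only a few lines in a general-purpose language. A minimal k-means sketch (the customer data here is an invented toy, not the actual retail model):

```python
import random

def kmeans(points, k, iterations=20, seed=0):
    """Minimal k-means: alternate assignment and centroid-update steps."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iterations):
        # Assignment step: each point goes to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
            )
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centroids[i] = tuple(sum(dim) / len(cluster) for dim in zip(*cluster))
    return centroids

# Toy customer data: (visits per month, average basket size).
customers = [(1, 10), (2, 12), (1, 9), (10, 50), (11, 55), (9, 48)]
print(kmeans(customers, k=2))
```

The point is not the algorithm but the language: the assignment and update steps map naturally onto loops and means, and only very awkwardly onto set-based SQL.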

Fast-forward a few years and I find myself in the financial services industry, trying to build models to identify potential financial crime. Financial crime has a pattern, and this time the developers had an adequate language to implement AI and were open to using the newest technologies, such as Spark and Hadoop. We were able to generate quite a few possible ideas and POCs, but everything again petered out. This time the challenge was the regulatory wall, or rather various more or less defined regulatory concerns. Again, the cultural and organizational forces against the solution were too big to actually generate a viable product (although somehow we did manage to win a big data prize).

Fast-forward even more, until today. Being responsible for designing data services for the City of New York, I find the forces I encountered earlier in my career are still there, but the tide is turning: more people know about AI and take courses preparing them for how it works. Now I can actually design solutions with AI that will get implemented without serious internal forces working against them. But the next obstacle is already waiting, and this time it is particular to government and not present in private industry. When you work for a company, it is usually straightforward to define what counts as good, that is, something you want more of, like, say, money. In the retail sector, at the end of the day all they cared about was sales. In the financial services sector it was detecting financial crime. In the government sector it is not as straightforward.

What drives government AI adoption?

Sure, local, state and federal government will always say that they want to save money. But really, the force driving everything in government is something else: public perception. That is what gets officials elected, and elected officials define the path and appoint the directors who hire the people who will ultimately decide which initiatives get implemented. Public perception is only partially defined by efficiency and monetary results. There are other factors that interfere with success, such as equity, fairness and transparency.

Let me give some examples to explain. One project I am working on has to do with benefits eligibility. Initially City Hall wanted to pass legislation that would automatically sign up residents for benefits. However, after intervention by experts, this was changed to doing a study first. The problem is that certain benefits interfere with other benefits, and signing you up for something you are eligible for may affect you negatively, because you could lose another benefit.

While this is not exactly artificial intelligence, it is still an algorithm that displays the same structural characteristics: the algorithm magically makes your life better. Even if we could make the algorithm compute the maximum value across all available benefits and sign the resident up for the optimal combination, we still would not necessarily thrill everyone. Since benefits are complex, it might be that some combination gives you more in the long term rather than the short term. What then if the resident prefers something in the short term? What if a family gets evicted and has to live in a shelter because the system failed to detect eligibility due to bad master data?
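To make the structural point concrete, here is a deliberately toy version of the problem, with invented benefit names, amounts and interactions. Even brute-forcing the “optimal” combination only optimizes the single number we told it to:

```python
from itertools import combinations

# Hypothetical benefits with a monthly dollar value; some pairs are
# mutually exclusive (enrolling in one disqualifies you from the other).
benefits = {"housing": 900, "childcare": 400, "food": 250, "transit": 120}
exclusions = {("housing", "childcare")}  # invented interaction

def feasible(combo):
    return not any(a in combo and b in combo for a, b in exclusions)

def best_combo(benefits, exclusions):
    """Brute-force the feasible combination with the highest total value."""
    best, best_value = (), 0
    names = list(benefits)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            value = sum(benefits[n] for n in combo)
            if feasible(combo) and value > best_value:
                best, best_value = combo, value
    return best, best_value

print(best_combo(benefits, exclusions))
```

The resident who needs childcare more than cash would rightly object to this answer; the long-term versus short-term preferences described above are exactly the dimensions such a maximization ignores.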

When I was in the retail industry, the equivalent would be a vegetarian getting an offer for steaks. Not optimal, but also not critical if we could just sell 10 more steaks. In the financial services industry, it would amount to a minor Mexican drug lord successfully laundering a few hundred thousand dollars. Again, not great, but also not a critical issue. In government, a family being thrown out on the street is a story that could be picked up by the media to show how bad the administration is. Even if homelessness drops 30%, it could be the difference between reelection and loss of power.

What does a success look like?

So, the reward structures are crucial for understanding what will drive AI adoption. Currently I am not optimistic about using AI in the City. Other recent legislation has mandated algorithmic transparency: the source code of every algorithm that affects decisions concerning citizens needs to be open to the public. While this makes sense from a public perception perspective, it does not from a technical one. Contrary to popular belief, I don't think AI will catch on in government any time soon. I think this can be generalized to any sector where the reward function is multi-dimensional, that is, where success cannot be measured by just one metric.

Do Product Managers Need to have Programming Experience?

Do you need to be able to program to be a good product manager? Opinions differ widely here.

Full disclosure: I have very little if any meaningful command of any programming language. If you feel you need to be able to program in order to have an informed opinion, you have already answered the question yourself and can safely skip this and read on.

So, just to get my answer out of the way: “no”

I would say no, just as you don't need to know how to lay bricks in order to be an architect, or to be a veterinarian in order to ride a horse.

When I hear people answer “yes” to the question, I always want to counter: is it necessary to know anything about humans in order to build tech products for humans? Very few, if any, make products that do not crucially depend on and interact with humans, but it has always been curious to me why that part of the equation is assumed to be trivial, requiring no sort of experience or education.

This is even more puzzling when you consider that the prevalent cause of product failure seems to be the human part of it. Let me just mention three examples.

Remember Google Glass? That was a brilliant technology, but a failed product, due to a lack of understanding of what normal humans think is creepy. I wrote about this back in 2014 and observed:

A product has to exist in an environment beyond its immediate users. Analysis of this environment and the humans that live in it could have revealed the emotional reactions.

 

Remember autonomous vehicles? Perfect technology, but unfortunately not necessarily considered as such by the humans who run the imperceptible risk of being killed by it and who live with the results of the actions of the AI, which will eventually be traced back to humans somewhere. This is something I touched on in a recent blog post:

We would still have to instill the heuristics and the tradeoffs in the AI, which then leads back to who programs the AI. This means that suddenly we will have technology corporations and programmers making key moral decisions in the wild. They will be the intelligence inside the Artificial Intelligence.

 

The same goes for product features like the number of choices you have. You might assume that more choice means more value to the product, but keep in mind that if the product is used by humans, you have to think about the constraints humans bring:

In general the value of an extra choice increases sharply in the beginning and then quickly drops off. Being given the choice of apples, oranges, pears, carrots and bananas is great, but when you can also choose between three different sorts of each, the value of being offered yet another type of apple may even be negative. The reason for this phenomenon has to do with the limits of human consciousness.

 

The root cause of product failure is typically not technical but human. So rather than asking product managers about their command of programming languages, maybe check where they fall on the autism spectrum. Maybe ask whether they have ever studied anything related to human factors, like psychology, anthropology, sociology or similar topics, that would allow them to make products that work well for humans.

 

This post is based on my response on Quora to the question: “Is it necessary for a product manager to know a programming language?”

 

AI is Easy – Life is Hard

Artificial Intelligence is easy; life is hard. This simple insight should temper our collective expectations. When we look at Artificial Intelligence and the amazing results it has already produced, it is clear that it has not been easy. The most iconic victories are these:

  • Deep Blue beats Kasparov at chess
  • Watson beats champions Brad Rutter and Ken Jennings in Jeopardy!
  • Google’s AlphaGo beats the world champion in Go

Today, Autonomous Vehicles (AVs) manage to stay on the road and go where they are expected to. Add to this the many implementations of face recognition, speech recognition, translation and so on. What more could we want to show us that it is just a matter of time before AI becomes truly human in its ability? Does this not reflect the diversity of human intelligence, truly mastered by technology?

Actually, no, I don't think so. From a superficial point of view it could look like it, but deep down all these problems are, if not easy, then hard in an easy way, in the sense that there is a clear path to solving them.

 

AI Is Easy 

The one thing that holds true for all of these applications is that the goals are very clear. Chess, Jeopardy and Go: you either win or you don't. Facial, speech and any other kind of recognition: you recognize something or you don't. Driving an autonomous vehicle: it either drives acceptably according to the traffic rules or it doesn't. If only human life were so simple.

Did you know from birth what you wanted to work with? Did you know the precise attributes of the man or woman you were looking for? Did you ever change your mind? Did you ever want to do two or more mutually exclusive things (like eating cake for breakfast and living a healthy life)?

Humans are so used to constantly evaluating trade-offs, with unclear and frequently changing goals, that we don't even think about it.

 

An AI Thought Experiment 

Let me reframe this in the shape of a tangible, existing AI problem: Autonomous Vehicles. Now that they are very good or even perfect at always staying within the traffic rules, how do they behave when conditions are not as clear? Or even in situations where the rules might conflict?

Here is a thought experiment: a self-driving car is driving on a sunny spring afternoon through the streets of New York. It is a good day, and it is able to keep a good pace. On its right is a sidewalk with a lot of pedestrians (as is common in New York); on its left is a traffic lane going in the opposite direction, as on two-way streets (which are rarer but not altogether absent). Suddenly a child runs into the road in front of the car, and it is impossible to brake in time. The autonomous vehicle needs to make a choice: does it run over the child, make an evasive maneuver to the right, hitting pedestrians, or swerve to the left, hitting cars going the other direction?

How do we prepare the AI to make that decision? Now the goals are not as clear as in a game of Jeopardy. Is it more important not to endanger children? Let's say, for the sake of argument, that this is the key moral heuristic. The AI would then have to calculate how many children were on the sidewalk and in a given car on the opposite side of the road. It might kill two children on the sidewalk or in another car. What if there were two children in the autonomous vehicle itself? Does age factor into the decision? Is it better to kill old people than young? What about medical conditions? Would it not be better to hit a terminal cancer patient than a healthy young mother?
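One way to see why there is no neutral answer is to write the decision down as a cost function. In this toy sketch (all people, ages and weights are invented) the “optimal” maneuver flips when someone turns a single moral knob:

```python
# Each maneuver harms a hypothetical group; the cost of harming a person
# depends on weights someone has to choose, e.g. extra weight on children.
maneuvers = {
    "straight": [{"age": 7}],                 # the child in the road
    "right":    [{"age": 35}, {"age": 40}],   # two pedestrians on the sidewalk
    "left":     [{"age": 30}, {"age": 33}],   # two people in the oncoming car
}

def cost(people, child_weight):
    """Total harm, counting each child as child_weight adults."""
    return sum(child_weight if p["age"] < 18 else 1.0 for p in people)

def best_maneuver(child_weight):
    """Pick the maneuver with the lowest total cost under these weights."""
    return min(maneuvers, key=lambda m: cost(maneuvers[m], child_weight))

print(best_maneuver(child_weight=1.0))  # all lives weighted equally
print(best_maneuver(child_weight=5.0))  # children weighted five times heavier
```

With equal weights the function coldly picks the single casualty; weight children more heavily and it swerves into the adults. Neither weight is computable from data; someone, in practice a programmer, has to choose it.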

The point of this thought experiment is simply to highlight that even if the AI could make an optimal decision, it is not simple to say what optimal means. It may indeed differ across people, that is, across the regular human beings who would be judging it. There are hundreds of thousands of similar situations where, by definition, there is no one right solution, and consequently no clear goal for the AI to optimize towards. What if we had an AI as the next president? Would we trust it to make the right decisions in all cases? Probably not; politics is about sentiment, subjectivity and hard choices. Would we entrust an AI to go through all previous court cases, statistics and political objectives to make fair rulings and sentencing? No way, although it probably could.

 

Inside the AI Is the Programmer 

As can be seen from this, the intelligence in an AI must be explained by another intelligence. We would still have to instill the heuristics and the trade-offs in the AI, which leads back to whoever programs the AI. This means that suddenly we will have technology corporations and programmers making key moral decisions in the wild. They will be the intelligence inside the Artificial Intelligence.

In many ways this is already the case. A more peaceful case in point is online dating: a programmer has essentially decided, through the matching algorithm and the input used, who should find love and who shouldn't. Inside the AI is the programmer, making decisions no one ever agreed he or she should. Real Artificial Intelligence is as elusive as ever, no matter how many resources we throw at it. Life will throw us the same problems it always has, and at the end of the day the intelligence will be human anyway.

AI and the City

Artificial Intelligence is currently being touted as the solution to most problems. Most, if not all, energy is put into conjuring up new and ever more exotic machine learning models and ways of optimizing them. However, the primary boundary for AI is currently not technical, as it used to be. It is ecological. Here I am not thinking about the developer ecosystem, but the ecosystem of humans who have to live with the consequences of AI and interact with machines and systems driven by it. While AI lends itself beautifully to the concept of smart cities, this is also one of the avenues where this will most clearly play out, because the humans who stand to benefit, and potentially suffer, from the consequences of AI are also voters. Voters vote for politicians, and politicians decide whether to fund AI for smart cities.

How Smart is AI In A Smart City Context?

At a recent conference I had an interesting discussion where we were talking about what AI could be used for. Someone suggested that Machine Learning and AI could be used for smart cities. Working for a city and having worked with AI for a number of years, my question was “for what?” One suggestion was regulating traffic.

So, let us think through this. New York City has, on occasion, a lot of traffic. Let us say we are able to construct a machine learning system that could indeed optimize traffic flow through the city. This will not be simple or easy, but it is not outside the realm of the possible. Let us say that all intersections are connected to a central AI algorithm that provides the city as a whole with optimal traffic conditions. The algorithm works on sensor input that counts the number of cars at different intersections based on existing cameras. This will probably not mean that traffic always flows perfectly, but it will certainly do better on average.

Now imagine that during one of these congestions a fire erupts in downtown Manhattan and fire trucks are delayed in traffic due to congestion. 50 people die. The media then find out that the traffic lights are controlled by an artificial intelligence algorithm. They ask the commissioner of transportation why 50 people had to die because of the algorithm. This is not a completely fair question, but the media have been known to ask such questions. He tries to explain that the algorithm optimizes the overall flow of traffic. The media are skeptical and ask him to explain how it works. This is where it gets complicated. Since this is in part a deep learning algorithm, no one can really tell how it works or why there was congestion delaying the fire trucks at that particular time. The outrage is palpable and headlines read “City has surrendered to deadly AI” and “Incomprehensible algorithm leads to incomprehensible fatalities”.

Contrast this to a simple algorithm that is based on clear and simple rules that are not as effective overall but work along the lines of 30 seconds one way 30 seconds another way. Who would blame the commissioner of transportation for congestion in that case?

Politics And Chaos

Media aside, there could be other limiting factors. Let us stay with our idea of an AI system controlling the traffic lights in New York City. Let us further assume that the AI system gets continuous input about traffic flow in the city. Based on this feed it can adapt the signals to optimize the flow. This is great, but because we have now coupled the system with thousands of feedback loops, it enters the realm of complex or chaotic systems and will start to exhibit the properties associated with such systems. Typical examples of these properties are erratic behavior, path dependency, and limited possibility for prediction.
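
The limited possibility for prediction can be illustrated with a classic toy model of a feedback loop, the logistic map. This is only an analogy for the traffic system, not a model of it: two states that start almost identically diverge completely after a few dozen iterations, which is why prediction in chaotic systems has such a short horizon.

```python
def logistic(x, r=3.9):
    """One step of the logistic map, a feedback rule x -> r*x*(1-x)."""
    return r * x * (1 - x)

def trajectory(x0, steps=50):
    """Iterate the feedback loop from a starting state x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.500000)
b = trajectory(0.500001)  # a one-in-a-million difference in starting state

# After 50 steps the two trajectories bear little resemblance to each other,
# even though the rule itself is completely deterministic.
print(abs(a[-1] - b[-1]))
```

Measure a city's traffic state with one-in-a-million precision and a coupled feedback system can still defeat your forecast; the only way to learn the true dynamics is to run the system.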

Massively scalable AI cannot easily counteract these effects, and even if it could, the true system dynamics would not be known until the system goes live. We would not know how many cars would run red lights or speed up and slow down compared to today. Possibly the system could be trimmed and made to behave, but then we run into basic politics. Which responsible leader would want to experiment with the daily lives of more than 10 million people? Who would want to face these people and explain that the reason they are late for work, or for their son’s basketball game, is the trimming of an AI algorithm?

The Limits Of AI

So, the limits to AI may not be primarily of a technical nature. They may have just as much to do with how the world behaves and with what other, non-data-scientist humans will accept. Even if it is better to lose 50 people in a fire in Manhattan once every ten years while reducing the number of traffic deaths by 100 every year, the stories written are about the one tragic event, not about the general trend. Voters remember the big media stories and will never notice a modest trend. Consequently, regardless of the technical utility and precision of AI, there will be cases where the human factor constrains the solutions more than any code or infrastructure.

Based on this thought experiment, I think the most important limits to adoption of AI solutions at city scale are the following:

  • Unclear benefits – what are the benefits of leveraging AI for smart cities? We can surely think up a few use cases, but it is harder than you think. Traffic was one, but even here the benefits can be elusive.
  • Algorithmic transparency – if we are to let our lives be dominated by AI in any important area, citizens who vote will want to understand precisely how the algorithms work. Many classes of AI algorithms are incomprehensible by nature and constantly changing. How can we prove that no one tampered with them in order to gain an unfair advantage? Real people who are late for work or are denied bail will want to know, and sometimes the Department of Investigation will want to know as well.
  • Accountability – whatever an algorithm is doing, people will want to hold a person accountable if something goes wrong. Who is accountable for malfunctioning AI? Or even well-functioning AI with unwanted side effects? The buck stops with the responsible person at the top, the elected or appointed official.
  • Unacceptable implementation costs – real-world AI in a city context can rarely be adequately tested in advance, as we are used to for enterprise applications. Implementing and trimming a real-world system may have too many adversarial effects before it starts to be beneficial. No matter how much we prepare, we can never know exactly how the human part of the system will behave at scale until we release it into the wild.

Artificial Intelligence is a great technological opportunity for cities, but we have to develop the human side of AI in order to arrive at something that is truly beneficial at scale.

 

Data Is the New Oil – Building the Data Refinery

“Data Is the New Oil!”

Mathematician and IT architect Clive Humby seems to have been the first to coin the phrase, in 2006, when he helped Tesco develop from a fledgling UK retail chain into an intercontinental industry titan rivaled only by the likes of Walmart and Carrefour, through the use of data from the Tesco reward program. Several people have reiterated the concept since. But the realization did not really hit primetime until The Economist in May 2017 claimed that data had surpassed oil as the most valuable resource.

Data, however, is not just out there, up for grabs. Just as you have to get oil out of the ground first, data poses similar challenges: you need to get it out of computer systems or devices first. And when you do get the oil out of the ground, it is still virtually useless. Crude oil is just a nondescript blob of black goo. Getting the oil out is only a third of the job. This is why we have oil refineries. Oil refineries turn crude oil into valuable and consumable resources like gas, diesel or propane. A refinery splits the raw oil into different substances that can be used for multiple different products, like paint, asphalt, nail polish, basketballs, fishing boots, guitar strings and aspirin. This is awesome; can you imagine a world without guitar strings, fishing boots or aspirin? That would be like Harry Potter just without the magic…

Similarly, even if we can get our hands on it, raw data is completely useless. If you have ever glanced at a webserver log, a binary data stream or other machine-generated output, you can relate to the analogy of crude oil as a big useless blob of black goo. All this data does not mean anything in itself. Getting the raw data is of course a challenge in some cases, but making it useful is a completely different story. That is why we need to build data refineries: systems that turn useless raw data into components we can build useful data products from.

Building the data refinery

For the past year or so, we have worked to design and architect such a data refinery at the City of New York. The “Data as a Service” program is the effort to build this refinery, turning raw data from the City of New York into valuable and consumable services to be used by City agencies, residents and the rest of the world. We have multiple data sources in systems of record, registers, logs, official filings and applications, inspections and hundreds of thousands of devices. Only a fraction of this data is even available today, and when it is available it is hard to discover and use. The purpose of Data as a Service is to make all the hidden data available and useful. We are turning all this raw data into valuable and consumable data services.

A typical refinery processes crude oil. This is done through a series of distinct processes and results in distinct products that can be used for different purposes. The purpose of the refinery is to break down the crude oil to distinct useful by-products. The Data as a Service refinery has five capability domains we want to manage in order to break the raw data down into useful data assets:

  • Quality is about the character and validity of the data assets
  • Movement is how we transfer and transform data assets from one place to another
  • Storage deals with how we retain data assets for later use
  • Discovery has to do with how we locate the data assets we need
  • Access deals with how we allow users and other solutions to interact with data assets

Let us look at each of these in a bit more detail.

Quality

The first capability domain addresses the quality of the data. The raw data is initially of low quality like the crude oil. It may be a stream of bits or characters, telemetry data, logs or CSV files.

The first thing to think about in any data refinery is how to assess and manage the quality of the data. We want to understand and control the quality of data. We want to know how many data objects there are, whether they are of the right format, and whether they are corrupted. Simple descriptive reports, like the number of distinct values, type mismatches or the number of nulls, can be very revealing and are important when considering how the data can be used by other systems and processes.
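
A descriptive quality report of this kind can be sketched in a few lines. This is a minimal illustration only, assuming a list-of-dicts dataset; the field names are invented for the example.

```python
def profile(rows, field):
    """Count total, nulls, distinct values and type mismatches for one field."""
    values = [r.get(field) for r in rows]
    non_null = [v for v in values if v is not None]
    types = {type(v) for v in non_null}
    return {
        "total": len(values),
        "nulls": values.count(None),
        "distinct": len(set(non_null)),
        "mixed_types": len(types) > 1,  # e.g. strings mixed into a numeric column
    }

rows = [
    {"borough": "Queens", "count": 12},
    {"borough": "Bronx", "count": None},
    {"borough": "Queens", "count": "12"},  # type mismatch: string, not int
]
print(profile(rows, "count"))
# → {'total': 3, 'nulls': 1, 'distinct': 2, 'mixed_types': True}
```

Even a report this simple immediately tells a downstream consumer whether the field can be trusted as a number or needs cleansing first.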

Once we know the quality of the data we may want to intervene and do something about it. Data preparation formats the data from its initial raw form. It may also validate that the data is not corrupted and can delete, insert and transform values according to preconfigured rules. This is the first diagnostic and cleansing of the data in the DaaS refinery.

Once we have the initial data objects lined up in an appropriate format, Master Data Management (MDM) is what allows us to work proactively and reactively to improve the data. With MDM we will be able to uniquely identify data objects across multiple different solutions and format them into a common semantic model. MDM enables an organization to manage data assets and produce golden records, identify and eliminate duplicates, and control which data entities are valid and invalid.
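
The golden-record idea can be sketched as follows. This is a deliberately simplistic illustration, not a real MDM engine: records sharing a key are merged into one golden record, keeping the first non-empty value per field. Real MDM uses far richer matching rules than exact key equality.

```python
from collections import defaultdict

def golden_records(records, key):
    """Merge duplicate records (same key) into one golden record each."""
    grouped = defaultdict(list)
    for rec in records:
        grouped[rec[key]].append(rec)
    merged = []
    for dupes in grouped.values():
        golden = {}
        for rec in dupes:
            for field, value in rec.items():
                # Keep the first non-empty value seen for each field.
                if value not in (None, "") and field not in golden:
                    golden[field] = value
        merged.append(golden)
    return merged

records = [
    {"id": "A1", "name": "Jane Doe", "phone": None},
    {"id": "A1", "name": "Jane Doe", "phone": "555-0100"},
    {"id": "B2", "name": "John Roe", "phone": ""},
]
print(golden_records(records, "id"))
```

The two "A1" duplicates collapse into a single record that has the phone number neither source record held completely on its own.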

Data movement

Once we have made sure that we can manage the quality of the data, we can proceed to the next phase: moving and transforming the data into more useful formats. We may, however, need to move data in different ways. Sometimes it is fine to move it once a day, week or even month, but more often we want the data immediately.

Batch is movement and transformation of large quantities of data from one form and place to another. A typical batch program is executed on a schedule and goes through a sequence of processing steps that transforms the data from one form into another. It can range from simple formatting changes and aggregations to complex machine learning models. I should add that what is sometimes called Managed File Transfer, where a file is simply moved, that is, not transformed, can be seen as a primitive form of batch processing, but in this context it is considered a way of accessing data and is described below.
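
The "sequence of processing steps" pattern can be sketched like this. It is a toy pipeline with invented step names, standing in for what a scheduled nightly job would do at scale.

```python
def clean(rows):
    """Normalize raw text rows and drop blanks (a simple formatting step)."""
    return [r.strip().lower() for r in rows if r.strip()]

def aggregate(rows):
    """Count occurrences of each value (a simple aggregation step)."""
    counts = {}
    for r in rows:
        counts[r] = counts.get(r, 0) + 1
    return counts

def run_batch(rows, steps):
    """Apply each processing step in order, as a scheduled batch job would."""
    data = rows
    for step in steps:
        data = step(data)
    return data

raw = ["Queens ", "bronx", "QUEENS", "  ", "Bronx"]
print(run_batch(raw, [clean, aggregate]))
# → {'queens': 2, 'bronx': 2}
```

Each step takes the previous step's output, so a pipeline can grow from two steps to a full machine learning workflow without changing the runner.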

The Enterprise Service Bus is a processing paradigm that lets different programmatic solutions interact with each other through messaging. A message is a small discrete unit of data that can be routed, transformed, distributed and otherwise processed as part of the information flow in the Service Bus. This is what we use when systems need to communicate across city agencies. It is a centralized orchestration.

But some data is not as nicely and easily managed. Sometimes we see use cases where the processing can’t wait for batch processing and the ESB paradigm does not scale with the quantities. Real-time processing works on data that arrives in continuous streams. It has limited routing and transformation capabilities, but is especially geared towards handling large amounts of data that come in continuously, either to store, process or forward.

Storage

Moving the data naturally requires places to move it to. Different ways of storing data have different properties and we want to optimize the utility by choosing the right way to store the data.

One of the most important and widespread ways to store data is the Data Warehouse. This is a structured store that contains data prepared for frequent ad hoc exploration by the business. It can contain pre-aggregated data and calculations that are often needed. Schemas are built in advance to address reporting needs. The Data Warehouse focuses on centralized storage and consequently on data that has utility across different city agencies.

Whereas Data Warehouses are central stores of high-quality, validated data, Data Marts are similar, local data stores. They resemble Data Warehouses in that the data is prepared to some degree, but the scope is more local: an agency doing analytics internally. Frequently the data schemas found there are also of a more ad hoc character and may not be designed for widespread consumption. The Data Mart also serves as a user-driven test bed for experiments. If an agency wants to create a data source and figure out whether it has any utility, the Data Mart is a great way to create value quickly, in a decentralized and agile manner.

Where Data Warehouses and Data Marts store structured data, a data lake is primarily a store for unstructured data, like CSV, XML and log files, as well as binary formats like video and audio. The data lake is a place to throw data first and think about how to use it later. There are several zones within the data lake with varying degrees of structure, like the raw, analytical, discovery, operational and archive zones. Some parts, like the analytical zone, can be as structured as Data Marts and be queried with SQL or a similar syntax (HiveQL), whereas others, like the raw zone, require more programming to extract meaning. The data lake is a key component in bringing in more data and transforming it into something useful and valuable.

The Operational Data Store is in essence a read replica of an operational database. It is used in order not to unnecessarily tax an operational, transactional database with queries.

The City used to have real warehouses filled with paper archives that burned down every now and then. The reason for this is that all data has a retention policy that specifies how long it should be stored. This need is still there when we digitize data. Consequently, we need to be in complete control of every data asset’s lifecycle. The archive is where data is moved when there is no longer a need to access it frequently; consequently, data access can have a long latency. Archives are typically used in cases where regulatory requirements warrant that data be kept for a specific period of time.

Discovery

Now that we have ways to control the quality, move the data and store it, we also need to be able to discover it. Data that cannot be found is useless. Therefore we need to supply a number of capabilities for finding the data we need.

If the user is in need of a particular data asset, search is the way to locate it. Based on familiar query functions, the user can use single words or strings. We all know this from online search engines. The need is the same here: to be able to intelligently locate the right data asset based on an input string.

When the user does not know exactly what data assets he or she is looking for, we want to supply other ways of discovering data. In a data catalog the user can browse existing data sources and locate the needed data based on tags or groups. The catalog also allows previews, as well as additional metadata about the data source, such as descriptions, data dictionaries and experts to contact.

In some cases a user group knows exactly what subset of data is needed. The data may not all reside in the same place or format. By introducing a virtual layer between the user and the data sources, it is possible to create durable semantic layers that remain even when data sources are switched. It is also possible to tailor specific views of the same data source to a particular audience. This way the view of the data caters to the needs of individual user groups rather than a catch-all, lowest-common-denominator version, which is particularly convenient since access to sensitive data is granted on a per-case basis. Data virtualization makes it possible for users to discover only the data they are legally mandated to view.
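
The tailored-views idea can be sketched with plain SQL views, here using SQLite as a stand-in for a real virtualization layer. The table and column names are invented for illustration: two audiences query the same underlying table, but the public-facing view simply omits the sensitive column.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE inspections (id INTEGER, address TEXT, inspector_ssn TEXT, result TEXT)")
db.execute("INSERT INTO inspections VALUES (1, '123 Main St', '000-00-0000', 'PASS')")

# A public-facing view omits the sensitive column; an internal view keeps it.
db.execute("CREATE VIEW public_inspections AS SELECT id, address, result FROM inspections")
db.execute("CREATE VIEW internal_inspections AS SELECT * FROM inspections")

public_cols = [c[0] for c in db.execute("SELECT * FROM public_inspections").description]
print(public_cols)
# → ['id', 'address', 'result']
```

Because the views are a layer over the source, the underlying table can be replaced or restructured without breaking what each audience sees.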

Access

Now that we are in control of the quality of data and who can use it, we also need to think about how we can let users consume the data. Across the city there are very different needs for consuming data.

Access by applications is granted through an API and supplies a standardized way for programmatic access by external and internal IT solutions. The API controls ad hoc data access and also supplies documentation that allows developers to interact with the data through a developer portal. Typically the data elements are smaller and involve a dialogue between the solution and the API.

When files need to be moved securely between different points without any transformation, a managed file transfer solution is used. This is also typically accessed by applications, but a portal also allows humans to upload or download the file. This is to be distinguished from document-sharing sites like SharePoint, WorkDocs, Box and Google Docs, where the purpose is for human end users to share files with other humans and typically cooperate on authoring them.

An end user will sometimes need to query a data source in order to extract a subset of the data. Query allows this form of ad hoc access to underlying structured or semi-structured data sources. This is typically done through SQL. An extension of this is natural language queries, through which the user can interrogate a data source through questions and answers. With the advent of colloquial interfaces like Alexa, Siri and Cortana, this is something we expect to develop further.

A stream is a continuous sequence of data that applications can use. The data in a stream is supplied as a subscription in a real-time fashion. This is used when time and latency are of the essence. The receiving system will need to parse and process the stream by itself.

Contrary to this, events are already processed and are essentially messages that function as triggers from systems, indicating that something has happened or should happen. Other systems can subscribe to events and implement adequate responses to them. Similar to streams, they are real time, but contrary to streams they are not continuous. They also resemble APIs in that they are usually smaller messages, but differ in that they implement a push pattern.
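
The contrast between the two patterns can be sketched in a few lines of in-memory code. The names are illustrative, not an actual City API: a stream the consumer pulls and parses itself, versus discrete events pushed to subscribers.

```python
# Stream: a continuous sequence the receiver must parse on its own.
def sensor_stream():
    for reading in ["42.1", "42.3", "41.9"]:
        yield reading  # raw items; the receiver does the parsing

readings = [float(r) for r in sensor_stream()]  # pull pattern

# Events: discrete, already-processed messages pushed to subscribers.
subscribers = []

def subscribe(handler):
    subscribers.append(handler)

def publish(event):
    for handler in subscribers:
        handler(event)  # push pattern: the broker calls the subscribers

seen = []
subscribe(lambda e: seen.append(e))
publish({"type": "permit_issued", "id": 7})
print(readings, seen)
```

With the stream, the consumer drives the loop and interprets raw data; with the event, the producer drives the loop and the payload arrives already meaningful.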

Implementing the refinery

Naturally, some of this has already been built, since processing data is not something new. What we try to do with the Data as a Service program is to modernize existing implementations of the above-mentioned capabilities and plan for how to implement the missing ones. This involves a jigsaw puzzle of projects, stakeholders and possibilities. Like most other places, we are not working from a green field and there is no multi-million-dollar budget for creating all these interesting new solutions. Rather, we have to continuously come up with ways to reach the target incrementally. This is what I have previously described as pragmatic idealism. What is important for us, as I suspect it will be for others, is to have a bold and comprehensive vision for where we want to go. That way we can hold up every project or idea against this target and evaluate how we can continuously progress closer to our goal. As our team’s motto goes: “Enterprise Architecture – One solution at a time”.

Information System Modernization – The Ship of Theseus?

The other day I was listening to a podcast by Malcolm Gladwell. It was about golf clubs (which he hates). Living next to two golf courses and frequently running next to them, this was something I could relate to. The issue he had was that they did not pay proper tax. This is due to a California rule that property tax is frozen at pre-1978 levels unless more than 50% of ownership has changed.

The country clubs own the golf courses, and the members own the country clubs. Naturally, more than 50% of the members have changed since then. However, according to the tax authorities, this does not mean that 50% of the ownership has changed. The reason is that the gradual change of membership means the identity of the owning body has not changed. This is to some a peculiar philosophical stance, but not one without precedent. It is known through the ancient Greek writer Plutarch’s philosophical paradox, the Ship of Theseus, here quoted from Wikipedia:

“The ship wherein Theseus and the youth of Athens returned from Crete had thirty oars, and was preserved by the Athenians down even to the time of Demetrius Phalereus, for they took away the old planks as they decayed, putting in new and stronger timber in their places, in so much that this ship became a standing example among the philosophers, for the logical question of things that grow; one side holding that the ship remained the same, and the other contending that it was not the same.”

For Gladwell it was not clear that the gradual replacement of members in a country club constituted no change in ownership. Be that as it may, the story made me think about information system modernization, which typically makes up a huge part of many enterprise and government IT project portfolios. Information systems are like the Ship of Theseus: you want to keep the ship floating, but you also want to maintain it and make it better. The question is just: is information system modernization a Ship of Theseus?

The Modernization effort

Usually a board, CEO, CIO, commissioner or other body with responsibility for legacy systems realizes that it is time to do something about them. Maybe the last person who knows the system is already long overdue for retirement, operational efficiency has significantly declined, costs have expanded, or the market demands requirements that cannot easily be implemented in the existing legacy system. Whatever the reason, a decision to modernize the system is made: retire the old and replace it with the new.

Now this is where it gets tricky, because what exactly should the new be? Do we want a car or a faster horse? For many, the task turns into building a faster horse by default. Because we know what the system should do, right? It just has to do it a bit faster or a little bit better. The problem is that we are sometimes building Theseus a new rowboat with carbon fiber planks when we could instead have gotten a speedboat with an outdoor kitchen and a bar.

When embarking on a legacy modernization project, there are a few things I believe we should observe. I will use as an example a recent project to modernize the architecture of a central integration solution at the New York City Department of Information Technology and Telecommunications. This legacy system is itself a modernization of an earlier mainframe-based system (yes, things turn legacy very fast these days).

Some of the things to be conscious of in order not to end up in the trap of Theseus’ ship when modernizing systems are the following.

Same or Better and Cheaper

A modernized legacy system has to fulfill three general criteria: it should do the same as or more than today, with the same or better quality, at a cheaper price. It is that simple. When I say it should do the same as today, I would like to qualify that: if the system today sends sales reports to matrix printers and fax machines around the country, we probably don’t need that, even if it is a function today. The point is that all important functions and processes that are supported today should still be supported.

When we talk about quality we mean the traditional suite of non-functional requirements: Security, Maintainability, Resilience, Supportability etc. Quite often it is exactly the non-functional requirements that need to be improved, for example maintainability or supportability.

“At a cheaper price” is pretty straightforward. It is not always possible, such as when you are replacing a custom-coded system with a modern COTS or SaaS solution. Nevertheless, I think it is an important and realistic ambition, because most legacy technology that used to be state of the art is now a commodity due to open source and general competition. An example is message queueing software, which used to be offered at a premium by legacy vendors but, due to open source products like ActiveMQ and RabbitMQ as well as cost-efficient cloud offerings, has become orders of magnitude cheaper.

Should the system even be doing this in the new version?

Often there is legacy functionality that has become naturally obsolete. One example I found illustrates this. Our integration solution is based on an adapter platform that takes data from a source endpoint, formats it and puts it on a queue. At the center, a message broker routes it to queues that are read by other adapter platforms, which format and write the messages to the target endpoint. This is a fine pattern, but if you want to move a file it is not necessarily the most efficient way, since the file has to be split into multiple chunks to be put on a queue and then reassembled on the other side. This is a process that can easily go wrong if one message fails or arrives out of order; consequently, multiple checks and operational procedures need to be in place. Rather than having the future solution do this, one could look at whether other existing solutions are more appropriate, such as a managed file transfer solution. Similarly, when the system merely wraps web calls, an API management solution may be more appropriate.

Why does the system do it in this way?

Was this due to technological or other constraints at the time it was built? When modernizing, it can pay off to look at each characteristic of the legacy system and understand why it is implemented that way, rather than just copying it. For example, our integration solution puts everything on a queue. Why is that? It may be because we want guaranteed delivery.

This is a fair answer, but also a clue to how we can make it better, because what better way is there to make sure you don’t lose a message than to store it in an archive for good as soon as you get it? In a queue, the message is deleted once consumed. That design presumably has to do with message queueing’s origin on the mainframe, where memory was a scarce resource.

Memory is no longer scarce, so rather than use a queue, let us just store the message and publish an event on a topic, and let subscribers to the topic process it at their convenience. This way the integration can also be rerun even if a downstream process fails, such as a target adapter writing to a database. If this were a queue-based integration, the message would be lost because it would have been deleted off the queue. With this architecture, any process can access the message again at any time. Now a message is truly never lost.
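
The store-then-publish pattern can be sketched as follows. This is a hedged, in-memory illustration (a dict standing in for durable storage, a list for the topic), not the actual City implementation: every incoming message is archived first, and only then does an event on the topic tell subscribers where to find it.

```python
archive = {}        # durable message store (a dict standing in for real storage)
topic_handlers = []

def subscribe(handler):
    topic_handlers.append(handler)

def receive(msg_id, payload):
    archive[msg_id] = payload          # 1. store first: the message is never lost
    for handler in topic_handlers:     # 2. then publish an event referencing it
        try:
            handler(msg_id)
        except Exception:
            pass  # a failing subscriber does not delete the archived message

processed = []
subscribe(lambda mid: processed.append(archive[mid]))  # healthy consumer
subscribe(lambda mid: 1 / 0)                           # broken downstream consumer

receive("m1", {"record": 42})

# The broken consumer failed, yet the message is still in the archive
# and can be reprocessed at any time.
print(archive["m1"], processed)
```

Contrast this with a queue: there, the broken consumer would have popped and lost the message; here, a rerun is just another read of the archive.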

What else can the system do going forward?

Keep an eye out for the opportunities that present themselves when rethinking the architecture with the possibilities of modern technologies. To continue our example with the message store, we can now use the message archive for analytical solutions by subsequently transforming the messages from the archive into a Data Warehouse or Data Mart. This is also known as an ELT process.

Basically, we have turned our legacy queue-based architecture into a modern ELT analytics architecture on the side. What’s more, we can even query the data in the message store with SQL; one way is to make it accessible as a Hive table. Imagine what that would take in the legacy world: for every single queue we would have had to build an ETL process and load it into a new schema created in advance.
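
The idea of running SQL over archived messages can be sketched with SQLite standing in for a Hive-style table over the archive. The message schema and agency names are invented for illustration: a tiny loader flattens the JSON bodies into typed columns, and from then on it is plain ad hoc SQL, no per-queue ETL.

```python
import sqlite3
import json

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE message_archive (msg_id TEXT, agency TEXT, amount INTEGER)")

# Raw messages as they might sit in the archive (JSON bodies).
raw_messages = [
    ("m0", '{"agency": "DOT", "amount": 10}'),
    ("m1", '{"agency": "DOT", "amount": 5}'),
    ("m2", '{"agency": "DEP", "amount": 7}'),
]

# Load: flatten each JSON body into typed columns (the "T" happens at load/query time).
for msg_id, body in raw_messages:
    rec = json.loads(body)
    db.execute("INSERT INTO message_archive VALUES (?, ?, ?)",
               (msg_id, rec["agency"], rec["amount"]))

# Ad hoc analytics straight off the archived messages.
rows = db.execute("""
    SELECT agency, SUM(amount) FROM message_archive
    GROUP BY agency ORDER BY agency
""").fetchall()
print(rows)
# → [('DEP', 7), ('DOT', 15)]
```

Every new question is just another SELECT against the archive, instead of a new ETL job and a schema designed in advance.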

Being open-minded and having a view to adjacent or related use cases is important to spot these opportunities. This may take a bit of working around institutional silos, if such exist. That is just another type of constraint, a non-technical one, which is often tacitly built into the system.

 

Remember that we wanted the modernized system to be “same or better, and cheaper”. We can still get all of the functional benefits of a queue, just better, since we can always find a message again. On top of that, we have offered new useful functionality in an analytics solution that is essentially a by-product of our new architecture. Deploying it in the cloud gives us better resilience, performance, monitoring and even security. Add to that the cost, which is guaranteed to be significantly less than what we were paying for our legacy vendor’s proprietary message queueing system.

A Citywide Mesh Network – Science Fiction or Future Fact?

I recently finished Neal Stephenson’s excellent “Seveneves”. The plot is that the moon blows up due to an unknown force. Initially, people marvel at the now fragmented moon, but thanks to the intelligent analysis of one of the protagonists, it becomes clear that the fragments will keep fragmenting and eventually rain down on Earth. The lunar debris turns into comets that start making the Earth a less than pleasant and very hot place to live. In order to survive, the human race decides to build a space station composed of a number of individual pods (designed by the architects!). This design is chosen for the opportunity to evade incoming debris the way a shoal of fish evades a shark.

Naturally there is no Internet in space, but the natural drive towards having a social network (called Spacebook) forces the ever-inventive human race to find another way to implement the Internet. The resulting solution is a mesh network.

The principle of a mesh network:

“is a local network topology in which the infrastructure nodes (i.e. bridges, switches and other infrastructure devices) connect directly, dynamically and non-hierarchically to as many other nodes as possible and cooperate with one another to efficiently route data from/to clients”

The good thing about mesh networks is that every node can serve as a router, and even if one or a few nodes fail (as they might in an orbit filled with lunar debris), the network still works. Contrast this with a network topology where one or even a few pods hold central routers, like our present-day Internet, which is based on the hierarchical Domain Name System, where traffic depends on a few top-level DNS servers. If these were all taken out, the whole network would stop working. With a wireless mesh network, the network continues to work as long as there are nodes that can reach each other. But enough of the science fiction; let’s get back to the real world.
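
The resilience claim can be made concrete with a toy simulation. This is an illustration with made-up four-node topologies, not a model of any real network: in a mesh, removing a node usually leaves the rest reachable, while in a hub-and-spoke (hierarchical) topology, removing the hub disconnects everything.

```python
from collections import deque

def reachable(adjacency, start, failed):
    """Nodes reachable from start via breadth-first search, skipping failed nodes."""
    if start in failed:
        return set()
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in adjacency.get(node, []):
            if nbr not in seen and nbr not in failed:
                seen.add(nbr)
                queue.append(nbr)
    return seen

# A small mesh: every node links to several neighbours.
mesh = {"A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B", "D"], "D": ["B", "C"]}
# A hub-and-spoke network: everything routes through the hub H.
star = {"H": ["A", "B", "C"], "A": ["H"], "B": ["H"], "C": ["H"]}

print(reachable(mesh, "A", failed={"B"}))   # mesh survives losing node B
print(reachable(star, "A", failed={"H"}))   # star collapses without the hub
```

With node B down, the mesh still reaches every surviving node; with the hub down, each spoke in the star is stranded alone.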

The City Wide Mesh Network

New York City, where I work, has had its own share of calamities. Not quite on the scale of the moon blowing up, but September 11, 2001, was still a significant disaster. One effect was that the cell network broke down due to overload, which greatly reduced first responders’ ability to communicate. To keep this from happening again, NYC built its own wireless network, which we call NYCWiN. For years this network has served the City well, but the cost of maintaining a dedicated citywide wireless network is high compared to the price and quality of modern commercial cell networks.

However, the cellular network is also patchy in parts of the city, as most New Yorkers have noticed. It is also expensive to give every IoT device in the City its own cellular subscription, and a cellular connection typically offers far more bandwidth than most devices will ever use anyway. So, might it be possible to rethink the whole network structure and gain some additional benefits in the process? What if we created a citywide mesh network instead? It could function in the following way:

A number of routers would be set up around the city. Each would be close enough to reach at least one other router. When one router fails there are others nearby to take over the network traffic. These routers would form the fabric of the citywide mesh network.

Some of these primary routers would be connected to the Internet, either through cables or cellular connections, and would serve as gateways. In this way the mesh would effectively be connected to the Internet, and we would have a mesh Internet. This is actually not new; in fact it already exists! It has been implemented by a private group called NYC Mesh, who have built their own routers for it. But wouldn’t it be cool if the City scaled a similar solution for use by all New Yorkers and visitors, free of charge, like the LinkNYC stands? And couldn’t the LinkNYC stands themselves serve as the Internet gateways we thought of above? Think about it: what if wifi were simply pervasive in the air of the City, for everyone to tap into?
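The gateway idea can be sketched in the same toy fashion: with a few nodes marked as Internet gateways, every router can discover how many hops it is from the nearest gateway using a multi-source breadth-first search. The node names and topology below are invented for illustration only:

```python
from collections import deque

def hops_to_gateway(adjacency, gateways):
    """Multi-source BFS: hop count from every node to its nearest gateway."""
    dist = {g: 0 for g in gateways}
    queue = deque(gateways)
    while queue:
        node = queue.popleft()
        for neighbor in adjacency[node]:
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

# A toy chain of mesh routers; "A" is wired to the Internet.
mesh = {
    "A": ["B"],
    "B": ["A", "C"],
    "C": ["B", "D"],
    "D": ["C"],
}

print(hops_to_gateway(mesh, gateways=["A"]))
# → {'A': 0, 'B': 1, 'C': 2, 'D': 3}
```

Adding a second gateway at the far end would halve the worst-case hop count, which is one way to think about where the wired LinkNYC-style gateways should go.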

Better than LTE

The beauty of this is that such a network might even be better than the cellular network, since it can more easily be extended to parts of the city with patchy cell-tower coverage. We would just have to set up routers in those areas and make sure there was a line of connection to nodes in the existing network or to an Internet gateway. It would even be possible to extend the network indoors, even into the subway.

With thousands of IoT devices coming online in the next few years, costs for Smart City solutions will increase significantly. Today it is not cheap to connect a device to the Internet through a cellular carrier: it is essentially a cell phone connection, and it typically costs about the same. For the number of devices connected today, this may make economic sense compared to the alternatives. But scaled toward millions of devices, the approach is untenable in the long run. The citywide mesh network could be a scalable, low-cost alternative for connecting all of the City’s IoT devices to the Internet.

Building and maintaining the network

It is quite an effort to implement and maintain such a network, but there is a way around that. Today commercial carriers may put up cellular antennas on City property if permission is granted. What if we made every such permission contingent on setting up a number of mesh routers for the citywide mesh network? Then, every time a cellular or other antenna went up, the citywide mesh network would be strengthened.

It could simply be made an obligation for carriers granted commercial use of City property that they maintain their part of a free citywide mesh network. The good thing about a mesh network is that there is no central control; keeping it operational just means following some standards and adding and replacing network nodes. The City would have to decide which standards to put in place: what equipment, what protocols, and so on. Not an easy task perhaps, but not an impossible one either.

To maintain the health and operation of the network, monitoring would have to be in place. We could see in real time which nodes were failing and replace them. It would also be possible to provision nodes elastically when traffic patterns and utilization make it necessary.

World Wide Standard

Now here is where it gets interesting, because the issue in mesh networking today, as in most of IoT, is the lack of common standards. Vendors have their own proprietary standards and little interest in making them compatible. History has shown that one of the few reliable ways to impose standards on an industry is through government mandate. New York could of course not mandate a standard, but what if the City required all vendors who wanted to sell to the citywide mesh network to comply with a given standard? The industry would have to develop its products to that common standard. Since New York has the size to create critical mass, this could possibly be the start of a new mesh network standard.

New York works together with many other cities, which often take inspiration from us on technology issues. One example is open data, which originated in New York but has now spread to virtually every city of notable size. The same could happen for the citywide mesh network’s design and standards. That way, cities would have a blueprint for bringing pervasive, low-cost wifi to all citizens and visitors.

Fiction or Fact?

If a catastrophe similar to 9/11 were ever to happen again, the mesh network would adapt: healthy nodes would still route data around the damage, possibly more slowly, but the network would not fail. Only the particular nodes that were hit would be out, while the integrity of the network stayed intact. Islands without connectivity could of course appear, but that is to be expected; as long as the integrity of the network as a whole is unaffected, it can recover.

It is actually possible to create a robust, low-cost citywide network, developed and maintained by third parties, with better coverage than the cellular network, all the while helping the world by pushing the industry toward standards that improve interoperability for IoT devices. This is not necessarily science fiction: everything here is within the realm of possibility.