Much attention has been given lately to the success of Artificial Intelligence. The abilities of GPT-3 and DALL·E 2 are impressive. But apart from the fact that they sound like droids from Star Wars, there is little to suggest that they are the harbingers of any fundamental advance towards creating a general artificial intelligence. We are no closer to building an artificial human-like intelligence than we have been for the past five millennia, and no such progress is being made anywhere. We are no closer to being taken over by robots than the day The Terminator or The Matrix debuted in cinemas across the world. There is no impending doom from Skynet or Agent Smith turning us into serfs or mere generators of electricity.
This is not to belittle the advances of Artificial Intelligence or the potential impact it could have on the world. Indeed, I have great admiration and respect for the ability of these technologies to generate impressive prose, visuals, or even recipes for drinks. But they are just technologies, and like all technologies, such as nuclear power, dynamite, and genetically modified plants, they have a particular utility and risk profile. What they do not have is a path towards Artificial Intelligence as understood by most people: intelligence with human-like features.
To see clearly why that is the case, we have to go back to Charles Darwin’s observation from The Origin of Species: “Intelligence is based on how efficient a species became at doing the things they need to survive” (Darwin, The Origin of Species, 1872). GPT-3 is doing nothing to survive, and neither is DALL·E 2. The problem for Artificial Intelligence research and development is an impoverished concept of intelligence. Darwin got it right. The error is that AI research looks at only one side of the coin: problem-solving. That, however, is only superficially the most important part of intelligence.
A closer analysis shows that to become a problem-finding system, the other side of the coin of intelligence, the system needs to exhibit five properties:
Unity – to be an integrated system of components and processes
Boundaries – to have a well-defined boundary between inside and outside the system
Knowledge representation – a way to represent knowledge of the external world
Interaction – to be able to interact with the external world
Self-sustaining – to be self-sustaining in its interaction with the world
Contemporary AI systems all exhibit the first three properties. It is rarer that they exhibit the fourth property, interaction, although this happens in real-world robotics and industrial control systems. None, however, exhibit the fifth property: being self-sustaining.
This is what Darwin understood as a natural part of intelligence. Species are self-sustaining insofar as they manage to survive. This is what intelligence is. And no artificial intelligence exhibits anything close to this property yet. The question of achieving General Artificial Intelligence thus becomes entangled with artificial life, because only living systems can exhibit true intelligence.
Instead of worrying about a takeover by superintelligent machines, or losing our jobs to robots, we should sit back and think about how this new type of hammer can help us hit the nails more efficiently, while being sure to put proper guardrails around it. We should marvel at the capabilities of our technology and understand how it can best help us. But there is nothing new under the sun and no prospect of a Frankenstein moment anytime soon.
Iterative or agile development in one flavor or another has become the standard for IT development today. It is in many contexts an improvement on plan-based or waterfall development, but it inherits some of the same basic weaknesses. Like plan-based development, it decomposes work into atomic units of tasks with the purpose of optimizing throughput and thereby delivering more solutions faster. In most formulations, from SAFe to kanban to DevOps, the basic analogy is the production line, and often the actual source of inspiration is the manufacturing world, with titles such as Don Reinertsen’s “Managing the Design Factory”. Similarly, the plot of Gene Kim’s 2013 DevOps novel “The Phoenix Project” revolves around learning from factory operations to save a troubled company’s IT development process. While iterative approaches spring from advances in manufacturing processes like Lean, Six Sigma, and TQM, they are stuck in the same mental prison that waterfall was: a linear mode of thought where the world is a production line through which atomic units move and become assembled. To understand why, let’s dig a bit deeper.
The origins of iterative development’s linear mode of thought
Like most things, development practices don’t spring from a vacuum. They have roots in the culture in which they emerge. The anthropologist Bradd Shore has argued that the most pervasive cultural model underpinning everything from sports to education to fast food in American culture is modularization. A cultural model is a way of structuring our experiences and how we think about problems and solutions. According to the modular model, things are broken down into isolated component parts, each with a specific function. Through the outsized influence of American culture on modernity globally, this model has been disseminated in various forms to the whole world. It can, however, be traced back to the production line.
As early as 1948, the British anthropologist Geoffrey Gorer noted in a study of the American national character how the pervasive atomism of American institutions could be traced to the great success of the production line in American industry. According to Gorer, these industrial metaphors became the basis of a distinctively American view of human activity. The model is so pervasive that it persists today in the foundations of how we view IT development, an activity in which no physical goods move through any physical space. Nevertheless, we have chosen this as the model to conceptualize it, while others could have been chosen. The basic units are tasks that are worked on one at a time by a specialist. The deliverable passes through different specialists as it passes through the production line: from design, through development and test, to deployment.
It is perhaps not surprising that the state of the art in development globally is based on the model of the production line, since that model has been immensely successful and in many ways transformed our world into what we see today. But the question is whether it continues to be helpful. Is the production line really the best model for conceptualizing IT development?
Another, more circular, approach
In the new millennium a new design philosophy emerged that challenged these assumptions, which had been so pervasive in modern culture. It focused on circularity rather than linearity. It was developed in a number of books, such as McDonough and Braungart’s Cradle to Cradle: Remaking the Way We Make Things, that converged on a model valuing circularity over linearity. This is why it is commonly known as the circular economy. One of the main ideas of the circular economy is that we should think differently about waste: not as something to throw away but as a potential resource. Another important idea is to think in systems related to each other. The systems perspective requires us to think about feedback loops and the dynamics of the whole system that a solution is part of. It is not enough to think about the production line, because it is embedded in bigger systems, like the labor force, politics, the ecosystem and the energy system. What is an improvement in a linear view may not be one when we consider the wider system effects. The circular economy takes inspiration from biology, where metabolism is a key concept. This leads to a focus on flows of materials, energy and water in order to understand the metabolism of cities or countries. Superficially it could seem like agile development is similar, insofar as here we also find a focus on flows, but that is deceiving. In agile, the flow is only one of throughput: something comes in, goes through the process, and something else comes out, which marks the end of the scope of interest. The agile version is a linear focus on flows without any interest in systemic effects.
A circular form of development
The circular economy has made an impact on physical product design, city planning and management, and the production of physical goods. This shift in the wider culture also has the potential to help us break out of the mental models inherited from the production line of the industrial revolution and change the way we develop tech products too. By moving from a linear to a circular mode of thinking, we may harness many of the same beneficial effects that the circular economy does. Let us look at some examples of how that would change how we develop tech products.
Linear (production line) | Circular (metabolism)
The basic metaphor is the production line, where throughput and production are in focus | The basic metaphor is one of metabolism, where life and complex systems are in focus
The responsibility of development ends with deliverables deployed | A working solution is a shared responsibility
Promotes a centralized view of production due to low cost and concentration of expertise | Promotes decentralized models where development takes place in the natural context where it creates value
Standardization of end product, process and technologies | Standardization of components, protocols and interfaces
Optimizes for throughput | Optimizes for service utility
Operates on service level agreements | Operates on fitness functions
Rewards hours spent | Rewards value produced
Builds from new | Reuses and reappropriates what exists
Focus on transactions and interactions | Focus on system dynamics
Development based on business requirements and user acceptance tests | Co-creation and dialogue based on vision and goals
A circular mode of development will do many of the same things as agile development and share some, if not most, processes and techniques, but the basic approach and mindset are radically different. Let us look at some of the more important possibilities.
Development as a complex system
In a traditional setting, development consists of multiple modular activities whose performance has little or no effect on each other. Business development and design are done in isolation from development, which is done in isolation from operations. Between them are handoffs that are always fraught with conflict and miscommunication.
In a circular perspective, development is no longer just a sequence of atomic tasks that need to be completed by specialists but a fabric of interlocking areas that affect each other. By purposefully considering the entire fabric and its feedback loops, development efforts optimize not just throughput but the utility of the services produced and the interplay with the environment. For example, the programming language used to develop a solution affects the employees working on it. It also affects recruitment of the right talent. If the language chosen is, for example, Scala because it is deemed superior for a given problem, this also affects Human Resources. Scala developers are among the highest paid globally, on average $77,159 per year, and are difficult to find because they are fairly scarce. If HR were involved in this decision and the focus were on affordability and availability of talent, C++ might make sense, with an average salary of $55,363 globally. From a financial perspective alone, the difference of roughly $22,000 per developer per year could be important. Furthermore, the demographics of C++ developers and Scala developers may differ, which affects work culture. Work culture can be an important factor in attracting and retaining talent. If the drive is towards a younger profile, this may pull in the other direction, with Scala more popular among the young.
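As a rough sketch, the cost side of this trade-off can be put into numbers. The salaries are the article’s averages; the team size and time horizon below are hypothetical:

```python
# Hypothetical cost comparison using the article's average salary figures.
SCALA_AVG_SALARY = 77_159  # USD/year (article's figure)
CPP_AVG_SALARY = 55_363    # USD/year (article's figure)

def extra_cost(team_size: int, years: int) -> int:
    """Extra payroll cost of choosing Scala over C++ for a team."""
    return (SCALA_AVG_SALARY - CPP_AVG_SALARY) * team_size * years

# A ten-person team over three years:
print(extra_cost(10, 3))  # 653880 USD
```

The point is not the exact numbers but that a seemingly local technology choice carries an organization-wide cost dimension.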
What might look like an isolated technology choice in a linear mode of thought actually has wider impacts on the whole organization. Viewing decisions through the lens of a complex system helps bring these dynamics to light. In the example, the decision could be optimal locally but not globally and may introduce unforeseen systemic effects.
Different functional areas such as sales, development and operations are commonly separate areas, each with its own leadership and responsibility. Units of work pass between these different modules and change responsibility along the way. But the responsibility stops at the boundary, which is the source of many political border wars.
Rather, we should look for ways to build a shared responsibility across not only the entire organization (business and IT) but also external entities like suppliers and collaborators. All should be responsible for the whole and not just their part. Today, even in forward-looking agile organizations, the responsibilities for development, infrastructure, business operations, sales and HR all belong to separate departments, even if ideas such as DevOps try to break down the boundary between the first two.
Better ways to implement a shared responsibility must be developed. There are alternatives: colocation of the different functions, for example. This is what makes it easy for start-ups smaller than a couple of hundred employees to move faster than the competition and often deliver vastly more value to the customer. It would make more sense if teams were organized around an objective rather than a type of work. If developers, support, marketing and sales were all part of the same team, equally responsible for the same objective, work would align toward that objective more seamlessly.
The challenge, of course, is to find meaningful objectives that do not at the same time produce team sizes that are too big. One way is to architect the structure top down to make sure all the necessary objectives are represented. Another is to allow teams to split up once they grow, dividing the objective into sub-objectives. There is no easy answer, but the first step is to break down the default linear thinking that organizes responsibility around types of work rather than common objectives.
While agile development often works from the premise of decentralized, empowered teams that work relatively independently, these teams still invariably belong to the technology function and are ultimately managed by the CIO or CTO. This brings with it some degree of centralization. If instead the creation of technology is just one aspect of a shared responsibility, decentralized multidisciplinary teams can appear.
Agile methodologies try to make the connection to the business by inserting a part-time representative from the business as the product owner, who often turns out to be an IT person anyway. Sometimes product managers encapsulate the business perspective but are mostly seen as something outside the development team. These are all workarounds for the implicit centralized mode of production.
Rather, having a truly cohesive multidisciplinary team means there is no longer any need for a centralized mode of production. These teams would have all or most of the skills they need to fulfill their objectives. The skills may therefore vary greatly, but the decentralized teams should be able to work on services in isolation, using the tools and technologies that best fit the success of their service. This will increase the agility of the entire organization.
The degree of decentralization can differ according to the context of the business. For some heavily regulated industries, decentralization makes less sense than for industries that are less regulated. Complex industries like nuclear power might similarly have strict requirements for centralization on many parameters. In general, complexity draws the ideal solution towards centralization, but the general impetus should be towards decentralization. The focus should be on business services.
To have a decentralized focus on services, it becomes critical to focus on standards. This is nothing new, of course. Standards exist and are deployed in multiple contexts. In this context there is a need for clear standards for components, protocols and how interfaces are built and maintained. A precondition for decentralization and local autonomy around a service is that its interfaces are well defined and standardized. This is frequently done through an interface agreement that specifies what consumers can expect from the service. When the interface agreement is in place, autonomous development of how to support it is possible. The standards, however, need to cover more than protocols and service level agreements; they also involve quality. As an example, let us imagine a bank providing a service for risk scoring of counterparties. It is not sufficient to know the protocol, the response time and the logic of the service; we also want quality standards around accuracy, false positives and errors, since these aspects are likely to affect other services like credit decisions and customer relationship management. The concept of standards and protocols is thus expanded compared to common services.
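A minimal sketch of such an expanded interface agreement for the hypothetical risk-scoring service, with quality standards alongside the classic SLA-style guarantees (all names and thresholds here are illustrative assumptions, not a real banking API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InterfaceAgreement:
    """Hypothetical contract for a service: protocol plus quality standards."""
    protocol: str                    # e.g. "JSON over HTTPS"
    max_latency_ms: int              # classic SLA-style guarantee
    min_accuracy: float              # quality standard: share of correct scores
    max_false_positive_rate: float   # quality standard

    def meets(self, latency_ms: int, accuracy: float, fp_rate: float) -> bool:
        """Check an observed measurement against the agreement."""
        return (latency_ms <= self.max_latency_ms
                and accuracy >= self.min_accuracy
                and fp_rate <= self.max_false_positive_rate)

risk_scoring = InterfaceAgreement("JSON over HTTPS", 200, 0.95, 0.02)
print(risk_scoring.meets(latency_ms=150, accuracy=0.97, fp_rate=0.01))  # True
```

Consumers such as the credit-decision service can then rely on the quality guarantees, not just the wire format and latency.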
Some degree of standardization across services is also necessary. In the case of web services it would not make sense for some services to use XML, others JSON, and still others a protocol of their own invention. It would be similar to different organs in an organism using different blood types. An individual has only one blood type, though different blood types may work equally well; similarly, there need only be one standard for service interface protocols.
The focus on services delivered means that the focus will be on the utility of each service to its consumers. This may seem self-evident, but it is not. To determine whether a service is useful, you have to take the perspective of the consumers of the service. This is why user interviews, surveys, Net Promoter Score, focus groups and similar techniques have been developed in product development. These are all ways to find out if a product or service is useful. They are, however, aimed at the end user, whereas a technology product usually relies on many other services. These should have a similarly strong focus on whether they are useful. If we run a real estate company, the users will naturally be interested in the website and how it works. But an underlying service, such as the price estimation engine, is only indirectly relevant to end users and not easily measured by the methods mentioned above. The utility of the service to its consumers may be speed, accuracy, additional information or something else entirely. But whatever the utility is, that is what needs to be optimized.
Transition to fitness functions
The consequence is that we also need to rethink how we measure utility. The traditional service level agreement will not work for this way of working, because it only relates to superficial features that may or may not be important: latency, uptime, service windows and so on. Rather, we need to focus on the fitness functions of the services as they relate to the utility of the service. The term fitness function is borrowed from evolutionary theory and designates a function that measures how close a potential solution is to solving a problem. In nature the problem is related to the survival of a species. For example, the speed and agility of a springbok is part of a fitness function that determines its survival in encounters with lions on the savannah.
In product development, fitness is related to the utility of a service and thereby to how it contributes to the overall success of the product or organization. An example could be how fast a website is ready to be used by the customer. If, for example, it is a social media service and it takes more than 2 seconds, that is the equivalent of being caught by the lion. A service level agreement might specify a latency of 2 seconds, but that is not the same as the functionality of the site being available. Many aspects could be involved: the user’s browser type, underlying services that take longer to load content, and so on. Thinking in terms of fitness functions makes it clear that this is a shared responsibility.
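A minimal sketch of such a fitness function for the social-media example, measuring time until the site is actually usable rather than a raw latency number (the linear shape and the 2-second threshold are illustrative assumptions):

```python
def fitness(time_to_usable_s: float, threshold_s: float = 2.0) -> float:
    """Hypothetical fitness function: 1.0 when the site is instantly usable,
    falling toward 0.0 as time-to-usable approaches the 'caught by the lion'
    threshold, and 0.0 beyond it."""
    if time_to_usable_s >= threshold_s:
        return 0.0
    return 1.0 - time_to_usable_s / threshold_s

# Unlike a latency SLA, this scores the outcome the user experiences:
print(fitness(0.5))  # 0.75
print(fitness(2.5))  # 0.0
```

Because time-to-usable depends on the browser, the backend and the content services together, the score naturally belongs to all the teams involved, not to one of them.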
Rewarding value created
If teams are working on their services by optimizing utility as measured by the fitness function, it seems strange that they should be rewarded by how much time they spend working. Another approach is to reward the value their work produces, or the fitness of the service. It is probably rarely possible to do this 100%, since most people need some predictability in their remuneration, but rewarding value created can be worked into the reward structure in other ways, such as bonuses calculated on performance against the fitness function. There are already well-established ways to reward employees; the only change is that rewards should be based on the value being created.
Reuse and reappropriation
A key concept in the circular economy is to reuse and reappropriate rather than buy new. When a new need arises, rather than starting to build straight away, it should be investigated whether existing solutions could be used. Sometimes this requires a stretch of imagination, but it definitely requires knowledge of what exists. A solution for CRM could be used for case management, and a service management solution can work equally well for HR. By looking at what already exists, money and time can be saved, and the investments that have gone into these solutions will be preserved. It is not trivial to develop something that is ready for production; using something already in production thus also minimizes risk.
In biology this process is known as co-option: a shift in the function of a specific trait. It was at the heart of Darwin’s theory of evolution. One example is feathers, which originally developed to regulate heat but were later co-opted for flight. The same can be said of our human arms, which were originally legs but were co-opted for holding and manipulating objects. These hands now typing were co-opted from crude legs whose fingers originally served only to keep balance; today they perform a magnificent array of functions. There is no reason why the co-option of existing systems could not provide the same effect.
The drive towards reuse also affects how we design new solutions. Designing for reuse has long been a recurrent theme in development. It is the basis of service-oriented architecture, microservices, the use of libraries and object-oriented programming. It also works at higher levels. However, it requires a bit more reflection and abstract analysis. Developing a proper information architecture as a foundation is a precondition. Designing for reuse is not trivial and does take more time than just starting from one end and building what seems to be needed.
Focus on system dynamics
In traditional agile, the focus is on interactions, as per the agile manifesto. Interactions are important, and the agile manifesto’s focus on responding to change is important too. Unfortunately, this is not sufficient in a complex system, where dynamics cannot be explained from individual interactions. In a circular mode we want to focus on system dynamics, particularly feedback loops and other system effects. Some of the most important effects of the dynamics of complex systems are delayed responses, cascades, oscillations and instability. These will often appear puzzling and mysterious, since the source is unknown and cannot be seen immediately from the interactions. The only way to try to tame a complex system is to focus on understanding the system, and one of the most important steps is to identify and measure its feedback loops.
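A toy illustration of why delayed feedback matters: imagine a team that corrects its backlog toward a target, but whose corrections only take effect a few steps later. All numbers are hypothetical; the point is that oscillation emerges from the delay, not from any single decision.

```python
def simulate(target: float = 100.0, delay: int = 3, steps: int = 30) -> list:
    """Corrective feedback loop with a delayed response.
    Each step, a correction of half the current gap is decided,
    but it only lands `delay` steps later."""
    backlog = 150.0
    history = [backlog]
    in_flight = [0.0] * delay            # pipeline of not-yet-applied corrections
    for _ in range(steps):
        in_flight.append(0.5 * (target - backlog))  # decide a correction now...
        backlog += in_flight.pop(0)                  # ...an old one lands now
        history.append(backlog)
    return history

hist = simulate()
# The backlog overshoots past the target and swings around it,
# even though every individual decision was locally sensible:
print(min(hist) < 100.0 < max(hist))  # True
```

Measuring the loop (gap, correction, delay) is exactly the kind of system-level observation that interaction-level thinking misses.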
Co-creation and dialogue
Another consequence is that teams will have to be interdisciplinary and work together based on co-creation and dialogue, rather than through requirements gathering from the business followed by development and finished with user acceptance testing. Teams will develop visions and set goals together that will guide development activities. It doesn’t mean, however, that everyone has to sit together always and talk only to each other. Different disciplines still need to work with peers in their discipline much of the time to hone their skills, but a significant portion of time must be dedicated to working together. This is why startups are typically better and faster at adapting to new needs in the market: they naturally work in this mode, since they are so small that everyone works and talks together.
Most of these thoughts are not new, and neither are the challenges. For example, working in autonomous multidisciplinary teams is known from the matrix organization. The challenge is that specialized knowledge is not developed sufficiently: if five C++ developers work in five different autonomous teams, they will never learn from each other. This may lead to locally suboptimal solutions, but that is natural. We know this from biology too: humans have two kidneys but need only one, and we do not need the appendix at all, yet it is there. The spleen seems similarly superfluous. From a logical top-down design perspective there are features that don’t make sense. However, the organism is designed by its ability to survive and thrive in an environment, which means that as long as these locally suboptimal features do not interfere with that superior goal, they are okay.
Similarly, there is no simple solution that magically fixes worker compensation. It is well known that incentive structures are very hard to get right and may have unintended consequences. This does not mean that we can’t be inspired to think about compensation in a more circular way in some cases. Some jobs will continue to be compensated per hours worked, but we might try to make them more incentive-driven and align the work being done with the value we want to create. Being physically at work, sitting in front of the computer, rarely produces value by itself. A circular perspective may help us focus work on what does.
Towards a circular development
The agile revolution was a welcome improvement over the plan-based or waterfall methodologies that dominated IT development at the time. Unfortunately, it copied a mentality inherited from the industrialisation and modernity of the previous century. It is time to evaluate whether that way of thinking is still the best way to get work done. Experience in other areas of society has called into question whether linear and modular thinking with a focus on throughput is optimal, and increasingly a circular approach is being adopted. This has not yet had any significant effect on IT development methodology. That should change in order to reach the next stage. If we can imagine a radically different way of working, as outlined here, we can also change. Agile development has not made all the problems of development magically go away, and neither will a circular approach, but agile has reached the limit of where it continues to provide improvements, regardless of the flavor. It is time to try another approach, to make sure that we adapt work and technology to the 21st century rather than remain stuck in a mindset that only made sense in the 20th.
When we develop tech products, we are always interested in how to improve them. We listen to customers’ requests based on what they need, and we come up with ingenious new features we are sure they forgot to request. Either way, product development inevitably becomes an exercise in what features we can add in order to improve the product. The result is feature creep.
The negative side of adding features
Adding new features to a product does frequently increase its utility and therefore improves the product. But that does not mean it is purely beneficial. There are a number of adverse effects of adding features that are often downplayed.
The addition of each new feature adds complexity to the product. It is one more thing to think about for the user and the developer. What is worse, unless the feature stands alone, unrelated to any other features, it does not just increase complexity linearly but exponentially. For example, if the addition of a choice in a drop-down menu has an effect on which choices are available in other drop-down menus, the complexity at the system level increases not just by the new choice but by all the combinations. The consequence is that the entropy of the system increases significantly. In practical terms this means that more tests need to be done, troubleshooting can take longer, and general knowledge of system behavior may disappear when key employees leave unless it is well documented, which in turn is an extra cost.
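The growth is easy to see with a quick calculation: features are added one at a time, but the potential interactions and combinations to reason about and test grow much faster. A sketch counting pairwise interactions and on/off combinations:

```python
from math import comb

def pairwise_interactions(n: int) -> int:
    """Potential pairwise interactions between n features."""
    return comb(n, 2)

def feature_combinations(n: int) -> int:
    """Potential on/off combinations of n interacting features."""
    return 2 ** n

# Features grow linearly; the state space does not:
for n in (5, 10, 20):
    print(n, pairwise_interactions(n), feature_combinations(n))
# 5 features:  10 pairs,       32 combinations
# 10 features: 45 pairs,     1024 combinations
# 20 features: 190 pairs, 1048576 combinations
```

Even if only a fraction of these combinations are reachable in practice, the testing and troubleshooting burden scales with the interactions, not with the feature count.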
Risk also increases, based on the simple insight that the more moving parts there are, the more things can break. This is why for decades I have ridden only single-gear bikes: I don’t have to worry about the gears breaking or getting stuck in an impossible setting. Every new feature added means new potential risks of system-level disruptions. Again, this is not a linear function, as interactions between parts of a system add additional risks that are difficult to assess. Many of us have tried adding a function in one part of a system that produced a wholly unforeseen effect in another part. This is what I mean by interaction.
Every new feature requires attention, which is always a scarce resource. The user has a limited attention span and can only consider a small number of options consciously (recent psychological research suggests working memory can handle around four items). Furthermore, the more features there are, the longer it takes to learn how to use the product. And this is just on the user side. On the development side, every feature needs a requirement specification, design, development and documentation, and maybe training material needs to be made.
How about we don’t do that?
Luckily, adding features is not the only way to improve a product. We can also think about taking features away, but somehow that is a lot harder, and rarely if ever does it enter the product development cycle as a natural option. It’s as if it is against human nature to think like that.
In a recent paper in Nature entitled “People systematically overlook subtractive changes”, Gabrielle S. Adams and collaborators investigate how people approach improving objects, ideas or situations. They find that we have a tendency to add new elements rather than subtract existing ones. This is perhaps the latest addition to the growing list of cognitive biases identified in the field of behavioral economics championed by Nobel laureates like Daniel Kahneman and Richard Thaler. Cognitive biases describe ways in which we humans act that are not rational in an economic sense.
This has direct implications for product development. When developing a tech product, the process is usually to build a road map that describes the future improvement of the product. 99% of the time this involves adding new features. Adding new features is so entrenched in product management that there is hardly a word or process dedicated to subtracting them.
However, there is a word: decommissioning. But it has been banished from the realms of flashy product management to the lower realms of legacy clean-up. As someone who has worked in both worlds, I think this is a mistake.
How to do less to achieve more
As with other cognitive biases that work against our interests, we need to develop strategies to counteract them. Here are a few ways to start thinking about subtracting things from products rather than just adding to them.
Start the product planning cycle with a session dedicated to removing features. Before any discussion about what new things can be done, take some time to reflect on what can be removed. Everything counts. You don’t have to go full Marie Kondo (the tidying-up guru who recommends throwing away most of your stuff, and who recently opened a store so you can buy some new stuff); removing text or redundant functions is all good. A good supplement to this practice is an analysis of which parts of the product are rarely, if ever, used. This is not always possible for hardware products, but for web-based software it is just a matter of configuring monitoring.
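For web-based software, the monitoring mentioned above can be as simple as counting feature-usage events and flagging the quiet ones. A minimal sketch (the feature names and the threshold are hypothetical):

```python
from collections import Counter

def rarely_used(events: list, threshold: int = 2) -> list:
    """Given a stream of feature-usage events, return the features used
    at most `threshold` times -- candidates for the removal session."""
    counts = Counter(events)
    return sorted(f for f, c in counts.items() if c <= threshold)

# A hypothetical event log from product analytics:
log = ["search", "search", "export_pdf", "search", "bulk_edit", "search"]
print(rarely_used(log))  # ['bulk_edit', 'export_pdf']
```

In a real product the events would come from analytics or server logs over a meaningful period, but the principle is the same: let usage data, not attachment, nominate features for removal.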
Include operational costs in the decision process, not just development costs. This is not an exact science, like anything in product development, but some measure of what it takes to operate new features is a good part of the basis for a decision. If a new feature requires customer support, that should be part of the budget. Often a new feature will lead to customer issues and inquiries; that is part of its true cost. There may also be maintenance costs. Does the feature introduce a new component into the tech stack? That requires new skills, upgrades, monitoring, and management. All of this needs to be accounted for when adding new features.
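The point can be made concrete with a back-of-the-envelope lifetime cost. The cost categories, the three-year horizon, and all numbers below are purely illustrative:

```python
def feature_total_cost(dev_cost, yearly_support, yearly_maintenance, years=3):
    """Estimate the total cost of a feature over its expected lifetime.

    Development is a one-off cost; support and maintenance recur every year.
    """
    return dev_cost + years * (yearly_support + yearly_maintenance)

# Illustrative: a feature that is cheap to build but costly to run.
build_only = feature_total_cost(50_000, 0, 0)
with_ops = feature_total_cost(50_000, 20_000, 10_000)
print(build_only, with_ops)  # → 50000 140000
```

Seen this way, the development estimate can easily be the smaller part of what a feature really costs.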
Introduce “Dogma Rules” for product development. A big part of the current international success of Danish film can be ascribed to the Dogme 95 Manifesto by the Palme d’Or- and Oscar-winning directors Lars von Trier and Thomas Vinterberg. It was a manifesto that limited what you could do when making films. Similarly, you can introduce rules that limit how you can make new product enhancements. For example, you could cap the total number of features, or cap the number of clicks it takes to achieve a goal.
Create a feature budget. For each development cycle, create a budget of X feature credits. Product managers can spend them as they like to create new features, but with a budget they can also retire features to gain extra credits. Naturally, this runs inside the usual budget process. Obviously, this is somewhat subjective, and you may want to establish a feature authority or arbiter to assess what counts.
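The mechanics of such a budget fit in a few lines. A minimal sketch, where the class name, the one-credit-per-feature rate, and the feature names are all invented for illustration:

```python
class FeatureBudget:
    """Track feature credits for one development cycle.

    Adding a feature spends a credit; retiring a feature earns one back.
    """

    def __init__(self, credits):
        self.credits = credits

    def add_feature(self, name):
        if self.credits < 1:
            raise ValueError(f"No credits left to add {name!r}; retire something first")
        self.credits -= 1

    def retire_feature(self, name):
        self.credits += 1

budget = FeatureBudget(credits=2)
budget.add_feature("dark mode")
budget.add_feature("csv export")
budget.retire_feature("legacy report")  # frees up a credit
budget.add_feature("api tokens")
print(budget.credits)  # → 0
```

The useful part is the constraint: once credits run out, the only way to add is to subtract first.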
Work with circular thinking. Another approach is to take inspiration from the circular economy, which faces some similar challenges. Rather than only thinking about building and removing things, it could prove worthwhile to think in circular terms: are there ways to reuse or reappropriate existing functionality? One could think about how to optimize quality rather than throughput.
Build a sound decommissioning practice. Decommissioning is not straightforward and definitely not a skill set that comes naturally to gifted, creative product managers. Therefore, it may be advantageous to appoint decommissioning specialists: people tasked primarily with retiring products and product features. This requires systems analysis, risk management, migration planning, and so on. Like testing, which is also a specialized function in software development, it reduces product risk and cost.
Taking the first step
Whether one or more of these will work depends on circumstances. What is certain is that we don’t naturally think about subtracting functionality to improve a product. We should, though. The key is to start changing the additive mentality of product development and start practicing our subtractive skills. It is primarily a mental challenge that requires discipline and leadership to succeed. It is bound to meet resistance and skepticism, but most features in software today are rarely if ever used. Maybe this is a worthwhile alternative path to investigate. Like any change, it starts with a first step. The above are suggestions for that.
The advent of SARS-CoV-2 has mobilized many tech people with ample resources and a wealth of ideas. A health crisis like this virus calls for all the help we can get. However, the culture of the tech sector, exemplified by the phrase “move fast and break things”, is orthogonal to that of medicine, exemplified by the Hippocratic principle of “first, do no harm”. What happens when these two approaches meet? Unfortunately, well-intentioned research and communication sometimes result in the trivialization of scientific methods, producing borderline misinformation that may cause more harm than good.
With much fanfare, the following piece of research was presented by Sermo, a social network platform for medical professionals, on April 2nd: “Largest Statistically Significant Study by 6,200 Multi-Country Physicians on COVID-19 Uncovers Treatment Patterns and Puts Pandemic in Context”. This is a survey of what doctors are prescribing against COVID-19. So far so good. This would indeed be interesting to know. But already the next line sends chills down the spine of any medically informed person: “Sermo Reports on Hydroxychloroquine Efficacy”… Can you spot the dubious word here? Efficacy. Let’s rewind and remind ourselves what efficacy means: “the ability, especially of a medicine or a method of achieving something, to produce the intended result”, according to the Cambridge Dictionary.
It gets worse. The CEO claims:
“With censorship of the media and the medical community in some countries, along with biased and poorly designed studies, solutions to the pandemic are being delayed.”
What he means to say, then, is that the more than 400 clinical trials already under way are one and all “biased and poorly designed”? Criticism is always welcome because it sharpens our arguments and logic. Unfortunately, the piece does not reference even one study that would exemplify this bias and poor design.
This is the first clue that this is a tech person, not a medical or scientific person; I would say not even an academic one. This is a person who moves fast, throws out unchecked assumptions and accusations, and then moves fast to his much better designed study, presumably scribbled equally fast on the back of a napkin.
This is where clue number two becomes evident. What is this superior method that places itself above the entire medical world scrambling to produce scientific knowledge about the outbreak and efficient therapies? Naturally the inquisitive reader is drawn to the sentence: “For the full methodology click here”. I click and read with bated breath.
We are informed that the survey is based on responses from doctors in 30 countries, with sample sizes of 250 respondents. Sounds fair, although 30 times 250 is 7,500, not the 6,200 mentioned in the title (what happened to the remaining 1,300?). We are told that the Sermo platform is exclusive to verified and licensed physicians. Let’s pause here. How is it exclusive? This is the methodology section, and this is where you tell me HOW you verified the doctors. Otherwise I have no idea whether the results actually mean anything. It could be a mixture of neo-nazis and cosplay enthusiasts for all I know.
Next we read:
“The study was conducted with a random unbiased sample of doctors from 30 countries”.
That’s it. For people unfamiliar with the basics of clinical scientific method: this is the equivalent of a suspect getting up in front of the judge and claiming, “I totally didn’t do it. Just let me go.” Again, how do we know? Maybe the invitations were sent out based on a secret list from Donald Trump of doctors who are fanboys of Chloroquine. Maybe the responding doctors are unemployed (for a reason), which would explain why they had time to answer the questionnaire. What was the distribution of age and gender? Was it representative of the countries they come from? Traditional scientific studies based on samples like these can dedicate up to a third of the article just to demonstrating that there was indeed no bias. Here we are offered one line without any evidence.
The study was based on a survey that took 22 minutes. Basically, any Joe-Never-Was-A-Doctor could have done this with SurveyMonkey and a list of emails scraped from the internet. That is also fine, but we get no information about what the questions were. The next section, “Data Analysis” (and remember, we are still in the methodology section), informs us that all results are statistically significant at a 95% confidence interval. Why was 95% chosen and not 99%? What were the actual p-values?
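For context, the margin of error on a survey proportion depends directly on the chosen confidence level, which is why the 95%-versus-99% question matters. A standard normal-approximation sketch; the 37% figure is an invented example, not a number from the Sermo report:

```python
import math

def margin_of_error(p, n, z):
    """Normal-approximation margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative: suppose 37% of 250 doctors in one country report a given treatment.
p, n = 0.37, 250
moe_95 = margin_of_error(p, n, z=1.96)   # 95% confidence
moe_99 = margin_of_error(p, n, z=2.576)  # 99% confidence
print(round(moe_95, 3), round(moe_99, 3))  # → 0.06 0.079
```

So at the per-country sample size of 250, each reported percentage carries roughly a ±6 point uncertainty even at 95% confidence, and more at 99%. That is the kind of detail a methodology section should state.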
In a little less than a page we learn virtually nothing that would help us ascertain the validity of the reported results. And where is the discussion? Could it be that the preferred treatments were dependent more on local availability than choice on the part of the doctor? Was there a bias in terms of geography, gender or age in relation to what they prescribed? Did everyone respond? Was there a pattern in those who didn’t respond?
Although we are left with a lot of unanswered questions, the attentive reader can already deduce from this very sparse information a damning flaw in the study design that completely undermines any of the purported claims to efficacy: the study asks the doctors themselves about their treatments! Now why is that a problem? Doctors, like all humans, are susceptible to confirmation bias. This means they are prone to look for confirming evidence. If they prescribe something they have reasoned is a good drug, they will look for more confirmation that it is indeed a good drug. This is exactly why any properly designed study that shows efficacy needs the administering doctors not to know what they are treating their patients with. This is why the double-blind test is a hallmark of any demonstration of efficacy.
Where do we go from here?
I am from the tech sector myself and not trained in medical science (although I have taken PhD-level courses in advanced statistics and study design), so don’t get me wrong: I believe strongly in the power of technology, and I want tech people to engage and help as much as possible. However, as should be apparent by now, this is not helpful.
Had this been presented as what it is, a descriptive survey of what doctors are prescribing against COVID-19, it would have been fine and even valuable. Instead it is pitched as a revelatory study that undermines all current research, something it is not, and it may undermine the serious efforts currently under way to find adequate treatments. The clear favourite of the article is Chloroquine, but Chloroquine poisoning has already cost lives around the world due to the current hype. Recently an Arizona man died after ingesting Chloroquine on the recommendation of President Trump. How many more will die after reading this flawed and unsubstantiated “study”?
This is where the “move fast and break things” attitude has to be tempered by “first, do no harm”. When tech people who know nothing about science or medicine use their tech skills, they need to openly declare that they do not know the field, that the work is not peer-reviewed, and that it is only subjective opinion. Present interesting findings as what they are, and do not ever make claims to efficacy or to superiority over the medical system of producing knowledge, a system that has increased global average life expectancy from 30 years to more than 70 years in the past century.
Tech people should still engage, but they should stay within their sphere of competence and not trivialize science. Scientists and medical professionals don’t correct them on software design or solution architectures either. So, please don’t get in their way.
Let me then give an example of how tech people should engage. The Folding@home project simulates how proteins fold and thereby helps the medical community in possible drug discovery. It has succeeded in constructing the world’s most powerful supercomputer, delivering more than an exaflop, that is, more than one quintillion calculations per second. It works by letting people install software on their computers and thereby contribute their compute power to a distributed network of more than a million computers worldwide. This is a good model for how tech people can support the medical community rather than undermine it.
We in the tech sector need to move over and support our friends in the medical world in this time of crisis, and approach their world with the same respect and caution that we expect others to show our domain of competence. Even though we are extremely smart, we are just not going to turn into doctors in a few days. Rather than move fast and break things, we should “move fast and do no harm”.
It seems evident we are on our way to a recession. This will prove a challenge for many, and our world economies will suffer, not least the stock market. We are similarly probably headed for a bear market.
But the stock market is ahead of the curve and typically turns around about 5 months before the recession ends. For investors it is therefore important to look for indicators of when the current bear market is turning around, since no one wants to invest in a bear market.
Since this is a unique situation we haven’t been in before, we need to look for unique indicators. The Supertrends Institute has suggested watching for the number of new cases, new hospitalizations, and deaths to start declining, but since countries handle the pandemic very differently, it may be difficult to decide which countries to look at, or whether to look at the total. Looking for a global decline may be misleading, since countries differ in economic impact: a massive outbreak in Venezuela could cloud the view, since that country’s economic integration is not significant.
Furthermore, there may be a lag between this point and when people actually feel comfortable going out. Consequently, we should look at other, more robust indicators. One suggestion is looking at what can be inferred from traffic data. But should we just look at Google/Waze data or telecom data to tell us the raw volume? That would be an indication of when traffic starts to pick up again. True, but it is also a data source with severe limitations. First, none of them has a complete view of traffic. Google and Waze only monitor their own user base and can apparently be deceived quite easily, as was recently demonstrated in Berlin. Telecoms only know what goes through their own networks, not their competitors’. Second, none of these data sources knows what sort of vehicle is moving. From an economic point of view, it makes a big difference whether the movement comes from a person in a bus, a car, a motorcycle, or a truck, since trucks are reliable indicators that goods are moving around. The others are not.
It is not enough to look for a surge in traffic in order to spot a turnaround in the economy; this could be motorbikes or cars. What we are looking for is a surge in trucks, since trucks bring goods to stores, and only when stores again receive goods will we know that people have started spending.
None of the existing solutions actually tell you what goes on in traffic. This is why we developed Sensorsix: to monitor not only traffic flow but also the composition of traffic. We monitor the number of different types of vehicles at any given time through a network of traffic cameras.
The effects can be seen pretty clearly. One example is how quickly traffic fell after Denmark was put on lockdown. This figure shows the volume of truck and car traffic on Zealand in March. On the evening of the 11th, Prime Minister Mette Frederiksen announced that all public workplaces would shut down and employees work from home. On the 13th, the borders closed. This resulted in a significant drop that echoes the decrease in demand due to the lockdown of restaurants, cafes, and most stores. While it was not illegal to drive around, it is clear that truck traffic dropped much more than car traffic. If we were just measuring the total volume of traffic, that may not have been apparent.
Another example is from New York where we measured traffic in the whole city. Here is an illustrative sample from December.
We can see a lot of truck traffic in the days leading up to Christmas Day, right until the last shopping day. Then, just after Christmas, we see a similarly high number of trucks, presumably carrying returned gifts, but traffic then levels off for the rest of the month, all the way down to the level of Christmas Day, because of the sudden decrease in demand.
These are just illustrative examples of the correlation between truck traffic and demand. We would expect to see a surge in truck traffic when the economies of our cities are really picking up, and not until then.
We at Sensorsix have built a tool for ambient intelligence. Ambient intelligence is knowledge about what goes on around us; in our case it is built on what we can learn about human mobility from sensors. We have been in stealth until now, working on a prototype to quantify the flow of human movement, in particular traffic. Basically, we use machine learning to extract information from video feeds to measure the volume of vehicles, pedestrians, bikes, etc. across time at select locations.
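The detection step requires a trained model, but the aggregation behind a composition metric can be sketched simply. Here detections are assumed to arrive as (hour, vehicle_type) records; the upstream object-detection model producing them is outside this sketch, and the sample data is invented:

```python
from collections import defaultdict

def traffic_composition(detections):
    """Aggregate per-hour counts by vehicle type.

    `detections` is an iterable of (hour, vehicle_type) pairs, e.g. as
    emitted by an upstream object-detection model (not shown here).
    """
    volumes = defaultdict(lambda: defaultdict(int))
    for hour, vtype in detections:
        volumes[hour][vtype] += 1
    return {h: dict(types) for h, types in volumes.items()}

# Illustrative detections from one camera.
sample = [(8, "car"), (8, "car"), (8, "truck"), (9, "car"), (9, "bike")]
print(traffic_composition(sample))
# → {8: {'car': 2, 'truck': 1}, 9: {'car': 1, 'bike': 1}}
```

This is what makes it possible to track trucks separately from cars rather than just a raw total.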
As part of our testing of the product we had set up monitoring of the region of Zealand in Denmark. For those unfamiliar with the geography of Denmark, Zealand is the island on which the capital region of Copenhagen is located. The region is home to almost 2.3 million people. We wanted to understand the ebb and flow of traffic, the heartbeat of the region if you will.
We started this test on Sunday, March 8th. On the evening of March 11th, the Prime Minister of Denmark, Mette Frederiksen, closed all schools and required all public employees to work from home. Most schools and institutions closed down the following day. On Friday the 13th at noon, the borders to our neighboring countries were closed as well. Since Zealand is next to and deeply integrated with Sweden, these two events would be expected to have a significant impact on mobility in the Zealand region.
Since we were monitoring the traffic from before the decision, we are able to accurately quantify and visualize the flow of traffic. The figure below displays traffic volume from noon Sunday, March 8th until noon Sunday, March 15th. For simplicity we chose to focus on cars, so the figure only displays cars. Different patterns may exist for other types of vehicles, but the majority of traffic is cars.
When we look at the pattern, what we see is the usual heartbeat of a city. Previous research and our own pilots in New York have shown the same pattern: traffic increases in the morning, dips at noon, and then rises again in the afternoon and evening. But it is clear that even if the pattern is recognizable, the heartbeat is losing power. Just how much may be clearer from the figure below. Here we see a jaw-dropping drop of about 75% in traffic volume.
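A drop of that kind is quantified by comparing average volume before and after the announcement. A minimal sketch with made-up hourly counts chosen only to illustrate the calculation:

```python
def percent_drop(before, after):
    """Percentage decline from the mean of `before` to the mean of `after`."""
    mean_before = sum(before) / len(before)
    mean_after = sum(after) / len(after)
    return 100 * (mean_before - mean_after) / mean_before

# Illustrative hourly car counts before and after a lockdown announcement.
before = [400, 420, 380, 400]
after = [100, 110, 90, 100]
print(round(percent_drop(before, after)))  # → 75
```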
These are just some preliminary findings that we wanted to share for reflection and in the service of public information. Based on our data we can see that this is not a drill. It is not fake news. It is not tendentious journalism finding a deserted or heavily trafficked road depending on what it wants to see. It is not exaggerated, and it is not played down. It is a 75% drop regardless of how you frame it. In these times of fake news, it is all the more important to get solid facts on the table. This is exactly what we built Sensorsix for. In all modesty, we are probably the only ones in the world who can tell what actually goes on in traffic.
What can this be used for?
A fair question is therefore what we can use this data for. Is it just another piece of data to throw on the heap? We think not. In the current corona context, there are at least three key issues that solid ambient intelligence can help solve:
Compliance – do people really stay at home, or do they ignore the orders political leaders are giving them? This provides a fact-based way of monitoring the efficiency of, and compliance with, curfews and other measures for limiting traffic.
Efficiency – since this is a good proxy for the degree of quarantine a society is enforcing, it is potentially an important metric. The frequency of interaction between people is an important variable in the spread of an epidemic disease, and understanding trends in mobility will give an indication of what that is. We should be able to correlate it with the effect on the number of infections and morbidity in the longer term. Obviously the effect will be delayed.
Economic activity – it should be possible to correlate the flow of traffic with economic activity. Initially there will of course be a drop, and similarly the effect will be delayed. We can use the data to understand the economic impact a drop has. Eventually it should turn around, and the rise in traffic volume should be the first harbinger of an increase in economic activity.
We will keep monitoring the traffic and supply other interesting insights that we can mine from our data.
Note on methodology: we are continually monitoring roads leading to all entry and exit points of Zealand, which means all bridges and major ports. All traffic that comes into or goes out of the Zealand region is quantified. Based on this data we generate a volume score that is tracked continually.
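The volume score described above can be sketched as a sum over the monitored entry and exit points. The normalization against a baseline is my own assumption, and the point names and counts are illustrative:

```python
def volume_score(counts_by_point, baseline=None):
    """Combine per-location vehicle counts into a single volume score.

    `counts_by_point` maps a monitoring point (bridge, port) to its count
    for the current interval. If a `baseline` score is given, the result
    is expressed relative to it (1.0 = baseline level).
    """
    total = sum(counts_by_point.values())
    return total / baseline if baseline else total

# Illustrative counts for one interval.
points = {"bridge_a": 1200, "bridge_b": 800, "port_a": 300}
print(volume_score(points))                  # → 2300
print(volume_score(points, baseline=4600))   # → 0.5
```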
Recently I attended the TechBBQ conference. Having been part of the organizing team for the very first one, I was impressed to see what it had developed into. When I came to get my badge, the energetic and enthusiastic volunteer asked me if I was “pumped”, but I was not pumped (as far as I understand what that means), so I politely replied that I was probably as pumped as I was ever going to be.
Inside was packed, and at one point a fascist-looking guy pushed me and told me to step aside. Just as I was getting ready to put up a fight and stand my ground, I noticed the Crown Prince of Denmark strolling by. So, I left the guy with a warning and let him off the hook this time (maybe if I had been somewhat more pumped… also, I suspect that all of this played out as a blank stare from the point of view of the bodyguard).
On the exhibition floor I had the good fortune of chatting with a few McKinsey consultants at their booth. The couches were exquisite, and so would the coffee have been, had they offered me some. If there is one thing McKinsey can do, it is talk and do research, and currently they do a lot of both on Artificial Intelligence (AI). I was lucky to get my hands on some of their reports that detail their view on Artificial Intelligence in general and AI in the Nordics in particular.
The main story line is the same one that you hear everywhere: AI is upon us and it promises great potential if not a complete transformation of the world as we know it. There are however a few conclusions that we should dive into a little bit more.
The wonders of AI
In terms of investment in AI, two thirds of businesses allocate 3% or less of their investments to AI, and only 10% allocate 10% or more. If you were reading the tech news, you would be forgiven for thinking that 90% of companies were investing 100% or more in AI. So, this observation alone is interesting: there is not a lot of actual investment going towards AI for the vast majority of companies. When you ask senior management and boards, there is a bit of a waiting game, where they look more towards competitors’ moves than to the actual potential of AI.
The status of adoption is that in the Nordics, 30% of companies (compared to 21% globally) have embedded at least one AI technology across their business. This could be taken to mean that the Nordics are ahead of the curve compared to the global market. It could also be due to the Nordics having a higher general level of digitalization.
Taken together, these findings suggest that AI as a technology is still in the innovator/early adopter category of the diffusion of innovations theory developed by Everett M. Rogers. Rogers developed a framework and body of research, validated across multiple industries and technologies, that shows the patterns by which innovations of any type are adopted. AI is one such innovation, just like the Iowa farmers’ adoption of 2,4-D weed spray that was Rogers’ initial focus of investigation more than 50 years ago. The research showed that adoption takes the form of a bell curve.
Figure 1. Diffusion of innovations, credit: Wikimedia commons
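Rogers’ adopter categories are defined as cuts on that bell curve: boundaries at one and two standard deviations from the mean split the population into the familiar shares of roughly 2.5% innovators, 13.5% early adopters, 34% early majority, 34% late majority, and 16% laggards. A quick check using the standard normal CDF:

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return (1 + erf(z / sqrt(2))) / 2

# Rogers' category boundaries in standard deviations from the mean adoption time.
innovators = normal_cdf(-2)                       # earlier than -2 sd
early_adopters = normal_cdf(-1) - normal_cdf(-2)  # between -2 and -1 sd
early_majority = normal_cdf(0) - normal_cdf(-1)   # between -1 sd and the mean
late_majority = normal_cdf(1) - normal_cdf(0)     # between the mean and +1 sd
laggards = 1 - normal_cdf(1)                      # later than +1 sd
print([round(100 * x, 1) for x in
       (innovators, early_adopters, early_majority, late_majority, laggards)])
# → [2.3, 13.6, 34.1, 34.1, 15.9]
```

The exact normal-curve values land within rounding distance of Rogers’ textbook percentages.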
The fact that companies are waiting for competitors to use AI also indicates that we are in the early adopter or early majority phase. Whereas innovators will go with anything as long as it is new, early adopters are more picky, and the early majority primarily look at what the competition is doing in order to copy them.
If we look at figure 2, we can see that companies that have adopted AI today are vastly more profitable. The logic seems straightforward: there is a huge potential for AI to make companies more profitable.
While this is indeed a tempting conclusion, we have to be cautious. Keep in mind that the companies adopting AI may just be more technologically proficient. AI adoption could be confounded with adopter category and technology utilization in general. It could just mean that companies more open to innovation of any kind are on average more profitable than those that are not. It is well known that early adopters in particular are more profitable than other adopter categories.
To put it another way: adopting AI may coincide with becoming more profitable, but it is not certain that AI is the reason. What McKinsey doesn’t tell us, but I expect them to know full well, is that the reverse is also true. Investing in AI may actually set you up for failure.
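The confounding argument can be illustrated with a toy simulation: let a hidden “innovation culture” variable drive both AI adoption and profitability, with AI itself having zero effect. Adopters still come out more profitable on average. All numbers and the model itself are invented purely for illustration:

```python
import random

random.seed(42)

adopter_profit, non_adopter_profit = [], []
for _ in range(10_000):
    culture = random.random()                    # hidden confounder: innovation culture
    adopts_ai = culture > 0.7                    # strong cultures adopt AI more often
    profit = 10 * culture + random.gauss(0, 1)   # profit depends only on culture
    (adopter_profit if adopts_ai else non_adopter_profit).append(profit)

mean = lambda xs: sum(xs) / len(xs)
print(mean(adopter_profit) > mean(non_adopter_profit))  # → True
```

AI adoption here is pure correlation: the causal arrow runs from culture to both adoption and profit, which is exactly the pattern a cross-sectional survey cannot distinguish from AI causing the profit.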
AI adoption and adopter category
The issue here is that it may not be AI that is making the companies profitable; it may rather be their adopter category. The adopter category is related to company culture. A company culture that is friendly to new technologies will behave as an early adopter: monitor the market and selectively choose solutions that it thinks will give an advantage. This is what such companies do with any type of technology, not just AI. But we also have to remember that the reason they are successful is precisely their company culture and the fact that they are used to trying out new solutions.
They know that when you invest in something new, you don’t just press install, next, next, finish and watch the money start flowing. They know that new technologies are rough around the edges and that there is going to be a lot of stop and start, two steps forward and one step back. They are driven by a belief that they will fix it somehow. More importantly, they have a sufficient number of people with a “can-do attitude” who are not afraid to leave their comfort zone (see figure 3).
Figure 3. where magic happens
Now, compare this with organizations that have more people with a “not-invented-here attitude”. Their company culture places them in the late majority and laggard categories. For this type of organization, innovations are something to be shunned; they know what they are doing and consider it a significant risk to do anything differently. Their infrastructure is not geared towards making experimental and novel technologies work. It is geared towards efficiency and making well-known technology work in a predictable manner.
Let’s do a thought experiment about how this will play out. Karma Container, a medium-sized shipping company, decides to send Fred, an inspired employee, to TechBBQ. They still have mission-critical applications running on the mainframe and Windows NT servers (because Linux and macOS are not in use anywhere), and upgrades are a major concern that has the CIO biting his nails every time. Fred comes back from the conference energized. He spoke to the same McKinsey consultants and read the same reports that I did. He pitches to his CIO that they should invest in AI because the numbers clearly indicate that it would increase the company’s profitability. The last time they invested in any new technology was to move their telephones to IP telephony and implement help desk technology. The CIO says OK, and they decide to try to adopt a chatbot to integrate with their helpdesk and website.
So, with a budget and a formal project established, Fred starts. They wonder who in the organization would actually implement it. They go to the database administrator, who looks at them as if they were suddenly speaking a different language. He has no idea. They go to the .NET developer, who fails to appreciate how this could in any way involve him. They then go to the system administrators, who quickly show them the door on account of a purported acute security event. They don’t get back to the project team either.
Remember that at this point they haven’t even started to figure out who would maintain, patch, and upgrade the system, who would be responsible when it behaves strangely, or who would support it. Fred quickly gives up and returns to his job of managing Remedy tickets.
Beware of AI
The point of this thought experiment (vaguely based on real-life experience, even though the names and details have been changed) is that even if AI does have much to offer in terms of profitability and efficiency, it is not a realistic choice for most companies at this point. I would even go so far as to say that AI should be avoided by most companies, unless they have a track record and a company culture indicating they could make it work.
Most AI solutions are not mature, that is, easy enough to use, and more importantly their value proposition is speculative. If an organization is not geared towards implementing experimental technologies, it is wasting time, money, and effort by trying. This is why most companies are better off waiting. It is similar to websites in the 1990s: they were not for everyone, but today anyone can click a few times and create a beautiful site in WordPress or another CMS. Once we have the equivalent of a WordPress for AI, that is when most companies should invest.
Diffusion of innovations just takes time; it cannot and should not be forced. The current AI hype is also a result of innovators and early adopters being louder and more opinion-forming than most companies. Most companies are better off waiting for the dust to settle and for more mature and comprehensive solutions to appear.
Recently I received a notification from Green Mobility, the electric car-sharing company I use sometimes. I have decided not to own a car any longer and to experiment with other mobility options, not that I care about the climate, it’s just, well, because. I like these cars a lot: I like their speed and acceleration, and that you can just drop them off and never think about them again. Apparently I enjoyed the speed and acceleration a little too much, since the notification said the police claimed that I (allegedly) was speeding on one of my trips. For a very short period of time I toyed with the “it-wasn’t-me” approach, but I quickly decided against it, since technology was quite obviously not on my side here. Then I directed my disappointment at not receiving complete mobility immunity, along with all the other perks of not owning my car, against the company, which charged me an extra fee on top of the ticket, a so-called administration fee. But that was a minor fee anyway. Then I decided to rant at the poor support person because the notification had called it a parking ticket, and I obviously wasn’t parking according to the photo. Although in my heart I did realize that this was not going anywhere.
I believe this is a familiar feeling to any of my fellow motorists: the letter in the mail displaying your innocent face at the wheel of your car, a registered speed higher than allowed, and the payment details of the ticket you received for the violation. It is interesting to observe the anger we feel and the unmistakable sense that this is deeply unfair, even though it is obviously not. The fine is often the result of an automated speed camera that doesn’t even follow normal working hours or lunch breaks (an initial reason for it feeling unfair). A wide suite of mobility products like GPS systems and Waze keeps track of these speed cameras in real time. Some people follow and report them with something approaching religious zeal. But what is the problem here? People know, or should know, the speed limit, and know you will get a ticket if you are caught. The operative part of this sentence seems to be the “if you are caught” part. More about that in a minute.
The Technology Optimization Paradox
Last year I was working with the City of New York to pilot a system that would use artificial intelligence to detect different things in traffic. Like most innovation efforts in a city context it was not funded beyond the hours we could put into it, so we needed to get people excited and find a sponsor to take the solution further. Different suggestions about what we should focus on came up. One of them was that we should use the system to detect traffic violations and automatically fine car owners based on their license plates.
This is completely feasible; I have received tickets myself based on my license plate, so I gathered that the technology would be a minor issue. We could then roll it out on the approximately 3,000 traffic cameras already in the city. Imagine how much revenue that could bring in. It could probably sponsor a couple of new parks or sports grounds, or even a proper basketball team for New York. At the same time it would improve traffic flow, because fewer people would double-park or park in bus lanes. When you look at it, it seems like a clear win-win: we could improve traffic for all New Yorkers, build new parks, and (eventually) have a team in the NBA playoffs. We felt pretty confident.
This is where things got complicated. We quickly realized that this was not a pitch that would energize anyone, at least not in a way that was beneficial to the project. Even though people are getting tickets today, and do not seriously suggest that they should not, the idea of OPTIMIZING this function in the city seemed completely off. This is a general phenomenon in technological solutions, which I call the "Technology Optimization Paradox": optimizing a function that is deemed good and relevant leads to resistance beyond a certain optimization threshold. If the function is good and valuable, there should be no logical reason why doing it better should be worse, but that is sometimes how people feel. The paradox is often seen in law enforcement. We don't want pervasive surveillance, even though it would greatly help the fight against terrorism. We like the work of the FBI that leads to arrests and the exposure of terrorist plots, but we don't want to open our phones to pervasive eavesdropping.
This is where we get back to the "if you are caught" part. Everyone agrees that it is fair to be punished for a crime if you are caught. The emphasis is on the "if". When we use technology like AI, we get very close to substituting the "if" with a "when". That is what we feel is unfair. It is as though we have an intuitive expectation that we should have a fair chance of getting away with something. This is what I call the right to stochastic liberty: the right of the individual to have events remain non-deterministic, especially adverse events. We want the liberty of a chance to get away with an infringement. This is the issue many people have with AI when it is used for certain types of tasks, specifically tasks that exhibit the optimization paradox: it takes away the stochastic liberty, the element of chance.
Let us look at some other examples. When we do blood work, do we want AI to automatically tell us about all our hereditary diseases, so the doctor can tell us that we need to eat more fiber and stop smoking? No sir, we quietly assert our right to stochastic liberty and the idea that maybe we will be the 1% who live to 90 fuelled on a diet of sausages, fries and milkshakes, even though half our family died of heart attacks before they turned 40. But do we want AI to detect a disease that we already suspect we might have? Yes!
Do we want AI to automatically detect when we have put too many deductions on our tax return? No way, we want our stochastic liberty. Somebody in the tax department must sit sweating and justify why regular citizens' tax returns are being looked through. At most we can accept the occasional spot check (like the rare traffic police officer, who also has to take a break, get lunch, and check the latest sports results; that's fair). But do we want AI to help us find systematic money laundering and tax-evasion schemes? Hell yeah!
Fear of the AI God
Our fear of AI is that it would become this perfect god that would actually enforce all the ideals and ethics we (more or less) agree on. We don't want our AI to take away our basic human right of stochastic liberty.
This is a lesson you don't have to explain to politicians, who ultimately run the city and decide what gets funded and what does not. They know that unhappy people who get too many traffic tickets they find unfair will not vote for them. This is what some AI developers and technocrats fail to appreciate when they talk about how we can use AI to make the city a better place. The city is a real place where technology makes real impacts on real people, and the dynamics of a technology solution exceed those of the system in isolation. This is a lesson for all technology innovation involving AI: certain human preferences and political realities impose limits on an AI solution just as real as the choice of algorithm, IOPS, and CPU usage.
For about a decade I have been involved in various system development efforts involving artificial intelligence. They have all been challenging, but in different ways. Today AI is rightfully considered a game changer in many industries and areas of society, but it makes sense to reflect on the challenges I have encountered in order to assess the viability of AI solutions.
10 years of AI
About 10 years ago I designed my first AI solution, or machine learning as we typically called it back then. I was working in the retail industry at the time, trying to find the optimal way of targeting individual customers with the right offers at the right time. A lot of thought went into it, and I worked with an awesome university professor (Rune Møller Jensen) to identify and design the best algorithm for our problem. This was challenging but not completely impossible. It was before TensorFlow or any other comprehensive ML library existed. Nevertheless, everything died in protracted discussions about how to implement our design in SQL (which is impractical at best: how do you do K-means clustering in SQL?), since SQL was the only language known to the BI team responsible for the solution.
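As an aside for the technically curious: the heart of K-means is a short assign-and-update loop, which is exactly why it sits so awkwardly in set-based SQL. Here is a minimal sketch in plain Python, purely illustrative (the points and the naive initialization are made up for the example; this is not the algorithm we actually designed):

```python
def kmeans(points, k, iters=20):
    """Plain K-means: assign each point to its nearest centroid,
    recompute each centroid as the mean of its cluster, repeat."""
    centroids = [tuple(p) for p in points[:k]]  # naive deterministic init
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])),
            )
            clusters[nearest].append(p)
        # Update step: each centroid becomes the mean of its cluster
        centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[j]
            for j, cl in enumerate(clusters)
        ]
    return centroids

# Two obvious clusters, around (0, 0) and (10, 10)
pts = [(0.0, 0.1), (0.2, 0.0), (10.0, 10.1), (10.2, 9.9)]
print(sorted(kmeans(pts, 2)))
```

A dozen lines in a general-purpose language; expressing the same iterative loop in the SQL dialects of the day meant contorting it into repeated self-joins, which is where the discussions went to die.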
Fast-forward a few years, and I found myself in the financial services industry trying to build models to identify potential financial crime. Financial crime has a pattern, and this time the developers had an adequate language for implementing AI and were open to using the newest technologies, such as Spark and Hadoop. We were able to generate quite a few ideas and POCs, but everything again petered out. This time the obstacle was the regulatory wall, or rather various more or less defined regulatory concerns. Again the cultural and organizational forces against the solution were too big to generate a viable product (although somehow we did manage to win a big data prize).
Fast-forward again to today. Being responsible for designing data services for the City of New York, I find that the forces I encountered earlier in my career are still there, but the tide is turning: more people know about AI and take courses on how it works. Now I can actually design AI solutions that get implemented without serious internal forces working against them. But the next obstacle is already waiting, and this time it is particular to government and not present in private industry. When you work for a company it is usually straightforward to define what counts as good, that is, something you want more of, like, say, money. In the retail sector, at the end of the day all anyone cared about was sales. In financial services it was detecting financial crime. In government it is not as straightforward.
What drives government AI adoption?
Sure, local, state, and federal government will always say that they want to save money. But the force driving everything in government is something else: public perception, since that is what gets officials elected, and elected officials define the path and appoint the directors who hire the people who ultimately decide which initiatives get implemented. Public perception is only partially defined by efficiency and monetary results. Other factors interfere with success, such as equity, fairness, and transparency.
Let me give some examples. One project I am working on has to do with benefits eligibility. Initially City Hall wanted to pass legislation that would automatically sign residents up for benefits. After intervention by experts, however, this was changed to doing a study first. The problem is that certain benefits interfere with other benefits, and signing you up for something you are eligible for may affect you negatively, because you could lose another benefit.
While this is not exactly artificial intelligence, it is still an algorithm that displays the same structural characteristics: the algorithm magically makes your life better. Even if we could make the algorithm compute the maximum value across all available benefits and sign the resident up for the optimal combination, we still would not necessarily thrill everyone. Since benefits are complex, some combinations may give you more in the long term rather than the short term. What if the resident prefers something in the short term? What if the system fails, and a family gets evicted and has to live in a shelter because the system failed to detect eligibility due to bad master data?
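To see why even the "simple" version is non-trivial: picking the highest-value combination of benefits subject to mutual-exclusion rules is a small combinatorial optimization problem. A toy sketch, with entirely hypothetical benefit names, dollar values, and exclusion rules (real eligibility rules involve income tests, phase-outs, and much more):

```python
from itertools import combinations

# Hypothetical benefits with monthly values (illustrative numbers only)
BENEFITS = {"rent_support": 400, "food_assist": 250, "child_care": 300, "transit": 120}

# Pairs that cannot be combined: enrolling in one disqualifies the other
EXCLUSIONS = {frozenset({"rent_support", "child_care"})}

def best_combination(benefits, exclusions):
    """Brute-force search for the highest-value set of benefits
    that violates no exclusion rule. Fine for a handful of programs;
    exponential in the number of benefits, so a real system would
    need something smarter."""
    best, best_value = set(), 0
    names = list(benefits)
    for r in range(len(names) + 1):
        for combo in combinations(names, r):
            chosen = set(combo)
            if any(pair <= chosen for pair in exclusions):
                continue  # this combination triggers an exclusion rule
            value = sum(benefits[n] for n in chosen)
            if value > best_value:
                best, best_value = chosen, value
    return best, best_value

combo, value = best_combination(BENEFITS, EXCLUSIONS)
print(sorted(combo), value)
```

Note that even this toy version maximizes a single number per month. It says nothing about short-term versus long-term preferences, and that gap is exactly where a real system starts to disappoint residents.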
When I was in the retail industry the equivalent failure would be a vegetarian getting an offer for steaks. Not optimal, but not critical either, if we could just sell 10 more steaks. In financial services it would amount to a minor Mexican drug lord successfully laundering a few hundred thousand dollars. Again not great, but not a critical issue. In government, a family being thrown out on the street is a story that could be picked up by the media to show how bad the administration is. Even if homelessness drops 30%, that one story could be the difference between reelection and loss of power.
What does a success look like?
So, the reward structures are crucial for understanding what will drive AI adoption. Currently I am not optimistic about using AI in the City. Recent legislation has mandated algorithmic transparency: the source code of every algorithm that affects decisions concerning citizens needs to be open to the public. While this makes sense from a public-perception perspective, it does not from a technical one. Contrary to popular belief, I don't think AI will catch on in government any time soon. And I think this generalizes to any sector where the reward function is multi-dimensional, that is, where success cannot be captured by a single measure.
Do you need to be able to program to be a good product manager? Opinions differ widely here.
Full disclosure: I have very little if any meaningful command of any programming language. If you feel you need to be able to program in order to have an informed opinion, you have already answered the question yourself and can safely skip this and read on.
So, just to get my answer out of the way: "no".
I would say no, just as you don't need to know how to lay bricks in order to be an architect, or to be a veterinarian in order to ride a horse.
When I hear people who answer "yes" to the question, I always want to counter: is it necessary to know anything about humans in order to build tech products for humans? Very few, if any, make products that do not crucially depend on and interact with humans, but it has always been curious to me why that part of the equation is assumed to be trivial, requiring no experience or education.
This is even more puzzling when you consider that the prevalent cause of product failure seems to be the human part. Let me mention three examples.
Remember Google Glass? It was a brilliant technology, but a failed product, due to a lack of understanding of what normal humans find creepy. I wrote about this back in 2014 and observed:
A product has to exist in an environment beyond its immediate users. Analysis of this environment and the humans that live in it could have revealed the emotional reactions.
Remember autonomous vehicles? Perfect technology, but unfortunately not necessarily considered as such by the humans who run the imperceptible risk of being killed by it, and who live with the results of the actions of the AI, which will eventually be traced back to humans somewhere. This is something I touched on in a recent blog post:
We would still have to instill the heuristics and the tradeoffs in the AI, which then leads back to who programs the AI. This means that suddenly we will have technology corporations and programmers making key moral decisions in the wild. They will be the intelligence inside the Artificial Intelligence.
The same goes for product features like the number of choices you offer. You might assume that more choice means more value, but keep in mind that if the product is used by humans, you have to think about the constraints humans bring:
In general, the value of an extra choice increases sharply in the beginning and then quickly drops off. A choice between apples, oranges, pears, carrots and bananas is great, but when you can also choose between three different sorts of each, the value of being offered yet another type of apple may even be negative. The reason for this phenomenon has to do with the limits of human consciousness.
The root cause of product failure is typically not technical but human. So rather than quizzing a product manager on his or her command of programming languages, maybe check where he or she falls on the autism spectrum, and ask whether he or she has ever studied anything related to human factors, like psychology, anthropology, or sociology, that would allow him or her to make products that work well for humans.