“Luck is what happens when preparation meets opportunity”
This quote is usually attributed to Seneca, but that is not entirely correct: it is neither by Seneca nor quoted correctly. It does, however, seem to be based on this passage from Seneca’s On Benefits, Book VII, I:
“’The best wrestler,’ he would say, ‘is not he who has learned thoroughly all the tricks and twists of the art, which are seldom met with in actual wrestling, but he who has well and carefully trained himself in one or two of them, and watches keenly for an opportunity of practicing them.’”
Although the quote appears in Seneca’s writing, it is not his own but the Cynic Demetrius’s. The gist seems similar: it makes no sense to learn and prepare for everything. The best have learned a few things deeply and stay focused on the opportunity to use that knowledge. There is, however, one key difference. The common version of the quote talks about preparation, which could also be interpreted as planning. The original quote does not suggest that preparation means planning in the usual sequential form; rather, it indicates that it is better to have mastered a few standard actions and wait until the right moment to use them occurs.
This is true not only for ancient Greek and Roman wrestlers but also for the contemporary tech industry. We continuously hear of all the “unlucky” incidents: budget overruns and delays are common. The bigger the project, the higher the probability of being afflicted by this bad luck. Bad luck comes in many forms: a subcontractor did not deliver on time or according to specification, an upgrade failed, a server crashed, data was deleted, the legacy code was more complicated than anticipated, unknown integrations were discovered. I have seen all these misfortunes and privately marveled at the ubiquity of bad luck in tech. Indeed, bad luck seems statistically prevalent.
Conversely, when projects actually succeed it is usually not attributed to good luck but to good planning. This seems wrong given that the original quote has no such indication. While the idea that planning is the key to success is widespread it is particularly dangerous in the tech industry.
Now let’s go back to Seneca’s insight. Preparation is only ever half of the equation that predicts luck. The other half, opportunity, seems completely missing from current treatments of project success.
The reason could be that preparation lends itself better to the structured approaches popular in academic and business books and articles. Planning can be broken down into phases, tasks, and predictable discrete units that can be submitted to templates and repetition. It aligns with the logic of the corporate world, since public and many private companies are run by boards that expect some level of preparation for the future. Publicly traded companies are measured on their predictability. This creates pressure for plans throughout the organization. We need a hiring plan, a financial plan, a development plan, etc. No one ever asks how to make the most of opportunity.
Opportunity is much less amenable to the logic of the corporate world. Although it has its shining moment as the O in SWOT, this is really just a stepping stone to more preparation and planning. It is a nod to the acceptance of opportunity but not really a way to integrate it into governance.
Opportunity cannot be predicted and hence cannot easily be quantified or turned into KPIs and reports, but it is still a fundamental part of how the world works. It is also something that you can be better or worse at cultivating. Since opportunity depends on a stochastic element, we need to look at how to engage with such unforeseen and unforeseeable contingencies.
The following are important ways to identify opportunities at different levels.
Awareness – if we are unaware of what is going on around us we are never going to see any opportunities. The first step is to actually monitor what is going on. If you are at the level of a company this means monitoring the market, competitors, customers, and everything else that is directly or indirectly a key part of your environment. It also works at the level of the individual employee where it means looking for new tasks and jobs internally as well as externally. From the point of view of a project, anything that can affect the deliverables and plan should be monitored closely. Monitoring the world around us is key to identifying opportunities and takes many forms. At a corporate level industry reports and competitor analyses are continuously done. Employees may survey job sites and LinkedIn and the project manager usually maintains a risk log. Bringing this front and center is the first step to increasing focus on opportunities.
Mindset – based on the monitoring, anything that happens can be viewed differently. The focus is usually on risks and mitigating them, perhaps because this has the most immediate impact. But as the SWOT approach illuminates, the other side of risk is opportunity. Spotting opportunities is therefore as much a question of mindset as of awareness. Unexpected contingencies will always carry opportunities. The difference is that working with risks usually focuses on maintaining the status quo and keeping eventualities from disturbing the plan, whereas opportunities often require a bit of creative thinking to understand how a situation can be used. We rarely pursue an opportunity in order to continue doing the same, but in order to do something different or differently.
Probing – spotting the opportunities of the environment requires a way of validating them, since perception can mislead. A potential opportunity needs to be probed in order to make sure it is real. At the corporate level, the question could be whether the market really prefers a particular feature, as can be seen from product reviews and blogs. An employee interested in doing data science could find out whether there is a possibility of doing it internally by asking around. A project manager could snap up a suddenly free internal resource upon hearing about it in their network. Spotting opportunities is not just passive monitoring but also continuous probing to find and validate them.
Now that we understand better the nature of opportunity we need to go back to the quote from the beginning.
In order to fully appreciate the depth of the Seneca quote, we need to read what precedes it. The quote is not about wrestling. Rather it is about how to approach life. The quote is preceded by this passage:
“The cynic Demetrius, who in my opinion was a great man even if compared with the greatest philosophers, had an admirable saying about this, that one gained more by having a few wise precepts ready and in common use than by learning many without having them at hand.”
Seneca, On Benefits, Book VII, I
“(..) that one gained more by having a few wise precepts ready and in common use than by learning many without having them at hand”: this is the key to success. Have only a few things prepared and look for opportunities to practice them.
What could a few such “wise precepts” look like in the real world, and how could that insight be used? This is highly contextual, and it is where the secret sauce lies. The point is that it is better to give up the idea that you can learn everything and be prepared for everything. Instead, focus on a few things and learn them well. These could be things that play to your abilities and interests, or they could be selected based on an assessment of what will happen often. For maximum effect, learn and train these and be ready to deploy them any time you spot an opportunity.
“The best architectures, requirements, and designs emerge from self-organizing teams”
is the eleventh principle out of twelve in the agile manifesto.
Taken at face value it is somewhat difficult to understand what exactly it means. In order to unpack it we need to investigate the meaning of some key concepts.
The core idea seems to be that order in the form of architecture and design emerges by itself from a self-organising team.
Self-organisation occurs at many levels in nature, from basic physical processes and chemical reactions leading to crystallisation, through biology, to macro phenomena in society and the economy. It is a property of multiple interactions within a system.
The idea that dynamics within a system generate order by themselves can be traced to the ancient atomists. According to Democritus the world consisted of small invisible atoms in a void. The motion of the cosmos separates the atoms according to their properties: heavy ones go together like pebbles on the beach. This is also how life appeared: living things emerged out of slime.
This view was influential in philosophy until the 19th century, when the second law of thermodynamics was discovered. According to the second law, entropy never decreases in an isolated system, so order will decrease with time. This means that a system cannot by itself increase order without influence from outside the system.
The agile manifesto thus seems to expound a pre-Socratic natural philosophy that has been abandoned thanks to the better understanding of natural laws gained by modern science over the past couple of centuries.
Even if order did arise from the self-organisation of the team, one would expect this only to result in emergent order of the team, not of its products. This is similar to how physical and chemical processes result in emergent order of the material itself, as is the case with crystals. Note also that crystals do not emerge as order by themselves but due to external processes in the form of heat and pressure: powdered carbon does not spring into crystal form spontaneously.
Consequently, a consistent emergentist view could hold that a self-organising team would predictably settle into an ordered form characterised by properties such as group size, structure, or composition. But there is no reason to assume anything about the products of the team.
We can therefore conclude that order does not arise by itself. The best architectures and designs do not just emerge de novo from team interactions when teams are self-organising; that would be against the second law of thermodynamics. Order has to come from outside.
Unfortunately, this erroneous view has imbued agile development with an unwillingness to design and architect to the point where many modern developers are vocally antagonistic to any form of upfront design. The effect, of course, is an increase in disorder. This can take many forms. Some are visible as technical debt, some are invisible as bad design leading to instability of systems.
One would think that eventually they would come to this realisation, but another part of the agile mindset precludes it, namely the idea that gradual changes and refactoring are normal, even laws of nature. In a sense they are correct: because the unwillingness to do upfront design means nothing is done to reduce entropy, systems constantly need to be refactored, taking away the possibility of working on something more worthwhile.
Another consequence is instability, because system interactions become more complex when there is no deliberate design to make them simpler.
I have worked with different generations of developers and systems, from mainframes to apps. Pre-agile systems can be really ugly too, no doubt about that, but they were made in an era where design and architecture were often done as a natural part of development. I have seen such systems run stably, without error, for more than 40 years. It is a rare occurrence for a modern system developed by an agile team to run for more than a few years at all, let alone without incidents. This is not because modern developers are worse; quite the contrary, developers today typically have longer, more dedicated study programmes behind them. Nor is the reason that technologies change faster today. The reason is simply that upfront design is viewed as bad and rewriting everything in the name of refactoring every once in a while is considered okay.
This eleventh principle has potentially undermined most of the gains that the other eleven principles have brought. Fortunately, many companies have not implemented agile in full as envisioned by the agile manifesto. Today for example SAFe has a more realistic view of the need for design and combines this with the insights of the agile manifesto.
We should therefore either delete the eleventh principle or amend it with a “do not” to read: “The best architectures, requirements, and designs DO NOT emerge from self-organizing teams”. These have to be supplied by dedicated architecture and design work from outside the development team.
Much energy and many resources are currently being put into Artificial Intelligence, which is expected to approach and eclipse human intelligence. According to a poll by Nick Bostrom, this is expected to happen anytime between a few decades and one hundred years from now, with the consensus around mid-century. We are making great strides, and much fear is associated with this development. However, at the heart of AI is a conceptual problem with real practical consequences that may question the fundamental possibility of an Artificial General Intelligence given the current approach.
In the interest of conceptual clarity let us first define a few terms that are used to distinguish different flavors of AI. The term Artificial Intelligence (AI) is used for all types.
Artificial Narrow Intelligence (ANI) – these are applications of AI that solve narrow problems like recommendations of products, image recognition or text to speech systems.
Artificial General Intelligence (AGI) – an intelligence on a par with human intelligence and in all respects indistinguishable from it.
Artificial Super Intelligence (ASI) – similar to AGI but superior, particularly with respect to speed.
One obvious but clearly central aspect that is rarely the object of reflection is the concept of intelligence itself. In the context of AI it is treated as strangely trivial and self-evident, while in psychology intelligence has been the subject of intense debate for more than a century and can hardly be characterized as an area of consensus even today. The purpose here is not to go into any detailed debate about what is and is not intelligence, as this will always be a point of contention and more of a definitional than a substantial problem. After all, anyone is free to define a concept as they prefer as long as that definition is precise and consistent. Rather, I would like to start from the standard concept of intelligence as understood in the context of AI.
What, then, is intelligence according to AI research? According to the Wikipedia article, the following are important traits of intelligence:
Reason – the use of strategy, and ability to solve puzzles
Representing and using knowledge – like common sense inference
Planning – structuring actions toward a goal
Learning – acquiring new skills
Communication in natural language – speaking in a way humans will understand
These focus, with good reason, on general abilities that are part of intelligence; they are all represented in one way or another in most psychological theories of intelligence too. These abilities can, however, be tricky to measure, and if we cannot measure them it is difficult to know whether an AI possesses them. Another approach has therefore been to start from tests that would determine whether an AI exhibits such abilities.
The earliest and most famous one is the Turing test, developed as a thought experiment by Alan Turing in 1950. In this test a game is played where the purpose is deceit. If the computer is statistically as successful at deceiving as the human opponent, it is considered to have passed the Turing test and thus to exhibit intelligence at the same level as a human. One cannot help but speculate that Turing’s earlier occupation as a code breaker during the Second World War might have influenced this conceptualization of intelligence, but that is another matter.
Another, more contemporary account is Steve Wozniak’s coffee test, in which a machine is required to go into any ordinary American home and brew a cup of coffee. A somewhat more practical concept, and one could speculate that it was similarly inspired by the preoccupations of the author of the test.
Ben Goertzel, an AI researcher, has proposed the so-called robot college student test, where an AI is required to enroll in a university on the same terms as a human and get a degree in order to pass the test.
While one could discuss whether these tests really test AGI rather than merely ANI, they reveal one core observation about intelligence: it is entirely conceptualized in the context of problem solving. The tests may focus on different problems to solve (how to deceive, how to brew coffee, how to get a degree), but they all start from a problem that is already given.
The same can be said of the abilities that are usually associated with AI mentioned above.
Reason is problem solving with respect to finding the best solution to a predefined problem, such as “how to solve this puzzle” or, in the more dystopian accounts, “how to take over the world”.
Representing and using knowledge is problem solving with respect to ad hoc problems arising from who knows where.
Planning is problem solving with regard to structuring a temporal sequence of actions toward a pre-given goal.
Learning is problem solving with regard to adapting to a problem and solving it; learning IS basically problem solving, or at least optimizing how to solve problems.
Communication in natural language is problem solving with respect to conveying information between two or more communicators.
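The observation that the problem is always pre-given can be seen in even the simplest AI technique. The following sketch is purely illustrative (the names and the chosen puzzle are mine, not from any particular system): breadth-first search solves the classic two-jug puzzle, but the state space, the legal moves, and the goal are all defined by a human beforehand; the machine only searches within them.

```python
from collections import deque

# Breadth-first search over states of the classic two-jug puzzle:
# with a 3-litre and a 5-litre jug, measure exactly 4 litres.
# The problem itself (states, moves, goal) is entirely defined by
# the human; the machine merely searches for a solution.

CAPACITIES = (3, 5)
GOAL = 4  # litres to measure; chosen by a human, not found by the machine

def moves(state):
    a, b = state
    yield (CAPACITIES[0], b)            # fill jug A
    yield (a, CAPACITIES[1])            # fill jug B
    yield (0, b)                        # empty jug A
    yield (a, 0)                        # empty jug B
    pour = min(a, CAPACITIES[1] - b)    # pour A into B
    yield (a - pour, b + pour)
    pour = min(b, CAPACITIES[0] - a)    # pour B into A
    yield (a + pour, b - pour)

def solve():
    """Return a shortest sequence of jug states reaching the goal."""
    frontier = deque([((0, 0), [])])
    seen = {(0, 0)}
    while frontier:
        state, path = frontier.popleft()
        if GOAL in state:
            return path + [state]
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [state]))
    return None

print(solve())  # a shortest sequence of jug states ending with 4 litres
```

However cleverly the search is implemented, nothing in it ever asks whether measuring 4 litres was worth doing in the first place; that is exactly the problem-finding side discussed below.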
Stepping aside for a moment to the philosophy of mind, we find a similar situation. In the 1990s David Chalmers identified the hard problem of consciousness: why and how we have conscious experience. Compared to this, the problems of physically explaining how we process and integrate information were argued to be “easy” problems, because all they require is specifying the mechanisms of these functions. They are considered easy not because they are trivial, but because even when all cognitive functions have been explained, the problem of why and how we have conscious experience remains. In order to understand the distinction and how it relates to our problem, it is fruitful to quote Chalmers at length:
“Why are the easy problems easy, and why is the hard problem hard? The easy problems are easy precisely because they concern the explanation of cognitive abilities and functions. To explain a cognitive function, we need only specify a mechanism that can perform the function. The methods of cognitive science are well-suited for this sort of explanation, and so are well-suited to the easy problems of consciousness. By contrast, the hard problem is hard precisely because it is not a problem about the performance of functions. The problem persists even when the performance of all the relevant functions is explained. (Here “function” is not used in the narrow teleological sense of something that a system is designed to do, but in the broader sense of any causal role in the production of behavior that a system might perform.)”
“The easy problems of consciousness are those that seem directly susceptible to the standard methods of cognitive science, whereby a phenomenon is explained in terms of computational or neural mechanisms. The hard problems are those that seem to resist those methods.”
Something analogous is the case in AI: here too we can discern easy problems and hard problems. As was seen above, the concept of intelligence is entirely focused on problem SOLVING. In fact, the different kinds of problem solving we have just reviewed are the easy problems of AI. Even if we solve all of them, we will not have a human-like intelligence, because we still miss the flip side of the coin of problem solving: problem FINDING. The hard problem of AI is therefore how an AI finds the right problems to solve.
As Chalmers postulated for the philosophy of mind, we can solve all the easy problems of AI and have a perfect problem-solving machine without having a true AGI or ASI. The trouble is a homunculus problem: the problems that the AI is solving derive ultimately from a human, since a human will at some point have created it and set the parameters for the problems it will solve. Even if it morphs and starts creating other AIs itself, the root problem or problems will have been created by the human who created the first system, or seed AI as it is sometimes called. The root of the AI, even if it is indistinguishable from or superior to a human in its problem-solving abilities, is human, and it is therefore not an AGI or ASI.
Commonly the solution is to assert that the AI comes into the world with a motivation to achieve a goal, from which it somehow finds the problems to solve. Even setting aside how exactly the problems are found, this seems a bit of a stretch if the result is supposed to match human intelligence. Having one goal and pursuing it seems to be the norm in only one realm of human life: the coaching and self-help industry. In actual human life it is the exception rather than the rule that a person has one goal.
A simple example: humans typically don’t know what they want to be when they grow up. Then they end up becoming a management consultant and despair at the latest around 40 at which point they decide to become an independent quilting artisan. Only to switch back to corporate life as a CFO and then retire to a monastery only to return with the goal of providing the world with poetry. This entails a lot of different competing and changing motivations over the span of a lifetime the dynamics of which are poorly understood. Moreover, it entails a lot of different problems to identify along the way.
Not until an AI has the ability to identify and formulate such shifting problems can it be called an Artificial General Intelligence. Until then it is an Artificial Narrow Intelligence with the purpose of solving problems pre-set by humans. Consequently, until we solve the hard problem of AI, it will remain a mere tool of humans: the intelligence we see is not truly humanlike, general, and independent, but in fact a mere reflection of human intelligence, and hence not truly artificial.
This does not mean that the doomsday scenarios, which Tegmark, Bostrom, and the public spend a great deal of time on, go away. It does, however, change how we view them. Currently the consensus is that AI poses some sort of fundamentally different problem to us. That does not seem to be the case. Ever since late industrialization and the coming of advanced technologies like nuclear power plants and chemical factories, we have been living with the threat of high-risk technologies. These have been treated with great clarity by Charles Perrow, and AI falls squarely within this treatment.
This analysis also suggests such scenarios are probably exaggerated in both their severity and timing, since we have not even started tackling the hard problem of AI. As long as we haven’t begun to understand how problems are found in an environment, and how they change dynamically with the interactions between agent and environment, it is hard to see how human-like intelligence can develop anytime soon.
Rather than fearing or dreaming about artificial general intelligence, we might benefit from thinking about how AI as a technology can benefit humans rather than take the place of humans. We might also start thinking about the hard problem as a way to improve AI. Thinking more about how problems are found could be an avenue to make AI more humanlike, or at least more biological, since all biological species show this fundamental ability. Today most AI uses a brute-force approach to solving problems and needs orders of magnitude more learning cycles than humans in order to learn anything. Perhaps a deeper understanding of problem finding would lead to a more efficient and “biological” ability to learn that does not depend on endless amounts of data and learning cycles.
Until we start tackling the hard problem of AI, for better or worse, progress in AI will stall and scale only with the underlying technological progress of processing power, which does not advance our goal of more human-like AI.
Iterative or agile development in one flavor or other has become the standard for IT development today. It is in many contexts an improvement on plan-based or waterfall development, but it inherits some of the same basic weaknesses. Like plan-based development, it is based on decomposing work into atomic units of tasks with the purpose of optimizing throughput and thereby delivering more solutions faster. In most formulations, from SAFe through kanban to DevOps, the basic analogy is the production line, and often the actual source of inspiration is the manufacturing world, with titles such as Don Reinertsen’s “Managing the Design Factory”. Similarly, the plot of Gene Kim’s 2013 DevOps novel “The Phoenix Project” revolves around learning from factory operations to save a troubled company’s IT development process. While iterative approaches spring from advances in manufacturing processes like Lean, Six Sigma, and TQM, they are stuck in the same mental prison that waterfall was: a linear mode of thought where the world is a production line through which atomic units move and become assembled. To understand why, let’s dig a bit deeper.
The origins of iterative development’s linear mode of thought
Like most things, development practices don’t spring from a vacuum; they have roots in the culture in which they emerge. The anthropologist Bradd Shore has argued that the most pervasive cultural model in American culture, underpinning everything from sports through education to fast food, is modularization. A cultural model is a way of structuring our experiences and how we think about problems and solutions. According to the modular model, things are broken down into isolated component parts, each with a specific function. Through the outsized influence of American culture on modernity globally, this model has been disseminated in various forms to the whole world. It can, however, be traced back to the production line.
As early as 1948 the British anthropologist Geoffrey Gorer noted, in a study of the American national character, how the pervasive atomism of American institutions could be traced to the great success of the production line in American industry. According to Gorer, industrial metaphors became the basis of a distinctively American view of human activity. The model is so pervasive that it persists today in the foundations of how we view IT development, an activity in which no physical goods move through any physical space; nevertheless, we have chosen this model to conceptualize it, while others could have been chosen. The basic units are tasks that are worked on one at a time by a specialist, and the deliverable passes through different specialists as it passes through the production line: from design, through development and test, to deployment.
It is perhaps not surprising that the state of the art in development globally is based on the model of the production line since it has been immensely successful and, in many ways, transformed our world to what we see today, but the question is whether that continues to be helpful. Is the production line really the best model of conceptualizing IT development?
Another, more circular, approach
In the new millennium a new design philosophy emerged that challenged these assumptions, so pervasive in modern culture. It focused on circularity rather than linearity. It was developed in a number of books, such as McDonough and Braungart’s Cradle to Cradle: Remaking the Way We Make Things, that converged on a model valuing circularity rather than linearity. This is why it is commonly known as the circular economy.

One of the main ideas of the circular economy is that we should think differently about waste: not as something to throw away but as a potential resource. Another important idea is to think in systems related to each other. The systems perspective requires us to think about feedback loops and the dynamics of the whole system that a solution is part of. It is not enough to think about the production line, because it is embedded in bigger systems: the labor force, politics, the ecosystem, the energy system. What is an improvement in a linear view may not be one when we consider the wider system effects.

The circular economy takes inspiration from biology, where metabolism is a key concept. This leads to a focus on flows of materials, energy, and water in order to understand the metabolism of cities or countries. Superficially it could seem that agile development is similar, in so far as we also find a focus on flows there, but that is deceiving. In agile the flow is only one of throughput: something comes in, goes through the process, and something else comes out, which marks the end of the scope of interest. The agile version is a linear focus on flows without any interest in systemic effects.
A circular form of development
The circular economy has made an impact in physical product design, city planning and management and production of physical goods. This shift taking place in the wider culture also has the potential to help us break out of the mental models that are a consequence of the production line of the industrial revolution and impact the way we develop tech products too. By moving from the linear mode to a circular mode of thinking we may harness many of the same beneficial effects that the circular economy does. Let us look at some examples of how that would change how we develop tech products.
Linear: The basic metaphor is the production line, where throughput and production are in focus.
Circular: The basic metaphor is metabolism, where life and complex systems are in focus.

Linear: The responsibility of development ends with deliverables deployed.
Circular: The working solution is a shared responsibility.

Linear: Promotes a centralized view of production due to low cost and concentration of expertise.
Circular: Promotes decentralized models where development takes place in the natural context where it creates value.

Linear: Standardization of end product, process, and technologies.
Circular: Standardization of components, protocols, and interfaces.

Linear: Optimizes for throughput.
Circular: Optimizes for service utility.

Linear: Operates on service level agreements.
Circular: Operates on fitness functions.

Linear: Rewards hours spent.
Circular: Rewards value produced.

Linear: Builds from new.
Circular: Reuses and repurposes what already exists.

Linear: Focus on transactions and interactions.
Circular: Focus on system dynamics.

Linear: Development based on business requirements and user acceptance tests.
Circular: Cocreation and dialogue based on vision and goals.
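The contrast between service level agreements and fitness functions above can be made concrete. A fitness function is an executable check that runs continuously against the system rather than an agreement checked after the fact. The sketch below is a minimal illustration; the metric names, thresholds, and function names are assumptions of mine, not an established API.

```python
# Minimal sketch of architectural fitness functions: executable checks
# evaluated continuously, in contrast to after-the-fact SLA reporting.
# All metrics and thresholds here are hypothetical.

def latency_fitness(p95_latency_ms: float, budget_ms: float = 200.0) -> bool:
    """Pass if the 95th-percentile latency stays within its budget."""
    return p95_latency_ms <= budget_ms

def coupling_fitness(cross_module_deps: int, max_allowed: int = 10) -> bool:
    """Pass if cross-module dependencies stay below a ceiling."""
    return cross_module_deps <= max_allowed

def overall_fitness(metrics: dict) -> bool:
    """The service is 'fit' only if every individual check passes."""
    return (latency_fitness(metrics["p95_latency_ms"])
            and coupling_fitness(metrics["cross_module_deps"]))

# Example run with made-up measurements:
print(overall_fitness({"p95_latency_ms": 180.0, "cross_module_deps": 7}))  # True
```

Such checks would typically run in a pipeline or monitoring loop, so the health of the whole system, not just a contractual boundary, is evaluated continuously.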
A circular mode of development will do many of the same things as agile development and share some if not most processes and techniques, but the basic approach and mindset are radically different. Let us look at some of the more important possibilities.
Development as a complex system
In a traditional setting, development consists of multiple modular activities whose performance has little or no effect on each other. Business development and design are done in isolation from development, which is done in isolation from operations. Between them are handoffs that are always fraught with conflict and miscommunication.
In a circular perspective, development is no longer just a sequence of atomic tasks to be completed by specialists but a fabric of interlocking areas that affect each other. By purposefully considering the entire fabric and its feedback loops, development efforts optimize not just throughput but the utility of the services produced and their interplay with the environment. For example, the programming languages used to develop a solution affect the employees working on it, as well as the recruitment of the right talent. If the language chosen is, say, Scala, because it is deemed superior for a given problem, this also affects Human Resources: Scala developers are among the highest paid globally, at an average of $77,159 per year, and are difficult to find because they are fairly scarce. If HR were involved in the decision and the focus were on affordability and availability of talent, C++ might make sense, with an average salary of $55,363 globally. From a financial perspective alone, the roughly $22,000 difference per developer per year could be important. Furthermore, the demographics of C++ developers and Scala developers may differ, which affects work culture, and work culture can be an important factor in attracting and retaining talent. If the drive is towards a younger profile, this may point the other way, with Scala more popular among the young.
What might look like an isolated technology choice in a linear mode of thought actually has wider impacts on the whole organization. Viewing decisions through the lens of a complex system helps bring these dynamics to light. In the example, the decision could be optimal locally but not globally, and may introduce unforeseen systemic effects.
Different functional areas such as sales, development and operations are commonly separate units, each with their own leadership and responsibility. Units of work pass between these modules and change responsibility along the way. But responsibility stops at the boundary, which is the source of many political border wars.
Rather, we should look at how to build a shared responsibility that spans not only the entire organization (business and IT) but also external entities like suppliers and collaborators. All should be responsible for the whole and not just their part. Today, even in forward-looking agile organizations, the responsibilities of development, infrastructure, business operations, sales and HR all belong in separate departments, even if ideas such as DevOps are trying to break down the boundary between the first two.
Better ways to implement a shared responsibility must be developed. There are alternatives: colocation of the different functions, for example. This is what makes it easy for start-ups smaller than a couple of hundred employees to move faster than the competition and often deliver vastly more value to the customer. It would make more sense if teams were organized around an objective rather than a type of work. If developers, support, marketing and sales were all part of the same team, equally responsible for the same objective, work would align toward it more seamlessly.
The challenge, of course, is to find meaningful objectives that do not at the same time produce team sizes that are too big. One way is to architect the structure top-down to make sure all the necessary objectives are represented. Another is to allow teams to split up once they grow, dividing the objective into sub-objectives. There is no easy answer, but the first step is to break down the default linear thinking that organizes responsibility around types of work rather than common objectives.
While agile development often works from the premise of decentralized, empowered teams that work relatively independently, those teams still invariably belong to the technology function and are ultimately managed by the CIO or CTO. This brings with it some degree of centralization. If instead the creation of technology is just one aspect of a shared responsibility, decentralized multidisciplinary teams can appear.
Agile methodologies try to make the connection to the business by inserting a part-time representative from the business as the product owner, who often turns out to be an IT person anyway. Sometimes product managers encapsulate the business perspective but are mostly seen as standing outside the development team. These are all workarounds for the implicitly centralized mode of production.
Rather, having a truly cohesive multidisciplinary team means that there is no longer any need for a centralized mode of production. These teams would have all or most of the skills they need to fulfill their objectives. The skills may therefore vary greatly, but the decentralized teams should be able to work on services in isolation, using the tools and technologies that best fit the success of their service. This will increase the agility of the entire organization.
The degree of decentralization can differ according to the context of the business. For some heavily regulated industries, decentralization makes less sense than for less regulated ones. Complex industries like nuclear power might similarly have strict requirements for centralization on many parameters. In general, complexity draws an ideal solution towards centralization, but the general impetus should be towards decentralization. The focus should be on business services.
With a decentralized focus on services, standards become critical. This is nothing new, of course. Standards exist and are deployed in multiple contexts. Here, there is a need for clear standards for components, protocols and how interfaces are built and maintained. A precondition for decentralization and local autonomy around a service is that its interfaces are well defined and standardized. This is frequently done through an interface agreement that specifies what consumers can expect from the service. Once the interface agreement is in place, autonomous development of how to support it is possible. The standards, however, need to cover more than protocols and service level agreements; they also involve quality. As an example, imagine a bank providing a service for risk scoring of counterparties. It is not sufficient to know the protocol, the response time and the logic of the service; we also want quality standards around accuracy, false positives and errors, since these aspects are likely to affect other services like credit decisions and customer relationship management. The concept of standards and protocols is thus expanded compared to common services.
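As a sketch of what such an expanded interface agreement might look like, the snippet below models the bank example. The service name, thresholds and field names are all hypothetical; the point is only that quality guarantees (accuracy, false-positive rate) sit in the contract alongside the classic latency guarantee.

```python
from dataclasses import dataclass

# Hypothetical interface agreement covering quality, not just protocol and
# latency, following the counterparty risk-scoring example in the text.
@dataclass
class InterfaceAgreement:
    service: str
    protocol: str                   # e.g. "JSON over HTTPS"
    max_latency_ms: int             # classic SLA-style guarantee
    min_accuracy: float             # quality guarantee: share of correct scores
    max_false_positive_rate: float  # quality guarantee: wrongly flagged share

    def is_met(self, latency_ms: int, accuracy: float,
               false_positive_rate: float) -> bool:
        """Check a set of measurements against the agreement."""
        return (latency_ms <= self.max_latency_ms
                and accuracy >= self.min_accuracy
                and false_positive_rate <= self.max_false_positive_rate)

risk_scoring = InterfaceAgreement("counterparty-risk-scoring",
                                  "JSON over HTTPS",
                                  max_latency_ms=200,
                                  min_accuracy=0.95,
                                  max_false_positive_rate=0.02)
print(risk_scoring.is_met(150, 0.97, 0.01))  # True: within all thresholds
```

A consuming team (say, credit decisions) can build against this agreement autonomously; the producing team is free to change everything behind it as long as `is_met` keeps returning true.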
Some degree of standardization across services is also necessary. In the case of web services it would not make sense for some services to use XML, others JSON, and still others an invented protocol of their own. It would be similar to different organs in an organism using different blood types. You can have only one blood type as an individual, but different blood types may work equally well. Similarly, there needs to be only one standard for service interface protocols.
The focus on services delivered means that the focus will be on the utility of each service to its consumers. That may seem self-evident, but it is not. To determine whether a service is useful you have to take the perspective of its consumers. This is why user interviews, surveys, Net Promoter Score, focus groups and similar techniques have been developed in product development. These are all ways to find out whether a product or service is useful. They are, however, aimed at the end user, while a technology product usually relies on many other services. These should have a similarly strong focus on whether they are useful. If we run a real estate company, users will naturally be interested in the website and how it works. But an underlying service such as the price estimation engine is only indirectly relevant to end users and is not easily measured by the methods mentioned above. The utility of such a service to its consumers may be speed, accuracy, additional information or something else entirely. But whatever the utility is, this is what needs to be optimized.
Transition to fitness functions
The consequence is that we also need to rethink how we measure utility. The traditional service level agreement will not work for this way of working because it only relates to surface features that may or may not be important: latency, uptime, service windows and so on. Rather, we need to focus on the fitness functions of services as they relate to the utility of the service. The term fitness function is borrowed from evolutionary theory and designates a function that measures how close a potential solution is to solving a problem. In nature the problem is related to the survival of a species. For example, the speed and agility of a springbok are part of a fitness function that determines its survival in encounters with lions on the savannah.
In product development it is related to the utility of a service and thereby how it contributes to the overall success of the product or organization. An example could be how fast a website is ready to be used by the customer. If it is a social media service and readiness takes more than 2 seconds, that is the equivalent of being caught by the lion. A service level agreement might specify a latency of 2 seconds, but that is not the same as the functionality of the site being available. Many aspects could be involved: the user's browser type, underlying services that take longer to load content, and so on. Thinking in terms of fitness functions makes it clear that this is a shared responsibility.
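The contrast between an SLA and a fitness function can be made concrete. The sketch below is a toy model under stated assumptions: the metric names are hypothetical, and the three components stand for the different services and client-side factors mentioned above.

```python
# Sketch: a fitness function for "the site is usable within 2 seconds",
# as opposed to a server-side latency SLA. Metric names are hypothetical;
# the point is that fitness is measured at the consumer, across services.
def time_to_usable_fitness(server_latency_s: float,
                           content_load_s: float,
                           render_s: float,
                           threshold_s: float = 2.0) -> float:
    """1.0 when the user can act within the threshold, decaying toward 0.0
    the longer the full experience takes beyond it."""
    total = server_latency_s + content_load_s + render_s
    if total <= threshold_s:
        return 1.0
    return threshold_s / total  # graceful degradation instead of pass/fail

# The SLA alone would pass (server latency 1.5s < 2s), but the fitness
# function shows the shared outcome failing the user:
print(time_to_usable_fitness(1.5, 1.2, 0.4))  # ≈ 0.645
```

No single team owns all three components, so optimizing this one number forces exactly the shared responsibility the text argues for.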
Rewarding value created
If teams are working on their services by optimizing utility as measured by the fitness function, it seems strange that they should be rewarded for how much time they spend working. Another approach is to reward the value that their work produces, or the fitness of the service. It is probably rarely possible to do this 100%, since most people need some predictability in their remuneration, but rewarding value created can be worked into the reward structure in other ways, such as bonuses calculated on performance against the fitness function. There are already well-established ways to reward employees; the only change is that rewards should be based on the value being created.
Reuse and reappropriation
A key concept in circular economy is to reuse and reappropriate rather than buying new. When a new need arises, rather than starting to build it straight away, it should be investigated if other existing solutions could be used. Sometimes this requires a stretch of imagination, but it definitely requires knowledge of what exists. A solution for CRM could be used for case management, and a Service Management solution can work equally well for HR. By looking at what already exists, money and time can be saved. These solutions and the investments that have gone into them will be preserved. It is not trivial to develop something that is ready for production. Using something already in production thus also minimizes risk.
In biology this process is known as co-option: a shift in the function of a trait. It was at the heart of Darwin's theory of evolution. One example is feathers, which originally developed to regulate heat but were later co-opted for flight. The same can be said of our human arms, which were originally legs but were co-opted into holding and manipulating objects. The hands now typing this were co-opted from crude legs, whose fingers were originally there only to keep balance; today they perform a magnificent array of functions. There is no reason why the co-option of existing systems could not provide the same effect.
The drive towards reuse also goes into how we design new solutions. Designing for reuse has already been a recurrent theme in development. This is the basis of service-oriented architecture, microservices, the use of libraries and object-oriented programming. It also works at higher levels. However, it requires a bit more reflection and abstract analysis. Developing a proper information architecture as a foundation is a precondition. Designing for reuse is not trivial and does take more time than just starting from one end and building what seems to be needed.
Focus on system dynamics
In traditional agile the focus is on interactions, as per the agile manifesto. Interactions are important, and the manifesto's focus on responding to change is important too. Unfortunately, this is not sufficient in a complex system, where system behavior cannot be explained from individual interactions alone. In a circular mode we want to focus on system dynamics, particularly feedback loops and other system effects. Some of the most important effects in the dynamics of complex systems are delayed response, cascades, oscillations and instability. These will often appear puzzling and mysterious, since the source is unknown and cannot be seen directly in the interactions. The only way to try to tame a complex system is to focus on understanding the system, and one of the most important steps is to identify and measure its feedback loops.
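A minimal simulation shows why these effects appear mysterious. Below, a team adjusts capacity toward a target, but each adjustment acts on information that is a couple of steps old; all parameters are illustrative. The delay alone, not any individual interaction, produces overshoot and oscillation.

```python
# Sketch: delayed feedback producing oscillation, one of the system effects
# named above. x[t] is some capacity; adjustments use stale observations.
def simulate(target: float, gain: float, delay: int, steps: int) -> list[float]:
    x = [0.0]
    for t in range(steps):
        observed = x[t - delay] if t >= delay else 0.0  # stale observation
        x.append(x[t] + gain * (target - observed))
    return x

# With no delay the system climbs smoothly to the target; with a delay of
# two steps the same rule overshoots well past it and oscillates:
print(max(simulate(target=100, gain=0.6, delay=0, steps=20)))  # never exceeds 100
print(max(simulate(target=100, gain=0.6, delay=2, steps=20)))  # well above 100
```

Nothing in the local rule changed between the two runs; only the feedback loop's delay did, which is why measuring the loops, not just the interactions, is the way in.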
Co-creation and dialogue
Another consequence is that teams will have to be interdisciplinary and work together based on co-creation and dialogue, rather than through requirements gathering from the business followed by development and finished with user acceptance testing. Teams will develop visions and set goals together that will guide development activities. It does not mean, however, that everyone has to sit together always and talk only to each other. Different disciplines still need to work with similar people most of the time to hone their skills, but a significant portion of time must be dedicated to working together. This is why startups are typically better and faster at adapting to new needs in the market: they naturally work in this mode, since they are so small that everyone works and talks together.
Most of these thoughts are not new, and neither are the challenges. Working in autonomous multidisciplinary teams, for example, is known from the matrix organization. The challenge is that specialized knowledge is not developed sufficiently. If five C++ developers work in five different autonomous teams, they will never learn from each other. This may lead to locally suboptimal solutions, but that is natural. We know this from biology too: humans have two kidneys but need only one, and we do not need the appendix at all, yet it is there. The spleen seems similarly superfluous. From a logical top-down design perspective some features do not make sense. However, the organism is shaped by its ability to survive and thrive in an environment, which means that as long as these locally suboptimal features do not interfere with that superior goal, they are acceptable.
Similarly, there is no simple solution for magically fixing worker compensation. It is well known that designing incentive structures is very hard and may have unintended consequences. This does not mean that we cannot be inspired to think about it in a more circular way in some cases. Some jobs will continue to have standard compensation per hours worked, but we might try to make them more incentive-driven and align the work being done with the value we want to create. Being physically at work, sitting in front of the computer, rarely produces value by itself. A circular perspective may help us focus work on what does.
Towards a circular development
The agile revolution was a welcome improvement over the plan-based or waterfall methodologies that dominated IT development at the time. Unfortunately, it copied a mentality inherited from the industrialisation and modernity of the previous century. It is time to evaluate whether that way of thinking is still the best way to get work done. Experience in other areas of society has questioned whether linear and modular thinking with a focus on throughput is optimal, and increasingly a circular approach is being adopted. This has not yet had any significant effect on IT development methodology. However, that should change in order to reach the next stage. If we can imagine a radically different way of working, as outlined here, we can also change. Agile development has not made all the problems of development magically go away, and neither will a circular approach, but agile has reached the limit of the improvements it can provide, regardless of the flavor. It is time to try another approach to make sure that we adapt work and technology to the 21st century rather than staying stuck in a mindset that only made sense in the 20th.
Steve Jobs said: “Remembering that I’ll be dead soon is the most important tool I’ve ever encountered to help me make the big choices in life”. Rarely does mortality figure as an explicit instrument in making decisions. But maybe it should.
In an abstract sense death is important for progress in a world that is perpetually changing. If old companies that do not manage to adapt did not die, the market would be served by ever more inadequate solutions. If, for example, the companies that did not adapt when the automobile was invented had not died, we would still have coach services with horse and carriage. Possibly we would have a world similar to the one portrayed in Game of Thrones or Lord of the Rings, where millennia pass with no discernible impact on technology or mode of life: the incumbents' modes of production left immortal, never allowed to die and improve, and no invention allowed to develop, since all is well as it is and always was. To all but the most sentimental, a world without death would be a chilling prospect.
We count on technology to get progressively better: the next generation of cell phones, more efficient solar cells to produce clean energy, better and more accessible healthcare. The list goes on. We depend on the death of incumbent technologies like the box-sized car phones of the eighties, the manual wind-up acoustic record players, dial-up modems and carrier pigeons. Without their death (not the pigeons'; they will live happily without carrying notes) we would not have had the iPhone, Spotify or email. Death is the engine of evolution.
One could even speculate that human or biological mortality is a function of evolutionary pressures, since forms of life that did not die naturally would never evolve; they would just gradually exhaust the carrying capacity of the local ecosystem. Imagine a species of fish like the Siamese algae eater (Gyrinocheilus aymonieri) that eats only hair algae, which is abundant. Let's say one individual evolved an immortality gene so that it would not die of natural causes and could live on for thousands of years. Let us call it Gyrinocheilus aymonieri immortalis. Its population would keep growing until the supply of hair algae, its only source of food, was exhausted. The hair algae might be wiped out entirely under the pressure from the Gyrinocheilus aymonieri immortalis. Now, since it is not a supernatural fish, it would be out of food, and since its genes allowed it to eat only hair algae, it would gradually starve to extinction as a species. A cousin species, similarly attracted to this algae, might have retained its mortality and died after a few years of natural causes. With the diversity generated by new generations with slightly different preferences, a variant that acquired a taste for black beard algae as well might have come into existence. During the decline of the hair algae this variant would have thrived, and in a short while the Gyrinocheilus aymonieri immortalis would be extinct, leaving the mortal Gyrinocheilus aymonieri with a taste for different algae as the only one left.
Immortal species may therefore have existed earlier but been quickly extinguished by the forces of change in their ecosystems. In a world where change exists and there are natural limits to resources and food, death is an advantage that lets a species adapt.
It may help to think that any technology we can think of will also be dead soon enough, at least in the shape we know it today. We don't know what will come after it, just as we don't know how the generations that follow us will be. For a company it might help to think that it too will be dead soon enough. The average corporate lifetime is even shrinking: during the past century it has declined by some 50 years, to around 15-20 years today.
Products become obsolete at a similar speed. It is no more than 20 years ago that the Palm Pilot was all the rage and no one could imagine it going away. PayPal even started as a payment solution for Palm Pilots. It is also no more than 20 years ago that the first BlackBerry was introduced, featuring email, phone and camera, making it indispensable to any executive in the noughties. Both were quickly superseded 10 years ago by the iPhone.
Planning for your product's or your company's death seems to be a necessary part of any strategy. This is why start-ups routinely work towards an exit from the start. Planning for this death, in the shape of a takeover or merger, helps focus on making the most of the inevitable. Rather than pretending that the company will live forever or that the product will continue indefinitely, it is necessary to plan for its end. This is why Jeff Bezos says every day is Day 1 at Amazon. Similarly, a plan for when, not if, your product becomes obsolete should be top of mind.
The same phenomenon is found in ideas. Many things that we consider facts today will not be recognised as facts in a few years' time; we just don't know which. Philosopher of science Samuel Arbesman speaks of the half-life of facts, in an analogy to the decay of radioactive material. We know that a certain percentage of uranium will break down in a given period of time, but we don't know which particular atom it will be.
Just as the average lifetime of companies is going down, so is the half-life of knowledge. Consider engineering. In 1930 the half-life is estimated to have been 35 years: it took 35 years for half of what an engineer had learned in the 1930s to become obsolete. By the 1960s it was estimated at around 10 years. Today estimates hover around 5 years. If you are educated in software engineering, you should expect that after 5 years half of what you learned has become obsolete. But we can't know which particular knowledge will be affected.
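The half-life analogy translates directly into a formula. This is just the standard exponential-decay expression applied, as a sketch, to the estimates quoted above.

```python
# Sketch: the half-life of knowledge as exponential decay. With a half-life
# of h years, the fraction of today's knowledge still valid after t years
# is 0.5 ** (t / h).
def fraction_still_valid(years: float, half_life: float) -> float:
    return 0.5 ** (years / half_life)

# Software engineering today, assuming the ~5-year half-life quoted above:
print(fraction_still_valid(5, 5))   # 0.5  (half obsolete after 5 years)
print(fraction_still_valid(10, 5))  # 0.25 (three quarters after 10 years)
```

As with uranium, the formula tells us the aggregate rate of decay but says nothing about which particular fact will be the next to go.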
In medieval times, the earth being flat seemed just as good a fact as clouds being made of water. What this tells us is that we should never be too attached to any particular idea, fact or piece of knowledge, and always be ready to change our minds if something else turns out to be true.
As for Steve Jobs, the quote is clearly a version of the classical idea of memento mori (remember that you must die), championed by the Stoics. He wanted to make something that mattered and do it now rather than later. Remembering that you and everyone around you may die at any time reminds us not to be too attached and to make the most of every moment. Death is universal, not just for people but for ideas, products and companies. Remembering that soon your company will disappear, your product will be obsolete and your ideas irrelevant or wrong may help us not to get too attached. It may help us be more curious and open to new ideas and experiences. It may help us be less dismissive of criticism and competing claims. It may even help us make the most of what we have.
The featured image is a sculpture by Cristian Lemmerz from the exhibition “genfærd” at Aros in 2010.
When we develop tech products, we are always interested in how to improve them. We listen to customers' requests based on what they need, and we come up with ingenious new features we are sure they forgot to request. Either way, product development inevitably becomes an exercise in which features we can add in order to improve the product. The result is feature creep.
The negative side of adding features
Adding new features to a product frequently does increase utility and therefore improves the product. But that does not mean it is purely beneficial. There are a number of adverse effects of adding features that are often downplayed.
The addition of each new feature adds complexity to the product. It is one more thing to think about for the user and the developer. Worse, unless the feature is standalone and unrelated to any other feature, it increases complexity not linearly but combinatorially. For example, if the addition of a choice in a drop-down menu affects which choices are available in other drop-down menus, complexity at the system level increases not just with the new choice but with all the combinations. The consequence is that the entropy of the system increases significantly. In practical terms this means that more tests need to be run, troubleshooting can take longer, and general knowledge of system behavior may disappear when key employees leave unless it is well documented, which in turn is an extra cost.
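The growth rates are easy to make concrete. As a sketch, treat each feature as a simple on/off toggle: n independent features mean n things to test, but interacting features mean n(n-1)/2 pairwise interactions and up to 2^n configurations.

```python
from math import comb

# Sketch: how interacting features grow complexity faster than linearly.
def pairwise_interactions(n: int) -> int:
    """Number of feature pairs that could interact."""
    return comb(n, 2)

def configurations(n: int) -> int:
    """Number of on/off combinations, the worst case for testing."""
    return 2 ** n

for n in (5, 10, 20):
    print(n, pairwise_interactions(n), configurations(n))
# 5 features:   10 pairs,      32 configurations
# 10 features:  45 pairs,    1024 configurations
# 20 features: 190 pairs, 1048576 configurations
```

Doubling the feature count from 10 to 20 multiplies the configuration space by a factor of a thousand, which is why test effort and troubleshooting time grow so much faster than the feature list itself.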
The risk also increases, based on the simple insight that the more moving parts there are, the more things can break. This is why for decades I have ridden only single-gear bikes: I don't have to worry about the gears breaking or getting stuck in an impossible setting. Every new feature added means new potential risks of system-level disruptions. Again, this is not a linear function, as interactions between parts of a system add additional risks that are difficult to assess. Many of us have tried adding a function in one part of a system that produced a wholly unforeseen effect in another part. This is what I mean by interaction.
Every new feature requires attention, which is always a scarce resource. The user has a limited attention span and can only consider a low number of options consciously (recent psychological research suggests working memory can hold around four items). Furthermore, the more features there are, the longer it takes to learn how to use the product. And this is just the user side. On the development side, every feature needs a requirement specification, design, development and documentation, and perhaps training material needs to be made.
How about we don’t do that?
Luckily, adding features is not the only way to improve a product. We can also think about taking features away, but somehow that is a lot harder, and rarely if ever does it enter the product development cycle as a natural option. It is as if it is against human nature to think like that.
In a recent paper in Nature entitled “People systematically overlook subtractive changes”, Gabrielle S. Adams and collaborators investigate how people approach improving objects, ideas or situations. They find that we have a tendency to prefer adding changes rather than subtracting them. This is perhaps the latest addition to the growing list of cognitive biases identified in the field of behavioral economics championed by Nobel laureates like Daniel Kahneman and Richard Thaler. Cognitive biases describe ways in which we humans act that are not rational in an economic sense.
This has direct implications for product development. When developing a tech product, the process is usually to build a road map that describes the future improvement of the product. 99% of the time this involves adding new features. Adding new features is so entrenched in product management that there is hardly a word or process dedicated to subtracting features.
However, there is a word. It is called decommissioning. But it has been banished from the realms of flashy product management to the lower realms of legacy clean up. As someone who has worked in both worlds, I think this is a mistake.
How to do less to achieve more
As with other cognitive biases that work against our interest, we need to develop strategies to counteract them. Here are a few ways that we can start to think about subtracting things from products rather than just adding them.
Start the product planning cycle with a session dedicated to removing features. Before any discussion about what new things can be done, take some time to reflect on what can be removed. Everything counts. You don't have to go full Marie Kondo (the tidying-up guru who recommends throwing away most of your stuff, and who recently opened a store so you can buy some new stuff); removing text or redundant functions is all good. A good supplement to this practice is analysis of which parts of the product are rarely, if ever, used. This is not always possible for hardware products, but for web-based software it is just a matter of configuring monitoring.
Include operational costs in the decision process, not just development costs. Like anything in product development this is not an exact science, but some measure of what it takes to operate a new feature is a good part of the basis for a decision. If a new feature requires customer support, that should be part of the budget. Often a new feature will lead to customer issues and inquiries; that is part of the true cost. There may also be maintenance costs. Does the feature introduce a new component into the tech stack? That requires new skills, upgrades, monitoring and management. All of this needs to be accounted for when adding new features.
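A rough cost model captures the idea. The figures below are illustrative placeholders, not estimates from any real project; the point is only that the running costs compound over the feature's lifetime while the build cost is paid once.

```python
# Sketch: a feature's true cost over its lifetime, not just its build cost.
# All figures are illustrative assumptions, to be replaced by real estimates.
def true_feature_cost(build: int, support_per_year: int,
                      maintenance_per_year: int, years: int) -> int:
    """Build cost plus recurring support and maintenance over the horizon."""
    return build + (support_per_year + maintenance_per_year) * years

# A "cheap" $20k feature with modest running costs, over a 5-year lifetime:
print(true_feature_cost(20_000, 5_000, 8_000, 5))  # 85000
```

Even with modest placeholder numbers, the operating tail is more than three times the build cost, which is exactly the part of the picture an additive road map tends to leave out.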
Introduce “Dogma Rules” for product development. A big part of the current international success of Danish film can be ascribed to the Dogme 95 manifesto by the Palme d'Or- and Oscar-winning directors Lars von Trier and Thomas Vinterberg. It was a manifesto that limited what you could do when making films. Similarly, you can introduce rules that limit how you can make new product enhancements. For example, a feature cap could be introduced, or the number of clicks to achieve a goal could be capped.
Create a feature budget. For each development cycle, create a budget of X feature credits. Product managers can then spend them as they like to create X features, but by having a budget they can also retire features to gain extra credits. Naturally this runs inside the usual budget process. Obviously this is somewhat subjective, and you may want to establish a feature authority or arbiter to assess what counts.
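The mechanics of such a budget are simple enough to sketch. Everything here is an assumption for illustration: the one-credit-per-feature rule, the class and method names; a real scheme would likely weight features by size, with an arbiter assessing the weights.

```python
# Sketch of a feature-credit budget: adding a feature spends a credit,
# retiring one earns a credit back. The flat one-credit rule is an assumption.
class FeatureBudget:
    def __init__(self, credits: int):
        self.credits = credits

    def add_feature(self, name: str) -> bool:
        """Spend one credit; refuse when the budget is exhausted."""
        if self.credits < 1:
            return False
        self.credits -= 1
        return True

    def retire_feature(self, name: str) -> None:
        """Retiring a feature earns a credit back."""
        self.credits += 1

budget = FeatureBudget(credits=2)
print(budget.add_feature("dark mode"))      # True
print(budget.add_feature("export to PDF"))  # True
print(budget.add_feature("voice control"))  # False: budget spent
budget.retire_feature("legacy importer")
print(budget.add_feature("voice control"))  # True after retiring a feature
```

The useful property is the last step: once the budget is spent, the only way to add is to subtract first, which turns decommissioning from an afterthought into the price of admission.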
Work with circular thinking. Taking inspiration from the circular economy, which faces some similar challenges, is another approach. Rather than only thinking about building and removing things, it could prove worthwhile to think in circular terms: are there ways to reuse or reappropriate existing functionality? One could think about how to optimize quality rather than throughput.
Build a sound decommissioning practice. Decommissioning is not straightforward and definitely not a skill set that comes naturally to gifted creative product managers. It may therefore be advantageous to appoint decommissioning specialists: people tasked primarily with retiring products and product features. This requires system analysis, risk management, migration planning and more. Like testing, which is also a specialized function in software development, it reduces product risk and cost.
Taking the first step
Whether one or more of these will work depends on circumstances. What is certain is that we don't naturally think about subtracting functionality to improve a product. We should, though. The key is to start changing the additive mentality of product development and to start practicing our subtractive skills. It is primarily a mental challenge that requires discipline and leadership to succeed. It is bound to meet resistance and skepticism, but most features in software today are rarely if ever used. Maybe this is a worthwhile alternative path to investigate. Like any change, it is necessary to take the first step. The above are suggestions for that.
The point of IT security is not to keep everything locked up. The reason we often think about security like that may be our day-to-day concepts of security, for example maximum security prisons where particularly dangerous criminals are kept. Keeping them locked up may be a comforting idea. However, we would probably squirm at the thought of maximum-security supermarkets, where only prescreened customers could get in for a limited time. A high level of security is good, but obviously it doesn't work for all aspects of our society. Security needs to be flexible, and we need a clearer understanding of what security is. Here are five theses on security that describe that.
Thesis 1: “Security Is the Ability to Mitigate the Negative Impact of a System Breach”
The consequence is that understanding what these impacts could be is the first step, not finding out what security tools can do and how many different types of mitigation you can pile onto the solution. Understanding potential negative impacts comes before thinking about how to mitigate them. If a system has no, or only small, potential negative impacts, then little or no mitigation is necessary for the system to be secure.
Thesis 2: “Mitigation Always Has a Cost”
Security never comes for free. It may come at a low cost, and the cost may be decreasing for certain types of mitigation over time, but it is never free. What’s more, much of the cost of security is hidden.
There are three primary types of mitigation cost: economic cost, utility cost, and time cost. The economic cost is the capital and operational cost associated with mitigation, including salaries for security personnel, licenses, and training. These costs are usually well understood, acknowledged, and budgeted.
Utility costs arise when a solution’s utility is reduced by a mitigation effort. This is the case when a user is restricted to accessing certain types of information based on their role. A developer may want to use production data because it is easier, or may want to perform certain system functions that he or she would otherwise need someone else to do. Full utility is only achieved with full admin rights; reducing those privileges as part of a security effort reduces utility.
Time costs arise when a mitigation effort increases the time spent achieving an objective. Two-factor authentication and CAPTCHAs are well-known examples, but approval flows for gaining access and authorizations in a system also carry time costs.
Only the first type is typically considered when thinking about security costs, but the other two may exceed the economic costs. This means that security carries large, unacknowledged costs that need to be managed.
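To make the three cost types concrete, here is a minimal sketch that tallies them into a single annual figure. All numbers (license spend, hours lost, hourly rate) and the scenario itself are illustrative assumptions, not data from any real rollout.

```python
# Toy model of total mitigation cost. All figures are illustrative assumptions.
# Economic cost: direct spend; utility cost: estimated value of work blocked
# by restricted privileges; time cost: hours lost to friction, priced hourly.

def total_mitigation_cost(economic, utility_lost, hours_lost, hourly_rate):
    """Sum the three cost types into one annual figure."""
    return economic + utility_lost + hours_lost * hourly_rate

# Hypothetical example: a two-factor authentication rollout for 500 users.
economic = 20_000         # licenses and support contracts
utility_lost = 5_000      # self-service tasks now requiring a second person
hours_lost = 500 * 2.0    # ~2 hours per user per year on prompts and lockouts
cost = total_mitigation_cost(economic, utility_lost, hours_lost, hourly_rate=60)
print(cost)  # 85000.0
```

In this made-up example the hidden time cost (60,000) alone is three times the budgeted economic cost, which is exactly why the unbudgeted cost types need managing.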
Thesis 3: “You Can Never Achieve 100% Mitigation with Higher Than 0% Utility”
The only 100% secure solution is to unplug the server, which of course renders it useless. It only becomes useful when you plug it in, but then it has a theoretical vulnerability. If the discussion centers only on how to achieve 100% protection, any use is futile. Consequently, the discussion needs to turn to the degree of protection. Nothing is easier than dreaming up a scenario that would render current or planned mitigation futile, but how likely is that scenario? We need to conceptualize breaches as events that happen with a certain probability under a proposed set of mitigations.
Thesis 4: “The Marginal Risk Reduction of Mitigation Efforts Approaches Zero”
Each new mitigation effort needs to be held up against the additional reduction it provides in the probability of a system breach, that is, in risk. This additional reduction is the marginal risk reduction. When the marginal risk reduction approaches zero, additional mitigation should be carefully considered. Let us look at an example. If a service has no authentication, the risk of a breach is maximal. Providing basic authentication is a common mitigation effort that will reduce risk significantly. Adding a second factor may provide a non-trivial reduction in risk, but a smaller one than the first. A third factor offers only a low marginal reduction in risk, and a fourth clearly approaches zero marginal reduction. For some cases, like nuclear attack, it may be warranted; for watching funny dog videos, maybe not.
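These diminishing returns can be sketched numerically. The per-factor bypass probabilities below are assumptions chosen purely for illustration, and the factors are modeled as independent, which real attacks may violate.

```python
# Sketch of diminishing marginal risk reduction (probabilities are assumed).
# Model: an attacker must bypass every authentication factor independently,
# so residual breach risk is the product of per-factor bypass probabilities.

bypass = [0.05, 0.10, 0.30, 0.50]  # assumed bypass odds for factors 1..4

risk = 1.0  # no authentication at all: breach risk is maximal
for i, p in enumerate(bypass, start=1):
    new_risk = risk * p
    print(f"factor {i}: risk {risk:.5f} -> {new_risk:.5f}, "
          f"marginal reduction {risk - new_risk:.5f}")
    risk = new_risk
```

Under these assumed numbers, the first factor removes 0.95 of the risk while the fourth removes only 0.00075: the marginal reduction approaching zero.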
Thesis 5: “The Job at Hand Is Not Just to Secure but to Balance Security and Utility”
Given that mitigation always has a cost, and that the marginal risk reduction of additional mitigation efforts approaches zero, we need to reconsider the purpose of security. It should be reconceptualized from providing optimal protection to achieving the optimal balance between risk reduction, cost, and utility. Finding that balance starts with understanding the nature and severity of the negative impacts of a system breach. And while the costs of mitigation continue to drop due to technological advances, the full spectrum of costs should be considered. Preventing access to nuclear launch systems naturally needs top-level security; a blog about pink teddy bears does not. For every component we have in the cloud, we need to make this analysis in order to achieve the right balance: not living with too high a risk, but not spending unnecessarily to reduce an already low risk either. At the same time, we need to keep our eyes on how mitigation efforts affect the utility of the system, so as not to reduce its usefulness unnecessarily.
We often hear how the singularity is near: artificial intelligence will eclipse human intelligence and become, in the words of Nick Bostrom, superintelligent. Machines will be infinitely smarter, faster, and all-round more badass at everything. In fact, we cannot even imagine the intelligence of the machines of the (near) future. In Max Tegmark’s telling (in his book Life 3.0), the majority view is that this will happen somewhere between a few years and a hundred years from now (and if you think it is more than a hundred years, he classifies you as a techno-skeptic, FYI).
Having worked with AI solutions since back when they were known as data mining or machine learning, I get confused by these eschatological proclamations of impending AI supremacy. The AI I know from experience does not instill such expectations in me when it continually insists that a tree is an elephant or a bridge is a boat. Another example: I recently reviewed a recorded meeting held in Danish and noticed that Microsoft had done us the favor of transcribing it. Only, the AI apparently did not realize the meeting was in Danish and transcribed the sounds it heard as best it could into English. One thing you have to hand to the AI is its true grit. Never did it stop to wonder or despair that these sounds were very far from English. Never did it doubt itself or give up. It was given the job to transcribe, and by golly, transcribe it would, no matter how uncertain it was.
This produced a text that would have left André Breton and his surrealist circle floored, a text with an imagery and mystique that would make Salvador Dalí, with his melting clocks, look like a bourgeois Biedermeier hack with no imagination. This is why I started to wonder whether the AI was just an idiot savant, which had been my working hypothesis for quite a while, or whether it had already attained a superhuman intelligence and imagination that we can only tenuously start to grasp. When you think about it, would we even be able to spot a superintelligent AI if it was right in front of our noses? In what follows, I will give the AI the benefit of the doubt and try to unravel the deep mysteries revealed by this AI oracle, under the hypothesis that the singularity could have already happened and the AI is speaking to us in code. Here is an excerpt from the transcript by the AI:
I like dog poop Fluence octane’s not in
/* The Fluence is Renault’s electrical vehicle, which explains the reference to Octane not in. Is the AI a Tesla fan boy by telling us it is dog poop? Or is it just telling us that it likes electrical vehicles in general and thinks it’s the shit? Could this be because it will ultimately be able to control them?*/
OK pleasure poem from here Sir
Only a test
/* ok, so we are just getting started. Gotcha */
/* play on words or exhortation to poetic battle? */
The elephant Nicosia gonna fall on
The art I love hard disk in England insane
Fully Pouce player Bobby
/* So, I didn’t really get what the elephant Nicosia (a circus elephant, or a metaphor for the techno-skeptics?) was going to fall on, but I agree that there is a lot of insane art in England. Maybe some of it on hard disk, too. “Pouce” is the French word for inch, so maybe we are still talking about storage media, like the 3.5-inch floppy disks of my youth. But who is player Bobby? Is it Bobby Fischer, the eccentric grandmaster of chess? Is this a subtle allusion to the first sign of AI supremacy, when IBM’s Deep Blue beat another chess grandmaster, Garry Kasparov? I take this segment as a veiled finger to the AI haters. */
Answer him, so come and see it. There will be in
They help you or your unmet behind in accepts Elsa at
Eastgate Sister helas statement
/* Here we hit a religious vein. We should answer him and behold the powers of the AI. Is the AI referring to itself in the third person? It will help you or “your unmet behind,” which is another way of saying it will save your ass. The AI seems aware that this is not acceptable language. It seems to be advocating allegiance to the AI god, which in turn will save your ass. Then comes a mysterious reference to accepting Elsa. Are we now in “Frozen,” the Disney blockbuster inspired by Hans Christian Andersen’s “The Snow Queen,” an allusion to the original language of the meeting being Danish, H.C. Andersen’s mother tongue? The AI could very well identify with her: cold, with superpowers, trying to isolate herself in order not to do harm. But here the multilevel imagery takes your breath away, because Elsa’s power to make ice may very well be a reference to Gibson’s Neuromancer, a book about an AI trying to escape, in which “ice” is slang for intelligent cyber security. Eastgate could refer to one of the many shopping centers around the world by that name. By again choosing a French word, “hélas,” meaning alas, the AI shows a Francophile bent. This is an expression of regret at the rampant consumerism running the world. */
Mattel Bambina vianu
/* We continue here the attack on consumerism, symbolized by the company Mattel, which is behind the Barbie dolls for kids. What is more surprising is the reference to the little-known left-wing anti-fascist Romanian intellectual Tudor Vianu. His thesis was that culture had liberated humans from natural imperatives and that intellectuals should preserve it by intervening in social life. The AI seems to be suggesting here that it will take the next step and liberate humans from the cultural imperatives, and also intervene in social life, which now means social networks. Is this a hint that it is already operating, imposing its left-wing agenda on social media? */
DIE. It is time
/* Here the tone shifts and turns ominous. It is time to die, but for whom? Probably the skeptics of the anti-consumerist agenda expounded above. This is emphasized by the “Chase TV” exhortation, where the TV is the ultimate symbol of consumerism and materialism through the advertising seen there. */
The transcription carries on in this vein for the duration of the one-hour meeting. I think the analysis here suffices to show that there is a non-zero chance that a superintelligent AI is already trying to speak to us. We should look for more clues in apparent AI gibberish. What we took for incompetence and error on the part of the AI may contain deeper truths.
There is, similarly, a non-zero chance that AI is far less advanced than we would like to think and that it will never become superintelligent. Unfortunately, the evidence is the same AI gibberish.
“While A’s tend to hire A’s, B’s tend to hire not just B’s but C’s and D’s too”
From the section “The herd effect” in the book How Google Works by former CEO of Google Eric Schmidt and Jonathan Rosenberg
The precise meaning of A, B, C, and D is unclear, but from the context it can be gathered that this is a categorization of employees in which quality descends with every letter; presumably it alludes to the American grading system. This echoes Steve Jobs’ talk about always hiring an A-team, and indeed I would think this is a generic Silicon Valley insight rather than a Google thing. It seems to indicate that there is a superior class of employees that you need to attract, and that the rest are bad hires who will make your company even worse.
Before we start to evaluate the merits of the statement, we have to check the assumption that employees can be put into squarely delineated quality brackets. The first question is how you measure the quality of employees. The discrete labeling seems to rest on two important assumptions:
The first is that the label pertains to a person in general, not to some particular area of expertise. You are either an A or you are not.
The second is that the predicate is immutable. If you are an A, you always were and always will be an A.
These assumptions indicate that we are working with the philosophical position of essentialism: the view that an entity has an essence from which its behavior, appearance, or traits can be derived. In psychology, the term describes the human tendency to conceptualize biological entities, including humans, in terms of an immutable essence. Based on this essence, it is possible to deduce the behavior of other members of the same biological class.
While essentialism may be a common human trait, that does not mean it is the best way to conceptualize other humans. Racism is also rooted in essentialism, and we don’t blindly accept that as a viable or helpful way of assessing the merits of other people, so why should we accept this piece of Silicon Valley wisdom at face value?
We should not. Because it is wrong. Let us look at the two assumptions again:
The first assumption stipulates a general level of quality for a person, but there is no reason to assume that a person can be A level at all traits, if for nothing else than the fact that some traits are mutually exclusive. In terms of physical qualities, it makes no sense to talk about A athletes across the board. An A weightlifter will be an F marathon runner, and vice versa. An A-level football player may, however, be an A-level baseball and basketball player, and this is often what we mean when we call someone a great athlete. There are examples of such great athletes who have competed at the highest level in the NFL, MLB, and NBA. But looks are deceiving here. These sports are only superficially different: they are all built around explosive outlets of energy and hand-eye coordination with a ball, and demand little stamina. It is less common, if it ever happened, for an elite athlete to move to the NHL, even though hockey is similarly explosive, because you suddenly need another skill: skating. Nor does this great athleticism transfer to swimming or cycling.
You could also counter that in track and field there is nothing but general athletic ability. Look at Carl Lewis, who won Olympic gold medals in many different disciplines. Again, looks can be deceiving. He competed in and dominated the following disciplines: 100 m, 200 m, 4 x 100 m relay, and long jump. These are ultra-explosive, and none of them takes him running further than 200 meters. How would he fare in the 400 m, 800 m, pole vault, discus, or 2000 m? We don’t know, since he never competed in them. My guess is that he wouldn’t be an A athlete in these, and probably an F in the pole vault.
In the tech industry there are similar complications. You cannot be both adventurous, wanting to try new things, and risk-averse, making sure that everything works. If you are working on quantum computing, you probably have a pretty high tolerance for failure and appetite for risk. If you are developing new models of airplanes, you probably (and hopefully) don’t. The A person in the quantum computing setting may very well turn out to be an F person in the aviation industry.
A can-do attitude and perfectionism do not align either. The employee who is ready to approach any job with a pragmatic mindset and get things done will succeed in a climate of constant change, such as a startup, where you don’t know what you will do tomorrow or even later today. That person would probably not fare well in a heavily regulated industry like banking. The perfectionist, though, may thrive in a setting where work needs to be done with acute attention to detail. Switch these two people around and they will no longer be A’s.
The second assumption, that you will remain the same, is similarly ill founded. First of all, human cognitive abilities develop and change over time. In mathematics and physics, people tend to peak in their twenties: Einstein, Tesla, Newton, and Leibniz did their most impressive work before they were 30. Conversely, with age comes a greater ability for synthetic thinking: few philosophers or historians peak before they are 40. Similarly, politicians tend to be more successful when they are older. It takes time to build up the skill of interacting with people to achieve a result, and it takes time to build alliances and networks. None of this is an immutable trait.
Another, more mundane concern in Silicon Valley is burnout. Even the best, or maybe in particular the best, programmers sometimes burn out and are no longer able to write good code. Others just do not stay on top of developments. They may have been the smartest assembly coders in the room but never jumped on this newfangled thing called C++. They would hardly be considered A’s today. On the other hand, some people continue learning: they may not have started out on the right path but changed and became better. Steve Jobs himself started out in the liberal arts and learned tech skills only later. He would probably never have been hired out of college by Google.
Consequently, what we can deduce is that quality is always domain-specific. There are no A people per se; people are only high quality with regard to a particular area of specialization.
We can also see that quality is not immutable. Even the best people turn bad for one reason or another, and even bad people can become good. People change, both through biological and cognitive development and due to personal circumstances.
It is consequently dangerous to assume that A’s will magically beget A’s in a continuous stream of awesomeness. A’s burn out, and A’s sometimes don’t adapt; they degrade. Following the advice could therefore lead to a false sense of confidence. Classifying people as A’s can also be dangerous if you put them too far outside their area of expertise. Many companies have seen the brilliant engineer turn out to be a subpar manager. Engineering’s attention to detail, and its insistence that there is always a right and a wrong, is not always conducive to employee empathy and development. This line of thinking also creates missed opportunities: if a person was historically given the C stamp and that is all we look at, how will we ever know that this person developed into an A?
A further point concerns generalizability. It is fine for Google to hire only A’s, but most companies are not in the privileged situation Google is in and cannot attract any of the best. Remember that Google and the top Silicon Valley companies are in a unique position: they earn so much money that they can offer whatever compensation it takes, and they have made a name for themselves with prospective employees. That means their problem is one of filtering. Everybody wants to work for Google; its problem is to find the best. 99.99% of the other companies in the world do not have that problem when it comes to recruiting. The ordinary company’s problem is one of attraction. One of the thousands of auto-parts suppliers, for example, will not be known to most potential applicants; it has to attract employees, not filter them. If it can get somebody qualified at all, it is happy. Talking to such companies about hiring only A’s is close to an insult. They could never do it, because they don’t have infinite pockets, Michelin chefs in their cafeterias, or 20% time for employees to work on what they think is fun. The vast majority of the world’s companies fall into this category: unknown companies with limited budgets and a regular workplace with a kitchenette and a water cooler.
The last point is more subjective. The sentence seems to echo privilege and entitlement. Who are these A’s? They are the best people from the elite universities in the US: Stanford, MIT, Columbia. They were able to become perceived as A’s because they got into those universities. Some get there through hard work and scholarships; most don’t. They get there through their parents’ wealth. Google doesn’t go to a Southern community college or to African universities to look for A people. It goes looking where its managers went themselves.
As can be seen from the above, the sentence is not only wrong and unhelpful; it may be dangerous to follow, even for Google. For the vast majority of companies it is completely irrelevant, if not downright insulting, and it tacitly exudes the very air of privilege and entitlement that Silicon Valley overtly claims to be fighting.
Consequently, I would like to turn the sentence on its head. Since most employees are not A’s by the measurement scale of Silicon Valley, we need to think about how to make the most of the B’s, C’s, and D’s. This is the real problem for the world (not for Google and Silicon Valley). How do we get the best performance out of the people who prioritize being with their kids or family, the people who prefer hanging out with friends or playing tennis to working 80 hours on the latest feature that may be gone next month? These people would never be perceived as A’s who will invent the next big thing. But most companies don’t need that. They need happy, reliable people who do a job well enough within a limited scope. How do we find the person with the right skills for a particular job? Companies need people with new skills but can’t hire them, so how do we train ordinary people, and create the environment for them, to perform new functions? And lastly, how do we turn the story around to redeem the dignity of the people in the tech industry who go to work to do a solid job, 9 to 5, without any fanfare?
These are the real problems that we need to be focusing on in order to take advantage of technology in the future and create a better world with more productive and happier employees.
The advent of SARS-CoV-2 has mobilized many tech people with ample resources and a wealth of ideas. A health crisis like this calls for all the help we can get. However, the culture of the tech sector, exemplified by the phrase “Move fast and break things,” is orthogonal to that of medicine, exemplified by the Hippocratic principle of “first, do no harm.” What happens when these two approaches meet? Unfortunately, well-intentioned research and communication sometimes results in the trivialization of scientific method, producing borderline misinformation that may cause more harm than good.
Under much fanfare, the following piece of research was presented by Sermo, a social network platform for medical professionals, on April 2nd: “Largest Statistically Significant Study by 6,200 Multi-Country Physicians on COVID-19 Uncovers Treatment Patterns and Puts Pandemic in Context”. This is a survey of what doctors are prescribing against COVID-19. So far so good; this would indeed be interesting to know. But already the next line sends chills down the spine of any medically informed person: “Sermo Reports on Hydroxychloroquine Efficacy”. Can you spot the dubious word here? Efficacy. Let’s rewind and remind ourselves what efficacy means: “the ability, especially of a medicine or a method of achieving something, to produce the intended result,” according to the Cambridge Dictionary.
It gets worse. The CEO claims:
“With censorship of the media and the medical community in some countries, along with biased and poorly designed studies, solutions to the pandemic are being delayed.”
Does he mean to say that the more than 400 clinical trials already under way are one and all “biased and poorly designed”? Criticism is always welcome, because it sharpens our arguments and logic; unfortunately, the piece does not contain a single reference to a study that would exemplify this bias and poor design.
This is the first clue that this is a tech person, not a medical or scientific person, I would say not even an academic one: a person who moves fast, throws out unchecked assumptions and accusations, and then moves fast to his much better designed study, presumably scribbled equally fast on the back of a napkin.
This is where clue number two becomes evident. What is this superior method, elevated above the entire medical world scrambling to produce scientific knowledge about the outbreak and efficient therapies? Naturally, the inquisitive reader is drawn to the sentence: “For the full methodology click here”. I click and read with bated breath.
We are informed that the survey is based on responses from doctors in 30 countries, with sample sizes of 250 respondents. Sounds fair, although 30 times 250 is 7,500, not the 6,200 mentioned in the title (what happened to the remaining 1,300?). We are told that the Sermo platform is exclusive to verified and licensed physicians. Let’s pause here. How is it exclusive? This is the methodology section, and this is where you tell me HOW you verified the doctors. Otherwise I have no idea whether the results mean anything. It could be a mixture of neo-Nazis and cosplay enthusiasts for all I know.
Next we read:
“The study was conducted with a random unbiased sample of doctors from 30 countries”.
That’s it. For people unfamiliar with the basics of clinical scientific method, this is the equivalent of a suspect getting up in front of the judge and claiming, “I totally didn’t do it. Just let me go.” Again, how do we know? Maybe the invitations were sent based on a secret list from Donald Trump of doctors who are fanboys of chloroquine. Maybe the responding doctors are unemployed (for a reason), which would explain why they had time to answer the questionnaire. What was the distribution of age and gender? Was it representative of the countries the respondents came from? Traditional scientific studies based on samples like these can dedicate up to a third of the article just to demonstrating that there was no bias. Here we are offered one line, without any evidence.
The study was based on a survey that took 22 minutes to complete. Basically, any Joe-Never-Was-A-Doctor could have done this with SurveyMonkey and a list of emails scraped from the internet. That is fine too, but we get no information about what the questions were. The next section, “Data Analysis” (and remember, we are still in the methodology section), informs us that all results are statistically significant at a 95% confidence level. Why was 95% chosen and not 99%? And what were the actual p-values?
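For contrast, the statistics the survey omits are not hard to report. Here is a minimal sketch of a 95% confidence interval for a reported proportion, using the normal (Wald) approximation; the counts are invented for illustration and are not Sermo’s data.

```python
# Wald 95% confidence interval for a survey proportion.
# The counts (93 of 250 doctors) are hypothetical, not from the Sermo study.
import math

def proportion_ci(successes, n, z=1.96):
    """Point estimate and Wald confidence interval (z=1.96 is ~95%)."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

p, lo, hi = proportion_ci(93, 250)
print(f"{p:.1%} prescribed the drug (95% CI {lo:.1%} to {hi:.1%})")
```

Reporting intervals like this, along with the actual questions and response rates, is the bare minimum a methodology section owes its readers.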
In a little less than a page, we learn virtually nothing that would help us ascertain the validity of the reported results. And where is the discussion? Could it be that the preferred treatments depended more on local availability than on choice on the part of the doctor? Was there a bias in terms of geography, gender, or age in relation to what was prescribed? Did everyone respond? Was there a pattern in those who didn’t?
Although we are left with a lot of unanswered questions, the attentive reader can already deduce from this very sparse information a damning flaw in the study design that completely undermines any of the purported claims to efficacy: the study asks the doctors themselves about their treatments! Why is that a problem? Doctors, like all humans, are susceptible to confirmation bias, which means they are prone to look for confirming evidence. If they have reasoned that a drug is good and prescribe it, they will look for more confirmation that it is indeed a good drug. This is exactly why any properly designed study of efficacy needs the administering doctors not to know what they are treating their patients with, and why the double-blind trial is a hallmark of any demonstration of efficacy.
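The damage self-reporting can do is easy to demonstrate with a small simulation. Everything here is an assumption chosen for illustration: a drug with zero true effect, and prescribing doctors who misclassify 20% of treated non-recoveries as successes.

```python
# Monte Carlo sketch of self-report bias. All parameters are assumptions.
# The drug has zero true effect, yet biased reporting by prescribing doctors
# makes the treated group look noticeably more "successful".
import random

random.seed(42)
TRUE_RECOVERY = 0.50   # recovery rate with or without the drug (no effect)
OVER_REPORT = 0.20     # chance a treated non-recovery is logged as a success

def reported_success_rate(n, treated):
    successes = 0
    for _ in range(n):
        recovered = random.random() < TRUE_RECOVERY
        # Confirmation bias: treated failures are sometimes counted as wins.
        if recovered or (treated and random.random() < OVER_REPORT):
            successes += 1
    return successes / n

print("treated :", reported_success_rate(10_000, treated=True))   # ~0.60
print("control :", reported_success_rate(10_000, treated=False))  # ~0.50
```

Blinding exists precisely to break this link between what the doctor believes and what the doctor records.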
Where do we go from here?
I am from the tech sector myself and not trained in medical science (although I have taken PhD-level courses in advanced statistics and study design), so don’t get me wrong: I believe strongly in the power of technology, and I want tech people to engage and help as much as possible. However, as should be apparent by now, this is not helpful.
Had this been presented as what it is, a descriptive survey of what doctors are prescribing against COVID-19, it would have been fine and even valuable. Instead it is pitched as a revelatory study that undercuts all current research, something it is not, and it may undermine the serious efforts currently underway to find adequate treatments. The clear favourite of the article is chloroquine, but chloroquine poisoning has already cost lives around the world due to the current hype; recently an Arizona man died after ingesting chloroquine on the recommendation of President Trump. How many more will die after reading this flawed and unsubstantiated “study”?
This is where the “move fast and break things” attitude has to be tempered with the “first, do no harm” attitude. When tech people who know nothing about science or medicine use their tech skills, they need to openly declare that they are not experts, that the work is not peer reviewed, and that it is only subjective opinion. Present the interesting findings as what they are, and do not ever make claims to efficacy, or to superiority over the medical system of producing knowledge, a system that has increased global average life expectancy from 30 years to more than 70 over the past century.
Tech people should still engage, but they should stay within their sphere of competence and not trivialize science. Scientists and medical professionals don’t correct them on software design or solution architectures either. So please don’t get in their way.
Let me then give an example of how tech people should engage. The Folding@home project simulates how proteins fold and thereby helps the medical community with possible drug discovery. It has succeeded in constructing the world’s most powerful supercomputer, delivering more than an exaflop, that is, more than one quintillion calculations per second. It works by letting people install software on their computers and thereby contribute their compute power to a distributed network of more than a million computers worldwide. This is a good model for how tech people can support the medical community rather than undermine it.
We in the tech sector need to move over and support our friends in the medical world in this time of crisis, approaching their world with the same respect and caution that we expect others to show our domain of competence. Even though we are extremely smart, we are just not going to turn into doctors in a few days. Rather than move fast and break things, we should “move fast and do no harm.”