To understand the present, it often helps to look at history, even when our world seems so novel and so advanced that we cannot imagine anything in history resembling our current situation. It is worthwhile nevertheless, if only to view the situation from another vantage point.
One of the great historians of technology is Thomas P. Hughes. To situate him in the landscape, it is important to note that he opposed technological determinism: the view that technological development drives social, cultural, and historical change, often independently of human influence or control. This is the position of most prominent tech experts who extol one or another version of Ray Kurzweil’s theory of the Singularity, a hypothetical point at which artificial intelligence surpasses human intelligence, leading to rapid and unforeseeable technological and societal transformations.
Hughes, by contrast, viewed technology as a system. He analyzed Large Technological Systems (LTS) such as electricity. In his view, an LTS is characterized by the co-evolution of its components, grouped into artifacts, organizations, and people. Like electricity, AI can be understood as one such LTS.
All such technological systems have forward momentum but are also held back by what he termed a reverse salient. While a system accelerates and its technical abilities improve exponentially, acceleration in one area often reveals weaknesses elsewhere. A reverse salient is the most underdeveloped component of an LTS, a bottleneck that threatens to hold back the entire system’s progress.
In the early days of electrification, a reverse salient was the inability to transmit power efficiently over long distances. Today, most people assume that processing power or algorithms are the AI system’s reverse salient, but in fact it is not a technical problem at all; it lies within the enterprise itself: the failure of large organizations to integrate AI into their core business processes. The MIT report State of AI in Business 2025 reveals that 95% of businesses see no return on their AI investments because AI is not worked into core business processes. Similarly, the study I conducted for PA Consulting, entitled Nordic AI Archetypes, showed that what held businesses back from adopting AI was, first, solution characteristics such as unpredictability and lack of transparency, followed by data and human factors; technological factors came last.
Right now, most companies treat AI as a bolt-on feature: a slightly better chatbot, a smarter email sorter, or a tool for marketing content generation. These are fringe applications. To be sure, they are valuable, but they don’t fundamentally change the structure and performance of the business.
The real strategic value of AI, the efficiency gains and competitive advantages, is locked within core operational processes. To name a few examples: AI could autonomously handle complex, real-time capital allocation, FX trades, borrowing, and hedging; dynamically manage route planning and shipping while predicting cascading failures; or design potential new digital products, run simulations, manage the development of prototypes, and bring them to market and monetize them.
This is where the organizational system grinds to a halt. The AI technology system stalls on internal friction, because the enterprise is not an easy part of the LTS to fix. The obstacles are deep, structural, and cultural.
Fragmented Data: AI does not behave predictably and may not work without the right data in the right quality. In most large companies, mission-critical data remains siloed, inconsistent, and often locked in legacy systems. One could say that the AI is ready to learn but the data is a bad teacher.
Process Rigidity: Core business processes are typically rigid, with decades of institutional momentum behind them. Integrating an adaptive, autonomous AI requires fundamentally redesigning the processes, roles, and operating model around the technology, not just inserting the technology into the old process. Most organizations are unwilling or unable to take on this level of internal re-engineering and the change it demands at the people level.
Trust and Governance: Deploying AI in high-stakes operational roles (e.g., manufacturing, trading, or medical diagnostics) requires trust in the solution, transparency, and clear accountability, none of which is easy to establish with AI. The necessary governance frameworks, training programs, and validation processes are often non-existent, and regulatory frameworks may cast doubt on the legality.
Hughes teaches us that system builders fix their reverse salients to expand power. If companies are to harness the power of the AI LTS, they must stop viewing AI as a software purchase and start viewing their own operation as the critical infrastructure that must be rebuilt. The strategic imperative is clear: before buying the next model, invest heavily in the internal reverse salients: building a proper data foundation, engineering processes, and defining an AI-ready operating model.
Until AI’s reverse salient is fixed, the massive momentum generated by the technology providers will simply hit a wall of organizational complexity. The result will be marginal gains instead of the exponential leap the technology promises.
Photo by Dan Roizer on Unsplash
