Ancient Greek AI robot

What would it take for AI to take our jobs?

AI-powered tools like ChatGPT and Midjourney’s image generator have incredible abilities, fueling mainstream fears of AI taking over the world. The talk in online and social media alike is not about whether AI will replace humans but when it will, whether that will be good or bad, and whether it will even leave any trace of human existence behind.

As I have argued earlier from a philosophical point of view, I do not believe that we are any closer to Artificial General Intelligence than we were at the time of Philon’s robot two thousand years ago. There is therefore no reason to address the wider-reaching consequences of AGI replacing or eliminating humans. There is, though, still a possibility that Artificial Narrow Intelligence will disrupt and replace many jobs.

My own job is supposedly ripe for AI takeover, so it is illuminating to consider both the prospects of AI and its limitations. I work as an IT consultant, helping companies optimize their use of technology, including AI. In that role, AI helps me only marginally, with tasks such as writing, translation, music recommendations, navigation, search, and image generation.

I have yet to discover a way for AI to effectively handle the key responsibilities of my role: organizing projects, arranging meetings with relevant parties, retrieving and analyzing internal client documents, reviewing vendor proposals, outlining requirements, and evaluating data accuracy. If we take a step back, we can see AI’s shortcomings more clearly.

The missing abilities of AI

Social intelligence – current AI cannot gauge the social domain. Chatbots struggle to understand what is appropriate to say, and to whom. How would an AI even begin to “read the room”? How would it know the formal and informal power and interests of stakeholders? How would it construct a working model of an organization?

Humans have an active social cognition. The psychologist Nicholas Humphrey, for example, believes that social intelligence is what makes us human and allows us to read and understand other minds. The cognitive scientist Michael Tomasello traces humans’ evolutionary divergence from other primates to an increase in social intelligence. To mount a challenge to humans, AI must develop at least a rudimentary social intelligence.

Verisimilitude – AI lies, reproduces biases, and gleefully spews propaganda. That much is widely accepted, and it is the impetus for EU regulatory activity. It points to a deeper problem: AI simply has no concept of truth. In specific circumstances, it can be tailored to estimate the probability of something, but to my knowledge there is no general way for an AI to judge which of two conflicting claims is closer to the truth, which is exactly what contrasting conflicting information requires. Karl Popper was the first to isolate verisimilitude as a central problem for scientific progress. Unfortunately, subsequent philosophical debate has not reached a consensus on what verisimilitude is or how to assess it. If we cannot even agree in principle on what separates truth from falsehood, how can we expect an AI to master it?

Teleology – AI has yet to develop a genuine concept of purpose. LLMs mimic one successfully because they operate on statistical regularities in language use, which capture the historical patterns of human communication. But AI has no model or concept of purpose of its own. This becomes a problem when it has to construct a novel solution in support of a particular goal. Suppose we have to create a bookkeeping solution for company X (or maybe Y is a better example) that integrates thirty different systems. The AI needs to understand the specific purpose of each system’s contribution, the overall purpose of the new solution, and the purpose of the contribution of every stakeholder and project participant.
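The point about statistical regularities can be made concrete with a toy sketch. The corpus, function name, and counts below are my own illustration, not the internals of any real LLM: a bigram model “predicts” the next word purely from co-occurrence frequencies, with no representation anywhere of what a ledger or an invoice is *for*.

```python
from collections import Counter, defaultdict

# Toy corpus; the "model" learns only which word tends to follow which.
corpus = "the invoice is posted to the ledger and the ledger is balanced".split()

# Count bigram frequencies: for each word, how often each successor appears.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most frequent follower of `word`.

    There is no notion of purpose here, only co-occurrence counts.
    """
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]
```

On this corpus, `predict_next("the")` returns `"ledger"` simply because “ledger” follows “the” more often than “invoice” does. The prediction can look purposeful while being nothing but frequency.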

For Plato and Aristotle, teleology played a crucial role in explaining and understanding the world. Although it has since faded from the focus of philosophy, it remains a powerful human intuition. The philosopher Daniel Dennett identified teleology as a key way we relate to the world and dubbed it the Design Stance in his book Kinds of Minds. Cognitive psychology has demonstrated that this is indeed a default mode of reasoning in humans.

In a limited and predictable domain, it seems possible for an AI to develop a particular purpose function, analogous to teleological reasoning, but there is no general-purpose teleological function in sight for AI.

Indeterminacy – a frequent problem for AI is that it does not know what it does not know. It builds only on what it has been trained on and gives its best shot with supreme confidence. The overconfidence effect exists in humans too, but for AI it is the default mode. To discover new information, an AI would have to know what it does not know; that is how human knowledge progresses. In philosophy, indeterminacy refers to the fact that not everything can be defined precisely; we cannot clearly and quantifiably know everything. This goes back to Nietzsche’s criticism of Kant’s concept of the Noumenon, the thing in itself, and was a focus of Derrida and Foucault. All definitions eventually form loops, and the meaning of a word is never metaphysically fixed; even meanings fluctuate. AI thus has to somehow develop a working model in which the foundation on which it stands, the language it speaks, and the world in which it operates are indeterminate and fluctuating.
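One small, concrete reason why “supreme confidence” is the default can be sketched with a softmax, the final step of a standard classifier. The scores below are invented for illustration: by construction, softmax spreads all probability mass over the known classes, so even meaningless input scores yield a tidy-looking answer, and there is no built-in “I don’t know” outcome.

```python
import math

def softmax(scores):
    """Map raw scores to probabilities that always sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Scores for three known classes (values invented for illustration).
# A clear-cut input produces a sharply peaked distribution...
confident = softmax([4.0, 0.5, 0.2])

# ...but a nonsense input still yields a full distribution over the
# same three classes. The model always "answers"; it cannot decline
# to choose or flag that the input lies outside what it knows.
nonsense = softmax([0.1, 0.0, -0.1])
```

Both outputs sum to exactly 1: whatever the input, all belief is allocated among the options the model was trained on, which is one mechanical face of not knowing what it does not know.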

We’ve got a job to do

These are just the main abilities AI would have to develop to take over professions that rely heavily on social interaction, the construction of novel solutions, or work in areas with high degrees of unknowns that must first be discovered. Even if each ability could be developed, they would all have to be orchestrated into a coherent functional whole. These could be fruitful avenues of study now that the basic functions of AI, like pattern recognition and generation, are in place. A tighter integration with philosophy and psychology might help reframe AI to incorporate these and other more challenging capabilities prevalent in human jobs. It would also help us create a friendlier and more humble AI that does not, by default, resort to “mansplaining,” making up facts, and blindly reproducing biases.