Photo by Mike Lewinski

To the Moon – is AGI really inevitable? 

Much current discussion revolves around when we will see Artificial General Intelligence and how it will affect us. The current edition of the Economist (July 26th 2025 edition) is dedicated to that topic. It reports how investment centers on building ever bigger data centers to run ever bigger models on ever bigger amounts of data. Looking at the graphs, we appear to be heading at exponential speed towards AGI. Here is one from the Economist:

The trajectory is undeniable. It seems only a matter of time before AI can do all the engineering that humans do, right? However, we should always be careful about extrapolating trend graphs without understanding all of the underlying dynamics. It reminds me of the story of Taleb's turkey from The Black Swan. Nassim Nicholas Taleb remarked that looking at a turkey's growth might not be the best way to predict its future: come Thanksgiving, the growth might stop very abruptly. Something similar may be the case with the quest for AGI. I would like to offer another parable.

Consider the first human who wanted to get to the moon. He must have seen that this was a place of possibilities and wondered how to get there. He noticed how close it seemed when it rose over the mountain in the distance at certain times. He convinced the people of his tribe to go there to reach this new land. They came up with a plan to walk to the moon by gradually scaling the mountain. Day by day they predictably came closer to the moon. They were in good spirits and discussed when they would finally reach it. There was some disagreement on the exact date, but it seemed certain they would get there. There was also much fear about what would happen when they arrived, and some people wanted to halt their progress. Would there be lunar aliens who would conquer the earth? Would there be a virus that could drive humanity extinct? But it was argued that others might get there first anyway. The progress continued until one day they finally reached the top of the mountain and saw that the moon was no closer than ever.

This is an allegory for our quest for Artificial General Intelligence. We are irrefutably getting closer, but we will not get there with the current approach. The reason is that we have not solved the basic problems yet, such as: what is (general) intelligence? If it is problem solving, as is currently assumed, where do problems come from? This is what I have called the Hard Problem of AI, and no one has even begun solving it. For humans to get to the moon, we needed to step back, understand what the moon was, study planetary motion and physics, and develop flight rather than scaling mountains, although the latter seemed to bring us ever closer. The same is the case for AGI. If we want to achieve it, we need to understand what it is and solve the Hard Problem of AI, which entails a number of things different from what the big tech companies are doing now.

We need to understand what intelligence is and how it develops; we need to understand how to interact with digital intelligence; and we need to understand the kinds of intelligence we may encounter. Finally, we need to understand whether superhuman intelligence even makes sense, because it might not be humanlike at all. There are a lot of conceptual and philosophical problems that you can't datacenter your way out of. The sooner we realise this, the sooner we will begin to make progress towards intelligent and helpful digital systems.


