Sam Altman’s $7 Trillion AGI: AI Moonshot Beyond Comparison

Money for AI

When Sam Altman announced that he was raising $7 trillion in yet another attempt to build Artificial General Intelligence, the news broke with the same mixture of awe and dread that always accompanies moonshot-style declarations in Silicon Valley. Some saw it as the natural next step for OpenAI, the company that has become synonymous with the rapid acceleration of machine intelligence. Others quietly remembered the last times corporate America promised the impossible, raised obscene sums of money, and assured investors that it had discovered a new kind of financial or technological alchemy. Those stories ended with the collapse of Lehman Brothers, and of Enron before that. The similarities are more than superficial.

Enron was not just a case of corporate fraud. It was a paradigmatic collapse of hubris, where the desire to be the smartest people in the room led a company to sell narratives detached from reality. Its executives created structures so complex they could hardly steer them, and the more capital flowed in, the less grounded the enterprise became. Altman’s vision of AGI echoes that same dangerous confidence: the idea that with enough money, hype, and disregard for practical applicability, one can brute-force a path to a technological singularity. Just as Enron believed it had transcended the laws of energy trading, OpenAI insists it is on the cusp of transcending the limits of human intelligence.

When More AI Becomes Less: The Scale Fallacy

The problem with another multitrillion-dollar push toward AGI is not just that it prioritizes scale at the expense of grounding. It fundamentally misunderstands the nature of intelligence and applicability. Most of us don’t need an AI that can analyze a business and present the results in the form of a Shakespearean sonnet.

We need specialized tools for specific tasks, most of which we can build far more cheaply with small language models. A smaller model, designed thoughtfully and applied strategically, can provide enormous value without consuming resources on a planetary scale. Small language models concentrate on what matters: well-bounded applications, data sovereignty, and human-centric use cases. Unlike so-called “frontier AI,” they do not pretend to know everything. Instead, developers calibrate them to serve their target domains reliably.
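To make that concrete, here is a minimal sketch of the small-model approach, assuming the Hugging Face transformers library and DistilBERT’s off-the-shelf SST-2 sentiment checkpoint (illustrative choices on my part, not an endorsement of this particular model or task):

```python
# A small, bounded task: routing customer feedback by sentiment.
# Assumes: pip install transformers torch
from transformers import pipeline

# DistilBERT fine-tuned on SST-2: ~66M parameters, runs on a laptop CPU.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

feedback = [
    "The new export feature saved me hours every week.",
    "The app crashes whenever I open the settings page.",
]

for text in feedback:
    result = classifier(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    print(f"{result['label']:8s} ({result['score']:.2f})  {text}")
```

The particulars matter less than the proportions: tens of millions of parameters instead of trillions of dollars of compute, a task narrow enough to validate, and behavior a single developer can audit on ordinary hardware.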

Contrast this focus with Altman’s $7 trillion gamble. The implicit assumption is that bigger is better, not just marginally but exponentially. The same logic drove Enron’s financial engineers to create special-purpose vehicles and derivatives “beyond reasonable comprehension,” stacking ever more complexity to leave competition, regulation, and even physics itself behind. Yet complexity, in both finance and technology, carries hidden fragilities. At some point, adding scale not only produces diminishing returns but erodes trust, stability, and efficiency. The pursuit of AGI through brute force is not a roadmap to usefulness but a theater of ambition disconnected from how real people and organizations benefit from technology.

Hubris, Capital, and the Illusion of Transcendence

It is worth remembering that Enron raised and burned billions of dollars while constructing the illusion that the rules of the market no longer applied to its innovations. Its executives published glossy reports, projected skyrocketing revenues, and positioned themselves as thought leaders of the new energy economy. They told the world they had tamed uncertainty through mathematical sophistication, all while hiding the mounting losses behind increasingly large and unsustainable bets.

Altman, too, is offering a vision that transcends realism. His claim is not that large language models can become incrementally better at tasks like summarizing text, generating code, or structuring information. He promises something categorically different: OpenAI will design digital minds capable of human-level cognition, reasoning, and creativity. All he needs is investors who believe in that vision of an inevitable AGI and are willing to fund it with a sum larger than the GDP of Germany. Just as investors once believed that Lehman Brothers had tamed the housing market and that Enron had mastered the energy markets, we find ourselves confronted with the very definition of hubris.

Philosophers might recognize in this vision the classical notion of arrogance before the gods. Yet, as the story of Icarus teaches us, catastrophe is not merely a risk but almost a structural inevitability once overreach passes a certain degree. The problem is not ambition itself but ambition swollen with capital to the point of blindness. And when that blindness plays out in artificial intelligence, the fallout is not simply financial. It affects who controls the future of technology, who governs data, and how we define humanity’s role in the machine age.

AI Applicability as the Antidote

What makes the narrative particularly frustrating is that the alternatives are so apparent. AI does not need to conquer the whole of general intelligence to be transformative. It needs to help humans with the problems we face in our daily lives. Most of us want AI (and robotics) to take over the tedious and repetitive tasks. We want machines to wash, iron, and fold our clothes while we read books, go on picnics, and make music. Most of us want to spend less time with technology, not wrestle with something so complex that it steals the hours we would rather spend with our families, friends, and neighbors.

By contrast, $7 trillion for AGI is not simply money in search of a problem, but money invested in an unsolvable abstraction. Genuine intelligence is not reducible to parameter counts or scaling laws. Intelligence is about interaction, context, lived experience, and grounding in physical and social environments. The quest for AGI instead embraces a mechanical determinism: stack enough GPUs, and intelligence will emerge. While this assumption may make for an interesting philosophical experiment, it is hardly the foundation for a responsible or sustainable technological agenda.
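For readers who have not encountered them, the scaling laws in question (here in the form popularized by Hoffmann et al.’s 2022 “Chinchilla” paper) model a network’s loss as a power law in its parameter count N and training tokens D:

$$L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}$$

The fitted exponents are small (empirically around 0.3), so each fixed reduction in loss demands a multiplicative increase in model size and data. The curve encodes diminishing returns by construction, and nothing in it predicts a phase change from lower loss into general intelligence.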

Enron collapsed because it mistook accounting alchemy for real value creation. If the AGI push were to collapse under similar hubris, it would not just bankrupt investors but could erode public trust in AI altogether. This erosion of trust would, in turn, undermine the quieter, more practical progress that could help improve our lives.

The Lessons AI Leaders Refuse to Learn

The Enron lesson was that transparency, applicability, and humility matter more than unbounded ambition. It was also that hype corrodes realism, and once the cycle of belief overtakes the cycle of accountability, the crash is only a matter of time. Watching Sam Altman raise $7 trillion for AGI feels like watching history rhyme once again. The rhetoric of inevitability and transcendence is masking the same core problem: the misalignment between promises and reality.

To avoid disaster, we need to center AI development on applicability. We need useful products instead of moonshots, modest models instead of monumental ones, and grounding instead of grandiosity. None of us can afford another Enron moment, this time transported into the domain of intelligence itself. We need AI that helps us work and live better without collapsing under the weight of its makers’ pride.
