
Sam Altman’s 7 trillion dollar AGI moonshot looks even more out of touch in 2026 than it did when it was announced. In Davos this January, Satya Nadella gave us the most explicit statement yet of why we should reconsider how we let Big Tech approach AI: the choice between AI that earns its keep in the real economy and AI as an abstract, GDP‑sized science project.
From AGI Alchemy to Energy Bills
When Altman floated the idea of raising 7 trillion dollars to build AGI, it landed with the mix of awe and unease that always accompanies Silicon Valley’s grandest promises. The story sounded familiar: raise unprecedented capital, claim you have transcended traditional constraints, and assure the world that scale itself will solve the hard problems. We have seen this before, when WeWork claimed it had turned real estate into a cloud-style service. In both cases, complexity and confidence stood in for genuine value creation, until they didn’t.
WeWork sold the idea that they were the smartest people in the room, using technology and prediction to overcome the traditional limits of real estate. At the same time, management told investors to trust the narrative, not the numbers. Altman’s AGI pitch rhymes with that story: stack enough GPUs and enough capital, and intelligence will somehow emerge at human or superhuman levels, regardless of whether anyone can define a practical path from today’s systems to that outcome.
The Scale Fallacy
The core problem has not changed: the assumption that “more AI” is always better AI. The 7 trillion vision leans on a simple scaling logic (more data, more parameters, more compute) without a commensurate focus on grounding, applicability, or resilience. Most organizations do not need a system that can, in theory, pass a philosophy exam. They need AI that reduces their logistics costs, improves their compliance, and makes their employees more effective.
History suggests that scale without grounding creates fragility. In finance, ever more leveraged and opaque instruments magnified risk until a shock brought the entire structure down. In AI, chasing AGI through brute force risks similar unseen fragilities: brittle models deployed at planetary scale, incentive structures that reward hype over safety, and infrastructure commitments that outpace real productivity gains.
Nadella’s Davos Hubris
Nadella’s Davos speech clearly shows that Big Tech hasn’t learned any lessons over the past year. In conversation at the World Economic Forum, he framed the “realization of AI” not as a singular AGI event, but as broad diffusion: AI must be “accessible and available” across societies and industries. His solution to the problems with AI is more AI.
Crucially, Nadella underscored a hard economic constraint that AGI dreamers prefer to ignore: energy. At Davos, he pointed out that energy costs will determine who actually wins the AI race, because running and cooling massive models is not free. That turns the conversation from mystical “intelligence” back to kilowatt‑hours, grid capacity, and the carbon footprint of hyperscale data centers. In other words, if your AI roadmap assumes near‑infinite cheap power, you don’t have a strategy.
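To make the kilowatt‑hour point concrete, here is a minimal back‑of‑envelope sketch. Every number in it is an illustrative assumption (cluster size, per‑GPU draw, PUE, utilization, electricity price), not a figure from Nadella, Microsoft, or any vendor:

```python
# Rough estimate of the annual electricity bill for a hypothetical GPU cluster.
# All inputs are assumptions chosen only to illustrate the order of magnitude.

gpus = 100_000               # assumed number of accelerators
power_per_gpu_kw = 1.0       # assumed draw per GPU incl. host overhead (kW)
pue = 1.3                    # assumed power usage effectiveness (cooling, losses)
utilization = 0.8            # assumed average utilization
price_per_kwh = 0.08         # assumed industrial electricity price (USD/kWh)
hours_per_year = 24 * 365

energy_kwh = gpus * power_per_gpu_kw * pue * utilization * hours_per_year
annual_cost_usd = energy_kwh * price_per_kwh

print(f"Estimated energy use: {energy_kwh / 1e9:.2f} TWh per year")
print(f"Estimated electricity bill: ${annual_cost_usd / 1e6:.0f}M per year")
```

Under these assumptions, a single cluster of this size already consumes close to a terawatt‑hour a year; scale the ambition up by orders of magnitude and the power bill, not the parameter count, becomes the binding constraint.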
Hubris, Capital, and the Cost of Overreach
Big Tech’s AI dreams are not just “money in search of problems.” They consume enormous amounts of human capital and resources, yet all of that capital is aimed at an abstraction that resists operationalization. Their champions forget that intelligence is not a simple function of parameter counts or an emergent property guaranteed by scale. It lives in context, embodiment, and social interaction, none of which are part of the current LLM paradigm. Treating AGI as an inevitable outcome of bigger clusters is therefore a form of technological determinism that sounds scientific but behaves like faith.
Nadella’s framing at Davos exposes the fragility of that faith. If AI must prove itself via measurable contributions to GDP growth, productivity, and widely diffused capability, then a moonshot that hoovers up trillions without clear, incremental returns becomes much harder to justify. Investors and policymakers count energy bills, adoption rates, and productivity metrics, and without sufficient revenue to set against those costs, a hypothetical future is a hard sell.
However, the bigger risk is not just financial losses but a collapse in public trust. Beyond bankrupting its shareholders, Enron damaged confidence in markets and auditors. A spectacular AGI overshoot could similarly sour the public on AI as a whole, making it harder for grounded, beneficial systems to gain acceptance in healthcare, education, and public administration.
Lessons AI Leaders Still Resist
Many past failures can teach us brutal but clear lessons: transparency beats opacity, applicability beats abstraction, and humility beats the illusion of transcendence. Once narratives about inevitable triumph drown out evidence and accountability, the crash is a matter of timing, not probability. Watching Altman, Nadella, and Musk chase unprecedented sums for an undefined AGI target feels like watching that cycle reboot with GPUs instead of collateralized debt obligations.
Nadella’s Davos remarks show that Big Tech’s leaders aren’t ready to walk a more sustainable path. He talked about AI needing to justify its existence, yet actions speak louder than words, and their companies are still chasing the dream of a world-spanning AI that will replace our need to work.
Choosing Useful AI Over Grand Narratives
Despite the dreams behind Grok and Copilot, AI, like technology as a whole, isn’t about unlimited ambition but about making our lives better. Unless we build solutions that provide real value, investors and the public will lose confidence in the technology.
We need an ecosystem in which many organizations can use AI safely, affordably, and productively within their own constraints. Only if users experience AI as a benefit rather than a forced adaptation will it be able to justify its costs.
