As fast as possible, please.
The subject of this column: a financial sinkhole, with a bonus punchline you can recycle one day to impress the neighbor's kid. Because while the idea of machines replacing millions of workers isn't exactly new, the real and immediate threat AI poses to our jobs today is more old-fashioned than we might expect.
It might end up looking less like Terminator and more like a good old economic crash — think 1929 stock market meltdown or the 2008 subprime crisis. Here’s just enough data to make your head spin: developing AI costs a fortune. Seriously. In 2025 alone, global investment is expected to hit $400 billion — about 1.2% of U.S. GDP. That’s roughly the same proportion telecom investments represented… right before the dot-com bubble burst in 2000. Pop.
Most of that beautiful money is going into building data centers: enormous, not-so-pretty warehouses packed with steroid-pumped microchips delivering the computing power AI needs. Data centers are ogres — they devour absurd amounts of energy and cash. Monsters that constantly need feeding, like in Jurassic Park or a corporate cafeteria in Puteaux. AI chips typically need replacing every two to three years. At this pace, our friends at Morgan “I’m-a-big-fan” Stanley estimate that investments will skyrocket to $3 trillion (or as they say in Gstaad, “three trilliards”) by 2028. Those numbers are spectacular — the kind that make your head spin, like sniffing poppers.

An AI-triggered financial crisis could be on the horizon.
But here’s the issue. To justify that level of spending, there needs to be revenue. (Or as they say in Gstaad, “return on investment.”) And right now, AI revenue… well, let’s just say it’s not party time. About $20 billion expected in 2025 — compared to $400 billion invested. Not exactly smoke-machine material. So, can AI generate enough revenue soon to justify all this spending and avoid a financial bubble bursting in our faces? Well, that would mean AI has to actually deliver on its promises — transforming the world in a deep and lasting way. Making companies spectacularly more productive. Powering revolutionary medical treatments. Helping Michel Polnareff record a new album (don’t ask what to italicize in that sentence).
"If the AI bubble does burst, the impact could be spectacular. And our jobs will go down with it" - Vianney Vaute, co-founder of Back Market.
Basically: building tools that millions of people and businesses are willing to pay real money for. Because right now, 70% of the revenue at OpenAI (the folks behind ChatGPT) comes from 40 million users paying $20 a month. That's sweet, but not exactly enough to justify hundreds of billions in infrastructure. Sure, AI will likely achieve major breakthroughs someday. But when it comes to the potential AI bubble, the key question is: will that happen soon, or in ten years? If it's the latter, a market correction seems inevitable, just like the 2000 dot-com crash (and then, in the decade that followed, we got Facebook and the iPhone).
And if the AI bubble does burst, the impact could be spectacular. It’s roughly four times bigger than the subprime bubble (!), involves the seven most “glamorous” tech giants (lol), and the rest of the financial system probably has five or six fingers in the cookie jar too. But, as always, we only notice that at the end of the movie — when the real economy hits the floor. And our jobs go down with it.
We’ve reached the end of the little lecture for the neighbor’s kid. Time to wrap it up. Of course, plenty of “free market” enthusiasts in Silicon Valley (or Gstaad) would argue this isn’t our business — that investors are free to do whatever they want with their capital. But governments — and through them, citizens — would be wise to peek into the financial ghost kitchens of the AI giants, given the scale of what’s cooking there and the potential fallout if the bubble bursts.
Especially since we're talking about a technology capable of reshaping our world as profoundly as agriculture did some 10,000 years ago (feeling old yet?) or the steam engine in the 19th century. If we don't start flexing our "political muscle" now, especially around how this thing is financed, how can we hope to control how it's used later? And more broadly, shouldn't the fact that we're pouring so much money, so fast, into a technology this powerful (and potentially destructive) worry us just a little? Shouldn't AI be first and foremost a matter for philosophers, policymakers, and anthropologists before it becomes a playground for Wall Street?
"Capitalism, unlike democracy, isn’t a vaccine against stupidity" - Vianney Vaute, co-founder of Back Market
What's crazy is that it almost feels naïve even to ask, because everything around us insists this tech race is inevitable. But it isn't. We can slow down. Reflect. Even backtrack. Just ask the ozone hole, GMO babies, and nuclear test sites: we did manage to backtrack on all three, even if they still remember and they're still mad at us.
A lot of economists are drawing parallels between today's AI bubble and the British railway mania of 1845 (6% of GDP back then; they really went full steam ahead). At the peak of the frenzy, railway barons had managed to build three separate train lines between Liverpool and Leeds and between London and Peterborough. Lol. Capitalism, unlike democracy, isn't a vaccine against stupidity. And at a time when a few overexcited sorcerer's apprentices are burning through billions to build a real-life Terminator, that's probably worth remembering.
Alright, I’ll leave you with the SNCF jingle to lighten the mood: Paaapa, paaaala. Warm regards — Vianney Vaute, Co-founder of Back Market.

