The most credentialed AI researcher alive stood on a stage in November 2025 and said something that should have made every OpenAI investor nervous:
“The path to superintelligence via LLMs is complete bullshit. It’s just never going to work.”
Four months later, he raised $1.03 billion to prove it.
Yann LeCun, the Turing Award winner who helped build the foundations of modern deep learning, officially launched AMI Labs (Advanced Machine Intelligence Labs) in March 2026 with the largest seed round in European startup history.
His bet: that the LLMs powering ChatGPT, Claude, and Gemini are architecturally flawed, and that a completely different type of AI called world models is the only real path forward.
Whether he’s right or catastrophically wrong, this story is worth understanding. It’s the biggest internal schism in AI research right now, and the outcome will shape every tool you use over the next decade.

What World Models Are and Why LeCun Thinks LLMs Will Fail
World models are AI systems that learn to understand and predict physical reality, not generate text. Instead of predicting the next word in a sequence, a world model learns the underlying structure of how reality works: physics, cause and effect, spatial relationships, and how actions lead to outcomes.
LeCun’s core argument is simple, even if the engineering behind it isn’t. Language models are trained on text. Text is a compressed, symbolic representation of reality.
It describes what happens, not why it happens. An LLM that reads a million physics textbooks doesn’t “know” that if you push a table, the cup on it will move. It knows the words used to describe that fact. To LeCun, that’s a fundamental dead end.
The way he puts it: you can’t distill the real world into text. The model learns correlations, not causes. And without causal understanding, you’ll never get genuine reasoning, reliable planning, or the kind of physical common sense a toddler has after a few years of knocking things off tables.
The JEPA Architecture and How It Works
JEPA, the Joint Embedding Predictive Architecture, is a framework where AI learns by predicting the abstract structure of its environment rather than raw output like text or pixels.
AMI Labs is building on JEPA, which LeCun first proposed in 2022.
Rather than predicting what a video frame will look like pixel by pixel, a JEPA-based model predicts the underlying representation of what the frame means. The model learns a compressed, structured picture of reality rather than a surface-level imitation of it.
Here’s the concrete difference:
LLM approach: Given the text “I dropped the glass and it…”, a language model predicts the next word (“shattered”).
World model approach: Given a 3D simulation of a glass at the edge of a table being nudged, the model predicts the physical trajectory, the momentum, the surface it will hit, and whether it shatters based on the material properties.
One is learning patterns in language. The other is learning how reality operates. Whether JEPA can actually scale to the level LeCun is promising is a genuinely open question.
But $1.03 billion says a lot of very smart people think it’s worth finding out.
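The representation-versus-pixels distinction is easier to see in code. Here is a minimal toy sketch in NumPy, not AMI Labs' actual implementation: all the weights, dimensions, and the linear "encoder" and "predictor" are made-up stand-ins, chosen only to show where the loss lives in each approach.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoder": projects a raw observation (e.g. a flattened 64-value
# frame) into a small abstract representation. A real JEPA encoder
# would be a deep network; this linear map just marks the idea.
W_enc = rng.normal(size=(8, 64))

def encode(obs):
    return np.tanh(W_enc @ obs)

# Toy "predictor": given the representation of frame t, predict the
# representation of frame t+1 -- NOT its raw pixels.
W_pred = rng.normal(size=(8, 8)) * 0.1

def predict_next_repr(z):
    return W_pred @ z

frame_t = rng.normal(size=64)   # stand-in for frame t
frame_t1 = rng.normal(size=64)  # stand-in for frame t+1

# JEPA-style training signal: the error is measured in the compressed
# 8-dimensional representation space.
z_t, z_t1 = encode(frame_t), encode(frame_t1)
jepa_loss = np.mean((predict_next_repr(z_t) - z_t1) ** 2)

# Generative/pixel-space alternative (what JEPA avoids): predict all
# 64 raw values of the next frame, paying for every irrelevant detail.
W_gen = rng.normal(size=(64, 64)) * 0.1
pixel_loss = np.mean((W_gen @ frame_t - frame_t1) ** 2)

print(f"JEPA loss computed over {z_t1.shape[0]} abstract values")
print(f"Pixel loss computed over {frame_t1.shape[0]} raw values")
```

The point of the sketch: the JEPA-style loss ignores surface detail and only scores the model on the structure the encoder keeps, which is the "compressed, structured picture of reality" described above.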
Inside the $1.03 Billion AMI Labs Raise

AMI Labs raised $1.03 billion at a $3.5 billion pre-money valuation in March 2026, the largest seed round in European startup history. The company is headquartered in Paris. LeCun, who remains a professor at New York University, serves as executive chairman.
The CEO is Alex LeBrun, a former Meta engineer who previously founded Nabla, a healthcare AI company.
LeBrun is refreshingly candid about the hype risk: “My prediction is that ‘world models’ will be the next buzzword. In six months, every company will call itself a world model to raise funding.”
Per TechCrunch’s reporting on the raise, the investor list is striking:
| Investor | Type |
|---|---|
| Nvidia | Strategic |
| Samsung | Strategic |
| Toyota Ventures | Strategic |
| Bezos Expeditions | Venture |
| Cathay Innovation | Venture |
| HV Capital | Venture |
| Tim Berners-Lee | Individual |
| Mark Cuban | Individual |
| Eric Schmidt | Individual |
| Jim Breyer | Individual |
Nvidia’s involvement is the detail I find most telling. Jensen Huang’s company makes the chips that power every LLM deployment on earth, yet is now also funding what could displace that paradigm.
From where I’m sitting, that’s not just a hedge. It’s Nvidia signalling they intend to win regardless of which architecture prevails.
AMI Labs’ target markets include robotics, autonomous vehicles, industrial process control, healthcare, and wearable devices. These are all domains where text prediction is nearly useless.
If you want a robot to navigate a warehouse, it doesn’t need to generate poetry. It needs to model what happens if it reaches for a box at a certain angle.
The Case Against LeCun
LeCun’s critics argue that LLMs already demonstrate many capabilities he claims they can never develop, and that his predictions have a poor track record.
It’s a fair point, and worth weighing seriously before treating AMI Labs’ bet as a foregone conclusion.
A few years ago, LeCun claimed that even a hypothetical GPT-5000 could not figure out that pushing a table would move a book sitting on it. Current models handle exactly that scenario without difficulty.
He underestimated how far pattern learning at scale could take these systems.
The counterargument from the LLM camp is strong:
- Emergent reasoning. GPT-4 and its successors show reasoning behaviors that no one explicitly programmed in. Scaling data and compute keeps producing surprising capabilities.
- Grounding via multimodal training. Modern models now train on images, video, audio, and code alongside text, giving them richer world representations than text-only predecessors had.
- Tool use and planning. Models like OpenAI’s o3 solve complex multi-step tasks using explicit reasoning chains. That starts to look a lot like physical planning.
Dario Amodei, CEO of Anthropic, told a Davos audience in January 2026 that current-architecture AI would replace all software engineers within a year and reach Nobel-level scientific research within two.
He was not describing a dead-end technology.
My honest read: nobody knows who’s right. LeCun could be ten years ahead of his time, or he could be backing the wrong architecture with a billion dollars of other people’s money.
The AI field is littered with confident predictions in both directions, and the graveyard of “next big thing” paradigms is long.
LLMs vs World Models Side by Side
LLMs and world models differ fundamentally in what they learn: LLMs predict text sequences, while world models predict how physical environments change over time.
The practical gap between them is massive.
Here’s how the two approaches compare across the dimensions that matter most right now:
| Dimension | LLMs (GPT, Claude, Gemini) | World Models (AMI Labs / JEPA) |
|---|---|---|
| How they learn | Predicting text tokens from massive datasets | Predicting structured representations of physical environments |
| Best use cases | Writing, coding, Q&A, summarization, chat | Robotics, physical planning, autonomous vehicles, simulation |
| Current maturity | Production-ready, deployed globally | Research stage, no consumer products yet |
| Key weakness | Hallucinations, no causal understanding, pattern-based reasoning | Unproven at scale, no real-world products, requires major theoretical leap |
| Funding (2026) | Hundreds of billions across OpenAI, Anthropic, Google DeepMind | $1.03B seed (AMI Labs), backed by Nvidia, Samsung, Bezos |
| The contrarian bet | Dominant paradigm, scaling continues to produce gains | Full architecture rethink, betting current approach hits a wall |
What This Means for the AI Tools You Use Right Now
LLMs are still your best practical option for AI tooling in 2026, regardless of how the AMI Labs story plays out.
World models are a research bet, not a product. AMI Labs is years away from anything consumer-facing.
What matters to you in the near term:
- Ignore the LLM death proclamations. ChatGPT, Claude, Gemini, and the tools built on them are getting meaningfully better quarter over quarter. The gains in reasoning, context handling, and code generation since 2024 are real and usable.
- Watch the robotics space. World model breakthroughs will show up in physical systems first: drones, manufacturing robots, surgical tools. They’ll reach your chat interface later, if at all.
- The agent layer is where it gets interesting. The best way to extend LLM capability right now is agents: AI systems that take actions, use tools, and chain reasoning steps together. If you want to build something practical today, agent frameworks are where the leverage is. Tools like Dynamiq let you build custom AI agent pipelines on top of existing models without reinventing the underlying architecture.
- Multi-modal is bridging the gap. Models that see, hear, and read are already moving LLMs toward world-awareness. It’s not JEPA, but it narrows the gap LeCun is betting on.
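The agent pattern described in the list above can be sketched in a few lines. This is a hedged toy illustration, not any framework's actual API: `call_llm` is a hypothetical stand-in for a real model call, and the single calculator tool exists only to show the loop of propose action, execute tool, feed the result back.

```python
# Minimal sketch of an agent loop: the model proposes an action, the
# harness executes a tool, and the result is fed back to the model.
# `call_llm` is a hypothetical placeholder, NOT a real API; a real
# implementation would call a model provider here.

def call_llm(messages):
    # Fake model: first asks for a calculation, then answers once the
    # tool result is present in the conversation.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "calculator", "input": "6 * 7"}
    return {"answer": "The result is 42."}

# Toy tool registry; eval() is fine for this illustration but unsafe
# on untrusted input in real systems.
TOOLS = {"calculator": lambda expr: str(eval(expr))}

def run_agent(task, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if "answer" in reply:                               # model is done
            return reply["answer"]
        tool_result = TOOLS[reply["tool"]](reply["input"])  # execute tool
        messages.append({"role": "tool", "content": tool_result})
    return "Step limit reached."

print(run_agent("What is 6 times 7?"))
```

Every production agent framework is some elaboration of this loop: better tool schemas, retries, memory, and guardrails around exactly this propose-execute-observe cycle.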
If you want an all-in-one tool to stay productive while this architectural debate plays out, Sider AI puts browser-based AI assistance, writing, coding, and search across every major LLM in one place.
The practical implication: keep using the LLM tools that work. Just know the ground may shift underneath them over the next five to ten years.
Quick Takeaways
- Yann LeCun launched AMI Labs in March 2026 with a $1.03B seed round, the largest in European startup history
- AMI Labs is building “world models”: AI that learns how physical reality works rather than predicting text
- The technology is based on JEPA, LeCun’s 2022 architectural proposal for learning structured world representations
- Investors include Nvidia, Samsung, Toyota Ventures, Bezos Expeditions, Mark Cuban, and Eric Schmidt
- World models are years from consumer products; LLMs remain the practical choice for any AI tooling in 2026
- LeCun has a strong theoretical argument but a track record of underestimating how far LLMs would scale
