I’m Not So Interested in LLMs Anymore, Says Yann LeCun

  • Published on April 14, 2025



Meta’s chief AI scientist, Yann LeCun, said he is no longer interested in large language models (LLMs), calling them a product-driven technology that is reaching its limits.


“I’m not so interested in LLMs anymore,” LeCun said during a recent talk at NVIDIA GTC 2025. He added that LLMs are now largely in the hands of product teams making incremental improvements by adding more data, increasing compute, and using synthetic data.

LeCun explained that his focus has shifted to four areas he considers more fundamental for machine intelligence: understanding the physical world, persistent memory, reasoning, and planning.

“There is some effort, of course, to get LLMs to reason, but in my opinion, it’s a very simplistic way of viewing reasoning,” he said. “I think there are probably better ways of doing this.”

LeCun expressed interest in what he called “world models” — systems that form internal representations of the physical environment to enable reasoning and prediction. “We all have world models in our minds. This is what allows us to manipulate thoughts, essentially,” he said.

He criticised the current reliance on token prediction, which underpins how LLMs operate. “Tokens are discrete… When you train a system to predict tokens, you can never train it to predict the exact token that will follow,” LeCun said. He argued that this approach is insufficient for understanding high-dimensional and continuous data like video.
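For readers who want to see the objective LeCun is pointing at, here is a minimal, illustrative sketch of next-token prediction in PyTorch. The vocabulary size, dimensions, and the stand-in hidden states are assumptions for the example, not any particular LLM’s configuration.

```python
# Minimal sketch of the next-token prediction objective LeCun describes.
# Vocabulary size, dimensions, and the toy "hidden states" are illustrative.
import torch
import torch.nn as nn

vocab_size, d_model = 50_000, 512
embed = nn.Embedding(vocab_size, d_model)
lm_head = nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (1, 16))  # a toy token sequence
hidden = embed(tokens[:, :-1])                  # stand-in for a transformer's hidden states
logits = lm_head(hidden)                        # scores over a finite, discrete token set

# The training signal is cross-entropy against the next token: the model learns a
# probability distribution over discrete tokens rather than predicting the exact
# continuation, which is the limitation LeCun highlights for continuous data.
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
)
```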

“Every attempt at trying to get a system to understand the world or build mental models of the world by being trained to predict videos at a pixel level has failed,” he said. Instead, he pointed to ‘joint embedding predictive architectures’ as a more promising approach.

These architectures, according to LeCun, make predictions in abstract representation space rather than raw input space. He described a method where a system observes the current state of the world, imagines an action, and then predicts the next state — a core component of planning.
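The following is a rough sketch of that idea under stated assumptions, not Meta’s actual JEPA code: an encoder maps observations into an abstract representation, and a predictor maps the current representation plus an imagined action to the next representation. Module names and sizes are illustrative, and training details such as collapse prevention with a separate target encoder are omitted.

```python
# Rough sketch of a joint-embedding predictive setup as described in the talk:
# predict the *next abstract representation*, not the next raw observation.
# All modules, names, and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

obs_dim, act_dim, repr_dim = 1024, 8, 128

encoder = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, repr_dim))
predictor = nn.Sequential(nn.Linear(repr_dim + act_dim, 256), nn.ReLU(), nn.Linear(256, repr_dim))

obs_t, obs_t1 = torch.randn(1, obs_dim), torch.randn(1, obs_dim)  # current / next observation
action = torch.randn(1, act_dim)                                  # an imagined action

s_t = encoder(obs_t)                                       # abstract state of the world now
s_t1_pred = predictor(torch.cat([s_t, action], dim=-1))    # predicted next abstract state
s_t1_target = encoder(obs_t1).detach()                     # target representation (stop-gradient)

# The prediction error lives in representation space, not pixel space.
loss = nn.functional.mse_loss(s_t1_pred, s_t1_target)
```

Chaining such predictions forward over a sequence of imagined actions is what turns this into a planning loop, which is the role LeCun assigns to world models.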

“We don’t do [reasoning and planning] in token space,” he said, arguing that humans plan in an abstract mental space instead. “That’s the real way we all do planning and reasoning.”

He also criticised current agentic AI systems that rely on generating many token sequences and selecting the best one. “It’s sort of like writing a program without knowing how to write it,” he said. “You write a random program and then test them all… It’s completely hopeless.”
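A hypothetical sketch of the “sample many, keep the best” pattern he is criticising might look like the following, where `generate` and `score` are stand-ins for an LLM sampler and a verifier or reward model, not real APIs.

```python
# Illustrative sketch of best-of-N selection over sampled token sequences.
# `generate` and `score` are hypothetical stand-ins, not a real library's calls.
import random

def generate(prompt: str) -> str:
    # Stand-in for sampling one candidate answer from an LLM.
    return f"{prompt} -> candidate {random.randint(0, 999)}"

def score(candidate: str) -> float:
    # Stand-in for a verifier or reward model scoring a candidate.
    return random.random()

def best_of_n(prompt: str, n: int = 16) -> str:
    # Brute-force search: sample n sequences, keep the highest-scoring one.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

print(best_of_n("plan a route to the airport"))
```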

Responding to growing claims about the imminent arrival of artificial general intelligence (AGI), or what LeCun prefers to call advanced machine intelligence (AMI), he remained sceptical.


Siddharth Jindal

Siddharth is a media graduate who loves to explore tech through journalism, putting forward ideas worth pondering in the era of artificial intelligence.
