Breaking
+ Gary Marcus calls Claude Code "the single biggest advance in AI since the LLM" - April 2026 + Marcus on AI newsletter crosses 103,000 subscribers + "AGI is nowhere close" - Marcus vs. OpenAI's 70% claim + Taming Silicon Valley (MIT Press) - New Yorker Recommended Books 2024 + Royal Society keynote London Oct 2025 - "LLMs are deeply flawed imitators" + Robust.AI CEO + NYU Professor Emeritus + Author of 6 books
Gary Marcus - AI Critic and Cognitive Scientist
AI Critic • Cognitive Scientist • Author
Gary Marcus
The Man Who Said "No" to Silicon Valley's God Complex
While everyone else was buying the hype, he was filing the paperwork.
AI Skeptic • Substack Bestseller • Senate Witness • 6 Books • @GaryMarcus
103K+
Newsletter Subscribers
25+
Years of Being Right
$100K
Bet Against AGI by 2029

The Man Whose Job Is to Tell You It Won't Work

The hearing room on Capitol Hill, May 16, 2023. Sam Altman, CEO of OpenAI - the hottest company in the world - sits to Gary Marcus's left. Christina Montgomery of IBM sits to his right. Under oath, Marcus says the quiet part out loud: the technology being deployed is fundamentally broken, the companies building it cannot be trusted to regulate themselves, and we need an agency with actual teeth - something like the FDA for AI.

Altman nods along. The senators ask about risks. Marcus describes a "perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation, and inherent unreliability." The man next to him runs the company being described.

This is the Gary Marcus experience. Not hostile. Not angry. Just persistently, methodically, documentably correct - and willing to say so in any room, on any stage, before any audience.

He has been AI's most consistent skeptic for over 25 years. Not the conspiracy-theory kind. The kind who read the papers, built the startups, trained the students, and noticed that the emperor's clothes were cut from very thin cloth. While the industry celebrated each new benchmark, Marcus was pointing at what the benchmarks missed. While investors poured hundreds of billions into large language models, he was asking whether language models that hallucinate with supreme confidence were really what anyone wanted deployed in hospitals and courtrooms.

"We as a society are placing truly massive bets around the premise that AGI is close. I am talking about literally a trillion dollar bet."
- Gary Marcus, 2025
23 Age at MIT PhD (Pinker's Lab)
6 Books Published
1 Startup Acquired by Uber

The Kid Who Coded Before He Could Drive

Baltimore, 1978. Gary Marcus is eight years old and running a paper-based computer simulation. By ten he is fascinated by artificial intelligence. By sixteen he has written his first AI program. The curiosity isn't academic - it's urgent. He wants to understand how minds work, whether built of neurons or silicon.

He skips the last two years of high school to enroll at Hampshire College. Graduates in three years. Walks into MIT's doctoral program at 19. At 23, he has a PhD from the Brain and Cognitive Sciences department, supervised by Steven Pinker - one of the most celebrated cognitive scientists alive. His dissertation is on inflectional morphology, the patterns children use when they say "breaked" instead of "broke." The errors aren't random. They're windows into how the mind structures language.

That insight - that structured rules and messy pattern-matching coexist in human cognition - never left him. It became the lens through which he would spend the next three decades evaluating artificial intelligence. And finding it wanting.

Fast Fact

Marcus's PhD advisor Steven Pinker is the author of The Language Instinct and How the Mind Works. The student became the louder critic; the teacher became the cautious optimist. They remain connected by the same fundamental question: what does it actually mean for a system to understand something?

He joined NYU as a professor in 1993. Spent 23 years studying cognitive development. Watched neural networks go from a backwater curiosity to the dominant paradigm in AI. Watched the field claim - repeatedly - that this time, the problem was basically solved. Watched each claim quietly implode. Kept notes.

A Quarter Century of Correct

1970
Born February 8 in Baltimore, Maryland
1986
Skipped last two years of high school; enrolled at Hampshire College
1989
Graduated Hampshire in 3 years with a B.A. in Cognitive Science
1993
MIT PhD at 23 under Steven Pinker; joined NYU as Professor of Psychology and Neural Science
2001
Published The Algebraic Mind (MIT Press) - early challenge to pure connectionist AI
2008
Published Kluge - NYT Editor's Choice - arguing the human mind is evolution's "good enough" hack
2012
Published Guitar Zero - New York Times Bestseller - learned guitar at 40 as a scientific self-experiment
2014
Co-founded Geometric Intelligence, a machine learning startup combining deep learning with Bayesian and evolutionary methods
2016
Uber acquired Geometric Intelligence; Marcus briefly served as Director of AI; became NYU Professor Emeritus
2018
Published Deep Learning: A Critical Appraisal on arXiv - sparked a massive backlash from the deep learning community
2019
Co-founded Robust.AI with Rodney Brooks (iRobot/Roomba co-inventor); published Rebooting AI with Ernest Davis
2023
Testified before U.S. Senate Judiciary Committee on AI regulation alongside Sam Altman and IBM's Christina Montgomery
2024
Published Taming Silicon Valley (MIT Press) - New Yorker recommended; foresaw rise of tech oligarchs
2025
Keynote at Royal Society, London; Marcus on AI newsletter topped 103,000 subscribers
2026
Declared Claude Code "the single biggest advance in AI since the LLM" - cited its symbolic kernel as proof of neurosymbolic vindication
6
Books Published
103K+
Newsletter Readers
1
Senate Testimony
2
Startups Founded

Six Books. One Argument.

Each book is a different angle on the same fundamental question: what does understanding actually require?

2001
The Algebraic Mind

Argued the mind requires structured symbolic rules, not just pattern-matching. MIT Press. Required reading in cognitive AI.

2004
The Birth of the Mind

How a small number of genes produces the complexity of human thought. A nativist challenge to blank-slate theories.

2008
Kluge

The human mind as evolution's duct-tape fix. Funny, sharp, and unexpectedly illuminating on why we make the same mistakes repeatedly.

NYT Editor's Choice
2012
Guitar Zero

He learned guitar at 40. As a science experiment. Then wrote a bestseller about it. The book debunks the myth that adults can't learn new skills.

NYT Bestseller
2019
Rebooting AI

With Ernest Davis. The most clear-eyed account of what AI can actually do vs. what the press releases say. Read before investing in anything AI-related.

Forbes Top 7 AI Books
2024
Taming Silicon Valley

MIT Press. Anticipated the rise of tech oligarchs. Proposes an FDA-equivalent for AI. The New Yorker recommended it. So did events.

New Yorker Pick
"Sam Altman talks about scaling laws like they're a property of the universe. They aren't. They're empirical observations, like Moore's Law. And Moore's Law ran out."
- Gary Marcus, on AI's scaling hypothesis

The Debates That Defined a Career

Marcus has spent decades in public argument with the people who run the very institutions he criticizes. His debates are not theoretical.

Gary Marcus
Cognitive Scientist / AI Critic
Argued deep learning needs symbolic structure; mind cannot emerge from pure pattern-matching alone.
VS
Yann LeCun
Meta Chief AI Scientist
Called Marcus "wrong" publicly in 2018. By 2022, LeCun's own research direction (JEPA, modular world models) had shifted toward Marcus's position.
Gary Marcus
Testifying Under Oath
Called for strict federal regulation of AI companies, including OpenAI. Said the industry cannot self-regulate.
VS
Sam Altman
OpenAI CEO (seated next to him)
U.S. Senate Judiciary Committee, May 2023. Altman's own AGI timeline claims have grown more cautious since. Marcus noted: "Things are so desperate at OpenAI that Sam Altman is starting to sound like Gary Marcus."

"What I've heard for a quarter century is 'we're working on it, we're going to solve it next year.'"

What He's Actually Saying (And Why It Matters)

Strip away the debates and the books and the newsletter and what you get is one argument, made consistently since the late 1990s: language is not the same as understanding, and pattern-matching is not the same as reasoning.

Large language models are extraordinary at predicting what text should come next. They have been trained on essentially all the text that exists. They can produce fluent prose, working code, plausible medical advice, and confident nonsense - often in the same paragraph. Marcus calls this "the Eliza effect": the human tendency to attribute understanding to systems that merely mirror our language back at us.

His specific technical objections are: LLMs cannot maintain adequate world models; they cannot extrapolate reliably outside their training distribution; they hallucinate not occasionally but structurally; they have no way to know what they don't know; and the scaling hypothesis - the idea that more data plus more compute will eventually produce genuine intelligence - has no theoretical foundation. It is an empirical observation extrapolated into a cosmological claim.

The Neurosymbolic Position

Marcus has consistently advocated for hybrid AI systems that combine neural networks (good at perception and pattern recognition) with symbolic AI (good at reasoning, rules, and logical inference). In April 2026, he pointed to Claude Code's 3,167-line deterministic symbolic kernel - with 486 explicit IF-THEN branch points - as proof that Anthropic agreed with him. "The biggest advance in AI since the LLM," he wrote, "is one where someone finally admitted the LLM alone isn't enough."
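The division of labor he advocates can be sketched in a few lines of toy Python. This is purely illustrative - the task (children's past-tense errors, the subject of his dissertation) and every name below are invented for this sketch, not drawn from Claude Code or any real system. The pattern: a statistical component proposes, and a deterministic layer of explicit IF-THEN rules disposes.

```python
def neural_propose(verb: str) -> str:
    """Stand-in for a pattern-matching model: it always applies the
    regular '-ed' pattern, the way a purely statistical learner
    over-generalizes ('breaked' instead of 'broke')."""
    return verb + "ed"

# Symbolic knowledge: stored exceptions the rule layer can consult.
IRREGULARS = {"break": "broke", "go": "went", "sing": "sang"}

def symbolic_kernel(verb: str, proposal: str) -> str:
    """Deterministic rule layer: explicit IF-THEN branches that
    override the statistical proposal when a known rule applies."""
    if verb in IRREGULARS:        # IF the verb is a stored exception
        return IRREGULARS[verb]   # THEN use the symbolic fact
    if verb.endswith("e"):        # IF the stem already ends in 'e'
        return verb + "d"         # THEN append only 'd'
    return proposal               # ELSE trust the pattern-matcher

def past_tense(verb: str) -> str:
    """Hybrid pipeline: neural proposal, symbolic verification."""
    return symbolic_kernel(verb, neural_propose(verb))
```

The hybrid gets "broke" and "walked" right where the pattern-matcher alone would produce "breaked" - a miniature version of the neural-plus-symbolic architecture Marcus has argued for since The Algebraic Mind.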

What he wants: an FDA-like regulatory agency with authority to require pre-deployment testing, mandate transparency, and recall AI products that cause harm. Universal basic income to absorb AI-driven job displacement. International coordination to prevent a race to the bottom. He has proposed this in books, in op-eds, in Senate testimony, and in 103,000-subscriber newsletter posts. The proposals remain largely unimplemented. He keeps writing.

"The only way you can kill hallucinations is to not run the system."
- Gary Marcus, on the unfixability of LLM hallucinations

The Cognitive Scientist Who Learned Guitar at 40

Here is the thing about Gary Marcus that the AI criticism sometimes obscures: he is genuinely curious about human learning, and he uses himself as the experimental subject.

On the eve of his fortieth birthday, he had never played a musical instrument. He described himself as "coordinatively challenged." He decided this was scientifically interesting - he would learn guitar, document everything, and turn it into a book that was simultaneously a memoir and a study of adult learning, critical periods, and the science of skill acquisition.

He visited Day Jams, a rock camp for children ages 8-15. He performed in a band called Rush Hour. He practiced scales while his daughter watched. The resulting book, Guitar Zero, became a New York Times Bestseller - partly because it was genuinely funny, partly because it demolished the received wisdom that adults cannot learn new complex skills, and partly because reading about someone else struggling to play a G chord is oddly comforting.

Kim Stanley Robinson, author of The Ministry for the Future: "Gary Marcus is one of our few indispensable public intellectuals - more readers would improve the actions being taken to shape AI development."

The guitar episode is not a detour from his work. It is the same work. The same question: what does it actually take to learn something? What are the constraints? What can be changed? The guitar, the PhD dissertation on children's language errors, the critique of AI - they are the same project, wearing different shoes.

Six Predictions for 2026

At the start of 2026, Marcus published his predictions. He has been making annual predictions for years. The track record is publicly available. Here are the 2026 calls:

01

No AGI in 2026 or 2027. The trillion-dollar bet has no scientific basis.

02

Domestic robots (Optimus, Figure) will remain largely demo products, not market-ready.

03

No decisive national winner in the GenAI race.

04

Escalating research into world models and neurosymbolic approaches - the field is quietly pivoting toward his position.

05

2025 recognized as peak AI bubble; Wall Street confidence in GenAI eroding.

06

Backlash to radical AI deregulation will escalate as harms become more documented.

The People Around the Argument

PhD Advisor
Steven Pinker

MIT cognitive scientist, author of The Language Instinct. Shaped Marcus's entire framework for thinking about language, mind, and structure.

Co-Founder, Robust.AI
Rodney Brooks

iRobot co-founder and co-inventor of the Roomba. Both independently concluded that deep learning alone couldn't build reliable robots. Now building the alternative together.

Co-Author
Ernest Davis

NYU computer science professor specializing in commonsense reasoning. Co-wrote Rebooting AI with Marcus. The technical backbone of the book's arguments.

Long-Running Rival
Yann LeCun

Meta's Chief AI Scientist called Marcus "wrong" in 2018. LeCun's own JEPA architecture - modular, world-model-driven - had arrived at positions remarkably similar to Marcus's by 2022.

What's Happening Now

Apr 2026

Declared Claude Code "the single biggest advance in AI since the LLM" - argued that Anthropic's use of a symbolic kernel vindicates his 25-year advocacy for neurosymbolic AI.

Jan 2026

Published "Six Predictions for AI in 2026" on Substack - no AGI, peak bubble, neurosymbolic surge - alongside his publicly available track record going back years.

Oct 2025

Keynote at the Royal Society, London - called LLMs "deeply flawed imitators preying on the Eliza effect."

Jun 2025

World AI Summit 2025 - public conversation with Dr. Ben Goertzel (SingularityNET) on the future of AI architectures.

Sep 2024

Taming Silicon Valley published by MIT Press - made The New Yorker's recommended books list. Proposed FDA-like AI agency, UBI for displaced workers.

May 2023

Testified before U.S. Senate Judiciary Committee on "Oversight of A.I.: Rules for Artificial Intelligence" alongside Sam Altman (OpenAI) and Christina Montgomery (IBM).
