The Man Whose Job Is to Tell You It Won't Work
The hearing room on Capitol Hill, May 16, 2023. Sam Altman, CEO of OpenAI - the hottest company in the world - sits to Gary Marcus's left. Christina Montgomery of IBM sits to his right. Under oath, Marcus says the quiet part out loud: the technology being deployed is fundamentally broken, the companies building it cannot be trusted to regulate themselves, and we need an agency with actual teeth - something like the FDA for AI.
Altman nods along. The senators ask about risks. Marcus describes a "perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation, and inherent unreliability." The man next to him runs the company being described.
This is the Gary Marcus experience. Not hostile. Not angry. Just persistently, methodically, documentably correct - and willing to say so in any room, on any stage, before any audience.
He has been AI's most consistent skeptic for over 25 years. Not the conspiracy-theory kind. The kind who read the papers, built the startups, trained the students, and noticed that the emperor's clothes were cut from very thin cloth. While the industry celebrated each new benchmark, Marcus was pointing at what the benchmarks missed. While investors poured hundreds of billions into large language models, he was asking whether language models that hallucinate with supreme confidence were really what anyone wanted deployed in hospitals and courtrooms.
"We as a society are placing truly massive bets around the premise that AGI is close. I am talking about literally a trillion dollar bet."- Gary Marcus, 2025
The Kid Who Coded Before He Could Drive
Baltimore, 1978. Gary Marcus is eight years old and running a paper-based computer simulation. By ten he is fascinated by artificial intelligence. By sixteen he has written his first AI program. The curiosity isn't academic - it's urgent. He wants to understand how minds work, whether built of neurons or silicon.
He skips the last two years of high school to enroll at Hampshire College. Graduates in three years. Walks into MIT's doctoral program at 19. At 23, he has a PhD from the Brain and Cognitive Sciences department, supervised by Steven Pinker - one of the most celebrated cognitive scientists alive. His dissertation is on inflectional morphology, the patterns children use when they say "breaked" instead of "broke." The errors aren't random. They're windows into how the mind structures language.
That insight - that structured rules and messy pattern-matching coexist in human cognition - never left him. It became the lens through which he would spend the next three decades evaluating artificial intelligence. And finding it wanting.
Marcus's PhD advisor Steven Pinker is the author of The Language Instinct and How the Mind Works. The student became the louder critic; the teacher became the cautious optimist. They remain connected by the same fundamental question: what does it actually mean for a system to understand something?
He joined NYU as a professor in 1993. Spent 23 years studying cognitive development. Watched neural networks go from a backwater curiosity to the dominant paradigm in AI. Watched the field claim - repeatedly - that this time, the problem was basically solved. Watched each claim quietly implode. Kept notes.
A Quarter Century of Correct
Six Books. One Argument.
Each book is a different angle on the same fundamental question: what does understanding actually require?
The Algebraic Mind (2001). Argued the mind requires structured symbolic rules, not just pattern-matching. MIT Press. Required reading in cognitive AI.
The Birth of the Mind (2004). How a small number of genes produces the complexity of human thought. A nativist challenge to blank-slate theories.
Kluge (2008). The human mind as evolution's duct-tape fix. Funny, sharp, and unexpectedly illuminating on why we make the same mistakes repeatedly. A New York Times Editor's Choice.
Guitar Zero (2012). He learned guitar at 40. As a science experiment. Then wrote a bestseller about it. The book debunks the myth that adults can't learn new skills. A New York Times Bestseller.
Rebooting AI (2019). With Ernest Davis. The most clear-eyed account of what AI can actually do vs. what the press releases say. Read before investing in anything AI-related. One of Forbes' top seven AI books.
Taming Silicon Valley (2024). Anticipated the rise of tech oligarchs. Proposes an FDA-equivalent for AI. MIT Press. Recommended by The New Yorker.
"Sam Altman talks about scaling laws like they're a property of the universe. They aren't. They're empirical observations, like Moore's Law. And Moore's Law ran out." - Gary Marcus, on AI's scaling hypothesis
The Debates That Defined a Career
Marcus has spent decades in public argument with people who run the institutions they disagree about. His debates are not theoretical.
"What I've heard for a quarter century is 'we're working on it, we're going to solve it next year.'"
What He's Actually Saying (And Why It Matters)
Strip away the debates and the books and the newsletter and what you get is one argument, made consistently since the late 1990s: language is not the same as understanding, and pattern-matching is not the same as reasoning.
Large language models are extraordinary at predicting what text should come next. They have been trained on essentially all the text that exists. They can produce fluent prose, working code, plausible medical advice, and confident nonsense - often in the same paragraph. Marcus calls this "the Eliza effect": the human tendency to attribute understanding to systems that merely mirror our language back at us.
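The Eliza effect is easy to demonstrate because the original trick is so small. Here is a minimal sketch in the spirit of Weizenbaum's 1966 ELIZA (the rules and reflections below are invented for illustration, not the original script): a handful of regex patterns that mirror the user's words back, with zero model of meaning behind them.

```python
import re

# First-person words swapped for second-person ones before mirroring.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Surface pattern -> response template. No understanding anywhere.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap pronouns word by word so the echo sounds like a reply.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # stock fallback when nothing matches

print(respond("I feel nobody listens to my ideas"))
# -> Why do you feel nobody listens to your ideas?
```

Thirty lines of string surgery, and people in the 1960s confided in it. Marcus's point is that a trillion-parameter version of the same mirroring invites the same misattribution, at scale.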
His specific technical objections are: LLMs cannot maintain adequate world models; they cannot extrapolate reliably outside their training distribution; they hallucinate not occasionally but structurally; they have no way to know what they don't know; and the scaling hypothesis - the idea that more data plus more compute will eventually produce genuine intelligence - has no theoretical foundation. It is an empirical observation extrapolated into a cosmological claim.
Marcus has consistently advocated for hybrid AI systems that combine neural networks (good at perception and pattern recognition) with symbolic AI (good at reasoning, rules, and logical inference). In April 2026, he pointed to Claude Code's 3,167-line deterministic symbolic kernel - with 486 explicit IF-THEN branch points - as proof that Anthropic agreed with him. "The biggest advance in AI since the LLM," he wrote, "is one where someone finally admitted the LLM alone isn't enough."
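The hybrid idea can be sketched in a few lines. This toy (entirely hypothetical: the stubbed `neural_propose`, the `MAX_SAFE_DOSE_MG` rule, and the thresholds are made up for illustration, and bear no relation to any real system's internals) shows the division of labor Marcus argues for: a neural component proposes, and a deterministic symbolic layer of explicit IF-THEN rules disposes.

```python
def neural_propose(question: str) -> dict:
    # Stand-in for a language model: fluent, confident, unverified.
    return {"answer": "Take 5000 mg daily", "confidence": 0.92, "dose_mg": 5000}

MAX_SAFE_DOSE_MG = 1000  # assumed domain rule, for the example only

def symbolic_check(proposal: dict) -> tuple[bool, str]:
    # Explicit IF-THEN branch points: deterministic and inspectable,
    # unlike the statistics that produced the proposal.
    if proposal["confidence"] < 0.5:
        return False, "low confidence: defer to a human"
    if proposal.get("dose_mg", 0) > MAX_SAFE_DOSE_MG:
        return False, "dose exceeds safety limit: reject"
    return True, "accepted"

def answer(question: str) -> str:
    proposal = neural_propose(question)
    ok, reason = symbolic_check(proposal)
    return proposal["answer"] if ok else f"Refused ({reason})"

print(answer("How much should I take?"))
# -> Refused (dose exceeds safety limit: reject)
```

The design point is the asymmetry: the neural half can be wrong in open-ended ways, but the symbolic half fails closed, with a reason a human can audit.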
What he wants: an FDA-like regulatory agency with authority to require pre-deployment testing, mandate transparency, and recall AI products that cause harm. Universal basic income to absorb AI-driven job displacement. International coordination to prevent a race to the bottom. He has proposed this in books, in op-eds, in Senate testimony, and in 103,000-subscriber newsletter posts. The proposals remain largely unimplemented. He keeps writing.
"The only way you can kill hallucinations is to not run the system."- Gary Marcus, on the unfixability of LLM hallucinations
The Cognitive Scientist Who Learned Guitar at 40
Here is the thing about Gary Marcus that the AI criticism sometimes obscures: he is genuinely curious about human learning, and he uses himself as the experimental subject.
On the eve of his fortieth birthday, he had never played a musical instrument. He described himself as "coordinatively challenged." He decided this was scientifically interesting - he would learn guitar, document everything, and turn it into a book that was simultaneously a memoir and a study of adult learning, critical periods, and the science of skill acquisition.
He visited Day Jams, a rock camp for children ages 8-15. He performed in a band called Rush Hour. He practiced scales while his daughter watched. The resulting book, Guitar Zero, became a New York Times Bestseller - partly because it was genuinely funny, partly because it demolished the received wisdom that adults cannot learn new complex skills, and partly because reading about someone else struggling to play a G chord is oddly comforting.
Kim Stanley Robinson, author of The Ministry for the Future: "Gary Marcus is one of our few indispensable public intellectuals - more readers would improve the actions being taken to shape AI development."
The guitar episode is not a detour from his work. It is the same work. The same question: what does it actually take to learn something? What are the constraints? What can be changed? The guitar, the PhD dissertation on children's language errors, the critique of AI - they are the same project, wearing different shoes.
Six Predictions for 2026
At the start of 2026, Marcus published his predictions. He has been making annual predictions for years. The track record is publicly available. Here are the 2026 calls:
No AGI in 2026 or 2027. The trillion-dollar bet has no scientific basis.
Domestic robots (Optimus, Figure) will remain largely demo products, not market-ready.
No decisive national winner in the GenAI race.
Escalating research into world models and neurosymbolic approaches - the field is quietly pivoting toward his position.
2025 recognized as peak AI bubble; Wall Street confidence in GenAI eroding.
Backlash to radical AI deregulation will escalate as harms become more documented.
The People Around the Argument
Steven Pinker. MIT cognitive scientist, author of The Language Instinct. Shaped Marcus's entire framework for thinking about language, mind, and structure.
Rodney Brooks. iRobot co-founder. Invented the Roomba. Both independently concluded that deep learning alone couldn't build reliable robots. Now building the alternative together.
Ernest Davis. NYU computer science professor specializing in commonsense reasoning. Co-wrote Rebooting AI with Marcus. The technical backbone of the book's arguments.
Yann LeCun. Meta's Chief AI Scientist called Marcus "wrong" in 2018. LeCun's own JEPA architecture - modular, world-model-driven - arrived at positions remarkably similar to Marcus's by 2023.
What's Happening Now
Declared Claude Code "the single biggest advance in AI since the LLM" - argued that Anthropic's use of a symbolic kernel vindicates his 25-year advocacy for neurosymbolic AI.
Published "Six Predictions for AI in 2026" on Substack - no AGI, peak bubble, neurosymbolic surge. Available with full track record going back years.
Keynote at the Royal Society, London - called LLMs "deeply flawed imitators preying on the Eliza effect."
World AI Summit 2025 - public conversation with Dr. Ben Goertzel (SingularityNET) on the future of AI architectures.
Taming Silicon Valley published by MIT Press - made The New Yorker's recommended books list. Proposed FDA-like AI agency, UBI for displaced workers.
Testified before U.S. Senate Judiciary Committee on "Oversight of A.I.: Rules for Artificial Intelligence" alongside Sam Altman (OpenAI) and Christina Montgomery (IBM).