Wharton Professor - AI Researcher - Author
He could have written a doom-and-gloom AI book. Instead he ran a controlled experiment with 758 real Boston Consulting Group professionals. The results were more surprising - and more honest - than any prediction.
While everyone else was writing op-eds about whether AI would destroy humanity, Ethan Mollick was running experiments and reporting back like a scientist - not a pundit.
The first clue that Ethan Mollick is different from most AI commentators: when asked what jobs AI will take away, his honest answer is "nobody knows anything." Not "here are the three industries most at risk." Not "AI will eliminate X% of white-collar jobs by 2030." Just: nobody knows. From a Wharton School professor who has spent five years closer to AI's practical implications than almost anyone alive, that kind of epistemic honesty is either brave or refreshing. Probably both.
He runs the most-read non-technical AI newsletter in the world - One Useful Thing - with 429,000 subscribers who come every week for research-backed, jargon-free takes on what AI actually does to work, learning, and thinking. The name is deliberate: every post has to contain something you can use. Not a hot take. Not a prediction. A thing.
His 2024 book Co-Intelligence became a New York Times bestseller and was named a Best Book of the Year by both the Economist and the Financial Times - a double rare enough to be notable. The book's argument is simple and genuinely counterintuitive: stop treating AI like a tool and start treating it like a very weird, very capable alien collaborator. The companies and individuals who made that mental shift first moved fastest.
Before all that, he was building internet experiments at Harvard in 1997 - including a website that collected translations of the phrase "I Can Eat Glass" into as many languages as possible. You can learn a lot about a person from their early internet projects. This one says: curious, systematically playful, interested in human variation, totally unbothered by the frivolous.
"Humans have managed to convince well-organized sand to pretend to think like us."
- Ethan Mollick, Co-Intelligence (2024)
In 2023, Mollick co-led a randomized controlled trial with 758 BCG knowledge workers - not students in a lab, but actual management consultants doing actual work. The results became the most-cited empirical foundation for enterprise AI adoption.
AI is a skill leveler: it helps the worst performers most. The ceiling rises, but the floor rises faster.
Before Mollick was Wharton's AI professor, he was a startup founder in New York. He co-founded eMeta Corporation as VP of Business Development, where he also met his wife Lilach. That experience - being inside an early internet company when the internet still felt like it might change everything - gave him something that pure academics often lack: a felt sense of what it's like when technology outruns your mental models.
When the startup phase ended, he went back to school - not for the credential, but because he had questions that his experience couldn't answer. He got an MBA at MIT Sloan, then stayed for a PhD. His dissertation was called "Essays on Individuals and Organizations." He spent six years at MIT asking why some organizations produce consistent innovation and why most don't.
He arrived at Wharton in 2010 to teach entrepreneurship. For over a decade, he built simulations through Wharton Interactive - educational games that put 20,000+ students inside the decisions of startup founders. The simulations were his theory of learning: you understand entrepreneurship by doing it, not by reading case studies about it. That same philosophy would later define how he thinks about AI. You understand it by using it, not by reading about it.
When ChatGPT appeared in late 2022, Mollick was one of the first academics to take it seriously as a genuine object of study rather than a parlor trick. He started writing. The newsletter was free. The experiments were documented in public. The audience grew by word of mouth among people who were confused by AI hype and wanted someone who was actually measuring things.
His newsletter process is worth noting: he writes every post himself, completely, before asking AI for any feedback. He shows you the original and the AI-edited version side by side, or he describes where AI spotted a flaw in his argument. He's not outsourcing his thinking. He's demonstrating what collaboration actually looks like when done honestly.
"Working with AI is easiest if you think of it like an alien person rather than a human-built machine. Treat it like a person and you're 90% of the way there."
- Ethan Mollick, on AI collaboration
The book that reframed how millions think about AI: stop treating it as a tool, start treating it as a co-worker, co-teacher, and coach. The central argument is that AI advantages English majors and liberal arts thinkers as much as - or more than - coders. The Economist and Financial Times both named it Book of the Year.
A data-driven dismantling of Silicon Valley mythology. Mollick uses actual research to show that the "lone young genius founder" narrative is a story told backwards from outliers. The reality of who builds successful startups is far messier and far more accessible.
"We have never built a generally applicable technology that can boost our intelligence."
"You should try inviting AI to help you in everything you do, barring legal or ethical barriers."
"Don't trust AI jobs predictions. No one knows anything."
"Almost everything we knew about training people doesn't apply anymore."
"Imagine your AI collaborator as an infinitely fast intern, eager to please but prone to bending truth."
"What we still have is human agency - that we can, as individuals, actually make choices about how we integrate AI into our lives."
The Hershey's Kisses. Mollick advised the President's Intelligence Advisory Board on AI in 2023, before Biden signed Executive Order 14110. His thank-you gift from the White House: a box of Hershey's Kisses. He mentioned it publicly. No lecture about its significance. Just the detail, precisely delivered.
The "I Can Eat Glass" Website. At Harvard in 1997, Mollick built a website crowdsourcing translations of "I Can Eat Glass" into as many languages as possible - chosen because the phrase is painful to say but sounds beautiful. A monument to early internet weirdness, and an early marker of someone who thinks systematically about human variation.
The Newsletter Rule. He writes every post for One Useful Thing himself - fully, start to finish - before asking AI for feedback. He won't let AI start his thinking. Only respond to it. He teaches what he practices, and practices what he teaches.
The Co-Directors. Mollick co-directs the Wharton Generative AI Labs with his wife Lilach Mollick, who is also an academic. They met at their NYC startup in the late 90s. Two PhDs running an AI lab together, married, having originally connected inside a dot-com in Manhattan. The internet created their lab twice.
Vibefounding. He launched an MBA course at Wharton literally called "Vibefounding." The title signals an entire philosophy: the entrepreneurship mental model has changed enough that we need new vocabulary, not just updated syllabuses.
Mollick's position on AI is frustrating to people who want a clean take. He won't say AI is going to destroy jobs. He won't say it's going to save them. He ran one of the most rigorous studies ever done on AI's effect on knowledge workers, and his conclusion was: it depends what you're asking it to do, and what you do when it's wrong.
The "Jagged Frontier" framing is his most durable contribution: AI has uneven capability - great at some tasks, actively harmful at others, and the frontier between them is invisible until you hit it. The workers who failed with AI in his BCG study didn't use it less. They trusted it on tasks where they shouldn't have. The skill is knowing which side of the frontier you're on.
His prescription for individuals is consistent: use AI for everything, document what happens, keep the things that work, discard the things that don't. He's not a technologist predicting an AI-native future. He's an organizational behaviorist watching how humans actually adapt - which, as always, is messier and more interesting than any prediction suggests.
On education, his view is quietly radical: he says "almost everything we knew about training people doesn't apply anymore." Not "some things need adjusting." Everything. Because AI collapses the gap between knowing and doing in ways that make traditional assessment, traditional practice, and traditional progression all simultaneously obsolete.
AI is a skill leveler - it helps the worst performers most, not the best
English majors and humanities thinkers may gain more from AI than coders
Job displacement predictions are unreliable - anyone claiming certainty is wrong
The biggest risk isn't job loss - it's trusting AI on tasks outside the frontier
Run experiments now - stop waiting for consensus to tell you what AI can do
Agentic AI - systems that can work autonomously for hours at a time - is his primary focus. He's been doing live demos at keynotes showing what agentic systems can do without human supervision. His view: this is the next inflection point, and it will arrive faster than most organizations are prepared for.
"People like AI when they use it themselves; they don't like AI writ large."
- Ethan Mollick, on the paradox of AI perception
Mollick teaches entrepreneurship and innovation at both undergraduate and MBA levels. He's added courses that didn't exist before AI and removed assumptions that no longer hold.