The Literature Grad Who Helped Build the Future
In March 2026, Jack Clark stepped into a new title at Anthropic: Head of Public Benefit. The announcement came alongside the launch of the Anthropic Institute - a research operation pulling together frontier red-teaming, societal impact research, and economic analysis under one roof. The job title is new. The obsession behind it is not.
Clark has spent the better part of a decade trying to get the world to pay attention to what AI actually is, not what people assume it to be. He did it as a journalist. He did it from inside OpenAI, twice testifying before Congress when most policymakers couldn't have told you the difference between machine learning and a search algorithm. He's been doing it at Anthropic since 2021, where he co-founded the company with six colleagues who shared a conviction that safety was not a marketing position but an engineering requirement.
He studied English Literature at the University of East Anglia. He graduated with a 2:1. He once was - and this is genuinely true - the only reporter in the world specifically assigned to distributed systems. When he describes his humanities education as an advantage, he means it in a specific way: the ability to ask better questions about what a technology means, not just how it works.
I'm a literature graduate, and I don't think you'd put that as a cofounder of a frontier AI company, but what turned out to be useful is that I got to learn a lot about history and a lot about the kind of stories that we tell ourselves about the future.
- Jack Clark

That framing - stories we tell ourselves about the future - shows up everywhere in how Clark works. Every single edition of Import AI, the newsletter he has written weekly since 2016, ends with a short piece of AI-themed science fiction that he writes himself. He calls them "messages in a bottle I'm trying to throw out of this semi-frightening AI lab, which I'm a principal character in." These are not thought experiments. They're dispatches. The lab is real, and so is the freight he's carrying.
The World's Only Distributed Systems Reporter
Before Anthropic, before OpenAI, Clark was a technology journalist. Not a generalist tech journalist covering gadget launches and quarterly earnings - a specialist who developed a beat so narrow it may have been unique on earth. At The Register, he was the world's only reporter dedicated to distributed systems. At Bloomberg, he was their sole neural network reporter, covering evolutionary algorithms, semi-supervised learning, and data representation at a time when none of those terms had entered the mainstream vocabulary.
This is where Clark developed the instinct that defines his work: the conviction that understanding the technical reality of a technology matters enormously for understanding what it might do to the world. The journalists who couldn't follow a research paper had to rely on press releases. Clark could read the paper.
He joined OpenAI in September 2016, just months after the nonprofit launched. Over four years he rose to Policy Director, a role that took him to Congress twice - April 2018 and June 2019 - to testify on artificial intelligence at a moment when the term "AI" was still largely synonymous with the Terminator in Washington's collective imagination. The fact that a former tech journalist was briefing lawmakers on neural networks before most of them had heard of GPT-2 is not an accident. It is the Clark playbook: get to the room before the issue does.
Those 2018 and 2019 appearances came years before ChatGPT made AI a household word. Clark described the policy environment that followed in a 2024 tweet with characteristic honesty: "Years ago people working in AI policy (including me) perhaps foolishly wanted to 'wake up' DC. Well, we got what we wanted! Now I think a lot of the challenge of AI policy relates to trying to stabilize this leviathan that has been woken."
Seven People Leave OpenAI and Build Something New
In 2021, seven people departed OpenAI together and founded Anthropic. The co-founders - Dario Amodei, Daniela Amodei, Tom Brown, Jared Kaplan, Sam McCandlish, Christopher Olah, and Clark - shared a concern about how safety and commercialization were being balanced. Anthropic was their answer: a company where safety research wasn't a department on the side, but the central commitment.
Clark became Head of Policy. The role suited him. He knew Congress. He knew the research. He understood that the regulatory gap between what AI could do and what governments understood was enormous and closing fast. By 2026, Anthropic had grown into one of the world's most valuable private AI companies, with valuations approaching $350 billion in some estimates. Forbes placed individual co-founder wealth at roughly $3.7 billion each - numbers that Clark and his colleagues almost immediately began attaching conditions to.
In January 2026, all seven Anthropic co-founders signed a pledge to donate 80% of their personal wealth to combat AI-driven inequality. Clark had already signed the Giving What We Can pledge - a commitment to donate at least 10% of income to effective charities - long before he had anything approaching billionaire status. When the money arrived, the commitment was already in place.
The Model Too Powerful To Release
In April 2026, Clark confirmed that Anthropic had briefed the Trump administration on Mythos - an AI model so capable in cybersecurity that Anthropic has declined to release it publicly. Under Project Glasswing, a small number of vetted companies have access. Governments are being informed. The public is not.
Clark's explanation was direct: "Our position is the government has to know about this stuff, and we have to find new ways for the government to partner with a private sector that is making things that are truly revolutionizing the economy, but are going to have aspects to them which hit national security equities, and other ones."
The timing was sharp. Anthropic had, in the same period, filed a lawsuit against Trump's Department of Defense after the DOD labeled Anthropic a supply-chain risk. The conflict: the military wanted unrestricted access to AI capabilities, including for mass surveillance and autonomous weapons. Anthropic pushed back. Clark continued briefing the administration anyway. The distinction between informing and enabling is exactly the kind of line his policy work has always tried to hold.
At the Semafor World Economy Summit in April 2026, Clark warned plainly: powerful AI systems capable of autonomously exploiting cybersecurity vulnerabilities are arriving within months, not years. "The world needs to get ready," he said. He has been saying versions of this for a decade. The difference now is that most of the room believes him.
Project Glasswing: What We Know
- Mythos is an Anthropic AI model with advanced cybersecurity capabilities, not publicly released as of April 2026.
- Selective access granted under Project Glasswing to vetted partners and government bodies.
- Clark personally confirmed briefings with the Trump administration.
- This occurred even as Anthropic pursued legal action against the DOD over mass surveillance and autonomous weapons use.
- Clark's position: government must be an informed partner - but informed is not the same as unrestricted.
What Clark Actually Says
- "After a decade of being hit again and again in the head with the phenomenon of wild new capabilities emerging as a consequence of computational scale, I must admit defeat."
- "It's actually going to be the era of the manager nerds now, where I think being able to manage fleets of AI agents and orchestrate them is going to make people incredibly powerful."
- "By April 2027, AI systems should be able to do tasks that might take a person 150 hours."
- "You can't manage what you can't measure. Knowing how to ask the right questions beats knowing how to code."
He Ends Every Newsletter With Fiction He Wrote
There's a thing Jack Clark does that no other AI company co-founder does. Every edition of Import AI - which goes out weekly to 70,000 readers, written by a man who also runs a major AI institute, advises governments, and manages a team - ends with a short piece of science fiction. Clark writes it himself.
He's described this practice using a lyric from the band Jawbreaker: "My fiction beats the hell out of my truth." The stories are not optimistic tales of AI saving the world. They're explorations of what happens when the systems work as designed, and that's still disturbing. They're dispatches from someone sitting inside a situation that most people only read about from the outside.
Clark has reportedly also been building a detailed paperclip factory simulator using Claude Code. Partly for the comedy - a nod to the famous "paperclip maximizer" thought experiment in AI safety. Partly because he genuinely loves complex simulations. Both motives explain a great deal about who he is: someone who can hold a serious idea and a sense of absurdity simultaneously, and finds them compatible.
He grew up in Brighton, England. He moved to San Francisco. He hikes. He takes deliberate breaks from the newsletter to spend time with family and, in his phrase, "defrag his brain." He is, by the lights of his own self-assessment, a technological pessimist who lost a long argument with reality. He's made his peace with being wrong. This, paradoxically, is what makes people trust him.
We are the child from that story and the room is our planet - and when we turn the light on, we find not harmless objects but powerful and somewhat unpredictable AI systems.
- Jack Clark, on the nature of AI development

The Era of the Manager Nerd
Clark's current read on where things are headed is specific and unsentimental. He believes AI will soon handle tasks that require 150 person-hours - not just simple queries but extended, complex work. He believes the humans best positioned for this shift are not coders but orchestrators: people who can manage fleets of AI agents, direct their work, check their outputs, and judge what they get wrong. He calls these people "manager nerds," and suggests their moment is arriving.
He does not think this will be painless. The Anthropic Institute he leads exists precisely because he believes the economic disruption from AI will be real, significant, and unevenly distributed. The 80% wealth pledge is not a gesture - it's a recognition that the co-founders of a company like Anthropic are positioned to benefit enormously from a transition that many others will find difficult.
Through the Institute, Clark is building the measurement infrastructure he thinks the field lacks. His phrase - "you can't manage what you can't measure" - is not just a management cliche. It's a policy thesis: that without rigorous, independent data on how AI is affecting jobs, income, security, and social outcomes, nobody - not companies, not governments, not workers - can make well-informed decisions about what to do next.
He gave the AI field a newsletter before it needed one. He gave Congress a briefing before it knew to ask. He's building a think tank for the consequences that are arriving. The literature graduate's instinct - read the text carefully, understand what it means, ask what story it's telling - has turned out to be exactly the right preparation for a career in artificial intelligence. He may have been the only one who suspected as much back when he was studying.