There are roughly two types of people who end up influencing AI policy in Washington. The first type comes with a computer science degree, has spent years at a lab or a big tech firm, and speaks fluently in transformer architectures and RLHF. The second type is Dean Ball. Ball studied history at Hamilton College in upstate New York, graduated magna cum laude in 2014, and spent the better part of a decade managing the legacy of the 30th President of the United States. His path to becoming one of the most-read voices on AI governance runs straight through Calvin Coolidge - and it makes more sense than you'd think.
What Ball brings to AI policy is exactly what most AI policy debates are missing: a sense of institutional time. Historians understand that technologies don't just arrive - they land in specific political economies, specific regulatory cultures, with specific existing interests already dug in. Ball reads the AI moment the way a historian reads a primary source: carefully, skeptically, and with the nagging sense that most people are misinterpreting what they're looking at.
The major problems that AI poses are primarily scientific and engineering problems rather than regulatory ones.
- Dean Ball

His newsletter, Hyperdimensional, launched in January 2024 and grew rapidly to tens of thousands of subscribers. The name is precise rather than flashy - it captures Ball's core intuition that AI systems operate in a mathematical space so vast and strange that ordinary policy concepts (regulate the product, regulate the use, regulate the company) barely touch it. The newsletter is free to read; a paid institutional tier at $7,500 per year buys you a quarterly one-hour consultation. Washington's policy world took notice quickly.
From Coolidge to the Code
The career pivot from presidential history to artificial intelligence was not as abrupt as it looks on a resume. After his time at the Calvin Coolidge Presidential Foundation, Ball moved to the Manhattan Institute for Policy Research as Deputy Director of State and Local Policy, then to Stanford University's Hoover Institution as Senior Program Manager for its State and Local Governance Initiative. By 2022, he'd landed at George Mason University's Mercatus Center as a Research Fellow in its Artificial Intelligence and Progress Project. That's where the deep AI policy focus crystallized.
At Mercatus, Ball developed what became his signature framework: AI governance should focus on adaptation rather than intervention, using market mechanisms and private governance structures wherever possible, with government playing a limited role in transparency and liability frameworks. It's a position that put him at odds with a significant chunk of the AI safety and governance community - but that tension is precisely where Ball does his best work.
Four Months That Shaped a Nation's AI Strategy
In early 2025, Ball joined the White House Office of Science and Technology Policy as Senior Policy Advisor for Artificial Intelligence and Emerging Technology. He was also named Strategic Advisor for AI at the National Science Foundation, Co-Chair of the National AI Research Resource Pilot Steering Committee, Co-Chair of the National Science and Technology Council's Subcommittee on Machine Learning and AI, and Chair of the GSA's AI Community of Practice - a portfolio that would be formidable for a full-time government employee, remarkable for someone who was only there for four months.
During those four months, Ball served as the primary staff drafter of America's AI Action Plan - a 28-page document released in July 2025 containing several dozen strategic objectives and over 90 policy recommendations. The plan outlined the Trump administration's approach to maintaining U.S. AI leadership, with a focus on specific, actionable federal agency actions using existing authorities and budgets rather than waiting for new legislation. For a document of this scope, the drafting window was extraordinarily compressed. Ball delivered anyway.
Smart strategic steps right now are what we need - not waiting for new legislation.
- Dean Ball, on AI governance timing

By August 2025, he'd returned to the Foundation for American Innovation as Senior Fellow - the White House chapter closed, its influence baked into federal policy. He was simultaneously appointed Policy Fellow at Fathom, Visiting Fellow at the Heritage Foundation, and Visiting Lecturer at Yale Law School, where he co-teaches a Colloquium on Frontier AI Governance. He also co-hosts the AI Summer podcast with journalist Timothy B. Lee. The name is a deliberate double entendre: the season, and the hype cycle.
The Private Governance Thesis
Ball's most distinctive policy contribution is his framework for private AI governance. The conventional debate pits "regulate AI" against "don't regulate AI." Ball breaks out of that frame entirely. His proposal: state legislatures should authorize private AI standards-setting organizations - hybrid public-private bodies that set standards for frontier AI labs. Labs that opt in and comply receive protection from tort liability for customer misuse of their models. The result is a marketplace of governance structures, each competing to attract labs by offering rigorous standards and meaningful liability protection.
This isn't libertarian hand-waving. Ball is explicit that some transparency requirements are necessary - he supports modest standards requiring frontier labs to share documentation like model specifications. What he opposes is prescriptive regulation that constrains AI based on predicted use cases before those use cases have materialized into real harms. His "reasonable care" standard for liability is designed to address harms as they actually emerge rather than pre-emptively shaping the technology.
On Open Source and the Long Game
Ball's position on open-source AI is one of the most nuanced in the field - and deliberately uncomfortable for advocates on both sides. In the long run, he argues, government regulation of closed-source frontier models is actually good for open-weight AI: as proprietary models face more regulatory friction, open alternatives become relatively more attractive. In the short run, he's "fairly bearish" - particularly concerned about developing countries that might rely on open-source models when those models can't match the capabilities of the leading closed systems. He holds both positions simultaneously and doesn't apologize for the tension.
His intellectual influences are eclectic in the way that only a certain kind of serious reader achieves. Michael Oakeshott shows up regularly - the British philosopher's critique of rationalism in politics maps directly onto Ball's skepticism about prescriptive AI regulation. The combination of Oakeshottian conservatism with a genuine enthusiasm for technological acceleration puts Ball in a category of one in most DC policy rooms.