"The doctor who found racism hiding in a spreadsheet - and made Washington listen."
UC Berkeley professor. Emergency room physician. Machine learning researcher. The man who proved that healthcare algorithms weren't just wrong - they were wrong in a very specific direction.
In 2019, a team led by Ziad Obermeyer published a paper in Science that named no company, accused no one, and still managed to detonate a quiet bomb in the foundations of healthcare AI. The finding was deceptively simple: a commercial algorithm used to allocate care to millions of patients was systematically disadvantaging Black patients. Not because anyone programmed in race - in fact, race wasn't in the model at all. The bias was hiding in a proxy variable: healthcare costs. Since Black patients had historically received less care, they had lower costs. The algorithm read that as "healthier." It was wrong. And it was wrong in exactly one direction.
Fixing the model's objective would increase the share of Black patients receiving additional care from 17.7% to 46.5%. That's not a rounding error. That's medicine practicing two different standards of care simultaneously, invisibly, at scale.
AI will transform medicine and the health care system - for better or for worse, depending on how it is built and applied.
- Ziad Obermeyer, U.S. Senate Finance Committee, 2024

What's strange about Obermeyer's path to this insight is that it runs through a history library, not a biology lab. He graduated from Harvard College with a degree in History and Science - the kind of degree designed to ask how knowledge gets made, not just what it contains. He then went to Cambridge on a Frank Knox Fellowship to study the history and philosophy of science. He wanted to understand the scaffolding of scientific claims - their hidden assumptions, their social contexts, the invisible choices baked into their methods.
Then McKinsey. Three years, three countries: New Jersey, Geneva, Tokyo. Pharmaceutical clients, global health clients. The kind of work where you learn to see health systems as systems - with incentives and structures that shape what gets measured and what gets ignored.
Then Harvard Medical School. Then emergency medicine residency at Brigham and Women's and Massachusetts General Hospitals. Then a faculty position at Harvard. Then a chair at UC Berkeley, where he currently holds the Blue Cross of California Distinguished Professorship in Health Policy and Management at the School of Public Health.
The 2019 Science paper was remarkable not just for what it found, but for how it found it. Obermeyer and his co-authors - Brian Powers, Christine Vogeli, and Harvard economist Sendhil Mullainathan - didn't start with a hypothesis about bias. They started by looking at the algorithm's predictions against actual patient health, using objective measures that the model wasn't designed to optimize. When they compared the algorithm's risk scores to actual illness burden, the disparity was stark. At the same score, Black patients were sicker than white patients. The algorithm was systematically underestimating the health needs of Black patients.
The algorithm's racial bias wasn't intentional and race wasn't in the model. The bias emerged from using healthcare costs as a proxy for health needs - costs that reflected centuries of unequal access to care, not underlying health status.
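The mechanism is easy to demonstrate with a toy simulation. The sketch below uses entirely hypothetical numbers (the groups, distributions, and the `access` factor are invented for illustration, not taken from the paper): two groups have identical true illness burden, but one group's observed spending is deflated by reduced access to care. An algorithm that scores risk by cost will then assign the same score to patients with very different health needs - exactly the pattern the audit surfaced by comparing scores against actual illness burden.

```python
# Toy illustration of proxy bias: cost stands in for health need,
# but unequal access makes equal costs mean unequal sickness.
# All parameters here are hypothetical, chosen only to show the mechanism.
import random

random.seed(0)

def simulate(group, access):
    """Generate patients whose true burden is identically distributed,
    but whose observed cost is scaled by their access to care."""
    patients = []
    for _ in range(10_000):
        burden = random.gauss(5.0, 1.5)                 # underlying health need
        cost = burden * access + random.gauss(0, 0.3)   # observed spending
        patients.append({"group": group, "burden": burden, "cost": cost})
    return patients

# Group B historically receives less care per unit of need (access < 1.0).
pop = simulate("A", access=1.0) + simulate("B", access=0.7)

# The "algorithm" scores risk by cost. Audit it the way the researchers did:
# compare true burden among patients given the SAME risk score.
band = [p for p in pop if 4.5 <= p["cost"] <= 5.5]
mean = lambda xs: sum(xs) / len(xs)
burden_A = mean([p["burden"] for p in band if p["group"] == "A"])
burden_B = mean([p["burden"] for p in band if p["group"] == "B"])

print(f"Mean true burden at equal score: A={burden_A:.2f}, B={burden_B:.2f}")
# At the same score, group B patients are sicker - the proxy hid the disparity.
```

Note that race never enters the model; the bias arrives entirely through the label. Swapping the prediction target from cost to a direct measure of health need removes the distortion, which is the fix the paper proposed.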
The paper's policy impact was immediate and broad. It changed how organizations build algorithms. It shaped how lawmakers think about AI regulation. It ended up in Congressional testimony - twice, with Obermeyer himself in the witness chair. In February 2024, he testified before the U.S. Senate Finance Committee on AI in healthcare. In December 2025, he was back before the House Oversight Subcommittee, this time talking about technology's role in driving healthcare affordability - and pointing out that U.S. data regulations had gotten so restrictive that he'd had to conduct some of his research in Sweden.
Here is the thing about Ziad Obermeyer that is easy to lose in a bio full of titles: he still works in the emergency room. Despite being a full professor, running a research lab, co-founding two companies, advising at the Chan Zuckerberg Biohub, serving as a research associate at the National Bureau of Economic Research, and testifying in Congress - he still practices emergency medicine in underserved parts of the United States.
This is not purely symbolic. It's structural to how he thinks. The emergency room is where algorithms meet patients. It's where the gap between what a model predicts and what a person actually needs becomes visceral and immediate. For Obermeyer, the clinical work and the research work aren't separate tracks - they're the same investigation, run from different angles.
Throughout my ten years of practicing medicine, I have agonized over missed diagnoses, futile treatments, unnecessary tests and more.
- Ziad Obermeyer, Senate Testimony

His research program, described in lectures and presentations as "Bedside to Bench: Reinventing Medicine with AI," reflects this dual vantage point. He's not interested in AI that automates existing mediocre care more efficiently. He thinks there's a lack of ambition in how people apply AI today - that it tends to optimize existing patterns rather than discover new ones, which means it can optimize existing inequities just as efficiently as it optimizes good outcomes. He's after something harder: AI that reveals what good medicine actually looks like, independent of what has historically been practiced.
You cannot build medical AI without medical data. In 2020, Obermeyer co-founded Dandelion Health to try to solve that problem commercially. Dandelion partners with large U.S. health systems to access rich clinical data - ECG waveforms, sleep monitoring signals, digital pathology slides - and makes it available to AI developers at low or no cost. The bet is that accelerating healthcare AI development broadly creates more value than any single proprietary application.
In 2021, he went further with Nightingale Open Science, a non-profit initiative launched with $6 million in funding. Nightingale builds massive medical imaging datasets in partnership with health systems including Emory University and Brigham and Women's Hospital, and makes them available for research. A paper in Nature Medicine describes it as solving medicine's data bottleneck - the structural barrier that has kept healthcare AI research behind other fields.
There's a through-line in Obermeyer's career that his history of science background makes visible: he's always been interested in how the questions we ask shape the answers we get. The McKinsey years showed him how incentives shape what health systems measure. Medical school showed him what happens to patients when those measures are wrong. His research has been, systematically, an attempt to surface and correct those gaps.
The racial bias paper is the most famous example, but it's part of a larger pattern. His lab uses machine learning not primarily to predict - the standard AI-in-medicine frame - but to investigate. To ask: what are we measuring, why, and who is harmed when we measure it wrong?
He's done this with pain management, with ECG analysis, with ICU prognostication, with the social determinants of health. Each project follows roughly the same logic: take a clinical algorithm or proxy variable, subject it to the kind of rigorous scrutiny a historian of science would apply to a primary source, and see what you find. What you find, often, is that the algorithm is doing something different from what it claims to be doing.
In presentations at the Stockholm School of Economics in November 2025, at the USC Schaeffer Center in September 2025, and at dozens of conferences before those, the message has been consistent: AI in medicine will be transformative, but only if we're honest about what we're building and who it's built to serve. The technology isn't neutral. The data isn't neutral. The question of who benefits is always there, whether or not you're asking it.
That's the thing that stays with you after reading about Ziad Obermeyer - not the awards, not the congressional testimony, not even the landmark paper. It's that he's been asking that question in every room he's entered, from Cambridge seminars to Senate committee rooms to ERs in underserved America. The history student who wanted to understand how science gets made ended up remaking a small but important corner of it.
The algorithm didn't include race as a variable. The bias emerged from using healthcare costs as a proxy for health needs - but costs reflected historic disparities in access, not actual health. Black patients with the same health burden were consistently scored as "lower risk" and diverted away from additional care. Co-authored with Brian Powers, Christine Vogeli, and Sendhil Mullainathan.
A venture-backed AI innovation platform that partners with large U.S. health systems to access rich clinical data - ECG waveforms, sleep monitoring, digital pathology slides - and makes it available to AI developers at low or no cost. The goal: accelerate healthcare AI by breaking the data bottleneck without forcing every company to build its own proprietary data access deals.
A non-profit medical imaging data initiative launched with $6 million and health system partners including Emory University and Brigham and Women's Hospital. Nightingale builds massive open datasets for research use, featured in Nature Medicine as a solution to medicine's data bottleneck. Named after Florence Nightingale's own pioneering use of data to reform healthcare.
His undergraduate and graduate degrees were in the history and philosophy of science - he was trained to ask how knowledge is produced and what assumptions it encodes. That lens, applied to medical algorithms, is what made the racial bias paper possible. He wasn't looking for bias in a dataset; he was looking for what the algorithm was actually measuring versus what it claimed to measure.
Despite a career that includes Harvard faculty positions, two venture-backed companies, Congressional testimony, and a named professorship at Berkeley, Obermeyer still practices emergency medicine in underserved communities. The clinical work isn't a credential - it's his primary source of data about the gap between what algorithms predict and what patients actually experience.
He's genuinely skeptical of how AI is being applied in medicine - "there is a certain lack of ambition" is a damning critique from someone in his position. But rather than stopping at criticism, he's built the infrastructure to do it right: open datasets, accessible data platforms, and frameworks for evaluating what algorithms actually optimize versus what they claim to.