There is a recursive joke buried inside everything Azalia Mirhoseini does. She uses AI to design chips. Those chips train better AI. That better AI designs even better chips. She named her company Ricursive Intelligence and did not explain the pun, because the pun is the point. This is not a metaphor. This is her actual business model, her research agenda, and apparently her life philosophy.
Start with what she built. AlphaChip is a deep reinforcement learning system that Mirhoseini and her team at Google Brain trained to solve a problem that had stumped engineers for decades. Chip floorplanning: the act of deciding where, on a silicon die the size of a fingernail, to place blocks containing millions of logic components so that signals travel fast, heat dissipates cleanly, and power consumption stays manageable. Human engineers spend months on a single chip layout. AlphaChip does it in hours. Not worse than humans. Better than humans. The work was published in Nature in 2021 and is now used in production across Alphabet - including in the Tensor Processing Units (TPUs) that train the AI models the world now depends on.
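To make the objective concrete: placers typically score a layout by half-perimeter wirelength (HPWL), the summed bounding-box size of each net of connected blocks. The toy sketch below is illustrative only - the block names, grid, and greedy rule are assumptions, not AlphaChip's method, which replaces the myopic greedy choice with a learned policy placing components sequentially:

```python
def hpwl(nets, positions):
    """Half-perimeter wirelength: a standard proxy for routed wire cost.
    Each net is a set of block names; its cost is the half-perimeter of
    the bounding box around the blocks it connects."""
    total = 0
    for net in nets:
        xs = [positions[b][0] for b in net]
        ys = [positions[b][1] for b in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def greedy_place(blocks, nets, grid=4):
    """Place blocks one at a time, each on the free grid cell that
    minimizes HPWL over the nets placed so far."""
    free = {(x, y) for x in range(grid) for y in range(grid)}
    positions = {}
    for b in blocks:
        def cost_at(cell, b=b):
            trial = {**positions, b: cell}
            placed = [n for n in nets if all(v in trial for v in n)]
            return hpwl(placed, trial)
        best = min(sorted(free), key=cost_at)  # sorted for determinism
        positions[b] = best
        free.remove(best)
    return positions

# Hypothetical four-block netlist on a 3x3 grid.
blocks = ["cpu", "cache", "dma", "io"]
nets = [{"cpu", "cache"}, {"cpu", "dma"}, {"dma", "io"}]
layout = greedy_place(blocks, nets, grid=3)
```

The greedy rule is exactly the kind of heuristic that breaks down at real scale, which is why casting sequential placement as a reinforcement learning problem was the key move.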
That is the recursive loop made physical. The AI Mirhoseini built is now building the hardware that trains the AI. She closed the circle.
It's time to use machine learning and AI to develop better computers and close the loop.
- Azalia Mirhoseini
But AlphaChip is only half the story of her technical contribution to AI as it exists today. In 2017, while a researcher at Google Brain, Mirhoseini co-authored a paper that most people have never heard of but that now underpins virtually every large language model worth running. "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer" introduced MoE - an architecture that activates only a fraction of a neural network's parameters for any given input, allowing models to grow enormous without growing proportionally slower or more expensive. Gemini uses it. GPT-4 is widely reported to use it, and MoE variants now sit inside most frontier models. The paper has 5,433 citations and counting. It is, quietly, one of the foundational papers of the generative AI era.
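The core mechanism is small enough to sketch. In this minimal NumPy illustration - the shapes, linear "experts," and top-2 selection are assumptions for brevity; the 2017 paper uses feed-forward expert networks with a noisy top-k gate - a gating network scores all experts, but only the selected ones actually run:

```python
import numpy as np

def top_k_gating(x, w_gate, k=2):
    """Softmax gating restricted to the k highest-scoring experts.
    Returns (weights, indices): mixture weights over the k selected
    experts, and which experts were selected."""
    logits = x @ w_gate                  # one score per expert
    idx = np.argsort(logits)[-k:]        # indices of the top-k experts
    top = logits[idx]
    weights = np.exp(top - top.max())
    weights /= weights.sum()             # softmax over the k winners
    return weights, idx

def moe_layer(x, w_gate, experts, k=2):
    """Sparsely-gated MoE: only the k selected experts are evaluated,
    so compute per token stays fixed as the expert count grows."""
    weights, idx = top_k_gating(x, w_gate, k)
    return sum(w * experts[i](x) for w, i in zip(weights, idx))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
w_gate = rng.normal(size=(d, n_experts))
# Each "expert" here is a tiny linear map; real experts are MLPs.
expert_mats = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, W=W: x @ W for W in expert_mats]

x = rng.normal(size=d)
y = moe_layer(x, w_gate, experts, k=2)
```

The point of the sparsity is visible in `moe_layer`: adding more experts grows the model's capacity without growing the work done for any single input.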
Two landmark contributions to AI - one on the hardware side, one on the software side. Most researchers would spend a career chasing one of these. She published both before turning 40, from positions at Google Brain, Anthropic, and Google DeepMind.
The career before the company tells you everything about the company. Mirhoseini studied electrical engineering at Sharif University of Technology in Tehran - the most selective engineering program in Iran - before earning her PhD in electrical and computer engineering at Rice University in Houston, Texas. Her doctoral thesis won the Best ECE Thesis Award in 2015. She is not someone who coasted into elite institutions; she earned her way into every room she's ever occupied.
At Google Brain from 2017, she co-founded the ML for Systems team - a research group dedicated to using machine learning to optimize the infrastructure of machine learning itself. This is a very particular kind of ambition: not just building AI tools for end users, but pointing AI at the problem of AI's own inefficiencies. The floorplanning work came out of that team. So did the systems-level orientation that would define the next decade of her research.
The Anthropic chapter (2021-2024) produced yet another heavily cited paper. Her co-authorship on "Constitutional AI: Harmlessness from AI Feedback" helped establish central ideas in how AI safety researchers now approach aligning language models. Over 3,440 citations. The ability to move between chip design, neural architectures, and AI safety - and to produce landmark work in each domain - is not typical. It is, arguably, the defining characteristic of her career.
The Stanford Chapter - and the Company That Followed
In 2024, Mirhoseini joined Stanford University as an Assistant Professor of Computer Science, founding the Scaling Intelligence Lab. The lab's research agenda - developing scalable, self-improving AI systems toward AGI - is not shy about what it's aiming at. She teaches courses called "Systems for Machine Learning" and "Self-Improving AI Agents." The latter title is not a marketing phrase. It is a research direction.
Then, in December 2025, she and longtime collaborator Anna Goldie co-founded Ricursive Intelligence. The partnership had been building for years: the two met at Stanford, worked together at Google Brain, crossed paths again at Anthropic, and apparently decided that building the future required starting a company rather than publishing more papers about it. Sequoia Capital led the $35 million seed round at a $750 million valuation. By January 2026 - six weeks later - Ricursive had closed a $300 million Series A at a $4 billion valuation. In a matter of weeks, the company had raised $335 million and become one of the fastest-growing startups in recent Silicon Valley memory.
The mission is audacious in its clarity: compress chip design cycles from years to weeks using AI, and create a recursive self-improvement loop where AI-designed silicon enables more powerful AI training, which enables better chip design. It is AlphaChip, scaled into a company, aimed at the entire semiconductor industry rather than just Google's internal TPU roadmap.
What Mirhoseini and Goldie are describing - and now building with $335 million in capital - is not incremental. It is a structural bet that the current AI scaling paradigm is bottlenecked by hardware design timelines, and that whoever solves hardware-AI co-evolution fastest will have an enormous advantage in the race toward more capable AI systems. Whether you find that prospect thrilling or sobering, the bet is serious and the funding is real.
The Details That Matter
Here is something worth noting: most of her landmark contributions came from collaborative, long-term research partnerships. AlphaChip was a team project at Google Brain. The MoE paper was co-authored with some of the field's best researchers. Ricursive Intelligence was co-founded with Anna Goldie. Mirhoseini's research output is staggering by any measure - 20,000+ citations, an H-index of 42 in her mid-thirties - but she is not the kind of scientist who builds a moat around solo credit. The recursive theme again: her collaborative approach to research mirrors the feedback loops she builds into her systems.
In 2019, MIT Technology Review named her to their Innovators Under 35 list. In 2025, she received both the Okawa Research Grant Award and Google's inaugural ML and Systems Junior Faculty Award. She has spoken at NeurIPS, Cornell Tech, and a string of industry workshops. She remains an active researcher while simultaneously running a $4 billion startup and teaching at Stanford - which is either inspiring or exhausting, depending on your perspective.
She grew up in Iran, trained at one of the country's most rigorous engineering programs, built a career across three of the most consequential AI research institutions on earth, and is now closing the recursive loop she saw from the beginning: machines that make machines that make better machines. The story is not finished. In fact, by the timeline, it is only just starting.