AI Researcher • Chip Designer • $4B Founder

Azalia Mirhoseini

"Close the loop. Let machines build the machines."

She built AlphaChip - the reinforcement learning system that now designs Google's most powerful processors. Then she co-invented the architecture inside GPT, Claude, and Gemini. Then she founded a company worth $4 billion. She's still in her mid-thirties.

AlphaChip • Ricursive Intelligence • Stanford Faculty • Google DeepMind • MoE Architect
20K+ Research Citations
42 H-Index
$4B Startup Valuation
$335M Funds Raised
4+ TPU Generations Designed
76 High-Impact Papers

The Woman Who Taught Machines to Build Machines

There is a recursive joke buried inside everything Azalia Mirhoseini does. She uses AI to design chips. Those chips train better AI. That better AI designs even better chips. She named her company Ricursive Intelligence and did not explain the pun, because the pun is the point. This is not a metaphor. This is her actual business model, her research agenda, and apparently her life philosophy.

Start with what she built. AlphaChip is a deep reinforcement learning system that Mirhoseini and her team at Google Brain trained to solve a problem that had stumped engineers for decades: chip floorplanning, the act of deciding where, on a silicon die the size of a fingernail, to place millions of logic components so that signals travel fast, heat dissipates cleanly, and power consumption stays manageable. Human engineers spend months on a single chip layout. AlphaChip does it in hours. Not worse than humans. Better than humans. The work was published in Nature in 2021 and is now used in production across Alphabet - including the Tensor Processing Units (TPUs) that train the AI models the world now depends on.
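To make "place components so that signals travel fast" concrete, here is a toy version of the kind of cost a placement system scores: half-perimeter wirelength (HPWL), a standard proxy for total wire cost. The function and block names below are illustrative sketches, not AlphaChip's implementation, whose reward also accounts for factors like congestion and density.

```python
def hpwl(placement, nets):
    """Half-perimeter wirelength: a standard proxy cost in chip placement.

    placement: dict mapping component name -> (x, y) grid position
    nets: list of tuples of component names that must be wired together
    For each net, cost is the half-perimeter of the bounding box around
    its components. Lower total HPWL roughly means shorter, faster wires.
    """
    total = 0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Two candidate placements of three blocks joined by a single net:
nets = [("alu", "cache", "io")]
spread = {"alu": (0, 0), "cache": (9, 0), "io": (0, 9)}
tight = {"alu": (0, 0), "cache": (1, 0), "io": (0, 1)}
print(hpwl(spread, nets), hpwl(tight, nets))  # 18 2
```

A reinforcement learning agent places one component at a time and is rewarded for layouts that drive proxies like this one down, instead of a human iterating on the layout by hand for months.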

That is the recursive loop made physical. The AI Mirhoseini built is now building the hardware that trains the AI. She closed the circle.

It's time to use machine learning and AI to develop better computers and close the loop.

- Azalia Mirhoseini

But AlphaChip is only half the story of her technical contribution to AI as it exists today. In 2017, while a researcher at Google Brain, Mirhoseini co-authored a paper that most people have never heard of but that now underpins virtually every large language model worth running. "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer" introduced MoE - an architecture that activates only a fraction of a neural network's parameters for any given input, allowing models to grow enormous without growing proportionally slower or more expensive. GPT-4 is widely reported to use it. So are Claude and Gemini. The paper has 5,433 citations and counting. It is, quietly, one of the foundational papers of the generative AI era.
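The core trick - score all experts, run only the best few - fits in a few lines. The sketch below is a toy top-k gating routine in NumPy; the names, shapes, and "experts" are illustrative, not the paper's implementation (which adds noise, load balancing, and learned experts).

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_moe(x, experts, gate_w, k=2):
    """Route input x through only the top-k of the available experts.

    A gating network scores every expert, but only the k highest-scoring
    ones are actually evaluated - so compute cost grows with k, not with
    the total number of experts, which is the point of sparse MoE.
    """
    scores = x @ gate_w                      # one gating score per expert
    top_k = np.argsort(scores)[-k:]          # indices of the k best experts
    # Softmax over the selected scores only; unselected experts get weight 0
    w = np.exp(scores[top_k] - scores[top_k].max())
    w /= w.sum()
    # Combine the chosen experts' outputs, weighted by the gate
    return sum(wi * experts[i](x) for wi, i in zip(w, top_k))

d, n_experts = 8, 16
# Each "expert" here is just a random linear map, for illustration
mats = [rng.standard_normal((d, d)) for _ in range(n_experts)]
experts = [lambda x, m=m: x @ m for m in mats]
gate_w = rng.standard_normal((d, n_experts))

y = sparse_moe(rng.standard_normal(d), experts, gate_w, k=2)
print(y.shape)  # (8,)
```

With 16 experts and k=2, only an eighth of the expert parameters do work on any given input - which is why a model's parameter count can balloon while its per-token cost stays roughly flat.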

Two landmark contributions to AI - one on the hardware side, one on the software side. Most researchers would spend a career chasing one of these. She published both before turning 40, during her years at Google Brain; the Anthropic and Google DeepMind chapters were still to come.

To advance the state of the art in AI, we must operate at the Pareto frontier of intelligence and computational efficiency. Ricursive is building toward a future where rapid AI and hardware co-evolution becomes reality.

- Azalia Mirhoseini, on founding Ricursive Intelligence

The career before the company tells you everything about the company. Mirhoseini studied electrical engineering at Sharif University of Technology in Tehran - the most selective engineering program in Iran - before earning her PhD in electrical and computer engineering at Rice University in Houston, Texas. Her doctoral thesis won the Best ECE Thesis Award in 2015. She is not someone who coasted into elite institutions; she earned her way into every room she's ever occupied.

At Google Brain from 2017, she co-founded the ML for Systems team - a research group dedicated to using machine learning to optimize the infrastructure of machine learning itself. This is a very particular kind of ambition: not just building AI tools for end users, but pointing AI at the problem of AI's own inefficiencies. The floorplanning work came out of that team, as did the systems-level thinking that would define the next decade of her research.

The Anthropic chapter (2021-2024) produced yet another heavily cited paper. Her co-authorship of "Constitutional AI: Harmlessness from AI Feedback" helped shape how AI safety researchers now think about aligning language models. Over 3,440 citations. The ability to move between chip design, neural architectures, and AI safety - and to produce landmark work in each domain - is not typical. It is, arguably, the defining characteristic of her career.

The Stanford Chapter - and the Company That Followed

In 2024, Mirhoseini joined Stanford University as an Assistant Professor of Computer Science, founding the Scaling Intelligence Lab. The lab's research agenda - developing scalable, self-improving AI systems toward AGI - is not shy about what it's aiming at. She teaches courses called "Systems for Machine Learning" and "Self-Improving AI Agents." The latter title is not a marketing phrase. It is a research direction.

Then, in December 2025, she and longtime collaborator Anna Goldie co-founded Ricursive Intelligence. The partnership had been building for years: the two met at Stanford, worked together at Google Brain, crossed paths again at Anthropic, and apparently decided that building the future required starting a company rather than publishing more papers about it. Sequoia Capital led a $35 million seed at a $750 million valuation. By January 2026 - six weeks later - Ricursive had closed a $300 million Series A at a $4 billion valuation. In four months of existence, the company raised $335 million and became one of the fastest-growing startups in recent Silicon Valley memory.

The mission is audacious in its clarity: compress chip design cycles from years to weeks using AI, and create a recursive self-improvement loop where AI-designed silicon enables more powerful AI training, which enables better chip design. It is AlphaChip, scaled into a company, aimed at the entire semiconductor industry rather than just Google's internal TPU roadmap.

What Mirhoseini and Goldie are describing - and now building with $335 million in capital - is not incremental. It is a structural bet that the current AI scaling paradigm is bottlenecked by hardware design timelines, and that whoever solves hardware-AI co-evolution fastest will have an enormous advantage in the race toward more capable AI systems. Whether you find that prospect thrilling or sobering, the bet is serious and the funding is real.

The Details That Matter

Here is something worth noting: most of her landmark contributions came from collaborative, long-term research partnerships. AlphaChip was a team project at Google Brain. The MoE paper was co-authored with some of the field's best researchers. Ricursive Intelligence was co-founded with Anna Goldie. Mirhoseini's research output is staggering by any measure - 20,000+ citations, an H-index of 42 in her mid-thirties - but she is not the kind of scientist who builds a moat around solo credit. The recursive theme again: her collaborative approach to research mirrors the feedback loops she builds into her systems.

In 2019, MIT Technology Review named her to their Innovators Under 35 list. In 2025, she received both the Okawa Research Grant Award and Google's inaugural ML and Systems Junior Faculty Award. She has spoken at NeurIPS, Cornell Tech, and a string of industry workshops. She remains an active researcher even while running a $4 billion startup and teaching at Stanford simultaneously - which is either inspiring or exhausting, depending on your perspective.

She grew up in Iran, trained at one of the country's most rigorous engineering programs, built a career across three of the most consequential AI research institutions on earth, and is now closing the recursive loop she saw from the beginning: machines that make machines that make better machines. The story is not finished. In fact, by the timeline, it is only just starting.

Quick Facts
Origin: Iran
Current Role: Founder & CTO, Ricursive Intelligence; Asst. Prof., Stanford CS
Previous: Google Brain, Anthropic, Google DeepMind
PhD: Rice University (2015), Best ECE Thesis Award
BSc: Sharif University of Technology, Iran
Citations: 20,000+ | H-Index: 42
Valuation: $4B (Ricursive Intelligence, Jan 2026)
Education
Rice University
PhD, Electrical & Computer Engineering (~2010-2015)
Best ECE Thesis Award
Sharif University of Technology
BSc, Electrical Engineering (Tehran, Iran)
Awards & Recognition
2025
Okawa Research Grant Award
2025
Google ML & Systems Junior Faculty Award (inaugural cohort)
2021
Nature publication: AlphaChip chip design paper
2019
MIT Technology Review Innovators Under 35
2015
Best ECE Thesis Award, Rice University
Latest Updates
FEB 2026
TechCrunch covers how Ricursive raised $335M in 4 months
JAN 2026
Ricursive Intelligence raises $300M Series A at $4B valuation
DEC 2025
Co-founds Ricursive Intelligence; $35M seed led by Sequoia at $750M
DEC 2025
Speaks at NeurIPS 2025 on Test-Time Scaling
2024
Joins Stanford as Asst. Prof.; founds Scaling Intelligence Lab
Traits
Systems Thinker • Collaborative • Cross-Disciplinary • Prolific Researcher • Practical Visionary • Recursive Problem Solver
"I'm very excited to share that I've started as an Assistant Professor of Computer Science at Stanford University! My lab will focus on self-improving AI methodologies and systems." - Azalia Mirhoseini

What She Actually Built

🧠

AlphaChip

Deep reinforcement learning system for chip floorplanning. Compresses months of expert engineering work into hours. Now used in production for 4+ generations of Google TPUs and Alphabet data center CPUs.

Mixture of Experts

Co-invented the sparsely-gated MoE architecture in 2017. With 5,433+ citations, it became the dominant design pattern inside GPT-4, Claude, Gemini, and virtually every large-scale language model.

💰

$335M Raised in 4 Months

Co-founded Ricursive Intelligence in December 2025. Seed of $35M at $750M valuation (Sequoia), Series A of $300M at $4B valuation - one of the fastest funding trajectories in recent startup history.

📊

20,000+ Citations

Total research citations exceeding 20,000. H-index of 42. Over 76 publications with 10+ citations. Multiple papers in the top tier of their fields, spanning chip design, neural architecture, and AI safety.

🎓

Stanford Faculty

Assistant Professor of Computer Science, director of the Scaling Intelligence Lab. Teaching "Self-Improving AI Agents" while simultaneously running a $4 billion company. Multitasking at a different level entirely.

🌟

MIT Innovators Under 35

Named to MIT Technology Review's Innovators Under 35 in 2019. Also: Okawa Research Grant Award (2025), Google ML and Systems Junior Faculty Award (2025 inaugural cohort), Nature publication (2021).

The Career Arc

2015
PhD from Rice University. Best ECE Thesis Award. Enters the research world on her own terms.
2017
Joins Google Brain. Co-founds ML for Systems team. Co-authors the Mixture-of-Experts paper that will shape the next decade of AI.
2019
MIT Technology Review names her to Innovators Under 35. The recognition catches up to the work.
2021
AlphaChip published in Nature. AI-designed chip layouts go into production at Google. The loop closes for the first time.
2021-2024
Anthropic and Google DeepMind. Works on Claude, Constitutional AI, and Gemini. Adds 3,440+ citations to her name from the safety and frontier model research.
2024
Stanford faculty appointment. Founds the Scaling Intelligence Lab. Starts teaching "Self-Improving AI Agents."
Dec 2025
Co-founds Ricursive Intelligence with Anna Goldie. $35M seed. $750M valuation. Sequoia leads.
Jan 2026
$300M Series A. $4B valuation. $335M total in four months. The recursive loop becomes a company worth betting on.

The Details Worth Knowing

The Lockstep Partnership
Her co-founder Anna Goldie is not a chance encounter. The two met at Stanford, worked together at Google Brain building AlphaChip, crossed paths again at Anthropic, and reunited to found Ricursive Intelligence. Their collaboration has been, as described publicly, "in lockstep" for years. Ricursive is what happens when that much accumulated shared context finally has a budget.
AI Designing Its Own Infrastructure
AlphaChip was deployed to design the TPUs that train the AI models. So the same category of AI system that AlphaChip helped enable - large-scale neural networks - is now trained on chips that AI designed. This is not a thought experiment. It is the current state of affairs inside Alphabet's hardware pipeline.
The MoE Effect
When Mirhoseini co-authored the Mixture-of-Experts paper in 2017, GPT-3 did not exist. By the time GPT-4 arrived, MoE was baked into the architecture. Her 2017 insight became infrastructure before the world knew it needed infrastructure. The paper has 5,433 citations because the entire field eventually caught up.
Speed of Capital
Ricursive Intelligence was valued at $750M at seed in December 2025. Six weeks later the Series A valued it at $4B. The $3.25B valuation increase in six weeks reflects both investor confidence in the team and the moment: hardware-AI co-design is precisely where the bottleneck in AI capability growth currently sits.

Things That Sound Made Up But Aren't

Her company is literally named "Ricursive" - a nod to the recursive self-improvement loop at the core of her research. The name is not a typo. It is the thesis statement as a brand.

The TPU chips she helped design using AI (via AlphaChip) are the same chips currently used to train AI models like Gemini - making her a key enabler of both the hardware and the software in the AI stack simultaneously.

At an H-index of 42 in her mid-thirties, she ranks among the most impactful young researchers in any scientific field - not just AI. An H-index of 42 means 42 of her papers have each been cited at least 42 times.
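That definition is mechanical enough to compute directly. A minimal sketch (citation counts below are made up for illustration):

```python
def h_index(citations):
    """H-index: the largest h such that h papers each have >= h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:  # this paper still "supports" an h of `rank`
            h = rank
        else:
            break
    return h

# A researcher with these per-paper citation counts has an h-index of 3:
# three papers have at least 3 citations each, but not four with 4+.
print(h_index([10, 8, 5, 2, 1]))  # 3
```

An h-index of 42 therefore requires 42 separate papers that each crossed 42 citations - it cannot be inflated by one runaway hit.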

She went from Iranian university student to Stanford professor and $4 billion startup founder in roughly 15 years. That timeline is not normal. Even in Silicon Valley, that timeline is not normal.

Her Stanford lab teaches a course called "Self-Improving AI Agents." This is not a metaphor for good study habits. She is literally teaching AI systems to improve their own capabilities as a formal course curriculum.