Mihail Eric - MLOps Engineer, Stanford Lecturer, and Head of AI
Stanford Lecturer / ML Engineer / Builder

Mihail Eric

The person who said MLOps is a mess - and then spent a decade fixing it anyway.

MLOps  |  Stanford Faculty  |  Newsletter Author  |  Serial Founder

Palo Alto, CA  |  Mihail Eric, ML engineer, Stanford adjunct lecturer, and author of ML Ops Notes

17K+ Newsletter Subscribers
2,431 Research Citations
3 Companies Founded
10+ Years in Production ML

The Builder Who Actually Ships

There is a version of an ML career that stays safely inside academia, accumulating citations. There is another version that stays safely inside big tech, accumulating stock grants. Mihail Eric chose a third path: build real systems, found real companies, teach real engineers, and write about the gap between all three - honestly, publicly, and at scale.

Today, Mihail serves as Head of AI at Monaco GTM, a stealth-mode startup building AI-powered revenue infrastructure for go-to-market teams. He is simultaneously an Adjunct Lecturer at Stanford University, where he created CS146S: "The Modern Software Developer" - a course that asks the question every computer science department is quietly nervous about: what does software engineering actually look like when large language models write the code?

The newsletter that got him there - ML Ops Notes - now lands in the inboxes of more than 17,000 practitioners each week. Not practitioners who want to hear about AI in theory. Practitioners who need to know what breaks in production, what tooling actually holds up under load, and what the gap between a research paper and a shipping product really looks like from the inside.

MLOps is a mess but that's to be expected.

- Mihail Eric, in a post that resonated because it was simply true

Stanford, Alexa, and the Real World

Mihail's academic foundation is serious by any measure. He earned both his undergraduate and master's degrees in Computer Science from Stanford, spending years embedded in the Stanford NLP Group under three of the field's most influential researchers: Christopher Manning, Percy Liang, and Christopher Potts. He published at ACL, AAAI, and NeurIPS. He created the DialoGLUE benchmark, which became a standard tool for evaluating dialogue understanding systems.

But what separates Mihail from the pure academic track is what came next. He joined Amazon Alexa's Conversational Modeling team as a Senior ML Scientist and became a founding member of Alexa's first special projects team - the group tasked with figuring out what large language models could actually do inside a consumer product at Amazon's scale. He built some of Alexa's earliest LLMs. He saw what worked and what spectacularly did not.

He also wrote about it. His post "How Alexa Dropped the Ball on Being the Top Conversational System on the Planet" is the kind of insider critique that only someone with real credentials and real candor can produce. Not a hot take from the outside. A considered diagnosis from someone who was in the room.

Three Companies. One Acquisition. One YC Batch.

Mihail is not the kind of person who builds one company, sells it, and retires. He has founded three distinct ventures, each targeting a different part of the ML ecosystem, and each revealing something about what he thinks is actually missing.

Pametan Data Innovation came first - an ML consultancy that helped organizations across industries turn data strategy into deployed systems. The kind of work that teaches you exactly where production ML falls apart, because you are the one cleaning it up.

Then came Confetti AI, an ML interview prep and education platform that was acquired by Towards AI in 2022. Mihail later wrote about the experience in a piece titled "Honey, I Sold My First Bootstrapped SaaS Company" - an essay that is refreshingly honest about what that process actually feels like, for both the person selling and the person learning from it.

Most recently, he co-founded Storia AI, a Y Combinator-backed startup building AI-powered tools for image and video generation. That one went through YC - which means a different level of scrutiny, a different speed, and a different set of problems to solve.

Behind the Scenes

In one corner of the internet, Mihail published a post titled "Claude Code Demystified: Whirring, Skidaddling, Flibbertigibetting." The headline choice alone tells you something: this is an engineer who takes the work seriously and the posturing less so. He has built LLMs for Amazon Alexa, published NLP research at top conferences, and still has enough fun with language to title an explainer piece like it is a fever dream. That combination - technical depth and genuine playfulness - is rarer than the industry likes to admit.

The Course Nobody Else Would Build

CS146S: "The Modern Software Developer" is the first Stanford course of its kind - dedicated entirely to how software engineering is changing because of coding LLMs. Mihail built it, pitched it to Stanford, and now teaches it to students who will graduate into an industry that has already been restructured by the tools they are learning to use.

He also took the curriculum public. Through Maven, he runs "AI Software Development: From First Prompt to Production Code" - a course designed for working engineers who did not have access to a Stanford classroom but still need to understand what building with LLMs actually requires. The newsletter became the proof of concept. The course became the product. The audience is now north of 17,000 readers and climbing.

His earlier stint as a teaching assistant at Stanford - covering CS106A, CS106B, CS224N (Natural Language Processing with Deep Learning), and CS109 - suggests this is not a recent pivot to educator mode. Teaching has always been part of how Mihail thinks about the work.

The gap between ML research and production is where companies win or lose.

- Mihail Eric

Citations, Benchmarks, and the NLP Foundation

Before there were newsletters and Stanford courses, there was peer-reviewed research. Mihail's academic work concentrated on task-oriented dialogue systems, knowledge-grounded conversation, and intent classification - the plumbing underneath every voice assistant and chatbot that was supposed to understand what you actually meant.

His contribution to DialoGLUE - a benchmark for dialogue understanding - gave the research community a shared standard for measuring progress on one of NLP's genuinely hard problems. With 2,431+ citations on Google Scholar, the work has reached the kind of scale where other researchers build on it without always knowing who built the foundation.

His papers at ACL, AAAI, and NeurIPS represent the top tier of ML publishing. The Amazon Science author page fills in the middle chapter: someone who carried that research credibility into a production environment and had to figure out what of it actually transferred.

Closing the Gap

The through-line across Mihail's career - research, Amazon, consultancy, two startups, a newsletter, and a Stanford course - is a persistent focus on the distance between what AI can do in a paper and what it can do in a product. That gap is where most ML projects fail. It is also where most ML practitioners feel least prepared.

His writing, teaching, and building are all aimed at the same problem: giving engineers the practical knowledge to close that distance. Not the theory. The actual techniques for deploying models that do not collapse under production load, the tooling choices that do not become technical debt, and the organizational patterns that let ML teams ship without constant firefighting.

The MLOps Community podcast - which Mihail co-hosts, regularly bringing in senior ML practitioners from across the industry - is another surface for that mission. Episode 200, "Founding, Funding, and the Future of MLOps," featured Mihail reflecting on what the field has learned and what it is still getting wrong.

That now-familiar line - "MLOps is a mess but that's to be expected" - is not pessimism. It is the kind of thing you say when you have built enough systems to know that complexity is not a bug in the process - it is the nature of shipping ML in the real world. And you say it publicly because someone has to.

The Highlight Reel

LLM

Built some of Amazon Alexa's earliest large language models as a founding member of Alexa's special projects team

ACQ

Founded and sold Confetti AI to Towards AI in 2022 - an ML education platform he bootstrapped from zero

YC

Co-founded Storia AI through Y Combinator - AI-powered tools for creative image and video generation

17K

Grew ML Ops Notes newsletter to 17,000+ subscribers - one of the leading voices in production AI education

SU

Created and teaches CS146S at Stanford University - the first Stanford course on LLM-driven software development

ACL

Published research at ACL, AAAI, and NeurIPS; 2,431+ Google Scholar citations; co-created the DialoGLUE benchmark

The Path So Far

2014
Enrolled at Stanford University; joined the Stanford NLP Group under Christopher Manning, Percy Liang, and Christopher Potts
2017
Served as Teaching Assistant at Stanford for CS106A, CS106B, CS224N (NLP with Deep Learning), and CS109
2018
Helped build out teams at RideOS, an autonomous vehicle startup; published NLP research at top conferences
2019
Joined Amazon Alexa as Senior ML Scientist; became founding member of Alexa's first special projects team; built some of Alexa's earliest LLMs
2020
Founded Pametan Data Innovation, an ML consultancy helping organizations deploy data-driven solutions
2021
Founded Confetti AI, an ML interview prep and education platform
2022
Confetti AI acquired by Towards AI; co-founded Storia AI (Y Combinator); launched ML Ops Notes newsletter
2023
Became co-host of the MLOps Community Podcast; newsletter crosses early growth milestones
2024
Joined Monaco GTM as Head of AI; newsletter reaches 17,000+ subscribers; appeared on MLOps Podcast episode 200
2025
Launched CS146S "The Modern Software Developer" at Stanford; released public course on Maven for working engineers
5 Things That Make Mihail Eric, Mihail Eric
01

He studied NLP at Stanford under three of the field's most cited researchers simultaneously: Christopher Manning, Percy Liang, and Christopher Potts. Most people get one mentor. He got the panel.

02

His GitHub username is "mihail911" - a leftover from his early coding days that has stuck through Amazon, multiple startups, and a Stanford teaching appointment.

03

He wrote a public post titled "Claude Code Demystified: Whirring, Skidaddling, Flibbertigibetting." A man who names things that way cannot be accused of taking himself too seriously.

04

He co-created DialoGLUE - a benchmark that shaped how researchers measure dialogue understanding. His work has drawn 2,431+ citations from researchers building on top of it.

05

Right now, Mihail is simultaneously: Head of AI at a stealth startup, adjunct lecturer at Stanford, newsletter author for 17,000+ readers, and podcast co-host. The calendar must be a work of art.