The Builder Who Actually Ships
There is a version of an ML career that stays safely inside academia, accumulating citations. There is another version that stays safely inside big tech, accumulating stock grants. Mihail Eric chose a third path: build real systems, found real companies, teach real engineers, and write about the gap between all three - honestly, publicly, and at scale.
Today, Mihail serves as Head of AI at Monaco GTM, a stealth-mode startup building AI-powered revenue infrastructure for go-to-market teams. He is simultaneously an Adjunct Lecturer at Stanford University, where he created CS146S: "The Modern Software Developer" - a course that asks the question every computer science department is quietly nervous about: what does software engineering actually look like when large language models write the code?
The newsletter that got him there - ML Ops Notes - now lands in the inboxes of more than 17,000 practitioners each week. Not practitioners who want to hear about AI in theory. Practitioners who need to know what breaks in production, what tooling actually holds up under load, and what the gap between a research paper and a shipping product really looks like from the inside.
MLOps is a mess but that's to be expected.
- Mihail Eric, in a post that resonated because it was simply true
Stanford, Alexa, and the Real World
Mihail's academic foundation is serious by any measure. He earned both his undergraduate and master's degrees in Computer Science from Stanford, spending years embedded in the Stanford NLP Group under three of the field's most influential researchers: Christopher Manning, Percy Liang, and Christopher Potts. He published at ACL, AAAI, and NeurIPS. He created the DialoGLUE benchmark, which became a standard tool for evaluating dialogue understanding systems.
But what separates Mihail from the pure academic track is what came next. He joined Amazon Alexa's Conversational Modeling team as a Senior ML Scientist and became a founding member of Alexa's first special projects team - the group tasked with figuring out what large language models could actually do inside a consumer product at Amazon's scale. He built some of Alexa's earliest LLMs. He saw what worked and what spectacularly did not.
He also wrote about it. His post "How Alexa Dropped the Ball on Being the Top Conversational System on the Planet" is the kind of insider critique that only someone with real credentials and real candor can produce. Not a hot take from the outside. A considered diagnosis from someone who was in the room.
Three Companies. One Acquisition. One YC Batch.
Mihail is not the kind of person who builds one company, sells it, and retires. He has founded three distinct ventures, each targeting a different part of the ML ecosystem, and each revealing something about what he thinks is actually missing.
Pametan Data Innovation came first - an ML consultancy that helped organizations across industries turn data strategy into deployed systems. The kind of work that teaches you exactly where production ML falls apart, because you are the one cleaning it up.
Then came Confetti AI, an ML interview prep and education platform that was acquired by Towards AI in 2022. Mihail later wrote about the experience in a piece titled "Honey, I Sold My First Bootstrapped SaaS Company" - an essay that is refreshingly honest about what that process actually feels like, for both the person selling and the person learning from it.
Most recently, he co-founded Storia AI, a Y Combinator-backed startup building AI-powered tools for image and video generation. That one went through YC - which means a different level of scrutiny, a different speed, and a different set of problems to solve.
In one corner of the internet, Mihail published a post titled "Claude Code Demystified: Whirring, Skidaddling, Flibbertigibetting." The headline choice alone tells you something: this is an engineer who takes the work seriously and the posturing less so. He has built LLMs for Amazon Alexa, published NLP research at top conferences, and still has enough fun with language to title an explainer piece like it is a fever dream. That combination - technical depth and genuine playfulness - is rarer than the industry likes to admit.
The Course Nobody Else Would Build
CS146S: "The Modern Software Developer" is the first Stanford course of its kind - dedicated entirely to how software engineering is changing because of coding LLMs. Mihail built it, pitched it to Stanford, and now teaches it to students who will graduate into an industry that has already been restructured by the tools they are learning to use.
He also took the curriculum public. Through Maven, he runs "AI Software Development: From First Prompt to Production Code" - a course designed for working engineers who did not have access to a Stanford classroom but still need to understand what building with LLMs actually requires. The newsletter became the proof of concept. The course became the product. The audience is now north of 17,000 and climbing.
His earlier stint as a teaching assistant at Stanford - covering CS106A, CS106B, CS224N (Natural Language Processing with Deep Learning), and CS109 - suggests this is not a recent pivot to educator mode. Teaching has always been part of how Mihail thinks about the work.
The gap between ML research and production is where companies win or lose.
- Mihail Eric
Citations, Benchmarks, and the NLP Foundation
Before there were newsletters and Stanford courses, there was peer-reviewed research. Mihail's academic work concentrated on task-oriented dialogue systems, knowledge-grounded conversation, and intent classification - the plumbing underneath every voice assistant and chatbot that was supposed to understand what you actually meant.
His contribution to DialoGLUE - a benchmark for dialogue understanding - gave the research community a shared standard for measuring progress on one of NLP's genuinely hard problems. With 2,431+ citations on Google Scholar, the work has reached the kind of scale where other researchers build on it without always knowing who built the foundation.
His papers at ACL, AAAI, and NeurIPS represent the top tier of ML publishing. The Amazon Science author page fills in the middle chapter: someone who carried that research credibility into a production environment and had to figure out which parts of it actually transferred.
Closing the Gap
The through-line across Mihail's career - research, Amazon, consultancy, two startups, a newsletter, and a Stanford course - is a persistent focus on the distance between what AI can do in a paper and what it can do in a product. That gap is where most ML projects fail. It is also where most ML practitioners feel least prepared.
His writing, teaching, and building are all aimed at the same problem: giving engineers the practical knowledge to close that distance. Not the theory. The actual techniques for deploying models that do not collapse under production load, the tooling choices that do not become technical debt, and the organizational patterns that let ML teams ship without constant firefighting.
The MLOps Community podcast - which Mihail co-hosts, regularly bringing in senior ML practitioners from across the industry - is another surface for that mission. Episode 200, "Founding, Funding, and the Future of MLOps," featured Mihail reflecting on what the field has learned and what it is still getting wrong.
The headline on that post, "MLOps is a mess but that's to be expected," is not pessimism. It is the kind of thing you say when you have built enough systems to know that complexity is not a bug in the process - it is the nature of shipping ML in the real world. And you say it publicly because someone has to.
The Highlight Reel
Built some of Amazon Alexa's earliest large language models as a founding member of Alexa's special projects team
Founded and sold Confetti AI to Towards AI in 2022 - an ML education platform he bootstrapped from zero
Co-founded Storia AI through Y Combinator - AI-powered tools for creative image and video generation
Grew ML Ops Notes newsletter to 17,000+ subscribers - one of the leading voices in production AI education
Created and teaches CS146S at Stanford University - the first Stanford course on LLM-driven software development
Published research at ACL, AAAI, and NeurIPS; 2,431+ Google Scholar citations; co-created the DialoGLUE benchmark