BREAKING
Coined "vibe coding" - Collins Dictionary 2025 Word of the Year · Has not written code since December 2025 · nanoGPT: 57,500+ GitHub stars · Eureka Labs: AI-native school, founded 2024 · AutoResearch AI ran 700 experiments in 2 days, improved training by 11% · 2.3M Twitter followers · TIME 100 Most Influential in AI - 2024 · Co-founded OpenAI in 2015 · Coded "Dobby the Elf Claw" to control his entire smart home via WhatsApp

ANDREJ
KARPATHY

The man who taught the world to think in neural nets - and then handed the keyboard to the machines.

OpenAI co-founder. Tesla Autopilot architect. Eureka Labs founder. He coined "vibe coding," built nanoGPT, and as of March 2026, runs his entire codebase through grids of AI agents - without touching a keyboard.

AI Pioneer · Vibe Coding · Eureka Labs · nanoGPT · CS 231n · Tesla Autopilot
2.3M X Followers
57.5k nanoGPT Stars
180k GitHub Followers
50k+ Academic Citations
30+ Peer-Reviewed Papers
5 Years at Tesla AI
750+ CS 231n Students (Peak)
700 AutoResearch Experiments

The Architect of Learning Machines

He grew up in communist-era Bratislava, moved to Toronto at fifteen, and by twenty-eight was teaching hundreds of Stanford students to build the neural networks that now run inside three million Teslas. Andrej Karpathy does not ease you into the deep end. He jumps first and narrates the fall in real time.

In 2015, when most AI researchers guarded their methods like trade secrets, Karpathy put CS 231n - his graduate course on convolutional neural networks - on YouTube for free. Enrollment grew from 150 students to 750 in two years. The lectures became a rite of passage. A decade later, his course notes are still the first thing serious ML practitioners recommend to beginners.

// FROM BRATISLAVA TO THE BLEEDING EDGE //

He co-founded OpenAI in 2015, then left for Tesla in 2017 to build something operationally real. As Director of AI, he championed a vision-only approach to Autopilot - cameras, no LiDAR - when the rest of the industry thought he was wrong. He was not wrong. The system he built got deployed at scale. He spent five years arguing with physics, writing training loops, and occasionally arguing with Elon Musk. In July 2022, he left.

The hottest new programming language is English.

- Andrej Karpathy, on Software 3.0

The brief return to OpenAI in 2023 lasted exactly one year. He spent it on midtraining and synthetic data before founding Eureka Labs in July 2024 - an AI-native school where, as he put it, AI tutors do what the best human tutors do, just without the scarcity problem. The first course: LLM101n, teaching students to train language models from scratch without touching an external API.

He has a gift that is rarer than technical ability: he can explain a thing at the exact level of abstraction you need to actually understand it. Not dumbed down. Not drowned in notation. Just right. The three-and-a-half-hour YouTube deep dive he released in February 2025 - "Deep Dive into LLMs like ChatGPT" - is a masterclass in this. No fluff, no product pitch. Just the machinery, laid bare.

Software 1.0, 2.0, 3.0

Karpathy has a habit of naming things before the industry realizes it needs the name. His 2017 Medium essay "Software 2.0" reframed neural networks not as tools but as a new paradigm of programming - one where you define the goal and the data, and the network writes its own weights. Seven years later, he added a third act.

Software 1.0 - Traditional (Human Logic)
Explicit rules. Developers write if-then code. You know exactly what the program does - and why.

Software 2.0 - Neural Networks (Learned Weights)
Neural nets trained on data. The weights are the program. Introduced in Karpathy's 2017 Medium essay.

Software 3.0 - LLMs / Agents (English as Code)
Prompts as programs. Presented at Sequoia AI Startup School, 2025. "The hottest new programming language is English."
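The distinction can be made concrete with a toy task. The sketch below is illustrative only - a hypothetical "big or small" classifier, not anything Karpathy wrote - but it shows where the "program" lives in each paradigm: in the rule, in the learned parameter, or in the prompt.

```python
# Software 1.0: a human writes the rule explicitly.
def classify_v1(x):
    return "big" if x > 5 else "small"

# Software 2.0: the "program" is a learned parameter. We specify data and
# an objective, then fit the threshold instead of hand-coding it.
data = [(1, "small"), (2, "small"), (7, "big"), (9, "big")]

def fit_threshold(data):
    # brute-force search over candidate thresholds for best accuracy
    # (a stand-in for gradient descent over real weights)
    best_t, best_acc = 0, 0.0
    for t in range(0, 11):
        acc = sum(("big" if x > t else "small") == y for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

threshold = fit_threshold(data)  # the learned "weight"

def classify_v2(x):
    return "big" if x > threshold else "small"

# Software 3.0: the program is natural language, executed by an LLM.
prompt = "Classify the number {x} as 'big' or 'small'; numbers above 5 are big."
```

In 2.0 the developer never states the rule; it is recovered from data. In 3.0 even the fitting step disappears - the specification itself is the program.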

Vibe Coding: The Accidental Word of the Year

In February 2025, Karpathy posted what he called a "shower of thoughts throwaway tweet." It described a way of building software by surrendering to AI suggestions, running code without reading it, and going entirely on vibes. One year later he noted: "I've had a Twitter account for 17 years and I still can't predict tweet engagement basically at all." Collins Dictionary could predict it. They named it 2025 Word of the Year.

Collins Dictionary Word of the Year 2025

By the Numbers

nanoGPT GitHub Stars: 57,500+
His reference GPT implementation. The go-to starting point for anyone training a language model from scratch.

Days Since Last Manual Code: 130+
As of March 2026, Karpathy has not written a single line of code manually. He orchestrates AI agents instead.

AutoResearch Experiments: 700
His AI agent ran 700 ML experiments over two days, found forgotten weight decay settings, and improved training by 11%.

Rubik's Cube Personal Best: ~17s
Under alias "badmephisto," his YouTube speedcubing tutorials accumulated 9M+ views. Feliks Zemdegs, world record holder, watched them.

The Man Behind the Method

There is a certain type of person who, when they do not understand something, builds it from scratch. Karpathy is this person taken to an extreme. micrograd - his minimal autograd engine - is 150 lines of Python that teaches the entire chain rule. nanoGPT trains a real GPT model in a single readable file. llm.c does LLM training in pure C. He keeps stripping away abstraction until the thing is so simple it can no longer lie to you.
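The micrograd idea fits in a few lines. The sketch below is a simplified illustration in micrograd's spirit - not the actual library code - showing how the chain rule becomes a recursive walk over a graph of scalar operations:

```python
# A minimal scalar autograd node in the spirit of micrograd (simplified
# sketch). Each operation records its inputs and the local derivative of
# its output with respect to each input; backward() applies the chain rule.

class Value:
    def __init__(self, data, children=(), local_grads=()):
        self.data = data
        self.grad = 0.0
        self._children = children        # upstream nodes
        self._local_grads = local_grads  # d(out)/d(child) at this node

    def __add__(self, other):
        return Value(self.data + other.data, (self, other), (1.0, 1.0))

    def __mul__(self, other):
        return Value(self.data * other.data, (self, other),
                     (other.data, self.data))

    def backward(self, upstream=1.0):
        # chain rule: accumulate upstream gradient times local gradient
        # (real micrograd does one pass over a topological sort instead
        # of naive recursion, but the math is the same)
        self.grad += upstream
        for child, local in zip(self._children, self._local_grads):
            child.backward(upstream * local)

a, b = Value(2.0), Value(3.0)
loss = a * b + a   # d(loss)/da = b + 1 = 4, d(loss)/db = a = 2
loss.backward()
```

Everything a framework like PyTorch does during `.backward()` is this loop, scaled up to tensors.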

This is the pedagogy: not explanation, but reconstruction. You do not understand a neural network by reading about it. You understand it by watching it fail at predicting the next character in a Shakespeare text and then nudging it until it succeeds. The moment of failure is the curriculum.
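The reconstruction loop starts even simpler than a neural net. A toy sketch (not nanoGPT - that replaces the counting table with a transformer) of the most basic next-character model, a bigram counter:

```python
# Toy next-character predictor: a bigram count model over a tiny corpus.
# Illustrative only; a real char-level model learns these statistics
# with trained weights instead of explicit counts.
from collections import Counter, defaultdict

corpus = "to be or not to be"

# count how often each character follows each other character
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev):
    # greedy prediction: most frequent character observed after `prev`
    return counts[prev].most_common(1)[0][0]

print(predict("t"))  # 'o' - 't' is most often followed by 'o' here
```

Watching where such a model fails - anywhere context longer than one character matters - is exactly the curriculum: each failure motivates the next piece of machinery.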

Don't read papers, implement them. Understanding comes from getting hands dirty.

- Andrej Karpathy

At Tesla, this philosophy met industrial reality. His Autopilot team was not building academic models - they were deploying neural nets to three million vehicles in real-world traffic. The vision-only approach, which removed LiDAR and radar from the sensor stack entirely, was a bet that cameras + computation could outscale any multi-sensor rig. The counterintuitive logic: humans navigate roads with eyes alone. Build systems that work like that, then scale the data.

Karpathy has never quite fit the standard archetype of either academic or tech executive. He maintains a deliberately minimal web presence - karpathy.ai is fast-loading and intentionally spartan. His email address is posted ROT13-encoded because the volume is unmanageable. He replies to roughly one percent. He wears t-shirts. He reads five to ten AI papers per week. He starts work at six in the morning.
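ROT13 is not encryption, just a 13-place letter rotation that defeats naive address scrapers; applying it twice is the identity. A minimal decoder (the address shown is a made-up placeholder, not his):

```python
import codecs

# ROT13 shifts each letter 13 places through the alphabet; digits and
# punctuation pass through unchanged, so the '@' and '.' survive.
obfuscated = "wbua.qbr@rknzcyr.pbz"  # placeholder, not Karpathy's address
decoded = codecs.decode(obfuscated, "rot13")
print(decoded)  # john.doe@example.com
```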

In December 2025, Karpathy crossed what he calls the "Coherence Threshold" - the point at which AI agents became coherent enough that manual coding felt like a bottleneck. He stopped. He has not gone back.

The Eureka Labs project is where these threads come together. If nanoGPT was "here is the simplest possible thing that works," Eureka Labs is "here is the simplest possible thing that scales." The model: expert human teachers design the curriculum, AI tutors deliver it one-on-one to everyone simultaneously. The implicit critique of existing education: scarcity of good teachers is the only real barrier, and that barrier is now technically removable.

He proposed, half-seriously, that "being funny" should be a legitimate benchmark for artificial general intelligence. The argument: genuine humor requires theory of mind, cultural knowledge, timing, and subverted expectations. A system that is reliably funny is doing something interesting. He is right that it is hard. He is also, for a man who insists he does not do social media strategy, extremely good at writing tweets that stick.

Quotable Karpathy

There's a new kind of coding I call 'vibe coding', where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.

I'm still pretty sure I'm an NPC, but an NPC can't know it's an NPC.

A good AI system is 10% algorithms and 90% data pipelines, infrastructure, and iteration.

The things that agents can do, they can probably do better than you. The things agents cannot do is your job now.

The Timeline

1986

Born in Bratislava, Czechoslovakia (now Slovakia) on October 23.

2001

Family moves to Toronto, Canada. He is fifteen. Starts programming seriously around this time.

2006

Begins posting Rubik's cube speedsolving tutorials on YouTube as "badmephisto." The channel eventually reaches 9M+ views. World champion Feliks Zemdegs watches them.

2009

Completes BSc at University of Toronto (Computer Science + Physics double major, Mathematics minor).

2011

MSc from UBC. Begins PhD at Stanford under Fei-Fei Li. Interns at Google Brain, Google Research, DeepMind.

2015

Creates CS 231n at Stanford. Co-founds OpenAI as a founding research scientist. Two pivotal moves, same year.

2017

Publishes "Software 2.0" on Medium. Joins Tesla as Director of AI and Autopilot Vision. Reports directly to Elon Musk.

2022

Leaves Tesla in July after five years leading Autopilot AI. The vision-only approach has been deployed to over three million vehicles.

2023

Rejoins OpenAI in February. Builds teams on midtraining and synthetic data generation.

2024

Leaves OpenAI in February after exactly one year. Founds Eureka Labs in July - an AI-native education platform.

2025

Coins "vibe coding" in a February tweet. Collins Dictionary names it Word of the Year. Releases 3h31m "Deep Dive into LLMs" on YouTube.

2026

Enters the year having written no manual code since December 2025. Releases microgpt (GPT in 243 lines of Python). AutoResearch runs 700 autonomous experiments.

Achievements

2015
Co-founded OpenAI as a founding research scientist - one of the most consequential AI labs in history.
2015-2017
Created and taught CS 231n at Stanford - course grew from 150 to 750+ students and became a global AI education reference.
2017-2022
Led Tesla Autopilot AI. Championed vision-only approach; deployed neural nets to 3+ million vehicles.
2020
Named to MIT Technology Review's "Innovators Under 35" list.
2024
Named to TIME Magazine's "100 Most Influential People in AI."
2025
Coined "vibe coding" - named Collins Dictionary 2025 Word of the Year. 30+ papers, 50,000+ academic citations.