The Man Who Builds Things From Scratch
Why "From Scratch" Matters
The phrase "from scratch" appears in two of Raschka's book titles and runs through everything he does. It is not a marketing claim. It reflects a genuine pedagogical conviction: that you do not truly understand a technology until you have built it yourself. Not called an API. Not imported a library. Built it, token by token, layer by layer, loss curve by loss curve.
This is a position that has consequences. It means his books are longer than they need to be to sell well. It means his GitHub repositories are more thoroughly documented than they need to be to accumulate stars. It means his newsletter articles are more thorough than they need to be to attract subscribers. The work is tuned for comprehension, not conversion.
Raschka spent part of his PhD applying ML to sea lamprey pheromone inhibitor research - funded by the Great Lakes Fishery Commission. This is the actual origin of his machine learning career. Biology, chemistry, statistics, and Great Lakes conservation. The path to becoming one of the world's most-read AI educators ran through invasive fish species.
The Newsletter Model
Most newsletters in tech eventually take sponsors. The economics make sense: a large audience is valuable, and newsletter sponsorships are relatively unobtrusive. Raschka has built an audience of more than 184,000 subscribers without going this route. He maintains a paid tier at $6/month for readers who want to fund the work directly - but every article remains free.
The logic is transparent: he wants to write about what interests him, not what is convenient for a sponsor's product cycle. This is not a moral stance so much as a practical one. The moment external incentives enter the picture, the selection of topics shifts. He is writing about AI for readers who work in AI and will notice if the coverage tilts.
His process is methodical. He reads papers throughout the month, flagging the ones worth covering. Each summary gets a dedicated 30-minute to 1-hour edit. The final newsletter is assembled, polished, and sent. No team. No editorial staff. Just Raschka and his morning hours.
The Independence Bet
In 2025, Raschka left Lightning AI to found RAIR Lab as a fully independent entity. The timing was notable: he was leaving a well-funded AI company at a moment when most researchers were trying to get into one. The bet was that the newsletter, the books, the consulting, and the teaching work would be enough to sustain serious research without institutional backing.
It is a bet that runs against the conventional wisdom in AI research, which holds that you need compute, a team, and funding to do anything interesting. Raschka's counterargument is implicit in his output: books that explain frontier concepts, repositories that implement them from scratch, a newsletter that synthesizes what is actually happening week to week. None of that requires a data center.
Whether this model scales is a different question. But as of April 2026, he is delivering keynote addresses, appearing on Lex Fridman's podcast for four and a half hours, and publishing his fifth book. The independence bet appears to be working.
What He Teaches
Raschka spent years teaching machine learning and deep learning as an assistant professor. His stated goal for students was that they should "walk away with the confidence that they now know enough to do something useful." This is a precise and somewhat unusual ambition.
Most technical education aims for comprehension or certification. Raschka aims for confidence in action. The distinction shows up in how he structures explanations: he tends to lead with the concrete implementation, then work backward to the theory, rather than the reverse. You see how it works before you see why. The why becomes more legible once you have already done the thing.
This is visible in the LLMs-from-scratch repository, which mirrors the book chapter by chapter. You start with tokenization and embeddings. You add attention. You add the full transformer architecture. You pretrain. You fine-tune. By the end, you have built a GPT-2-scale model and watched every component come into existence. The confidence that results is not incidental - it is the product.
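To give a flavor of what "building attention from scratch" means at that step, here is a minimal sketch of single-head scaled dot-product self-attention in NumPy. This is illustrative only, not Raschka's code: the function name, the 4-token example, and the 8-dimensional weight matrices are all assumptions made up for this sketch.

```python
import numpy as np

def scaled_dot_product_attention(x, W_q, W_k, W_v):
    """One self-attention step: project inputs, score, softmax, mix values."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v              # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise similarity, scaled
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v                               # attention-weighted mix

# Toy input: 4 tokens with 8-dimensional embeddings, random projections.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(x, W_q, W_k, W_v)
print(out.shape)  # (4, 8): one mixed vector per token
```

A GPT-style model layers more on top of this single head - a causal mask so tokens cannot attend to the future, multiple heads, and learned rather than random projections - but each of those is an increment on exactly this core computation, which is the pedagogical point of the progression.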
The Long Run
Raschka is a long-distance runner. He is also someone who has been building things in public since 2012, when he first started publishing about machine learning as a graduate student. That is fourteen years of consistent output: books, papers, blog posts, repositories, newsletters, talks.
The output suggests someone who thinks in terms of years, not news cycles. He does not appear to have published hot takes on whatever the AI story of the week is. He does not appear to have rushed books to market to catch a trend. He has built a body of work that gets more valuable as the field gets more complicated, because his work explains the foundations that do not change even as the applications do.
In a field that rewards novelty above everything, that is its own kind of differentiation.