THE KEYBOARD IS A RELIC
The keyboard is 150 years old. The QWERTY layout was designed to keep typewriter arms from jamming - a mechanical problem solved by deliberately slowing down typists. That design survived into the digital age unchanged. The most common input device in the world was engineered to be slow on purpose.
Meanwhile, a typical knowledge worker types emails, Slack messages, code comments, docs, prompts to AI tools - hundreds of short text bursts every day. Each one burns time and attention. Not because writing is hard. Because typing is inefficient.
People speak at 130-150 words per minute. They type at about 40. The gap is not a preference gap - it is a hardware gap that no amount of mechanical keyboards or autocorrect has closed.
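The arithmetic behind that gap is worth making concrete. A quick sketch using the figures above (140 wpm as the midpoint of the speaking range, 40 wpm for typing); the message length is an arbitrary example:

```python
# Illustrative arithmetic using the article's figures: speaking at
# ~140 wpm (midpoint of 130-150) versus typing at ~40 wpm.
speaking_wpm = 140
typing_wpm = 40

speedup = speaking_wpm / typing_wpm  # 3.5x

# Time to produce a 100-word message, in seconds.
words = 100
typing_seconds = words / typing_wpm * 60      # 150 s
speaking_seconds = words / speaking_wpm * 60  # ~43 s

print(f"speedup: {speedup:.1f}x")
print(f"100 words typed: {typing_seconds:.0f}s, spoken: {speaking_seconds:.0f}s")
```

At those rates, every 100-word message costs two and a half minutes typed versus well under a minute spoken.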
"Your hands didn't evolve for keyboards. Your mouth did. We're just making software catch up with biology." - Willow Team
WHAT WILLOW ACTUALLY DOES
Willow is a voice-dictation keyboard that installs at the system level on Mac and iOS. It steps in anywhere you would normally type and lets you speak instead. Not just transcription - smart transcription.
It understands context. If you're in a coding environment, it knows you are typing code. If you're in Slack, it knows you are sending a message. That context shapes how it interprets your words, which is why it handles technical terms, product names, and proper nouns better than generic dictation.
There is no "dictation mode" to turn on. No waiting. You speak, text appears. Under 500 milliseconds. That is fast enough that the brain does not perceive a delay. In practical terms: it feels like magic. In engineering terms: it is a very hard latency problem, solved.
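Why is sub-500ms hard? The budget has to cover every stage between your voice and the screen. A toy breakdown, with stage timings that are purely illustrative assumptions (not Willow's actual numbers):

```python
# Hypothetical latency budget for a cloud speech-to-text pipeline.
# Stage timings are made-up illustrations, not Willow's measurements.
budget_ms = {
    "audio capture + buffering": 100,
    "network round trip": 80,
    "model inference": 250,
    "formatting + render": 40,
}

total = sum(budget_ms.values())
print(f"total: {total}ms (target < 500ms)")
```

Every stage has to stay small simultaneously; a slow model or a chatty network protocol alone can blow the whole budget.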
WILLOW VS THE ALTERNATIVES
| Feature | Willow | macOS Dictation | Generic Speech-to-Text |
|---|---|---|---|
| Latency | <500ms | ~1-2s | Variable |
| Context-aware accuracy | Yes - app-aware | No | No |
| Custom dictionary / jargon | Yes | No | Limited |
| Smart formatting + punctuation | Automatic | Manual | Inconsistent |
| Works across all apps | Yes | Partial | No |
| Privacy - no data stored | Yes | Yes (local) | Often stores |
WHO BUILT THIS
Allan dropped out of Stanford to build companies. Before Willow, he and his co-founder spent over a year trying other ideas - including healthcare software for assisted living facilities - and went through more than 10 pivots before landing on the voice problem. That willingness to throw away bad ideas is, arguably, what made the good one possible.
Lawrence is the technical architect behind Willow's sub-500ms pipeline. He solved one of the core hard problems in consumer voice AI: getting latency low enough that users never feel friction. The result is a system that responds within the same window as human-to-human conversation - fast enough that your brain stops noticing the technology.
"Ten pivots is not failure. It's data. Allan and Lawrence collected enough of it to find the one problem worth solving." - YesPress Editorial
WHO WROTE THE CHECK
$4.5M Seed Round - November 2025
The angel list tells a story. Dharmesh Shah built HubSpot into a billion-dollar company on the thesis that software should feel human. Alexis Ohanian built Reddit on the thesis that communities, not algorithms, drive the internet. Both bets took a long time and a lot of ridicule before they paid off.
That they are behind Willow suggests something: the investors who backed outsider theses before are betting this one is bigger than it looks.
ENTERPRISE EARLY ADOPTERS
The companies using Willow are not small experiments. They are organizations where knowledge workers are paid to produce output fast - and where any tool that makes output faster has measurable ROI. Uber's operations teams, GitHub's engineering squads, Canva's design org: these are power users, and they chose to speak instead of type.
A VOICE OPERATING SYSTEM, NOT JUST A KEYBOARD
Willow's stated long-term vision is not to be a better dictation tool. It is to build the infrastructure layer for voice-first computing - an AI that starts by writing, moves to taking actions, and eventually anticipates what you need before you say it.
THINGS THAT MAKE WILLOW INTERESTING
Fun Facts
- Both founders dropped out of Stanford - not for Willow specifically, but for the idea that the right problem was worth leaving for.
- They tried over 10 different startup ideas before Willow. The graveyard includes a healthcare software product for assisted living facilities.
- Engineers use Willow to send prompts to AI coding tools like Cursor 4x faster than typing. The product they build, they use to build the product.
- Willow claims a 40%+ accuracy improvement over macOS's built-in dictation - a comparison that is easy to verify and hard to argue with if you try both.
- Privacy: Willow processes voice in the cloud for speed but does not store or log what you say. Cloud latency, local privacy.
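Accuracy claims like "40%+ better" are usually stated as relative reductions in word error rate (WER): the word-level edit distance between a transcript and what was actually said, divided by the length of the reference. A self-contained sketch of that metric, with made-up transcripts for illustration:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

reference = "run git commit and push to the main branch"
baseline  = "run get commit and push to the main ranch"  # 2 word errors
improved  = "run git commit and push to the main ranch"  # 1 word error

base_wer = wer(reference, baseline)   # 2/9
new_wer = wer(reference, improved)    # 1/9
reduction = 1 - new_wer / base_wer    # 50% relative improvement
print(f"relative WER reduction: {reduction:.0%}")
```

Halving the error count on a nine-word sentence is a "50% improvement" in this framing - which is why it matters whether a vendor's percentage is relative or absolute.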
<500ms - Time from spoken word to text on screen
10+ - Pivots before finding Willow
6 - People building what could replace a 150-year-old interface
WHY THE KEYBOARD MIGHT ACTUALLY LOSE
Past voice-interface attempts failed for two reasons: accuracy and latency. Accuracy meant the thing typed "their" instead of "there" and drove you insane. Latency meant you spoke and then watched the cursor blink for two seconds before letters appeared. Both made the technology feel like a downgrade.
Willow's bet is that both problems are now solved - or solved enough. Sub-500ms processing eliminates the delay. Context-aware AI handles the accuracy problem where generic speech recognition couldn't: it knows which words are likely in your environment and uses that to disambiguate. A developer and a doctor dictating the same sounds would get different text - "git commit" in a terminal, something clinical in a patient chart.
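One simple mechanism behind that kind of disambiguation is rescoring candidate transcriptions with a per-app vocabulary bonus. The sketch below is a toy illustration of the idea only - the vocabularies, scores, and function names are invented, and this is not Willow's actual pipeline:

```python
# Toy sketch of context-aware disambiguation: rescore candidate
# transcriptions with a per-app vocabulary bonus. All values here
# are made up for illustration.
APP_VOCAB = {
    "terminal": {"git", "commit", "rebase", "merge"},
    "medical":  {"chart", "dosage", "patient"},
}

def pick(candidates, app):
    """candidates: list of (text, acoustic_score) pairs. Higher is better."""
    vocab = APP_VOCAB.get(app, set())

    def rescore(item):
        text, score = item
        # Add a small bonus for each word that fits the app's vocabulary.
        bonus = sum(0.5 for w in text.split() if w in vocab)
        return score + bonus

    return max(candidates, key=rescore)[0]

# The same audio yields two plausible transcriptions.
candidates = [("get commit", 1.1), ("git commit", 1.0)]
print(pick(candidates, "terminal"))  # vocabulary bonus flips the winner
print(pick(candidates, "medical"))   # acoustics alone decide
```

In a terminal context the in-vocabulary words outweigh the slightly better acoustic score of "get commit"; in a medical context, with no bonus on either candidate, the raw acoustics win.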
The real test is habit change, not capability. People are attached to keyboards - physically, emotionally, ergonomically. Changing input methods is not a feature evaluation, it is a behavioral shift. Willow's 50% month-over-month growth suggests they found the cohort of users willing to make that shift. Whether that cohort expands or plateaus will define whether this is a startup or an industry.
The enterprise adoption list helps. When Uber and GitHub build workflows around voice input, the technology gets normalized. Normalization makes it easier for the next company, the next manager, the next engineer to try. That diffusion pattern is how most new interfaces go mainstream.