BREAKING
YC W25 GRADUATE RAISES $4.2M SEED ROUND LED BY MOONFIRE & BURST CAPITAL · WEAVE RUNS LLMS ON EVERY PULL REQUEST TO MEASURE REAL ENGINEERING OUTPUT · 25% OF NEW Y COMBINATOR COMPANIES NOW USE WEAVE · CUSTOMERS SHIP 16% MORE OUTPUT AFTER TWO MONTHS ON PLATFORM · NEW: WOOLY - ASK ANYTHING ABOUT YOUR ENGINEERING ORG
YC W25 ENGINEERING ANALYTICS

Weave. AI that counts the real work.

Your engineers are shipping code. Some of it is theirs. Some of it is AI's. Weave tells you which is which - and whether any of it matters.

Weave connects to your GitHub, runs large language models on every pull request, and converts the results into a number called the Weave Hour: how long it would have taken an experienced engineer to make that change. No story points. No line counts. Actual work.

Weave - AI Engineering Analytics
Founded 2024
HQ: San Francisco, CA
Team: ~5 people
Funding: $4.2M seed
Backed: YC, Moonfire, Burst Capital
ACTIVE

"How long would it take an experienced engineer to make this change?" - The Weave Hour.

Not lines of code. Not story points. Real work.

134 engineers interviewed before building a single feature.

THE PROBLEM

Engineering teams have been flying blind since the first commit.

For decades, engineering managers measured the wrong things. Lines of code rewarded bloat. Story points became a negotiation. Commit counts treated a typo fix and a distributed system redesign as equivalent. Nobody ever solved this cleanly - and then AI arrived and made everything worse.

When Cursor, GitHub Copilot, and Claude Code started writing significant portions of every codebase, the old metrics didn't just become misleading. They became actively deceptive. A junior engineer running AI tools could out-commit a senior engineer doing architecture work. The numbers said one thing. Reality said something else entirely.

134 - Engineers interviewed before building the first model
16% - Average output increase for Weave customers after two months
25% - Of new YC companies now running Weave on their repos
THE PRODUCT

LLMs read every pull request. Then math happens.

Weave integrates with GitHub (and other tools such as Jira, Slack, and Linear) in around 30 seconds. From there, it runs its own trained models on every PR: analyzing what changed, why it was complex, how much cognitive load it represented, and whether AI was involved. The output is a dashboard that tells you, per engineer and per team, how much work is actually getting done.

The central unit is the Weave Hour - Weave's own metric, designed to be resistant to gaming. Unlike story points, which teams set themselves, a Weave Hour represents what an experienced engineer would need to replicate a given change. It's calibrated to your codebase, not some abstract standard.
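To make the idea concrete, here is a minimal sketch of what a PR-to-Weave-Hour conversion could look like. Everything in it is an assumption: Weave has not published its model or API, so the `PullRequest` fields, the `estimate_weave_hours` function, and the weighting heuristic (standing in for the LLM's analysis) are all hypothetical illustrations of the concept, not Weave's actual method.

```python
from dataclasses import dataclass

# Hypothetical sketch only - these names and weights are NOT Weave's API.
# The LLM analysis is mocked as a simple heuristic so the example runs
# self-contained.

@dataclass
class PullRequest:
    title: str
    files_changed: int
    complexity: float   # 0.0-1.0, as a model might rate cognitive load
    ai_assisted: bool   # whether AI tooling was detected in the change

def estimate_weave_hours(pr: PullRequest, calibration: float = 1.0) -> float:
    """Fold (hypothetical) per-PR model outputs into a single number:
    hours an experienced engineer would need to replicate the change,
    scaled by a per-codebase calibration factor."""
    base = pr.files_changed * 0.25            # small fixed cost per file
    effort = base * (1.0 + 3.0 * pr.complexity)  # complexity dominates size
    return round(effort * calibration, 2)

pr = PullRequest("Add retry logic to billing client",
                 files_changed=4, complexity=0.6, ai_assisted=True)
print(estimate_weave_hours(pr))  # 2.8
```

The design point the sketch illustrates is the calibration factor: because the unit is anchored to "an experienced engineer in *this* codebase" rather than to self-assigned story points, there is no number for a team to negotiate upward.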

📊

Code Output & Quality

Tracks real engineering output per engineer, per team, over time. Identifies who is consistently high-output - and what "high-output" actually means in your codebase.

🤖

AI Impact & ROI

Measures how much code is AI-generated, whether AI tools are improving velocity or creating review debt, and the actual financial return on AI tool spend.

🔍

Code Reviews

Scores review quality, not just review quantity. One customer found that review quality had the highest correlation to team output - and used that finding to reset standards.

🐑

Wooly (NEW)

Weave's AI agent. Ask anything about your engineering org in natural language - output trends, team comparisons, bottlenecks. Answers come with the data behind them.

📈

DORA & Benchmarks

Standard deployment frequency and lead time metrics, plus comparisons against industry benchmarks so teams understand where they stand, not just how they trend.

💰

Dev FinOps

Connects engineering activity to cost data. Shows the financial impact of engineering decisions - useful when justifying headcount or AI tool budgets to finance teams.

WHAT THE "WEAVE HOUR" MEASURES - VS TRADITIONAL METRICS

Lines of Code: 20%
Story Points: 30%
Commit Count: 15%
Weave Hour: 88%

Conceptual illustration of signal-to-noise ratio for measuring actual engineering output.

"A manager noticed that code review quality had the highest correlation to output. He reset code review standards and team output went up by 15%."

- Weave customer story, shared in YC launch post
THE STORY

Two founders, one question: where did the work go?

Adam Cohen is a salesman turned operator. He grew up in Toronto, ran a lemonade stand on a wagon as a kid, and spent years at Top Hat building revenue operations from the ground up. His father worked in venture capital. Before Weave, he was VP of Operations at Causal - a spreadsheet-meets-financial-modeling startup that was acquired by Lucanet in 2024.

Andrew Churchill is the technical counterpart. MIT computer science and math. Employee number one at Causal, where he built the spreadsheet interface, the access control system, and the AI onboarding engine. He also spent time at Microsoft and Salesforce, and did research at MIT's CSAIL lab. When Adam and Andrew started Weave, Andrew already knew what it took to build AI-native products from scratch.

Adam and Andrew went through several product iterations - an AI summarization tool, Git and Jira integrations - before landing on Weave's core insight. Nothing stuck until they decided to focus on the underlying data itself. The realization that measurement was the actual problem, not the tooling, is what shaped everything that followed.

They interviewed 134 engineers before building the first model. Not 10 customers. Not a handful of design partners. 134 interviews. The pattern was consistent: engineering leaders knew AI tools were being used. They had no way to know if the investment was paying off. Weave was built to answer that question specifically.

They ran through Y Combinator's Winter 2025 batch in San Francisco. By the time the seed round closed - $4.2M, led by Moonfire and Burst Capital with YC participating - 25% of new YC companies were already running Weave on their repositories.

FOR YOU

Stop guessing. Start seeing.

If you manage an engineering team and you're spending money on Cursor, GitHub Copilot, or Claude Code - Weave tells you whether those tools are actually accelerating delivery or just generating code that sits in review queues and creates new technical debt. You connect your repository and get your first dashboard in roughly 30 seconds.

For engineering leaders at startups, Weave surfaces who your best AI adopters are, so they can share practices with the rest of the team. For enterprise teams, it provides the DORA metrics and financial framing that engineering organizations need to make budget arguments to non-technical stakeholders. For individual engineers, Wooly - Weave's new AI agent - lets you ask questions about your own output trends in plain language.

⚡ CONNECTS TO YOUR REPO IN ~30 SECONDS. NO JOKE.
COMPETITION

Engineering analytics is a real category. Weave is the AI-native entrant.

The engineering intelligence space has incumbents. LinearB, Jellyfish, and Waydev all measure engineering work and have done so for several years. Pluralsight Flow (formerly GitPrime) has been in this space since before AI was a factor. The gap Weave is targeting is specific: none of those tools were built for a world where a meaningful percentage of each codebase is AI-generated.

Platform         | AI Code Attribution        | Built for AI Era | Stage
Weave            | Yes - PR-level attribution | Core focus       | Seed / YC W25
LinearB          | Limited                    | Retrofitted      | Growth
Jellyfish        | No                         | No               | Series B
Waydev           | Partial                    | Partial          | Early growth
Pluralsight Flow | No                         | No               | Acquired / Enterprise
TIMELINE
2024 - Company founded. Adam Cohen and Andrew Churchill start WorkWeave Inc. in San Francisco after both leave Causal (acquired by Lucanet).
Jan 2025 - First AI model ships. Weave builds and deploys the initial LLM-based model that quantifies engineering work by analyzing pull requests.
W25 - Y Combinator Winter 2025 batch. Weave joins YC. By the time the batch ends, 25% of new YC companies are using the platform.
Aug 2025 - $4.2M seed round announced. Moonfire and Burst Capital lead the round, with YC participating. Customers include Reducto, Superpower, PostHog, and Laurel.
2026 - Wooly agent launches. New AI agent lets engineering leaders ask questions about their org in natural language. Enterprise tier added for larger teams.
THE BIGGER PICTURE

Every company is buying AI tools. Almost none can tell if they're working.

The trend Weave is riding is straightforward: AI coding tools have gone from novelty to standard infrastructure in under two years. Most engineering organizations are now paying for Cursor, GitHub Copilot, Claude Code, or some combination. The procurement decision was often made quickly. The ROI question was left unanswered.

Weave is betting that engineering observability - actually understanding what your team is producing - becomes a category in its own right, the way security monitoring or performance monitoring did before it. The company's culture page describes a preference for tackling the hardest problems first, pricing month-to-month to keep themselves accountable, and maintaining a shared Slack channel with every customer to stay close to feedback. That last part is not typical at a seed-stage company.

"We've decided to focus on the hardest problems first. We've always made it harder for ourselves up front so we can expedite learnings."

- Weave, company culture page