The Guy Who Wrote the Glue
Harrison Chase was not trying to disrupt anything. He was a machine learning engineer at Robust Intelligence - a startup building ML model testing and validation tools - and in late 2022 he kept showing up to San Francisco AI meetups where everyone was experimenting with the same thing: language models. The problem was always the same too. The models were impressive. Getting them to do something consistently useful was a different matter.
LLMs could generate text. But chaining them - feeding one model's output into another, attaching tools, managing memory, creating loops - that was duct tape and prayer. Chase, trained in statistics and computer science at Harvard and seasoned by years of structured data work at Kensho and Robust Intelligence, saw it as a software engineering problem. So he solved it the way engineers do: he wrote a framework.
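The pattern he formalized can be sketched in a few lines of plain Python. This is a hedged illustration of the chaining idea, not LangChain's actual API: `fake_llm` is a stand-in for any real model call, and the "chain" is just function composition with a prompt template at each step.

```python
# Minimal sketch of LLM "chaining": each step formats a prompt from the
# previous step's output and feeds it to a model. `fake_llm` stands in
# for a real model call (e.g. an API request), so the example runs offline.

def fake_llm(prompt: str) -> str:
    # Pretend model: wraps the prompt in a canned completion.
    return f"RESPONSE[{prompt}]"

def make_step(template: str):
    # Each step is a prompt template plus a model call.
    def step(text: str) -> str:
        return fake_llm(template.format(input=text))
    return step

def chain(*steps):
    # Compose steps: the output of one becomes the input of the next.
    def run(text: str) -> str:
        for step in steps:
            text = step(text)
        return text
    return run

summarize = make_step("Summarize: {input}")
translate = make_step("Translate to French: {input}")
pipeline = chain(summarize, translate)

print(pipeline("LLMs are hard to orchestrate."))
```

Before frameworks existed, developers hand-rolled exactly this kind of glue - plus retries, memory, and tool calls - for every application.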
Chase got into machine learning through sports analytics. At Harvard, studying statistics, he kept finding that stats and computer science were the same discipline wearing different jerseys. He double-majored, graduated in 2017, and headed to Kensho Technologies - a fintech startup that S&P Global later acquired for $550 million. At Kensho, he led the entity linking team, connecting messy real-world data points into structured, actionable knowledge graphs. At Robust Intelligence, he led the ML team focused on testing and validating complex models. None of this is glamorous. All of it is exactly the kind of work that teaches you why reliability matters more than novelty.
From GitHub Repo to Enterprise Infrastructure
LangChain, Inc. was formally incorporated in February 2023 with co-founder Ankush Gola. Sequoia and Benchmark were early backers - firms that recognized the open-source traction and what it meant for developer adoption. By mid-2023, LangChain had 93,000 Twitter followers and 31,000 Discord members, built almost entirely through community momentum rather than marketing spend.
LangSmith launched in beta in July 2023 - a cloud-based monitoring and evaluation platform for LLM applications. It was the beginning of Chase's answer to a question every developer building with LLMs was asking: how do you know if this is working? Evaluation in generative AI is genuinely hard. There's no single correct answer to grade against. Chase's solution was systematic: build infrastructure that lets developers trace, monitor, and evaluate every step of every chain.
LangGraph became the framework for building agents that actually work in production - stateful, controllable, fault-tolerant. Chase acknowledged openly that the abstractions that made LangChain easy to adopt had become an obstacle to production use. "The same high-level interfaces that made it easy to get started were now getting in the way," he wrote. So they fixed it.
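The "stateful, controllable" idea can be illustrated without LangGraph itself: model the agent as a graph of named nodes, each of which reads and updates a shared state and names the next node to run. Everything here (`plan`, `act`, `finish`, the state keys) is invented for illustration and is not LangGraph's real interface.

```python
# Illustrative sketch of a stateful agent graph: nodes are functions that
# take the shared state and return (updated_state, next_node_name).
# This mirrors the graph-of-nodes idea, not LangGraph's actual API.

def plan(state):
    state["steps_left"] = 2          # decide how much work to do
    return state, "act"

def act(state):
    state["log"].append(f"acted, {state['steps_left']} left")
    state["steps_left"] -= 1
    return state, ("act" if state["steps_left"] > 0 else "finish")

def finish(state):
    state["done"] = True
    return state, None               # None halts the loop

NODES = {"plan": plan, "act": act, "finish": finish}

def run_graph(entry: str, state: dict, max_steps: int = 10) -> dict:
    node = entry
    for _ in range(max_steps):       # hard cap keeps the agent controllable
        state, node = NODES[node](state)
        if node is None:
            break
    return state

result = run_graph("plan", {"log": [], "done": False})
```

Because every transition passes through explicit state, the run can be inspected, capped, resumed, or replayed - the properties that separate a production agent from a looping prompt.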
In October 2025, LangChain closed a $125 million Series B at a $1.25 billion valuation, led by IVP with participation from CapitalG, Sapphire Ventures, Sequoia, Benchmark, Amplify Partners, ServiceNow Ventures, Workday Ventures, Cisco Investments, Datadog, and Databricks. The same month, LangChain 1.0 was released - described by Chase as "far more curated than anything you've seen from our team before" - and LangSmith evolved into a full Agent Engineering Platform.
Agents as Digital Labor
Chase's mental model for where AI is going is precise and unsentimental. Agents, in his framing, are digital labor. They browse the web, navigate file systems, call APIs, write code, and execute workflows - not because they're intelligent in a philosophical sense, but because the tooling is finally good enough to let them do those things reliably. The shift he's watching closely: as models improve, the value of the harness around them grows, not shrinks.
He's particularly interested in long-term memory - agents that accumulate knowledge across sessions, learn from interactions, and become genuinely more useful over time. "I think the idea of long-term memory is really interesting," he's said. "Having agents remember things over time... that's a really interesting step in this idea of more personalized agents that know more about you." This isn't science fiction speculation. It's a product roadmap.
Chase has also been direct about what's hard. Evaluation remains the most underrated problem in production AI. When you can't define "correct," measuring quality requires creativity - using language models to evaluate other language models, building structured rubrics, tracking regressions in behavior over time. LangSmith is LangChain's answer to this. It's where much of the company's enterprise value lives.
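The "model grading model" approach can be sketched generically. In production the judge would itself be a language model prompted with a rubric; here a keyword rubric stands in so the example runs offline, and the criteria and scoring scheme are invented for illustration - this is not LangSmith's evaluator API.

```python
# Minimal LLM-as-judge sketch: score answers against a structured rubric,
# then flag regressions between two graded runs. A real judge would prompt
# an LLM with the rubric and parse a structured verdict.

RUBRIC = {
    "mentions_source": "cites",  # answer should cite its source
    "is_concise": 40,            # answer should stay under 40 words
}

def judge(answer: str) -> dict:
    return {
        "mentions_source": RUBRIC["mentions_source"] in answer.lower(),
        "is_concise": len(answer.split()) <= RUBRIC["is_concise"],
    }

def track_regression(old: dict, new: dict) -> list:
    # Criteria that passed before but fail now.
    return [k for k, passed in old.items() if passed and not new[k]]

good = judge("The report cites the 2023 filing and summarizes it briefly.")
bad = judge("It is long. " * 50)
regressions = track_regression(good, bad)
```

The structure, not the toy scoring, is the point: when "correct" has no single answer, quality becomes a set of named criteria tracked over time.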
On agent reliability: "I don't think we've kind of nailed the right way to interact with these agent applications," he said in one interview - a characteristically honest assessment from someone running the company that more developers use to build agents than anyone else. The gap between prototype and production isn't closing as fast as the hype suggests. Chase is building the infrastructure to close it.
What He's Built
1. Founded LangChain, the most widely adopted open-source LLM framework, now with 80 million monthly downloads and over 1 million developers in the community.
2. Raised $260M total across multiple rounds from Sequoia, Benchmark, IVP, CapitalG, Sapphire, and strategic investors including Datadog and Databricks.
3. Built LangSmith into a comprehensive Agent Engineering Platform used by enterprises including Rippling, Cloudflare, Replit, Harvey, LinkedIn, Uber, JPMorgan, and BlackRock.
4. Took LangChain from zero to a $1.25 billion valuation in three years, with a team of 98 people - one of the most capital-efficient growth stories in AI infrastructure.
5. Named to BigDATAwire People to Watch 2024. Spoke at TED AI San Francisco. Featured on Sequoia Capital's Training Data podcast. Taught on Coursera.