Tristan Zajonc's career began not in Silicon Valley but in Cambridge - and before that, at the World Bank. His 2012 Harvard dissertation, "Essays on Causal Inference for Public Policy," included fieldwork on learning outcomes in Pakistan and India. He was measuring whether school quality metrics actually predicted what students learned. The obsession with operationalizing good theory - turning rigorous models into real-world change - never left him.
It just migrated from development economics to machine learning infrastructure.
By the early 2010s, Zajonc was watching data scientists struggle with the same problem he'd studied in education research: the gap between what models could theoretically reveal and what organizations could actually do with them. In 2012, he founded Sense, Inc. - one of the earliest enterprise data science platforms. Before Jupyter notebooks were ubiquitous, before MLOps was a job title, Sense was trying to make collaborative data science tractable at scale.
"Every customer I talked to was all bought into the idea of the AI-first enterprise... but they were really all struggling to actually make that vision a reality."
- Tristan Zajonc, The Data Stack Show
The Cloudera Chapter
When Cloudera acquired Sense in 2016, Zajonc stayed inside the machine. He spent three years there - first as Head of Data Science Platform Engineering, then as CTO for Machine Learning - helping build Cloudera Data Science Workbench into the product that large enterprises used to put data scientists to work. At KubeCon 2018, he was on stage explaining "Enterprise Machine Learning on K8s: Lessons Learned and the Road Ahead" to practitioners wrestling with exactly the same deployment complexity he'd been attacking since Sense.
The post-acquisition years at Cloudera gave Zajonc something rare: a front-row seat to why AI fails inside large companies. Not because the models are bad. Not because the data isn't there. But because the operational machinery to keep predictions fresh, reliable, and connected to actual business processes is brutally difficult to maintain at scale.
Cloudera's enterprise customer base gave Zajonc a clear view of the gap: companies had invested heavily in data infrastructure and ML talent, yet most AI initiatives stalled at the operationalization step. This pattern became the founding thesis of Continual.
Round Two: Continual
In 2019, Zajonc left Cloudera and co-founded Continual with Tyler Kohn, who became CTO. The original thesis was pointed: the modern data stack - Snowflake, dbt, Fivetran - was becoming the new enterprise data operating system, and AI should live inside it, not beside it. Rather than asking data teams to learn Python ML frameworks, Continual let them use SQL and dbt to define features, train models, and keep predictions continuously refreshed.
The pitch was declarative machine learning - the Terraform of AI. Define what you want, and let the system figure out the operational machinery. In December 2021, Continual launched publicly with $4 million in seed funding. By June 2022, the company had doubled its active users, deployed models, and annual recurring revenue - in three months - and raised a $14.5 million Series A led by Innovation Endeavors. The investor roster told its own story: dbt Labs founder Tristan Handy and Dremio founder Tomer Shiran both wrote checks as angels.
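The "define what you want, let the system figure out the rest" idea can be sketched in miniature. The sketch below is hypothetical: every name in it is invented for illustration and none of it is Continual's actual API. It only shows the shape of the declarative contract, in which the user supplies a spec and a planner derives the imperative operational steps.

```python
# Hypothetical sketch of a declarative ML interface (not Continual's real API).
# The user states WHAT prediction they want; the system owns HOW it stays fresh.

from dataclasses import dataclass

@dataclass
class PredictionSpec:
    """Declarative description of a continually refreshed prediction."""
    name: str            # output predictions table, e.g. "churn_risk"
    entity: str          # the thing being predicted about, e.g. "customer"
    target: str          # column to predict, e.g. "churned_in_30d"
    features_query: str  # SQL the data team already writes, dbt-style
    refresh: str = "daily"

def plan(spec: PredictionSpec) -> list[str]:
    """Toy 'operational machinery' planner: expands the declarative spec
    into the ordered imperative steps it implies."""
    return [
        f"materialize features: {spec.features_query}",
        f"train model for target '{spec.target}' on entity '{spec.entity}'",
        f"write predictions table '{spec.name}'",
        f"schedule refresh: {spec.refresh}",
    ]

spec = PredictionSpec(
    name="churn_risk",
    entity="customer",
    target="churned_in_30d",
    features_query="SELECT customer_id, n_orders, days_since_login FROM features",
)

for step in plan(spec):
    print(step)
```

The Terraform analogy lives in `plan()`: like a Terraform plan, it turns a desired-state declaration into a concrete sequence of actions, so the spec can stay stable while the machinery underneath changes.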
"It gives you this opportunity to build something very, very complex in terms of how it operates, because that's how the world works, but hide it behind a lot of simplicity."
- Tristan Zajonc on Continual's design philosophy
The Generative AI Pivot
Then GPT-4 arrived, and Zajonc pivoted again. Not an anxious pivot - a deliberate one. In late 2023, Continual relaunched as an AI copilot platform: a developer-facing SDK and API layer that let SaaS companies embed LLM-powered assistants directly into their products. The insight was structural: every SaaS application was about to need an AI layer, and most teams didn't have the infrastructure, the context management, or the evaluation tooling to build it well.
By 2024 and 2025, the platform had evolved again, this time into a full AI agent orchestration layer for enterprise operations - enabling organizations to build agents that automate complex workflows, integrate with existing tools, and run continuously. Zajonc speaks at Databricks Data + AI Summit, Data Council, and DataOps conferences as a practitioner, not a pundit. His GitHub has 52 repositories. He posts on X about benchmark rankings for GPT-5.1-Codex with the level of specificity that suggests he's actually run the tests.
"We ultimately see Continual as being powered by an ecosystem of AI capabilities both developed internally and externally."
- Tristan Zajonc on Continual's platform approach
SQL as the Unlikely AI Hero
One of Zajonc's recurring arguments - unfashionable in a world obsessed with Python and neural network frameworks - is that SQL remains the most powerful tool in the enterprise AI stack. "Increasingly, SQL really is this incredibly powerful lingua franca... especially when you deal with scale," he said on The Data Stack Show. At Strata 2022, he gave a talk titled "The Case for Declarative Machine Learning." At Data Council 2023, it was "Generative AI for Product Builders."
The through-line is consistent: the best enterprise AI tools meet data teams where they already are, rather than demanding they adopt a new paradigm from scratch. It's a philosophy rooted in his academic background - building systems that are rigorous enough to work, simple enough to use, and honest about the gap between theory and practice.
An Unusual Resume
Zajonc holds a PhD in public policy and an MPA in international development from Harvard's Kennedy School, and a BA in economics from Pomona College. He co-authored peer-reviewed research with World Bank economists - including work on value-added models in education in South Asia. He was a Visiting Fellow at Harvard's Institute for Quantitative Social Science. None of this shows up in the typical AI founder bio, and that's probably the point.
The academic training in causal inference gave him something most ML founders lack: a precise vocabulary for the difference between correlation and prediction, between a model that works in the lab and one that produces reliable decisions in production. He's been applying that vocabulary to enterprise AI problems for over a decade, through two companies, one acquisition, and a series of technology cycles that most observers have called paradigm shifts and he's treated as infrastructure problems.