ZOA RESEARCH FORECASTING THE FUTURE ACROSS EVERY DOMAIN Y COMBINATOR S24 NEW YORK, NY JANE STREET MEETS HARVARD LAW CROSS-DOMAIN AI FORECASTING ENGINES ALPHAEVOLVE FOR FORECASTING Z-GRANTS: $10K FOR ML RESEARCH EARTHQUAKES. OPTIONS. ENERGY. ONE ENGINE.
Greg Volynsky, CEO of Zoa Research
Profile / Company

Zoa
Research

The lab that bets on better forecasting - across every domain on earth

A Harvard Law dropout meets a Jane Street algo trader. They both decided the real edge isn't in the trade - it's in the prediction. Meet the New York lab training AI to see around corners.

Y Combinator • Summer 2024 • New York, NY

One Engine. Every Domain.

Most forecasting tools are built for one thing. Bloomberg's terminal knows bonds. OpenWeather knows rain. Your options desk quant knows implied vol. But the world doesn't divide itself neatly into silos - and the best predictions are made by people who can read signals from across disciplines.

That's the premise behind Zoa Research. The New York-based AI lab, founded by Greg Volynsky and Sam Damashek and backed by Y Combinator's Summer 2024 cohort, is building quantitative forecasting models that don't specialize. They generalize. One cross-domain engine that can be pointed at earnings surprises, earthquake risk, energy demand, or supply chain disruption - and produce calibrated, probabilistic predictions.

The bet isn't that AI can beat a domain expert at their own game. The bet is that a model trained across thousands of different event types will spot patterns that no single expert ever could.
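To make "calibrated, probabilistic predictions" concrete: a calibrated forecaster's 70% calls should come true roughly 70% of the time, and proper scoring rules reward exactly that honesty. Here's a minimal sketch - illustrative only, not Zoa's code or data - using the Brier score, a standard proper scoring rule for probabilistic event forecasts:

```python
# Minimal sketch: scoring probabilistic event forecasts with the Brier score.
# All numbers below are illustrative, not Zoa data.

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes.
    Lower is better; always guessing 50% scores exactly 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

preds = [0.9, 0.7, 0.2, 0.1]   # predicted probabilities
truth = [1, 1, 0, 0]           # what actually happened
print(brier_score(preds, truth))  # ≈ 0.0375
```

Because the Brier score is proper, a model can't improve its score by hedging toward 50% or bluffing toward certainty - the only way to win is to report probabilities that match reality.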

"Better forecasting doesn't just improve trading. It improves science - enabling more effective experimentation and better updating of beliefs."

- Zoa Research

What makes this more than a clever pitch is the mechanism. Zoa embeds large language models inside multi-agent optimization loops with fixed evaluation policies. The system builds models, tests them, and iterates - automatically. Internally, they describe it as "AlphaEvolve for forecasting." The self-play parallel is intentional: AlphaZero didn't become a superhuman Go player by studying human games. It played millions of games against itself and learned from every loss.

Zoa is doing the same thing, except the game is predicting what happens next in the real world.

2024 Founded
YC S24 Batch
$10K Z-Grant Amount
5+ Team Members
6 Domains Covered

A Lawyer, A Trader, and a Forecasting Engine

Founders rarely get their origin story right on the first try. Greg and Sam met at Carnegie Mellon in 2017 - not in a machine learning lab, but somewhere in the intersection of curiosity and ambition that CMU tends to produce. Greg went to Harvard Law. Sam went to Jane Street.

Co-Founder & CEO
Greg Volynsky
Carnegie Mellon → Harvard Law

Greg is the kind of person who reads political history for fun and listens to Soviet bard music while debating AI ethics. Harvard Law gave him the frameworks for structured reasoning. Carnegie Mellon gave him the quantitative foundation. Zoa gave him the question worth asking: what if we could forecast everything?

Co-Founder & CTO
Sam Damashek
CMU CS → Jane Street (NYC & HK) → Zoa

Three years at Jane Street trading options algorithmically in New York. Then a move to Hong Kong to build out a satellite development team for Asia markets. Sam didn't just understand how financial models work - he built them at one of the most demanding quant shops on earth. He's also a CTF player, which explains the adversarial mindset.

Together, they pulled in a team that includes a CMU CS PhD in automated reasoning (proof complexity, SAT solvers), a Princeton CS full-stack/ML engineer, and an operations lead. The team reads like a YC dream list, except they actually met each other before the application.

The Forecasting Pipeline

Traditional forecasting relies on human intuition and domain knowledge. Zoa's system automates the entire build-test-improve cycle using LLMs inside multi-agent optimization loops.

01 Cross-Domain Data - financial, scientific, and geophysical signals
02 LLM Agents - build candidate forecasting models
03 Fixed Eval Policy - rigorous backtesting and scoring
04 Auto Iteration - loop until calibration improves
05 Generalized Engine - deploy as capital or API
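The five steps above can be sketched as a toy loop. Everything here is hypothetical - `propose_model` is a random perturbation standing in for an LLM agent, and the fixed evaluation policy is a simple Brier-style score - but it shows the shape of a build-test-iterate cycle in which the scoring rule never changes, so every candidate is judged the same way:

```python
# Hypothetical sketch of a build / test / iterate loop with a fixed eval policy.
# `propose_model` stands in for an LLM agent; here it just perturbs a bias
# parameter so the example is self-contained and runnable.
import random

def fixed_eval_policy(model, history):
    """Fixed scoring rule (mean squared error against 0/1 outcomes).
    The policy never changes, so scores are comparable across iterations."""
    return sum((model(x) - y) ** 2 for x, y in history) / len(history)

def propose_model(best_bias):
    """Stand-in for an LLM agent proposing a model variant."""
    bias = best_bias + random.uniform(-0.1, 0.1)
    return bias, (lambda x, b=bias: min(1.0, max(0.0, x + b)))

def optimization_loop(history, iterations=200):
    random.seed(0)
    best_bias, best_model = 0.0, (lambda x: x)
    best_score = fixed_eval_policy(best_model, history)
    for _ in range(iterations):
        bias, candidate = propose_model(best_bias)
        score = fixed_eval_policy(candidate, history)
        if score < best_score:  # keep only strict improvements
            best_bias, best_model, best_score = bias, candidate, score
    return best_model, best_score

# Toy "events": the raw signal x systematically understates the true probability,
# so the loop should discover a positive bias correction.
history = [(0.3, 0), (0.5, 1), (0.6, 1), (0.2, 0), (0.7, 1)]
model, score = optimization_loop(history)
```

The loop never beats the baseline by changing the yardstick - only by producing models that score better under the same fixed policy. That separation between proposer and judge is what keeps automated iteration honest.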

The insight isn't the model. It's the loop. AlphaZero didn't study human games - it played millions against itself. Zoa doesn't study forecasting domains - it builds and breaks models until they work.

- The ZOA Approach

Where the Engine Points

The same underlying architecture is used across domains that have nothing obvious in common. That's by design. A model that can switch between earthquake forecasting and energy demand isn't a generalist - it's an arbitrageur of pattern recognition.

📈Finance
🌐Supply Chain
⚡Energy
🌋Geophysical
🧬Pharma / Bio
🔬Scientific Research

Three Ways to Bet on Better Predictions

💰
Proprietary Trading

Zoa puts its models to work with real capital. The lab isn't just building forecasting tools - it deploys them against live markets. Skin in the game as a feature, not a footnote.

🔌
Forecast-as-a-Service

An API and console product giving researchers, quant funds, and enterprise teams access to Zoa's cross-domain forecasting engines. Probabilistic event prediction, available as infrastructure.

🎓
Z-Grants

$10,000 grants for ML research projects. Recipients share their learnings and join Zoa's research community. Part funding mechanism, part talent pipeline, part intellectual network.

The Z-Grant: Research With a Return

Most research grants come from foundations with agendas, or corporations with NDAs. Zoa's approach is different. The Z-Grant program hands $10,000 to ML researchers - no strings attached beyond sharing what they learn and joining the research community.

It's a classic YC move: build the network before you need it. A lab that funds researchers creates a distributed brain trust of people who understand forecasting from first principles. When you need to hire or collaborate, you're not cold-calling - you're texting people you already funded.

$10K

Z-Grants for ML Research

Open grants for machine learning projects that advance the science of forecasting. Recipients share learnings and join Zoa's growing research network. Apply at zoaresearch.com/zgrant.

The Story So Far

2017
Greg Volynsky and Sam Damashek meet at Carnegie Mellon University.
Pre-2024
Sam spends three years at Jane Street trading options algorithmically in NYC, then builds out a satellite development team in Hong Kong for Asia markets. Greg attends Harvard Law School.
Early 2024
Zoa Research is founded in New York City. The team assembles around a shared thesis: generalized forecasting models outperform specialized ones across every domain.
Summer 2024
Accepted into Y Combinator's Summer 2024 batch - part of the roughly 75% of the cohort building AI products. Funding closes, and product development accelerates.
2024
Launches the Z-Grants program offering $10,000 for ML research. Begins recruiting from Princeton, CMU, and top quant finance programs.
2024 - Present
Forecast-as-a-Service API development continues. Proprietary trading operations running. Team growing with PhD-level talent in automated reasoning and ML.

Forecasting as Infrastructure

The interesting thing about Zoa's dual business model isn't that they trade with their own models. It's that the trading operation functions as a proof-of-work for the API product.

Every time Zoa deploys capital on a prediction, they're generating evidence that the models are real. Not a research demo. Not a whitepaper. An actual position in a real market. When they eventually sell API access to a quant fund or a pharmaceutical lab, the track record is built-in.

It's a credibility flywheel. The prop trading makes the FaaS product trustworthy. The FaaS product funds more model development. The models get better. Rinse, repeat.

Prop Trading - proof of work
FaaS API - B2B revenue
Z-Grants - talent network
Research Loop - compounding moat

Things Worth Knowing About Zoa

🎵
Greg Volynsky listens to Soviet bard music and debates political history for fun. His LinkedIn reads like that of someone who refused to pick just one obsession.
🏦
Sam Damashek played CTFs with PPP at CMU - one of the most competitive hacking teams in collegiate computer science. He brings the same adversarial problem-solving to market prediction.
🌏
Before Zoa, Sam built and managed a satellite dev team in Hong Kong for Jane Street's Asia options markets. He went from NYC trading floors to HK skyscrapers before landing back in New York to build a forecasting company.
🏫
The team includes a CMU PhD in automated reasoning - specifically proof complexity and SAT solvers. That's the kind of hire you make when you're serious about building systems that can prove their own predictions are well-calibrated.
🎓
Princeton Career Services listed a Zoa Research coffee chat event. A startup this young doing campus coffee chats at Princeton is either charming or savvy. Probably both.
⚗️
The founding thesis treats forecasting and scientific discovery as the same problem. Better predictions don't just make better trades - they make better experiments. Zoa is betting both at once.