BREAKING
Predibase acquired by Rubrik for $100M+ · LoRA Land: beat GPT-4 for under $8 · 1 million team hours saved at Marsh McLennan · Checkr cut costs 5x vs GPT-4 · LoRAX: 1,000+ models from a single GPU · First managed RFT platform for open-source LLMs · $28.4M raised from Greylock & Felicis · 10,000+ SLMs fine-tuned on the platform
Predibase - AI Fine-Tuning Infrastructure
San Francisco · Founded 2020 · Acquired 2025

Predibase

They made GPT-4 look expensive. Then Rubrik wrote a cheque.

AI Infrastructure LLM Fine-Tuning Open Source Enterprise Acquired by Rubrik

Predibase built the infrastructure stack that any enterprise could use to fine-tune open-source language models and run them in production - at lower cost, higher speed, and often higher accuracy than commercial AI providers. Born inside Uber's AI team, it spent five years proving that you don't need a nine-figure model to get nine-figure results. In June 2025, Rubrik agreed and acquired them for over $100 million.


$28M
Total Raised
$100M+
Acquired For
10K+
Models Fine-Tuned
1M+
Hours Saved
The Origin

A Bet Made Inside Uber's AI Lab

In 2020, Piero Molino and Travis Addair were doing what many talented engineers do inside a big tech company - building incredible tools that most of the world would never see. Molino had created Ludwig, a declarative deep learning framework that let teams define what they wanted a model to do without writing thousands of lines of training code. Addair had led the ML infrastructure platform at Uber and co-built Horovod, a distributed training framework that would eventually rack up 13,000 GitHub stars. Both were good at their jobs. Both were thinking bigger.

The question they kept returning to: why should cutting-edge machine learning be reserved for companies with hundred-person ML teams? The tools existed. The models were getting better. What was missing was an opinionated, easy-to-use layer that stitched everything together.

They left Uber, pulled in Stanford professor Chris Ré (whose Snorkel framework had quietly become essential at tech companies and who had previously built Apple's first production declarative ML system) and Devvret Rishi - who'd spent five years as a product manager at Google working across Firebase, Kaggle, and Google AI. Predibase was incorporated in 2020. They came out of stealth in May 2022 with $16.25 million led by Greylock and a mission that could fit on a card: make ML easy for any engineer.

The Founding Insight
"Make it dead simple for novices and experts alike to build ML applications and get them into production with just a few lines of code."
- Predibase founding mission
The Pivot
When large language models changed everything in 2023, Predibase didn't panic - it sharpened its focus. Fine-tuning and serving open-source LLMs was exactly what their infrastructure had been built to do. They just needed to tell everyone that.
The Team

Four People Who'd Already Built the Future Once

PM
Piero Molino
Co-Founder & original CEO
Creator of Ludwig at Uber AI - the declarative ML framework with 9,000+ GitHub stars. Researcher with roots at Stanford, IBM Watson, and Yahoo. Built the philosophical backbone of Predibase: you define what, the platform figures out how.
TA
Travis Addair
Co-Founder & CTO
Led ML infrastructure at Uber. Co-architect of Horovod (13,000+ stars, used in production at almost every serious ML shop). MS from Stanford. The person who built the plumbing that made Predibase's efficiency claims technically credible.
DR
Devvret Rishi
Co-Founder & CEO
Five years as a PM at Google, across Firebase, Kaggle, and Google AI. The first Google AI PM at Kaggle - which means he helped grow the world's largest data science community. Took over as CEO and steered the company through the LLM era and into acquisition.
CR
Chris Ré
Co-Founder & Academic Advisor
Stanford CS professor and head of the Hazy ML research group. Creator of the Snorkel data programming framework. Built Overton at Apple - one of the first declarative ML systems deployed at industrial scale. The academic credibility behind the tech.
What They Built

The Stack That Put GPT-4 on Notice

Core Platform
Fine-Tuning Platform
Managed, serverless supervised fine-tuning using LoRA adapters. Multi-GPU training, large dataset support, function-calling fine-tuning for agentic workflows, and continuous model updates. You write the config, they handle the cluster.
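To make "you write the config, they handle the cluster" concrete, here is a hypothetical declarative fine-tuning spec expressed as a Python dict. Every field name is illustrative - this is the spirit of the Ludwig/Predibase declarative approach, not the actual platform schema.

```python
# Hypothetical declarative fine-tuning spec in the spirit of
# Predibase's "write the config, they handle the cluster" model.
# All field names and the dataset path are illustrative, not the
# real Predibase or Ludwig schema.

config = {
    "base_model": "mistral-7b",            # open-source model to adapt
    "adapter": {"type": "lora", "rank": 8, "alpha": 16},
    "task": "function_calling",            # e.g. for agentic workflows
    "dataset": "s3://bucket/train.jsonl",  # platform streams the data
    "epochs": 3,
}

def validate(cfg):
    """Minimal sanity check a platform might run before scheduling GPUs."""
    required = {"base_model", "adapter", "dataset"}
    missing = required - cfg.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return True

validate(config)
```

The point of the declarative style is that everything operational - GPU allocation, distributed training, checkpointing - lives behind the config, not in user code.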
Open Source
LoRAX
Multi-LoRA inference server with 3,700+ GitHub stars. Serves thousands of fine-tuned models from a single GPU using dynamic adapter loading and heterogeneous continuous batching. A latency overhead of only 20% while serving 128 different adapters concurrently is the engineering flex here.
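The core idea that makes multi-LoRA serving cheap can be sketched in a few lines: one frozen base weight matrix is shared across all requests, while each request selects a tiny per-adapter pair of low-rank factors. This is a toy illustration of the LoRA math, not LoRAX's actual implementation.

```python
# Toy sketch of the multi-LoRA serving idea: a shared frozen base
# weight W plus a per-request low-rank correction B(A x), scaled by
# alpha / r. The adapters and dimensions here are made up.

def matvec(M, x):
    """Dense matrix-vector product over nested lists."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def lora_forward(W, adapters, adapter_id, x, alpha=16, r=2):
    """y = W x + (alpha / r) * B (A x) for the requested adapter."""
    A, B = adapters[adapter_id]       # low-rank factors, shapes (r, d) and (d, r)
    base = matvec(W, x)               # shared base-model compute
    delta = matvec(B, matvec(A, x))   # cheap per-adapter correction
    return [b + (alpha / r) * d for b, d in zip(base, delta)]

# Two hypothetical adapters sharing one 2x2 base weight.
W = [[1.0, 0.0], [0.0, 1.0]]
adapters = {
    "sentiment": ([[0.1, 0.0], [0.0, 0.1]], [[0.5, 0.0], [0.0, 0.5]]),
    "ner":       ([[0.0, 0.2], [0.2, 0.0]], [[0.0, 1.0], [1.0, 0.0]]),
}
y = lora_forward(W, adapters, "sentiment", [1.0, 2.0])
```

Because A and B are tiny relative to W, thousands of adapters fit in memory alongside one copy of the base model - which is exactly why "1,000+ models from a single GPU" is possible.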
Proprietary Stack
Inference Engine
Launched October 2024. Combines Turbo LoRA, LoRAX, and FP8 quantization for 3-4x throughput vs. base models and over 50% cost reduction. The thing that made enterprises stop running the OpenAI cost calculations.
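A toy absmax quantization sketch illustrates why lower-precision formats such as FP8 raise throughput: weights are stored in fewer bits and rescaled on the fly. This uses a simple int8-style scheme as a stand-in for the idea; real FP8 kernels are far more involved.

```python
# Toy absmax quantization: store weights as small integers plus one
# scale factor, trading a bounded rounding error for memory and
# bandwidth. Illustrative only - not the Inference Engine's FP8 path.

def quantize(weights, levels=127):
    """Map floats onto [-levels, levels] integers with one shared scale."""
    scale = max(abs(w) for w in weights) / levels
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats; error per weight is at most scale / 2."""
    return [qi * scale for qi in q]

w = [0.02, -0.5, 0.31, 0.127]
q, s = quantize(w)
w_hat = dequantize(q, s)   # close to w, at a fraction of the memory
```

Halving the bytes per weight roughly doubles how many weights each memory transfer moves, which is where much of the claimed throughput gain comes from.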
Feb 2025
RFT Platform
First fully managed Reinforcement Fine-Tuning SDK using GRPO. Enables reasoning model training with as few as 10 labeled examples. Beat GPT-4 with 100 data points. The idea that you need vast amounts of training data got a little harder to defend after this launched.
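The reason so few labeled examples can suffice comes from GRPO's core trick: sample a group of completions per prompt, score each with a reward function, and normalize rewards within the group instead of training a separate value model. A minimal sketch of that group-relative advantage, not Predibase's code:

```python
# Sketch of the group-relative advantage at the heart of GRPO
# (Group Relative Policy Optimization). Each completion's reward is
# normalized against its sampling group's mean and spread, so no
# learned value model is needed. The rewards below are made up.

from statistics import mean, stdev

def group_relative_advantages(rewards, eps=1e-8):
    """Advantage of each completion relative to its sampling group."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Hypothetical rewards for 4 completions sampled from one prompt,
# e.g. 1.0 if the answer passed a programmatic correctness check.
advs = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
# Completions that beat the group average get positive advantage
# and are reinforced; the rest are pushed down.
```

Because the reward can be a programmatic check rather than a big labeled dataset, a handful of verified examples is enough to define the training signal.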
Feb 2024
LoRA Land
25+ fine-tuned Mistral-7B models that outperform GPT-4 on domain tasks, all fine-tuned for under $8 on average, all served from a single A100. This was the announcement that made people in the industry pay attention in a different way.
Enterprise
Predibase VPC
Virtual Private Cloud deployment for enterprises where data sovereignty isn't optional. Isolated GPU infrastructure for fine-tuning and inference. Because some companies can't send their data to a shared pool, no matter how cheap it is.
In the Wild

When the Numbers Speak for Themselves

Marsh McLennan
1M+ Hours Saved
Built LenAI - an employee assistant that surfaces institutional knowledge across 90,000+ staff. In year one, it saved the equivalent of roughly 500 full-time workers' time. Accuracy was also 10-12% better than their previous solution.
Checkr
5x Cost Reduction
Replaced GPT-4 with a Predibase fine-tuned Llama-3-8B for background check classification. Five times cheaper, improved accuracy, faster inference. The economics of custom fine-tuning vs. paying per-token to a closed model made themselves obvious.
Convirza
60 Adapters Live
Serving 60 LoRA adapters for conversation analytics at scale, with average response times under 2 seconds. Built on LoRAX - which makes serving that many models simultaneously economically viable rather than prohibitive.

Other customers included Forethought, Nubank, and Qualcomm - across customer support automation, financial services, and edge AI. The verticals that were already thinking about data privacy found the VPC offering particularly interesting.

Open Source Roots

Three Projects That Shaped the Field

Before Predibase raised a dollar, its founders had already built some of the most widely used open-source ML infrastructure in the world. The credibility that came with those repos was not accidental - it was how the founding team had been thinking about open, composable ML for years.

LoRAX
★ 3,700+ Stars
Multi-LoRA inference server created by Predibase. Serves thousands of fine-tuned models from a single GPU. Apache 2.0 license. The technical foundation of the entire Predibase serving stack.
Ludwig
★ 9,000+ Stars
Declarative deep learning framework created by Piero Molino at Uber. 136+ contributors. Define what you want; Ludwig handles the model architecture. The philosophical precursor to everything Predibase became.
Horovod
★ 13,000+ Stars
Distributed deep learning training framework co-created by Travis Addair at Uber. 169+ contributors. Used at scale across the ML industry. One of the most deployed distributed training tools in production.
The Road

From Uber AI to Rubrik

2020
Piero Molino and Travis Addair begin building Predibase inside and around Uber's AI team. The idea: productize the declarative ML concepts behind Ludwig and Horovod into a platform any engineer can use.
May 2022
Exits stealth. Announces $16.25M Series A led by Greylock, with Factory and angel Anthony Goldbloom (Kaggle founder) participating. Launches as a "declarative low-code ML platform."
May 2023
Extends Series A to $28M with $12.2M from Felicis Ventures. Pivots messaging toward LLMs - announces any engineer can now "build their own GPT." The open-source LLM era has arrived and Predibase is ready.
Feb 2024
Launches LoRA Land. 25+ fine-tuned Mistral-7B models, each costing under $8 to train, all outperforming GPT-4 on domain-specific benchmarks, all served from a single GPU. The story travels.
Oct 2024
Releases Predibase Inference Engine. Turbo LoRA plus LoRAX plus FP8 quantization equals 3-4x throughput improvement and over 50% cost reduction. Over 10,000 SLMs fine-tuned on the platform.
Feb 2025
Launches the first fully managed Reinforcement Fine-Tuning platform for open-source LLMs. Using GRPO, it can train reasoning models with as few as 10 labeled examples - an order-of-magnitude drop in data requirements that makes RFT dramatically more accessible.
May 2025
First provider to offer private on-demand Qwen 3 model endpoints on AWS.
June 25, 2025
Rubrik (NYSE: RBRK) acquires Predibase for over $100M. Predibase technology becomes the engine of Rubrik Agent Cloud. The fine-tuning and serving stack that started in a notebook at Uber now governs AI agents at enterprise scale.
Worth Knowing

Six Things That Make Predibase Interesting

The idea for Predibase came from a simple question inside Uber's AI team: why are we building tools this good just for ourselves? Ludwig and Horovod were already used by the broader community. Predibase was the attempt to make that accessible as a product.
LoRAX can serve over 1,000 different fine-tuned models from a single GPU, and serving 128 different adapters concurrently adds only about 20% latency overhead. That is a systems engineering achievement most GPU infrastructure teams would quietly envy.
Marsh McLennan saved over a million employee hours in year one using Predibase-powered AI. The maths works out to roughly 500 full-time workers' worth of time. Accuracy improved by 10-12% over the previous approach.
Checkr replaced GPT-4 with a Predibase fine-tuned Llama-3-8B. The result: five times cheaper, better accuracy, faster inference. At some point the cost-per-call numbers for closed models stop being theoretical and become a line item someone has to defend in a budget meeting.
Between Molino (Ludwig: 9K stars) and Addair (Horovod: 13K stars), the Predibase founding team had co-built two of the highest-starred open-source ML frameworks before they ever founded a company together. The GitHub receipts were already there.
Academic co-founder Chris Ré built Overton at Apple - one of the first declarative ML systems deployed at real industrial scale - before returning to Stanford. He also created Snorkel, the data programming framework that became quietly essential at many of the world's largest tech companies.