GPU CLOUD · FOUNDED 2022 · NEW JERSEY

RunPod

The GPU cloud that started with basement mining rigs and a Reddit post

Two Comcast engineers turned obsolete Ethereum miners into the developer GPU cloud that's giving AWS, GCP, and Azure an uncomfortable conversation to have with their boards.

GPU CLOUD AI INFRASTRUCTURE SERVERLESS $120M ARR DEVELOPER TOOLS OPEN SOURCE
$120M ARR (Jan 2026)
500K+ Developers
183 Countries
$22M Total Raised

From Crypto Mining Rigs to the Cloud Everyone's Talking About

Late 2021. Two software engineers at Comcast - Zhen Lu and Pardeep Singh - decide to do what a lot of people were doing: invest in GPU rigs to mine Ethereum. They each put in somewhere between $25,000 and $50,000. Racks in basements. The whole thing.

Then came The Merge. Ethereum switched to Proof-of-Stake in September 2022, and GPU mining became pointless overnight. Most people in that position sold the hardware at a loss, or let it gather dust. Lu and Singh looked at their racks and saw something else: compute.

Within months, they had pivoted the rigs into AI inference servers. They spent three months writing an MVP in Golang, kept the interface clean and the deployment fast, and launched quietly. Their first marketing strategy was posting on Reddit - offering free GPU access to anyone who'd try the product and give feedback.

It worked. Within nine months of launching, RunPod hit $1M in annual recurring revenue.

"We didn't set out to build a cloud company. We set out to make use of what we had. The AI timing just happened to be right."

- Origin story ethos, as told in RunPod's Founder Series

The pairing made sense. Lu, who'd wanted the CEO role and said so directly when the conversation came up, had a manager's instinct for building and scaling teams. Singh, the CTO, had been building side projects for years - a fitness app, a jewelry resale business, a music playlist tool - and brought a serial builder's tolerance for ambiguity.

What they built was not a polished enterprise pitch. It was a developer tool that worked, priced honestly, and deployed fast. The kind of thing that spreads through engineering forums before anyone's even heard of a company's PR strategy.

Today RunPod is headquartered in Moorestown, New Jersey, employs roughly 80 to 90 people across a remote-first team, and processes 8 exabytes of network traffic annually. The basement GPU rigs are presumably retired.

Six Ways to Run Something on RunPod

Whether you're fine-tuning a model, running production inference, or training a 64-node cluster - RunPod has a product for the job. Per-second billing, no egress fees, and no minimum commitment.
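Per-second billing is easy to reason about: divide the hourly rate by 3,600 and multiply by the seconds you actually ran. A minimal sketch, using a hypothetical hourly rate (check RunPod's current pricing page for real numbers):

```python
# Per-second billing math. The rate below is a hypothetical placeholder,
# not a quoted RunPod price.
HOURLY_RATE_USD = 2.79  # assumed on-demand rate for illustration only

def cost_for_seconds(seconds: int, hourly_rate: float = HOURLY_RATE_USD) -> float:
    """Cost of running a pod for `seconds`, billed per second with no minimum."""
    return round(hourly_rate / 3600 * seconds, 6)

# A 90-second experiment costs cents, not a full billed hour:
print(cost_for_seconds(90))
```

The point of the model: a failed 90-second run costs you 90 seconds, not an hour-long minimum.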

GPU PODS

Persistent, dedicated GPU instances from RTX 4090s to H100s and B200s. Full control over OS, drivers, and container environment. Think traditional VM, but for AI workloads.

POPULAR
SERVERLESS ENDPOINTS

Auto-scaling inference endpoints with sub-500ms cold starts via FlashBoot. Zero idle costs. For production AI APIs that see unpredictable traffic.

NEW 2025
INSTANT CLUSTERS

Provision 16 to 64 H100s for distributed multi-node training - in minutes, not days. Launched March 2025 for teams that need serious scale without the enterprise procurement nightmare.

NEW 2025
SERVERLESS CPU

For the parts of your pipeline that don't need a GPU - data prep, agent orchestration, backend processing. Keeps everything in one billing relationship.

RUNPOD HUB

One-click deployment of open-source AI projects. A marketplace that takes up to 7% of compute spend. The "app store" layer sitting on top of raw infrastructure.

AI MODEL APIs

Pre-deployed models accessible via HTTP. No infrastructure setup, no container management - just call an endpoint. For developers who want results, not configuration.
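"Just call an endpoint" looks roughly like any authenticated JSON POST. A sketch using only the Python standard library - the URL, payload shape, and header scheme here are hypothetical placeholders, not RunPod's documented API:

```python
import json
import urllib.request

# Hypothetical endpoint and key - substitute the real values from your
# provider's dashboard and API reference.
API_KEY = "YOUR_API_KEY"
ENDPOINT = "https://api.example.com/v1/generate"  # placeholder URL

def build_request(prompt: str) -> urllib.request.Request:
    """Build an authenticated JSON POST for a pre-deployed model endpoint."""
    payload = json.dumps({"input": {"prompt": prompt}}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("A watercolor fox in the snow")
# urllib.request.urlopen(req) would actually send it; omitted here.
```

No container to build, no driver to pin - the infrastructure decisions collapse into one HTTP call.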

From Hobbyists to Fortune 500s

RunPod's customer base spans a wide range that most infrastructure companies would struggle to serve simultaneously. Individual researchers spend $10 a month running experiments. Enterprise teams from companies like OpenAI, Wix, and Zillow have multi-million dollar annual commitments. The platform doesn't distinguish much - the same billing model applies to both.

Civitai, the AI image model community, generates over 800,000 LoRA fine-tunes per month on RunPod using 500 or more concurrent GPUs. Replit, Cursor, and Perplexity all run workloads on the platform. The 500,000+ developers come from 183 countries.

Net Dollar Retention sits at 120% - meaning customers spend more over time, not less. That's not a vanity metric for a cloud infrastructure company. It means the product earns continued expansion without sales calls.

OpenAI
Replit
Cursor
Perplexity
Wix
Zillow
Civitai
Glam Labs
500K+ Developers

The Numbers That Make VCs Nervous

In a sector where companies routinely raise hundreds of millions before hitting $10M ARR, RunPod's metrics are an argument against conventional fundraising wisdom.

How You Get $22M Without a Pitch Deck Tour

RunPod's funding story reads less like a traditional Silicon Valley roadshow and more like a series of accidental discoveries. Each investor found the company, not the other way around.

JULIEN CHAUMOND

Co-founder of Hugging Face. Became an investor after reaching out via RunPod's customer support chat - as a regular paying user. He liked the product enough to ask if he could put money in. The company said yes.

DELL TECHNOLOGIES CAPITAL

Partner Radhika Malik found RunPod through its Reddit posts. The founders weren't pitching - they were building in public and posting about it. Dell's VC arm reached out cold. Lead co-investor in the $20M seed round.

INTEL CAPITAL

Lead investor in the May 2024 $20M seed round. Intel's corporate VC arm has a strong track record in cloud and AI infrastructure. RunPod's enterprise trajectory made it a fit for their portfolio thesis.

NAT FRIEDMAN & AMJAD MASAD

Former GitHub CEO Nat Friedman and Replit CEO Amjad Masad both participated as angel investors in early rounds. Masad's company Replit is also a RunPod customer - the alignment between product use and investment is not coincidental.

What Happens When You Process 8 Exabytes of AI Traffic

In March 2026, RunPod published its inaugural State of AI Report - built from anonymized production data across its platform, covering 183 countries. When you run the GPU infrastructure for a significant slice of the world's self-hosted AI workloads, you see things that survey-based reports can't capture.

Recent Moves

MARCH 2026
Published inaugural 2026 State of AI Report - the first production-data-based industry analysis from anonymized GPU utilization across the platform. Qwen overtook Llama. B200 usage scaled 25x.
JANUARY 2026
Announced $120M ARR milestone via TechCrunch exclusive. 500,000+ developers on platform. 90% YoY revenue growth. 155% YoY developer signup growth.
DECEMBER 2025
Launched Serverless CPU - extending the platform beyond GPU compute into data prep, agent orchestration, and general backend processing.
AUGUST 2025
Launched Public Endpoints - pre-deployed AI models accessible via simple HTTP API, no infrastructure setup required.
MARCH 2025
Launched Instant Clusters - provision 16 to 64 H100s for distributed multi-node training in minutes.
MAY 2024
Raised $20M seed round co-led by Intel Capital and Dell Technologies Capital. Reached 100,000 developer milestone.

The Details That Actually Matter

ORIGIN

RunPod's founders didn't sell their Ethereum mining rigs when The Merge happened. They pointed them at AI workloads instead. That pivot decision is now worth $120M in ARR.

FIRST CUSTOMERS

The first customers came from a Reddit post. Not a press release, not a Product Hunt launch. A forum post offering free GPU time in exchange for feedback.

INVESTOR DISCOVERY

Hugging Face's co-founder Julien Chaumond found RunPod by using it. He reached out through the support chat. Dell Technologies Capital found it through Reddit posts. No pitch tour required.

CAPITAL EFFICIENCY

$22M raised. $120M ARR. In cloud infrastructure, where raising $200M to get to $50M ARR is considered normal, this ratio is genuinely unusual.

CIVITAI STAT

Civitai trains over 800,000 AI image model fine-tunes every single month on RunPod, using 500+ concurrent GPUs. That's more model training in one month than most labs do in a year.

THE PRICE GAP

RunPod claims GPU compute up to 90% cheaper than AWS, GCP, and Azure. At the prices hyperscalers charge for H100 hours, that's a number with a lot of zeros behind it for high-volume users.
