The GPU cloud that started with basement mining rigs and a Reddit post
Two Comcast engineers turned obsolete Ethereum miners into a developer GPU cloud that's forcing AWS, GCP, and Azure into an uncomfortable conversation with their boards.
Late 2021. Two software engineers at Comcast - Zhen Lu and Pardeep Singh - decide to do what a lot of people were doing: invest in GPU rigs to mine Ethereum. They each put in somewhere between $25,000 and $50,000. Racks in basements. The whole thing.
Then came The Merge. Ethereum switched to Proof-of-Stake in September 2022, and GPU mining became pointless overnight. Most people in that position sold the hardware at a loss, or let it gather dust. Lu and Singh looked at their racks and saw something else: compute.
Within months, they had pivoted the rigs into AI inference servers. They spent three months writing an MVP in Golang, kept the interface clean and the deployment fast, and launched quietly. Their first marketing strategy was posting on Reddit - offering free GPU access to anyone who'd try the product and give feedback.
It worked. Within nine months of launching, RunPod hit $1M in annual recurring revenue.
"We didn't set out to build a cloud company. We set out to make use of what we had. The AI timing just happened to be right."
- Origin story ethos, as told in RunPod's Founder Series

The pairing made sense. Lu, who'd wanted the CEO role and said so directly when the conversation came up, had a manager's instinct for building and scaling teams. Singh, the CTO, had been building side projects for years - a fitness app, a jewelry resale business, a music playlist tool - and brought a serial builder's tolerance for ambiguity.
What they built was not a polished enterprise pitch. It was a developer tool that worked, priced honestly, and deployed fast. The kind of thing that spreads through engineering forums before anyone's even heard of a company's PR strategy.
Today RunPod is headquartered in Moorestown, New Jersey, employs roughly 80 to 90 people across a remote-first team, and processes 8 exabytes of network traffic annually. The basement GPU rigs are presumably retired.
Whether you're fine-tuning a model, running production inference, or training a 64-node cluster - RunPod has a product for the job. Per-second billing, no egress fees, and no minimum commitment.
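Per-second billing is simple arithmetic: the hourly rate divided by 3,600, times the seconds actually consumed. A minimal sketch - the hourly rate below is illustrative, not a quoted RunPod price:

```python
def per_second_cost(hourly_rate_usd: float, seconds_used: int) -> float:
    """Cost under per-second billing: pay only for the seconds consumed."""
    return round(hourly_rate_usd / 3600 * seconds_used, 4)

# $3.60/hr is an illustrative rate, not a quoted RunPod price.
print(per_second_cost(3.60, 90))    # a 90-second inference burst
print(per_second_cost(3.60, 3600))  # a full hour
```

The contrast with per-hour minimums is the point: a 90-second job bills for 90 seconds, not a rounded-up hour.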
Pods: Persistent, dedicated GPU instances from RTX 4090s to H100s and B200s. Full control over OS, drivers, and container environment. Think traditional VM, but for AI workloads.
Serverless: Auto-scaling inference endpoints with sub-500ms cold starts via FlashBoot. Zero idle costs. For production AI APIs that see unpredictable traffic.
Instant Clusters: Provision 16 to 64 H100s for distributed multi-node training - in minutes, not days. Launched March 2025 for teams that need serious scale without the enterprise procurement nightmare.
CPU Pods: For the parts of your pipeline that don't need a GPU - data prep, agent orchestration, backend processing. Keeps everything in one billing relationship.
Hub: One-click deployment of open-source AI projects. A marketplace that takes up to 7% of compute spend. The "app store" layer sitting on top of raw infrastructure.
Public Endpoints: Pre-deployed models accessible via HTTP. No infrastructure setup, no container management - just call an endpoint. For developers who want results, not configuration.
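"Just call an endpoint" really is the whole integration. A sketch of the general shape using only Python's standard library - the `api.runpod.ai/v2/{endpoint_id}/runsync` URL pattern and the `{"input": ...}` payload follow RunPod's documented serverless convention, but treat the endpoint ID and payload fields here as illustrative placeholders:

```python
import json
import urllib.request

def build_request(endpoint_id: str, api_key: str, prompt: str) -> urllib.request.Request:
    """Build a POST to a RunPod-style serverless endpoint.

    The /runsync path blocks until the job finishes and returns the
    result in one response; payload shape is illustrative.
    """
    url = f"https://api.runpod.ai/v2/{endpoint_id}/runsync"
    payload = json.dumps({"input": {"prompt": prompt}}).encode()
    return urllib.request.Request(
        url,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_request("my-endpoint-id", "MY_API_KEY", "a watercolor fox")
    print(req.full_url)
    # Sending it is one more call; the response JSON carries the result:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp))
```

No SDK, no container config, no infrastructure to stand up - the request above is the entire client.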
RunPod's customer base spans a wide range that most infrastructure companies would struggle to serve simultaneously. Individual researchers spend $10 a month running experiments. Enterprise teams from companies like OpenAI, Wix, and Zillow have multi-million dollar annual commitments. The platform doesn't distinguish much - the same billing model applies to both.
Civitai, the AI image model community, generates over 800,000 LoRA fine-tunes per month on RunPod using 500 or more concurrent GPUs. Replit, Cursor, and Perplexity all run workloads on the platform. The 500,000+ developers come from 183 countries.
Net Dollar Retention sits at 120% - meaning customers spend more over time, not less. That's not a vanity metric for a cloud infrastructure company. It means the product earns continued expansion without sales calls.
In a sector where companies routinely raise hundreds of millions before hitting $10M ARR, RunPod's metrics are an argument against conventional fundraising wisdom.
RunPod's funding story reads less like a traditional Silicon Valley roadshow and more like a series of accidental discoveries. Each investor found the company, not the other way around.
Julien Chaumond, co-founder of Hugging Face. Became an investor after reaching out via RunPod's customer support chat - as a regular paying user. He liked the product enough to ask if he could put money in. The company said yes.
Dell Technologies Capital: Partner Radhika Malik found RunPod through its Reddit posts. The founders weren't pitching - they were building in public and posting about it. Dell's VC arm cold-reached out. Lead co-investor in the $20M seed round.
Intel Capital: Lead co-investor in the May 2024 $20M seed round. Intel's corporate VC arm has a strong track record in cloud and AI infrastructure. RunPod's enterprise trajectory made it a fit for their portfolio thesis.
Angels: Former GitHub CEO Nat Friedman and Replit CEO Amjad Masad both participated as angel investors in early rounds. Masad's company Replit is also a RunPod customer - the alignment between product use and investment is not coincidental.
In March 2026, RunPod published its inaugural State of AI Report - built from anonymized production data across its platform, covering 183 countries. When you run the GPU infrastructure for a significant slice of the world's self-hosted AI workloads, you see things that survey-based reports can't capture.
RunPod's founders didn't sell their Ethereum mining rigs when The Merge happened. They pointed them at AI workloads instead. That pivot decision is now worth $120M in ARR.
The first customers came from a Reddit post. Not a press release, not a Product Hunt launch. A forum post offering free GPU time in exchange for feedback.
Hugging Face's co-founder Julien Chaumond found RunPod by using it. He reached out through the support chat. Dell Technologies Capital found it through Reddit posts. No pitch tour required.
$22M raised. $120M ARR. In cloud infrastructure, where raising $200M to get to $50M ARR is considered normal, this ratio is genuinely unusual.
Civitai trains over 800,000 AI image model fine-tunes every single month on RunPod, using 500+ concurrent GPUs. That's more model training in one month than most labs do in a year.
RunPod claims GPU compute up to 90% cheaper than AWS, GCP, and Azure. At the hourly rates hyperscalers charge for H100s, that discount compounds into substantial savings for high-volume users.