SambaNova raises $350M Series E - Feb 2026 SN50 chip: 5x faster than competition, 3x lower TCO than GPUs Intel strategic partnership announced alongside SN50 SoftBank becomes first SN50 customer - Japanese data centers Rodrigo Liang at RAISE Conference 2025, Paris SambaNova: $1.45B+ total funding raised Sovereign AI cloud partnerships on three continents From SPARC chips at Oracle to dataflow AI silicon SambaNova means "New Dance" in Portuguese Born Taipei - Raised Brazil - Stanford trained - Palo Alto built
Rodrigo Liang, Co-Founder and CEO of SambaNova Systems
Co-Founder & CEO

Rodrigo Liang

The man who flipped the compute paradigm

Taipei-born. Brazil-raised. Stanford-forged. He spent two decades designing the chips that ran the internet - then walked away to build chips the internet's never seen before.

$1.45B Total Funding
430 Employees
2017 Founded
5x SN50 Speed vs. Rivals

The Dance Nobody Else Was Doing

In 2017, Rodrigo Liang left Oracle's air-conditioned campus in Redwood Shores and did something engineers rarely do: he declared the dominant paradigm wrong. Not slightly miscalibrated - wrong at the architectural level, the kind of wrong that only reveals itself when you spend two decades staring at silicon from the inside.

Liang had shipped 12 major SPARC processors and ASICs at Oracle and Sun Microsystems. He understood, bone-deep, why the instruction-centric model that had powered computing for fifty years was quietly strangling AI. The problem wasn't compute. It was the constant, murderous traffic of moving data to compute. Data movement kills you. That wasn't a slogan. It was the diagnosis he'd been building toward for twenty years.

We're going to flip the paradigm on its head - not worry as much about the instructions, but worry about the data.

- Rodrigo Liang, SambaNova CEO

The company he co-founded with Stanford professors Kunle Olukotun and Chris Re is named SambaNova - "New Dance" in Portuguese, a nod to the Brazil where Liang grew up after being born in Taipei, Taiwan. The name is either whimsical branding or a literal description of what they built: a processor that moves differently, thinks differently, computes differently.

That processor - the Reconfigurable Dataflow Unit, or RDU - replaces the add/subtract/multiply fundamentals of classical chips with map, reduce, and filter. It's not a tweak. It's a different definition of what a chip does. When a former Oracle colleague described Liang as someone who "understood better than most that for AI workloads, data movement kills you," they were describing a man who had spent his career watching that problem grow, and spent the next chapter solving it.
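The contrast between the two paradigms can be sketched in software, where the same primitives already exist. This is a toy analogy only, not RDU code: it shows how a computation written as an explicit instruction-by-instruction loop can instead be expressed as composed map, filter, and reduce stages that data streams through. All names and data here are illustrative.

```python
from functools import reduce

data = [3, -1, 4, -1, 5, -9, 2, 6]

# Instruction-centric style: an explicit loop, one operation at a time,
# with data shuttled to and from the accumulator on every iteration.
total_loop = 0
for x in data:
    if x > 0:
        total_loop += x * x

# Dataflow style: the same computation as a pipeline of stages. On a
# dataflow architecture, each stage could be laid out spatially and the
# data would flow through, rather than instructions being fetched.
positives = filter(lambda x: x > 0, data)
squares = map(lambda x: x * x, positives)
total_flow = reduce(lambda a, b: a + b, squares)

assert total_loop == total_flow  # both sum the squares of the positives
```

The point of the analogy: in the second form, the structure of the data's movement is the program, which is the inversion Liang describes.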


Twenty Years of Other People's Chips

Before SambaNova, Liang's resume reads like a tour of the machines that built Silicon Valley's reputation. Hewlett-Packard in the early 1990s. inSilicon. Afara Websystems, where he focused on multi-core processor design until Sun Microsystems acquired the company in 2002. Then a decade at Sun leading the Niagara line of multi-core chips - the processors that made Sun's enterprise servers faster without turning data centers into furnaces.

When Oracle absorbed Sun in 2010, Liang became Senior Vice President of SPARC Processor and ASIC Development. By the time he left in 2017, he had overseen 12 major chips and ASICs, accumulated an encyclopedic knowledge of where the instruction-centric architecture struggled, and assembled connections to a network of chip architects, software engineers, and AI researchers that would become SambaNova's founding talent pool.

SambaNova SN50 vs. Competing AI Chips (2026)

[Chart: SN50 inference speed shown at 5x the GPU baseline; SN50 total cost of ownership shown at one-third the GPU baseline.]

Source: SambaNova February 2026 announcement. SN50 claims 5x speed advantage and 3x lower TCO vs. Nvidia B200.

The co-founders brought different superpowers. Olukotun, a Cadence Design Professor at Stanford, had pioneered chip multiprocessor design and founded Afara Websystems - the same company Liang had worked at before Sun's acquisition. They weren't strangers; they were veterans of the same architectural battles. Chris Re, a Stanford CS professor directing the InfoLab, brought the machine learning research depth. Liang brought two decades of knowing exactly what happens when you try to run modern AI on hardware designed for a different era.


Built for Large, on Purpose

SambaNova's strategy has never been broad. Liang's clearest public articulation of the company's positioning fits in a single phrase: we're built for large. Massive neural networks. Enormous data sets. The AI workloads that make conventional GPU clusters sweat.

Rather than competing with NVIDIA at volume, SambaNova positioned itself as the infrastructure layer for organizations that need AI in production, at enterprise scale, without having to build the specialized in-house teams most companies lack. The business model matches the architecture: instead of selling hardware, SambaNova deploys models as managed services. If the hardware isn't working, they know immediately. No lag, no finger-pointing across vendor boundaries.

AI is no longer a contest to build the biggest model. The real race is about who can light up entire data centers with AI agents that answer instantly, never stall, and do it at a cost that turns AI from an experiment into the most profitable engine in the cloud.

- Rodrigo Liang, February 2026

The 2024 pivot from training to inference wasn't a retreat - it was a bet that the real volume play in AI was always going to be inference at scale, not the training runs that get the press coverage. SambaNova Cloud launched. SambaManaged products followed. The company had been building toward the agentic AI moment that arrived in 2025 and 2026, when enterprises stopped asking "can we run an LLM?" and started asking "can we run a thousand AI agents simultaneously, in real-time, at a cost we can justify to finance?"

SambaNova Funding Journey

$56M Series A - 2018
$250M Series B - 2019
$676M Series D - 2021
$350M+ Series E - 2026

The SN50 Moment

In February 2026, Liang stood behind a product announcement that felt like the culmination of everything SambaNova had been building toward. The SN50 chip - targeting 10-trillion parameter models for agentic AI workloads - arrived alongside $350M in Series E funding led by Vista Equity Partners and Cambium Capital, with Intel Capital, GV, Battery Ventures, T. Rowe Price, and others in the coalition.

The Intel collaboration was the strategic piece that signaled a shift. SambaNova and Intel would work together on a heterogeneous inference design: SN50 for the heavy AI lifting, Intel Xeon 6 CPUs handling the surrounding workloads. SoftBank signed on as the first SN50 customer, deploying the chips in Japanese data centers. A company that had spent years fighting for enterprise attention was now partnering with the processor company that built the PC era.

The sovereign AI angle has been equally deliberate. In 2025, SambaNova announced partnerships with SCX in Australia, Infercom in Germany, and Argyll in the United Kingdom - three "sovereign AI cloud" deals positioning SambaNova as the infrastructure backbone for governments and enterprises that want AI that runs in their jurisdiction, on their terms, without sending data to American hyperscalers.

🧠
RDU Inventor
Pioneered the Reconfigurable Dataflow Unit architecture for AI
📈
$1.45B Raised
Total funding secured across 5 rounds in under 9 years
🌐
3 Continents
Sovereign AI cloud partnerships across Australia, UK, Germany

Where the Dance Comes From

Rodrigo Liang's origin story is genuinely intercontinental. Born in Taipei. Raised in Brazil. Studied in Germany. Stanford-educated. Palo Alto-headquartered. Most tech executives have backgrounds that read as a straight line; his reads as a map.

The company name is not incidental. "Samba" is the rhythm of Brazilian street culture - improvisational, high-energy, rooted in community. "Nova" is new. The name reflects a founder who chose to name his AI chip company after a dance from the country where he grew up, which is either a deeply personal statement or the most confident act of branding in the semiconductor industry.

Former colleagues describe Liang as technically brilliant and operationally methodical - the kind of engineer who mastered the existing system completely before deciding it needed to be replaced. His 20 years at HP, Sun, and Oracle weren't detours; they were the education that made his argument against traditional chip architecture credible. He wasn't theorizing from the outside. He was diagnosing from within.

The agentic AI revolution demands 10X to 100X more inference compute. The infrastructure has to be built for that - not patched onto hardware that was designed for a different problem.

- Rodrigo Liang, 2025

At the 2025 RAISE Conference in Paris, Liang shared a panel with Hugging Face's Thomas Wolf, Tony Kim, and CNBC's Arjun Kharpal on the subject of open source, fast inference, and the agentic revolution. He has contributed to the World Economic Forum's AI agenda and appeared at TEDAI San Francisco in 2025. The circuit isn't just for PR - Liang's message is consistent across every venue: AI infrastructure isn't a background concern. It's the entire game.

Whether SambaNova's dataflow architecture becomes the defining compute paradigm of the AI era - or a distinguished chapter in a longer story - is still being written. But Liang has spent eight years building toward a specific future, and the February 2026 announcements suggest he's still moving in the same direction he chose when he walked out of Oracle: forward, fast, and unconcerned with the instruction that told him this was impossible.


1993 Joined Hewlett-Packard as Hardware Manager - first professional foray into chip systems
2000 Moved to inSilicon as Director of Engineering
2001 Joined Afara Websystems - focused on multi-core processor design
2002 Sun Microsystems acquired Afara; Liang became Director/VP of UltraSPARC engineering, leading Niagara chip line
2010 Oracle acquired Sun; Liang promoted to SVP of SPARC Processor and ASIC Development
2017 Left Oracle; co-founded SambaNova Systems in November with Stanford professors Kunle Olukotun and Chris Re
2018 SambaNova emerged from stealth with $56M Series A (Walden International, GV)
2021 Raised $676M Series D at $5B+ valuation - one of the largest AI funding rounds of the era
2024 Launched SambaNova Cloud; pivoted strategic focus to AI inference; announced SN40L chip
2025 Secured sovereign AI cloud deals in Australia, Germany, UK; spoke at RAISE Conference Paris and TEDAI San Francisco
2026 Unveiled SN50 chip; announced Intel strategic collaboration and $350M+ Series E; SoftBank signed as first SN50 customer