Breaking
Zettascale ships first XPU Grasshopper prototype cards - fully manufactured in America /// 27.6x more energy-efficient than NVIDIA H100 GPUs - company claims /// YC S24 startup rebrands from Exa Laboratories to Zettascale Computing Corp /// Founding engineer turns down NVIDIA offer to join Zettascale /// Seed round closed from 10 investors including Climate Capital and Y Combinator /// Commercial XPU-G0 launch targeted for 2026
Zettascale Computing Corporation founders Elias Almqvist and Prithvi Raj
YC S24   |   San Francisco   |   AI Hardware

Zettascale

The chip that learns your AI's shape - then becomes it.

Every GPU in every data center is the same shape. AI models are not.

There is a strange consensus at the heart of the AI industry: the world's most diverse collection of computational models - transformers, diffusion networks, sparse architectures, everything - all run on hardware designed to do one thing well. The GPU was built for parallelizing matrix math in games. It became the de facto standard for AI not because it was the best tool for the job, but because it was available and it was fast enough, and now half a trillion dollars of infrastructure runs on that accident of history.

Zettascale Computing Corporation thinks that's worth fixing. The San Francisco startup - founded in 2024 by a 21-year-old Swedish dropout and a Cambridge-trained engineer - has spent the last year designing a chip that does something NVIDIA's hardware cannot: reconfigure its own internal dataflow architecture to match whichever AI model it's running. They call it a polymorphic XPU. Their performance claim is 27.6x greater energy efficiency than the H100. If they're right, the economics of AI infrastructure will look very different by the late 2020s.

The company went through Y Combinator's Summer 2024 batch, raised a seed round from ten investors, and as of 2025 has prototype Grasshopper cards coming off their San Francisco assembly line - fully manufactured in America - and shipping to early testers. They've patented the core technology, signed a major fabrication partnership, and hired founding engineers who chose them over NVIDIA. Commercial launch of the XPU-G0 is planned for 2026.

The ambition is written into the name. An exaFLOP is a million trillion floating-point operations per second. A zettaFLOP is a thousand times larger. The founders originally called the company Exa Laboratories. Then their research broke through the ceiling they'd set for themselves. So they moved the ceiling.

// Quick Facts
Founded 2024
HQ San Francisco, CA
Accelerator Y Combinator S24
Product XPU Grasshopper G0
Claim vs H100 27.6x efficiency gain
Funding Seed (multi-million)
Commercial Launch 2026 (planned)
Originally Exa Laboratories

A Dropout From Gothenburg and a Cambridge Engineer Walk Into YC

Elias Almqvist
Co-Founder & CEO

Started coding at age 9 in Gothenburg, Sweden. Enrolled in Computer Science and Computer Engineering at Chalmers University of Technology. Left at 21, moved to San Francisco, and got into YC - not because he dropped out, but despite it; the product was compelling enough on its own. His public talking points have a clarity that's hard to fake: he thinks LLMs are a bubble within an AI story that isn't, and he thinks NVIDIA's dominance rests on inertia rather than optimality.

Prithvi Raj
Co-Founder & CTO

Holds an MEng from Cambridge's Computational Statistics and Machine Learning Lab, covering mechanical engineering, generative modeling, and electrical engineering. His GitHub shows research into Kolmogorov-Arnold Networks, Fourier Neural Operators, and fixed-point emulations - the kind of theoretical groundwork that eventually becomes a chip architecture. At Zettascale, he turns that research into silicon.

AI is not a bubble - but LLMs are, and betting the future of intelligence on Transformer models alone is a failure of imagination.

- Elias Almqvist, CEO

A Chip That Rewires Itself

Every GPU processes data through a fixed architecture. You feed it a neural network, it runs the network through its pre-wired pathways, and energy gets spent moving data around those pathways whether or not they're the right shape for the job. The mismatch between model topology and hardware topology is where efficiency goes to die.

Zettascale's XPU is polymorphic - it reconfigures its own internal dataflow to match the specific model it's running. The three mechanisms doing the work are localization (keeping data close to where it's processed), instruction fusion (combining operations to reduce passes), and layer fusion (collapsing sequential computations into single passes through the hardware). The result, according to the company, is that data moves significantly less, and each movement does significantly more.
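Of the three mechanisms, layer fusion is the easiest to illustrate in software. The sketch below is a hypothetical analogy, not Zettascale's implementation: for two purely linear layers, the weight matrices can be multiplied once ahead of time, so inference takes a single pass and the intermediate activations are never written out and reread. Real hardware fusion also handles nonlinear layers by keeping intermediates on-chip; the linear case is just the idea in miniature.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(512)          # input activations
W1 = rng.standard_normal((512, 512))  # layer 1 weights
W2 = rng.standard_normal((256, 512))  # layer 2 weights

# Unfused: two passes; the intermediate h is materialized,
# which on real hardware means a round trip through memory.
h = W1 @ x
y_unfused = W2 @ h

# Fused: combine the operators once, then run a single pass.
W_fused = W2 @ W1
y_fused = W_fused @ x

assert np.allclose(y_unfused, y_fused)  # same result, half the passes
```

The point is the shape of the saving, not the math trick itself: every pass a chip avoids is data that never has to move, and data movement is where most of the energy goes.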

The Grasshopper G0 - Zettascale's first commercial product, targeting 2026 launch - supports not just transformers and GPT-class models but novel architectures including Kolmogorov-Arnold Networks, a class of models that may replace MLPs in certain applications. The chip was designed with the assumption that AI architecture diversity is going to increase, not converge.

// Energy Efficiency: XPU Grasshopper vs NVIDIA H100
XPU Grasshopper: 27.6x
NVIDIA H100: 1x (baseline)
* Per company claims. Independent benchmarks pending commercial release.

The efficiency multiplier isn't just a benchmark to brag about. Data centers running large AI workloads spend hundreds of millions annually on electricity. A 27.6x improvement in energy efficiency per unit of compute would fundamentally change the unit economics of AI infrastructure - and with it, which companies can afford to run which models at what scale. That's the market Zettascale is aiming at.

27.6x
Energy efficiency gain
Claimed vs NVIDIA H100 GPUs for AI training and inference workloads.
10
Seed investors
Including Climate Capital, Failup Ventures, Geek Ventures, Multimodal Ventures, Olive Tree Capital, and Y Combinator.
1,000x
Scale ambition
A zettaFLOP is 1,000x an exaFLOP. The company upgraded its ambition when its research outgrew it.
9
Age Elias started coding
From a Gothenburg bedroom to Y Combinator in San Francisco. The journey from curiosity to silicon.
2026
Commercial launch
XPU-G0 commercial release planned, with Grasshopper prototype cards already shipping to early testers as of 2025.
0
NVIDIA offers accepted
At least one founding engineer turned down an offer from NVIDIA to join Zettascale instead.

When Your Ambition Outgrows Your Name

The company was called Exa Laboratories. The target: make exascale AI computing sustainable. An exaFLOP - 10^18 floating-point operations per second - is already an almost incomprehensible scale. Most AI research happens orders of magnitude below it. Making it sustainable seemed like enough.

Then something happened in 2024 and 2025 that the founders have been diplomatically vague about - a series of research breakthroughs and performance unlocks that moved the goalposts internally. Exascale started looking like a ceiling rather than a destination. The team sat down and did the arithmetic on what their chip architecture could theoretically deliver. The answer was zettascale. A zettaFLOP is a thousand exaFLOPs. That's 10^21 operations per second.

So they changed the name. The rebrand to Zettascale Computing Corporation wasn't a marketing exercise - it was an engineering acknowledgment. The chip had gotten better than the original vision. The new name just tells you where they're now pointing.

The reason the entire AI industry runs on hardware that people don't actually like is inertia - and the window to displace it is opening faster than most realize.

- Elias Almqvist, CEO

From Gothenburg to Grasshopper

Summer 2024
Founded and accepted into Y Combinator S24 batch. Raised $500K pre-seed. Began operating as Exa Laboratories.
September 2024
Closed seed round from 10 investors including Climate Capital, Failup Ventures, Geek Ventures, Multimodal Ventures, and Olive Tree Capital.
Late 2024
Launched XPU pilot program, inviting early customers to test first-generation chips.
Early 2025
Completed chip research and design phase. Achieved major performance breakthrough. Patented core XPU polymorphic dataflow technology.
Mid 2025
Signed major fabrication partnership (details undisclosed). Rebranded from Exa Laboratories to Zettascale Computing Corporation. Published new generative model architecture research paper.
Late 2025
First XPU Grasshopper prototype cards manufactured fully in America begin shipping. Built XPU assembly line and cluster at SF headquarters. Hired first two founding engineers - one turned down NVIDIA.
2026 (planned)
Commercial launch of XPU-G0 product for AI data centers and enterprises.

Who's Betting on the XPU

Seed round closed September 2024 from 10 investors.

Y Combinator
Climate Capital
Failup Ventures
Geek Ventures
Multimodal Ventures
Olive Tree Capital
+ 4 undisclosed

The Energy Problem Nobody Talks About Enough

Every major AI lab in the world is fighting the same constraint: compute costs money, and electricity is a large part of that cost. A single NVIDIA H100 GPU draws between 350 and 700 watts under load. A data center running thousands of them - the configuration needed to train frontier models - consumes electricity at a rate comparable to a mid-size industrial facility. As model sizes grow and inference traffic scales, the energy footprint of AI infrastructure is becoming a significant operational, environmental, and regulatory concern.
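The scale of that draw is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes a 10,000-GPU cluster at the 700 W upper bound and an illustrative electricity price of $0.10/kWh - both assumptions for illustration, not company figures - and ignores cooling and facility overhead, which in practice add substantially more.

```python
# Back-of-the-envelope cluster power and electricity cost.
# Assumptions: 10,000 GPUs, 700 W each, $0.10/kWh (illustrative only).
gpus = 10_000
watts_per_gpu = 700
price_per_kwh = 0.10

cluster_mw = gpus * watts_per_gpu / 1e6                    # 7.0 MW
annual_kwh = gpus * watts_per_gpu / 1000 * 24 * 365        # kW * hours/year
annual_cost = annual_kwh * price_per_kwh                   # $6,132,000

# If the claimed 27.6x efficiency gain held for the same workload:
cost_at_claimed_efficiency = annual_cost / 27.6            # roughly $222,000

print(f"{cluster_mw:.1f} MW, ${annual_cost:,.0f}/year GPU power alone")
```

Even with everything after the GPUs excluded, a single such cluster sits in the megawatt range - which is why an order-of-magnitude efficiency claim, if it survives independent benchmarking, matters more than any single speed figure.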

Zettascale's pitch is that this problem has a hardware solution. If you can design a chip that delivers equivalent or better AI compute per watt - not by making GPUs slightly better, but by rethinking the chip architecture from first principles - you can change the economics of every AI workload on the planet. Data centers save hundreds of millions annually on electricity. Smaller companies can afford to run larger models. The carbon footprint of AI improves without requiring anyone to do less AI.

The counterargument is that NVIDIA has won before and will win again, that the ecosystem lock-in around CUDA is formidable, and that the graveyard of chip startups is long. Zettascale's answer is a chip that's genuinely different - not a faster GPU, but a different class of accelerator - and a market where the incumbents are already being questioned. The window to displace inertia, the CEO has argued, is opening.

The Parts That Don't Fit Anywhere Else

The name Zettascale is a unit upgrade: 1 zettaFLOP = 1,000 exaFLOPs = 10^21 operations per second. The founders changed the name because their chip exceeded their original exascale ambitions.
Elias Almqvist started coding at age 9 in Gothenburg, Sweden. He enrolled in Computer Science at Chalmers University of Technology and left at 21 to move to San Francisco and found the company.
CTO Prithvi Raj's public GitHub research includes Kolmogorov-Arnold Networks - an emerging neural architecture that the XPU is specifically designed to support, unlike current GPU hardware.
The team built their own XPU assembly line at their San Francisco headquarters. The first prototype Grasshopper cards were manufactured entirely in the United States.
At least one of their founding engineers received an offer from NVIDIA and turned it down to join Zettascale. In 2025, that counts as a data point.
The company's research extends to generative model architectures - they published a new architecture paper in 2025. The hardware and the software are being developed in parallel.