He managed 50,000 GPUs at Intel. Deployed one of the world's largest supercomputers. Joined Apple when its chip team had 25 people. Now he's making AI compute available to everyone who doesn't have an Intel-sized budget.
Profile
At age ten, Brijesh Tripathi was already taking step-down transformers apart. Not the Michael Bay kind - the electrical kind. The kind that converts high-voltage current into something manageable. It was the earliest version of what he would spend the next four decades doing: taking complex, powerful, inaccessible systems and making them work for people who couldn't otherwise touch them.
That impulse carried him from India to NVIDIA's Silicon Valley offices - one of the company's earliest hires via its IIT recruitment program, back when recruiting in India was still a novel idea for US tech giants. From there, a path that reads less like a career plan and more like a guided tour of every major compute platform of the last quarter century.
"Don't over-plan, take life as it comes to you, have an open mind to what's coming to you and be receptive. Don't be too rigid on plans."
- Brijesh Tripathi, EE Times interview

He joined Apple's chip design team in 2010, when the group numbered 25 people working in the immediate aftermath of the iPhone launch. The team now has 17,000 engineers. He was there at the foundation. Then Tesla - working hardware engineering directly under Elon Musk - where the dominant philosophy was that the only real constraints were science and physics, not organizational inertia or conventional wisdom. He describes that stint as his best professional experience.
Intel came next, in the role that would prove most directly relevant to what he's building today. As Vice President of the Accelerated Computing Systems and Graphics (AXG) division, Tripathi oversaw the deployment of Aurora - one of the world's largest supercomputers - and managed over 50,000 GPUs. He understood, at industrial scale, exactly what it takes to provision, orchestrate, and sustain AI compute. He also understood exactly who couldn't do it at that scale: nearly everyone.
The Company
The problem Brijesh Tripathi set out to solve at FlexAI is one he watched up close for two decades: AI compute is extraordinarily powerful and extraordinarily inaccessible. The hardware is expensive, the configurations are brittle, the expertise required to run it reliably is rare, and the entire system locks you into a single vendor's ecosystem the moment you commit.
FlexAI's answer is Workload-as-a-Service - a software orchestration layer that routes AI workloads (training, fine-tuning, inference) to the best available hardware automatically, whether that's NVIDIA, AMD, Intel Gaudi, cloud, or on-premises. The platform's intelligence lives in the routing layer: matching workload requirements to compute characteristics, dynamically adjusting as conditions change, and handling the operational complexity so developers don't have to.
"We believe in AI's transformative power to solve some of humanity's biggest problems, but it will require a 1000x magnitude more compute to be able to realise this vision. The availability of AI compute today is limited to a select few. Our vision is to unlock access to compute for the many."
- Brijesh Tripathi, FlexAI launch statement

FlexAI operates as an aggregator of AI compute demand - sourcing hardware from Intel, AMD, and others at preferential rates (leveraging Tripathi's deep relationships from his Intel days), and distributing those economics across its customer base. It's not primarily an NVIDIA play, which is a meaningful strategic differentiator in a market where most AI infrastructure startups live and die by CUDA availability.
"Using any infrastructure in the AI space is complex; it's not for the faint of heart," Tripathi told TechCrunch at launch. The long-term vision is simpler to state than to build: bring AI compute infrastructure to the same level of simplicity that general-purpose cloud has achieved over the past decade.
Infrastructure should never slow down innovation.
Career Path
When Brijesh joined Apple's chip design team, there were 25 people on it. He was part of a small group building what would become one of the most dominant silicon programs in history. That team now has 17,000 engineers. He was present at a moment that most people only read about in retrospectives.
"There are no constraints other than science and physics." The philosophy Elon Musk ran Tesla with is the one Brijesh Tripathi carried directly into FlexAI. Ask why something can't be done before accepting that it can't.
Vision
"Bankers now know how to use GPUs as collateral."
- Brijesh Tripathi on the new economics of AI infrastructure financing

In His Words
Infrastructure should never slow down innovation.
The availability of AI compute today is limited to a select few. Our vision is to unlock access to compute for the many.
We want to bring AI compute infrastructure to the same level of simplicity that general purpose cloud has.
There are no constraints other than science and physics.
Using any infrastructure in the AI space is complex; it's not for the faint of heart.
Don't over-plan, take life as it comes to you, have an open mind and be receptive.
Watch
In this EE Times interview, Brijesh Tripathi traces his path from a 10-year-old tinkering with transformers in India to deploying some of the world's most powerful compute infrastructure. The title captures his philosophy precisely.
The throughline of his career - NVIDIA, Apple, Tesla, Zoox, Intel, FlexAI - is not a plan. It's a series of open doors, each one taken because it offered a deeper understanding of the machine underneath the machine. The willingness to follow the compute is what made him, eventually, the person building the platform that routes it.