The Physicist Who Listened
Ashish Nagar started his professional life studying things that don't talk back - Applied Physics at IIT Delhi, one of India's most demanding institutions. He ended up building AI that listens to everything. The arc feels inevitable in hindsight, which is usually a sign that someone made very deliberate choices nobody else was making at the time.
Before Silicon Valley, Nagar spent years at Boston Consulting Group advising Fortune 500 management teams across African, US, and Indian markets. Strategy consulting is good training for seeing what's broken in large organizations, which turned out to be exactly the skill he'd need when he eventually confronted the contact center industry.
At Kinestral Technologies, Nagar helped grow the company from 3 people and $250,000 in funding to ~50 employees and ~$30 million raised - a rehearsal for the scaling act he'd run at Level AI.
The Alexa Years: Star Trek Was the Goal
In 2014, Nagar co-founded Relcy, a mobile search startup backed by Sequoia Capital and Khosla Ventures - a company remarkably similar to what Perplexity AI became a decade later. Amazon acquired it, and Nagar went inside. What he found was the most ambitious conversational AI project in the world at the time.
The Alexa Prize project - internally nicknamed the "Star Trek computer" - was Nagar's assignment. The goal: build an AI that could hold a 20-minute conversation on any social topic. He managed a team of roughly 10 scientists and coordinated with researchers from MIT, CMU, Stanford, and Oxford. The scale of the ambition was matched by the scale of the collaboration.
The Amazon Alexa experience was really interesting and formative from a technology perspective with the genesis of Level AI.
- Ashish Nagar

What Alexa taught Nagar wasn't just how to build conversational AI. It taught him what happens when you point extremely capable AI at real human problems - and how much infrastructure is needed to get from research lab to something that actually works at scale.
The Pivot That Built a Company
Level AI started in 2019 as a voice assistant for frontline workers. Wrong idea, right technology. After raising $2M in seed funding and talking to customers, Nagar pivoted. Contact centers were the target: a $35 billion market, wildly underserved by technology, full of human-AI interaction that nobody had instrumented properly.
The pitch was simple and devastating: every contact center in America was manually sampling roughly 2% of calls to check quality. The other 98% went unreviewed. Level AI would score 100% - automatically, in real time, with an AI built from the ground up for customer service, not retrofitted from a general-purpose model.
What Level AI Actually Does
- Agent Assist - real-time hints and prompts during live calls
- Automated QA - scores every conversation without human reviewers
- CX Copilot - manager-facing dashboard for team performance
- Voice of the Customer - trend detection across all interactions
- Sentiment Analysis - detects 7 distinct customer emotions per call
- Coaching Tools - personalized training based on actual call performance
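To make the Automated QA idea concrete: scoring 100% of conversations means replacing a human reviewer's checklist with a program that evaluates every transcript automatically. The sketch below is purely illustrative - the rubric items, keyword cues, and scoring logic are assumptions for demonstration, not Level AI's actual system, which uses a purpose-built LLM rather than keyword matching.

```python
# Hypothetical sketch of automated QA scoring: check each call
# transcript against a rubric and compute a pass rate. The rubric
# and cue phrases here are invented for illustration.

def score_transcript(transcript: str, rubric: dict[str, list[str]]) -> dict[str, bool]:
    """Mark a rubric item as passed if any of its cue phrases appears."""
    text = transcript.lower()
    return {item: any(cue in text for cue in cues) for item, cues in rubric.items()}

def qa_score(results: dict[str, bool]) -> float:
    """Overall score: fraction of rubric items the agent satisfied."""
    return sum(results.values()) / len(results)

rubric = {
    "greeting": ["thank you for calling", "how can i help"],
    "verification": ["verify your account", "date of birth"],
    "resolution": ["resolved", "is there anything else"],
}

transcript = (
    "Thank you for calling Acme support. Can you verify your account? "
    "Great, your issue is resolved. Is there anything else I can help with?"
)

results = score_transcript(transcript, rubric)
print(results, qa_score(results))
```

Running this over every transcript, instead of a hand-picked 2% sample, is the core shift the article describes - the real system swaps the keyword check for model-based evaluation, but the "score everything, flag failures" loop is the same.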
The Stack Is the Moat
When Nagar talks about Level AI's technology, he's describing something that most enterprise AI companies don't have: full vertical control. Not a wrapper around GPT. Not a prompt engineering layer on top of a rented model. A customer service-native large language model, built from scratch, controlled from the GPU up.
We have an LLM-native architecture right from the GPU layer all the way to the app layer and we control every part of our AI stack.
- Ashish Nagar

That architecture isn't just a technical choice. It's a trust decision. Level AI handles conversations between some of the most regulated companies in the world and their customers - healthcare, finance, insurance. None of that data leaves the platform to train an external model. When enterprise buyers want to know where their data goes, Nagar can say: it stays here, and here means our GPU.
On Not Automating Everything
The most interesting thing about Nagar's AI thesis is what it refuses to claim. At a moment when the contact center industry is flooded with vendors promising full automation, Level AI's CEO is publicly arguing for 30-40% automation as the realistic target across industries. Not because AI isn't capable of more, but because customers still need humans - and making those humans dramatically better is the bigger lever.
Using AI to make human productivity better - augmenting human productivity with AI - that's what we set out to do.
- Ashish Nagar

The numbers back the approach. Level AI customers see a 10-25% reduction in call handling time, 30-50% faster agent onboarding, and a 10-15% improvement in agent productivity. The AI doesn't replace the human. It removes the dead air, the uncertainty, the knowledge gaps that make calls drag.
The $73M Question
By July 2024, Level AI had closed a $39.4M Series C led by Adams Street Partners, with Battery Ventures and Eniac Ventures participating alongside Cross Creek and Brightloop. Total raised: $73.1 million. The company employed 135 people and had clients including Affirm, Penske, and Carta.
Nagar's stated target: $50 million in annual recurring revenue within two years of the Series C. The serviceable market he's chasing is $7 billion. The broader TAM he's playing toward is $35 billion. For a company that started as a voice assistant for warehouse workers, that's a long way from the original whiteboard.
In October 2025, Nagar delivered the keynote at Level AI's Virtual Summit, laying out his roadmap for the next chapter of customer experience. The physicist who once tried to make Alexa hold a 20-minute conversation is still, by every measure, trying to close that gap between what AI can do and what human conversations actually need.
Off the clock: Nagar is "passionate about hard sciences, spending time with his young family and stargazing." An applied physicist who made his career measuring the world's messiest conversations still prefers to spend his weekends looking at something that doesn't talk back.