The Engineer Who
Took a Stand
Laura Nolan does not do things halfway. She is a Principal Engineer at Stanza Systems, a member of the USENIX Board of Directors, a contributor to three canonical SRE books, a TED speaker, a systems-thinking evangelist, and an autonomous-weapons activist who has actually briefed diplomats. She also lives in a small village in rural Ireland - with medieval ruins in the vicinity, naturally.
The through-line of her career is this: systems fail in predictable ways if you understand them deeply enough, and technology fails society in predictable ways too, if you let the wrong people build it with no accountability. Nolan has spent fifteen years applying that logic to both production infrastructure and international arms policy.
At Google from 2013 to 2018, she worked as a Staff Site Reliability Engineer in Dublin - the company's European headquarters. Her remit ranged across data infrastructure, alerting, networking, and pipelines. She also wrote the chapter titled "Managing Critical State" for the O'Reilly Site Reliability Engineering book: the industry text that more or less defined the SRE profession as a distinct discipline. Getting into that book, at that moment, was the equivalent of being cited in a founding document.
She left Google in June 2018. Not for a better offer - for a better principle. When she was asked to work on modifications related to Project Maven, the US Department of Defense's initiative to apply AI to drone surveillance footage for targeting, she said no. Then she resigned. This was not a reactive move; it preceded the wave of public employee protest that pushed Google to announce it would not renew the Project Maven contract. Nolan was ahead of the curve, as engineers who understand what their code will actually do tend to be.
It is absolutely impossible for a machine to make determinations of proportionality in combat, as only a human could assess the overall strategic context.
- Laura Nolan

Post-Google, she joined Slack Technologies as a Senior Staff Engineer, spending seven years working on service networking and ingress load balancing. The work was unglamorous in the best possible way - the kind of infrastructure work that, when done right, nobody notices. When done wrong, it makes headlines. She wrote outage reports for the Slack Engineering blog, served on the SREcon Steering Committee, and kept speaking at conferences about the things that actually matter: how complex systems fail, why humans need to stay in the loop, and what resilience actually requires.
In 2025, she moved to Stanza Systems as Principal Engineer. Stanza builds tooling for load management and service observability - the exact intersection of reliability engineering and human control that she has been writing and speaking about for a decade. It is a logical fit.
What makes Nolan unusual is the range. Most engineers who work at Google's depth of infrastructure do not also pursue an MA in Ethics and then enroll in an MSc in Human Factors and Systems Safety at Lund University. Most SRE conference speakers do not also found activist organizations or testify before international bodies. Nolan does all of it, and she holds the pieces together with a consistent thesis: that the social and the technical are one system, and you cannot build reliable software by ignoring that.
Her 2019 TEDxLiverpool talk, "Why We Must Ban Killer Robots," is a precise, unsentimental case built from engineering principles. She does not rely on dystopian imagery. She argues from what she knows: that autonomous systems have failure modes, that those failure modes in weapons systems can cause mass casualties, and that the proportionality calculations required in warfare cannot be delegated to software. It is an engineer's argument, not a philosopher's, and it lands differently for that reason.
Her newsletter, Responsible Computing on Substack, continues that thinking - examining where technology choices intersect with human welfare, written for practitioners rather than policy wonks. It is dry when it needs to be, pointed when it matters.
At SREcon she has covered everything from distributed consensus algorithms to incident write-up craft to systems dynamics models of cascading failure. In 2021 she co-presented with David D. Woods - a pioneer of resilience engineering at Ohio State - bringing academic safety science onto a stage that usually hears from practitioners alone. That kind of bridge-building is her signature.
Her Twitter handle, @lauralifts, is, by her own account, a reference to weightlifting. This is consistent with everything else about her: she does not do the easy version of any pursuit. She lifts heavy weights. She writes about hard systems problems. She challenged her employer's defense work before the crowd caught up. She does the MSc while working full-time.
Laura Nolan is the person who understood before most people did that "responsible AI" was not a PR committee's problem - it was an engineer's problem. She acted on that understanding early, at personal cost, and she has spent the years since making the case as clearly as possible. The SRE community respects her for the systems work. The policy community respects her for the ethics work. Very few people operate credibly in both spaces at once.
The medieval ruins outside her window are, frankly, on brand.