BREAKING
OSMAN ALI MIAN WINS AAAI 2026 OUTSTANDING PAPER AWARD /// PhD DEFENDED MAGNA CUM LAUDE - FEBRUARY 2025 /// CAUSAL DISCOVERY PAPER ACCEPTED AT ICLR 2026 /// POSTDOCTORAL RESEARCHER AT LAMARR INSTITUTE & IKIM /// AISTATS 2026 ACCEPTANCE - UNIFIED CAUSAL DISCOVERY & DATA IMPUTATION /// PUBLISHED AT AAAI, ICML, ICLR, AISTATS & KDD ///
Osman Ali Mian - Causal AI Researcher

Osman Ali Mian

Teaching machines to ask "why" - and collecting awards for the answers.

Postdoctoral Researcher • Lamarr Institute • IKIM Essen
AAAI 2026 Award Winner • PhD Magna Cum Laude • Causal Discovery • Trustworthy ML
Outstanding Paper Award - AAAI 2026
10+ Publications
5 Years Publishing
1 AAAI Award
6 Top Venues
2025 PhD Magna Cum Laude
2 AAAI 2026 Orals

The Man Who Wants
Machines to Reason,
Not Just Predict

Most machine learning researchers build systems that are very good at saying "what." Osman Ali Mian is interested in the harder question: "why." That distinction might sound academic. It isn't.

As a postdoctoral researcher at the Lamarr Institute and the Institute for Artificial Intelligence in Medicine (IKIM) in Essen, Mian works at the frontier of causal discovery - the discipline of teaching algorithms to infer the actual cause-and-effect structure hiding inside data. Not correlations. Not patterns. Causes.

He does this with a practical obsession that is embedded in the title of his PhD thesis: Practically Applicable Causal Discovery. The thesis earned him a Magna Cum Laude distinction from Saarland University in February 2025. That's the German academic system's way of saying: this is exceptional work.

In January 2026, AAAI - the Association for the Advancement of Artificial Intelligence, one of the field's flagship conferences - agreed. Mian co-authored one of just five papers selected for the AAAI 2026 Outstanding Paper Award. The paper, "Causal Structure Learning for Dynamical Systems with Theoretical Score Analysis," introduced a method called CaDyT, which solves a stubborn problem: most causal discovery algorithms discretize continuous-time data, and that discretization introduces errors. CaDyT skips that step entirely, using a Difference-based Structural Causal Model that respects continuous-time dynamics from the ground up.
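The discretization error is easy to see in a toy case. The sketch below illustrates the general problem, not CaDyT's method: it samples the exact solution of a simple continuous-time system dx/dt = a·x and recovers the rate with a forward difference, the way a discretized model implicitly does. The coarser the sampling interval, the more biased the recovered rate.

```python
import numpy as np

# True continuous-time dynamics: dx/dt = a * x, with a = -1.
a = -1.0

def discrete_rate_estimate(dt):
    # The exact solution sampled at interval dt satisfies
    # x(t + dt) = exp(a * dt) * x(t). A discretized model sees only the
    # one-step multiplier and converts it back to a rate via a forward
    # difference: (x(t + dt) - x(t)) / (dt * x(t)).
    return (np.exp(a * dt) - 1.0) / dt

for dt in (1.0, 0.1, 0.01):
    est = discrete_rate_estimate(dt)
    print(f"dt={dt:5.2f}  estimated rate={est:+.3f}  error={abs(est - a):.3f}")
```

The bias never disappears for any fixed sampling interval; it only shrinks as the interval does - which is exactly why a method that respects continuous-time dynamics from the start avoids the issue.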

He described causality as enabling machine learning models to "predict responsibly" - programming intelligent decision-making capabilities into machines.

- Osman Mian, CISPA TL;DR Podcast, Episode 12, 2022

That framing - "predict responsibly" - is the key to understanding what drives Mian's work. Pure prediction is cheap. Models that memorize patterns can score well on benchmarks while failing catastrophically when the world shifts. Causal models don't just predict. They understand structure. They can say: if you change this, that will follow. That is a qualitatively different and more powerful kind of intelligence.

At AAAI 2026, Mian didn't just appear on the award podium. He had two separate papers accepted for oral presentation at the conference - a competitive distinction that puts his work among the top few percent of submissions. The second oral, "SEQRET: Mining Rule Sets from Event Sequences," tackles pattern discovery in sequential data - different tools, same underlying commitment to interpretable, structured reasoning.

Scene by Scene

PANEL 01
The Federated Problem

Healthcare data is sensitive. It can't travel. But causal algorithms need data to train. Mian's answer: don't move the data. Move the algorithm. His privacy-preserving federated causal discovery methods - including the cleverly named "Nothing but Regrets" - let hospitals learn shared causal structure without ever exposing patient records.

AISTATS 2023
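
The "move the algorithm" idea can be made concrete with a deliberately simplified toy protocol - this is not the "Nothing but Regrets" algorithm, and all names here are invented for illustration. Each site fits both candidate causal directions on its own private data and reports only two scalar fit scores; a coordinator sums the scores and picks the better-supported direction. No raw rows ever leave a site.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_score(data, direction):
    """Goodness of a cubic regression fit in the given direction.

    Returns the fraction of the target's variance left unexplained,
    so lower is better and scores are comparable across directions.
    """
    x, y = data[:, 0], data[:, 1]
    if direction == "y->x":
        x, y = y, x
    coeffs = np.polyfit(x, y, deg=3)
    resid = y - np.polyval(coeffs, x)
    return np.var(resid) / np.var(y)

# Three "hospitals", each holding private samples from the same
# nonlinear mechanism X -> Y.
sites = []
for _ in range(3):
    x = rng.normal(size=500)
    y = x ** 3 + rng.normal(scale=0.3, size=500)
    sites.append(np.column_stack([x, y]))

# Each site shares two scalars; patient-level records stay local.
totals = {d: sum(local_score(s, d) for s in sites)
          for d in ("x->y", "y->x")}
print("inferred direction:", min(totals, key=totals.get))
```

The real methods share carefully designed statistics with privacy guarantees rather than raw fit scores, but the division of labor - local computation, global aggregation - is the same.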
PANEL 02
The Amazon Chapter

Between December 2022 and April 2023, Mian stepped inside Amazon as an Applied Science Intern. His mission: apply causal learning to Natural Language Understanding. Five months bridging academic theory and industrial-scale NLP, then back to Saarbrücken to finish the PhD. The kind of detour that makes a thesis sharper.

Dec 2022 - Apr 2023
PANEL 03
Counterfactual Bandits

Standard reinforcement learning asks: which action gets the best reward? Mian's ICLR 2026 paper asks a harder question: what would have happened if we had chosen differently? Counterfactual Structural Causal Bandits expands the decision space beyond what's observable - into the realm of "what if."

ICLR 2026

Why Causality Matters
More Than Ever

Here is the problem with most AI today: it can spot patterns across millions of examples, but it doesn't know which patterns matter under intervention. A model trained to diagnose disease in hospital A might fail in hospital B - not because the disease changed, but because the data-generating process did.

Causal models are built differently. Instead of learning "X predicts Y," they learn "X causes Y." That's harder. It requires assumptions, mathematical identifiability conditions, and careful algorithmic design. But the payoff is a model that keeps working when the world changes, because it understood the mechanism, not just the correlation.
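A minimal numerical sketch of that failure mode, with hypothetical variables not drawn from any of Mian's papers: X genuinely causes Y, while Z merely correlates with Y, and the strength of that correlation depends on the environment. A predictor leaning on Z looks excellent in "hospital A" and collapses in "hospital B"; the causal link from X to Y stays put.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, spurious_weight):
    # X causes Y through a fixed mechanism: Y = 2*X + noise.
    # Z causes nothing; it merely tracks Y, and how closely it tracks
    # depends on the environment.
    x = rng.normal(size=n)
    y = 2.0 * x + rng.normal(scale=0.5, size=n)
    z = spurious_weight * y + rng.normal(scale=0.5, size=n)
    return x, y, z

def corr(u, v):
    return np.corrcoef(u, v)[0, 1]

# "Hospital A": Z tracks Y closely, so a model leaning on Z scores well.
xa, ya, za = simulate(10_000, spurious_weight=1.0)
# "Hospital B": the data-generating process behind Z has changed.
xb, yb, zb = simulate(10_000, spurious_weight=0.0)

print(f"corr(Z, Y): A {corr(za, ya):+.2f}  B {corr(zb, yb):+.2f}")  # collapses
print(f"corr(X, Y): A {corr(xa, ya):+.2f}  B {corr(xb, yb):+.2f}")  # stable
```

The mechanism Y = 2X + noise is identical in both hospitals, which is why the X-Y relationship survives the shift: it is the data-generating process around Z that changed, not the disease.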

Mian's contributions span the full stack of this challenge. At the observation layer, his AAAI 2021 paper, "Discovering Fully Oriented Causal Networks," tackled the fundamental problem of orienting causal edges from data. At the inference layer, his ICML 2022 work addressed cause-and-effect identification under heteroscedastic noise - the kind of messy, variable-variance noise that real data generates. And at the systems layer, his CaDyT work handles dynamical systems where the causal relationships evolve through time.

Each paper is a tool. Together, they form a toolkit for applying causal reasoning where it matters: in healthcare, in federated industrial systems, and in any setting where getting the causal story wrong has real consequences.

His AISTATS 2026 paper pushes further still, proposing a unified framework that handles causal discovery and data imputation simultaneously - two problems usually solved in sequence, now solved together. The practical target is medical records, where missingness is not random and imputation errors compound.

2019
Joins CISPA Helmholtz Center for Information Security as PhD student under Prof. Jilles Vreeken in the Exploratory Data Analysis group.
2021
First major publication: "Discovering Fully Oriented Causal Networks" at AAAI 2021.
2022
ICML paper on heteroscedastic noise. KDD workshop on federated causal discovery.
Dec 2022
Applied Science Intern at Amazon - causal learning for NLU.
2023
Two papers: AAAI 2023 (ORION, multi-environment causal discovery) and AISTATS 2023 (privacy-preserving federated causal discovery).
2024
KDD 2024: "Learning Causal Networks from Episodic Data."
Dec 2024
Joins IKIM / Lamarr Institute as Postdoctoral Researcher under Dr. Michael Kamp.
Feb 2025
PhD defended Magna Cum Laude at Saarland University.
Jan 2026
AAAI 2026 Outstanding Paper Award. Two oral presentations. Papers at ICLR 2026 and AISTATS 2026.

The Paper Trail

AAAI 2026 - OUTSTANDING PAPER AWARD
Causal Structure Learning for Dynamical Systems with Theoretical Score Analysis
2026
AAAI 2026 - ORAL
SEQRET: Mining Rule Sets from Event Sequences
2026
ICLR 2026
Counterfactual Structural Causal Bandits
2026
AISTATS 2026
Unified Causal Discovery and Data Imputation
2026
KDD 2024
Learning Causal Networks from Episodic Data
2024
AAAI 2023
Information-Theoretic Causal Discovery and Intervention Detection over Multiple Environments (ORION)
2023
AISTATS 2023
Nothing but Regrets - Privacy-Preserving Federated Causal Discovery
2023
ICML 2022
Inferring Cause and Effect in the Presence of Heteroscedastic Noise
2022
AAAI 2021
Discovering Fully Oriented Causal Networks
2021

Things Worth Knowing

🎮
The NFS Modder

Before causal graphs, there was Need for Speed 13. Mian built one of the first-ever plugins for the Myo gesture armband controller - written in Lua - so players could control the game with hand movements. The same curiosity that leads someone to wire a muscle-sensing armband into a racing game later produces algorithms for inferring cause-and-effect in federated medical networks.

🚁
The Autonomous Drone Builder

Also in his GitHub history: an autonomous drone control system for the Parrot AR Drone, coded in Python. The pattern is clear. Mian is not just a theorist who tolerates implementation. He is someone who builds things - and who has been building things, from drones to award-winning causal algorithms, for as long as he's been coding.

❄️
Frozen in the Arctic

Mian holds the GitHub Arctic Code Vault Contributor badge. That means some of his early code is preserved in a 1,000-year archive stored inside a decommissioned coal mine in the Norwegian Arctic. Whatever happens to the internet, a piece of Osman Mian's work will outlast it.

🎬
The Minority Report Connection

In a 2022 CISPA podcast, his research was compared to the film Minority Report. The predictive angle isn't far off: causal discovery, at its limits, lets you anticipate events by understanding the mechanisms that generate them - not just the historical frequencies. Mian's version, though, doesn't involve pre-cogs. Just math.

Practically Applicable.
Not Just Theoretically
Beautiful.

The title of Mian's PhD thesis wasn't an accident. "Practically Applicable Causal Discovery" is a research agenda, not just a dissertation chapter. His doctoral committee - four professors across three institutions, including causal inference heavyweights Murat Kocaoglu and Jilles Vreeken - validated that the work delivers on that promise.

Now, as a postdoctoral researcher, the stakes are higher. IKIM - the Institute for Artificial Intelligence in Medicine - is not an abstract research environment. It sits inside a university hospital ecosystem where the algorithms eventually touch real patients. Federated learning, which lets institutions collaborate without sharing raw data, is not an academic convenience in that context. It's a legal and ethical requirement.

Mian's trajectory from CISPA to Amazon to IKIM is a straight line: build methods that work under the constraints of the real world, not the idealized constraints of benchmark datasets. Privacy. Heterogeneity. Dynamical systems. Missing data. Each constraint is a paper. Each paper is a step toward AI that can be trusted with decisions that matter.

The Outstanding Paper Award at AAAI 2026 is a signal. The field is watching.