I am an AI Safety Researcher at Microsoft, where I red-team frontier AI models and agentic systems on the AI Red Team. My research focuses on emerging risks from increasingly autonomous AI, particularly loss of control.

I hold a PhD in Computer Science from New York University, advised by Dr. Christina Pöpper at the Center for Cyber Security. Previously, I served as a Research Manager at MATS, guiding researchers working on AI alignment, interpretability, and governance. I also conducted research on privacy-preserving technologies at Spotify Tech Research.

Interests
  • Autonomy and Loss of Control
  • Adversarial Machine Learning
  • Privacy Enhancing Technologies
  • Harmful Manipulation

Education
  • PhD in Computer Science, New York University
  • MPhil in Computer Science, NYU Courant Institute
  • BS in Computer Science, NYU Abu Dhabi

Recent Highlights

Feb 2026 Serving on the Program Committee for USENIX Security 2026.
Aug 2025 Distinguished Artifact Award at USENIX Security 2025 for reverse-engineering safety filters in DALL·E.
Jul 2025 Red-teamed OpenAI's GPT-5 Reasoning model at Microsoft, evaluating autonomy, persuasion, and deception capabilities.
Dec 2024 Elevated to IEEE Senior Member.
Aug 2024 Defended PhD dissertation: Towards Responsible AI: Safeguarding Privacy, Integrity, and Fairness.
Jul 2024 Runner-up, Andreas Pfitzmann Best Paper Award at PETS 2024.
Mar 2024 Publication Chair, ACNS 2024.

Research

  • USENIX Security Symposium, Seattle, US, 2025 (Distinguished Artifact Award)
  • ML4H @ NeurIPS, New Orleans, US, 2023 (Best Paper Award)

Complete list on Google Scholar