I am an AI Safety Researcher at Microsoft, working on the AI Red Team to assess the security and safety of frontier AI models and agentic systems. My research focuses on emerging risks such as loss of control from autonomous AI development and deployment. I hold a PhD in Computer Science from New York University, advised by Dr. Christina Pöpper at the Center for Cyber Security.

Previously, I served as Research Manager at MATS, guiding researchers working on AI alignment, interpretability, and governance. I have also held a research position at Spotify Tech Research.

Interests
  • Autonomy and Loss of Control
  • Adversarial Machine Learning
  • Harmful Manipulation
  • Data Privacy
Education
  • PhD in Computer Science, New York University
  • MPhil in Computer Science, NYU Courant Institute
  • BS in Computer Science, NYU Abu Dhabi

Recent Highlights
Fall 2025 Serving on the Program Committee for USENIX Security 2026.
Aug 2025 Distinguished Artifact Award at USENIX Security 2025 for reverse-engineering safety filters in DALL·E.
Jul 2025 Red-teamed OpenAI's GPT-5 Reasoning model at Microsoft, evaluating autonomy, persuasion, and deception capabilities.
Dec 2024 Elevated to IEEE Senior Member.
Aug 2024 Defended PhD dissertation, "Towards Responsible AI: Safeguarding Privacy, Integrity, and Fairness."
Jul 2024 Runner-up, Andreas Pfitzmann Best Paper Award at PETS 2024.
Mar 2024 Publication Chair, ACNS 2024.
Dec 2023 Best Paper Award at ML4H 2023 (NeurIPS workshop).
May 2023 Program Committee, SecWeb 2023.

Research
  • USENIX Security Symposium, Seattle, US, 2025 (Distinguished Artifact Award)
  • ML4H @ NeurIPS, New Orleans, US, 2023 (Best Paper Award)

Complete list on Google Scholar