Doctoral Candidate
Computer Science, Courant Institute
New York University
My research focuses on enhancing the safety, privacy, and fairness of AI systems. Recent projects include exposing privacy vulnerabilities in code generation language models, identifying biases in medical imaging foundation models, developing a cybersecurity-inspired framework for disinformation, and auditing the reliability of LLMs for fact-checking. I am honored to be a recipient of the Global PhD Fellowship, advancing this research with Prof. Christina Pöpper.
January 2024: 📚 Happy to take on the role of Publication Chair for the 22nd Conference on Applied Cryptography & Network Security.
December 2023: 🏆 Exciting news! Our work on the fairness of medical imaging foundation models has been honored with the Health Equity & Global Health Best Proceedings Paper Award at the Machine Learning for Health (ML4H) conference, co-located with NeurIPS. Congratulations to our dedicated team!
November 2023: 🌐 Attended ACM CCS in Copenhagen; our work on mobile browser extension fingerprinting was presented at WPES 2023.
August 2023: 💻 Presented our research on code generation with large language models (LLMs) at USENIX Security 2023.
June 2023: 🌍 Engaged with the UN Information Integrity team to evolve existing frameworks, targeting hate speech risks and enhancing online safety.
February 2023: 🗣 Honored to be an invited speaker at the Microsoft Research (MSR) Colloquium Series, where I discussed our work on threat modeling of disinformation campaigns. Thank you, MSR, for this wonderful opportunity!
October 2022: 🎤 Spoke at the MENA Cybersecurity Seminar about emerging threats in disinformation operations and the importance of collaboration among various stakeholders.
Summer 2022: 🎶 Joined Spotify as a Summer Research Scientist. Collaborated with the Content Platform Research and Tech Research teams on exciting projects involving AI and content moderation.
How Fair are Medical Imaging Foundation Models?
Osama Khan, Muneeb Afzal, Shujaat Mirza, Yi Fang. In Machine Learning for Health (ML4H), New Orleans, US, 2023.
CodexLeaks: Privacy Leaks from Code Generation Language Models in GitHub Copilot.
Liang Niu, Shujaat Mirza, Zayd Maradni, and Christina Pöpper. In USENIX Security Symposium, Anaheim, US, 2023.
Trustworthiness of LLMs in Fact-Checking: Stability & Factuality of GPT Models.
Shujaat Mirza, Bruno Gomes Coelho, Yuyuan Cui, Christina Pöpper, and Damon McCoy. In Submission, 2024.
Tactics, Threats & Targets: Modeling Disinformation and its Mitigation.
Shujaat Mirza, Labeeba Begum, Liang Niu, Sara Pardo, Azza Abouzied, Paolo Papotti, and Christina Pöpper. In Network and Distributed System Security (NDSS), San Diego, US, 2023.
Extending Browser Extension Fingerprinting to Mobile Devices.
Brian Hyeongseok Kim, Shujaat Mirza, and Christina Pöpper. In the 22nd Workshop on Privacy in the Electronic Society (WPES; co-located with ACM CCS), Copenhagen, Denmark, 2023.
Managing Longitudinal Privacy of Publicly Shared Personal Online Data.
Shujaat Mirza*, Theodor Schnitzler*, Markus Dürmuth, and Christina Pöpper. In Proceedings on Privacy Enhancing Technologies (PETS), Virtual Event, Canada, 2021.
*: indicates equal contribution. A detailed list of publications can be found on Google Scholar.
I received a B.S. in Computer Science from NYU Abu Dhabi, where my capstone project was on Reimagining Privacy in Online Social Networks. During high school, I represented my national team at the International Chemistry Olympiad.