John Murzaku
Computer Science PhD student at Stony Brook University

Advisor: Owen Rambow
Email: jmurzaku <at> cs.stonybrook <dot> edu

About me

I'm a 5th-year Computer Science PhD candidate at Stony Brook University working on belief modeling in text and multimodal contexts. Advised by Owen Rambow, I study how language models represent, reason about, and generate beliefs in conversation, particularly around mental states, factuality, and common ground.

My work spans computational linguistics and cognitive science, bridging theory-of-mind concepts with modern NLP. I am also interested in how large language models process spontaneous narratives and how we can evaluate and improve their ability to detect hedges, sarcasm, and other subtle markers of speaker intent. This is an ongoing collaboration with Susan Brennan of the Stony Brook Psychology Department.

Recently, I interned at Adobe, working on interactive disambiguation and clarification question generation for enterprise AI assistants. I previously completed internships at Gap International and Raytheon, focusing on real-world NLP applications such as authorship attribution, topic modeling, and knowledge-grounded dialogue.


Updates

January 2025

My Adobe internship work on Enhanced Clarification for Interactive Responses (ECLAIR) will appear at AAAI 2025 (Demo Track) and IAAI 2025. Our paper on synthetic audio for cognitive state tasks was also accepted to Findings of NAACL 2025.

May - Nov. 2024

Interned at Adobe as an ML Engineer, building an interactive disambiguation pipeline for the AEP AI Assistant. I had an excellent experience and was mentored by Zifan Liu and manager Yunyao Li. Two papers accepted and a patent submitted!

Sep. - Dec. 2023

Interned at the Genius Institute at Gap International as an NLP Engineer. I leveraged LLMs to detect patterns of genius and built an unsupervised clustering pipeline for genius detection. I was mentored by my amazing father, Alex Murzaku (fulfilling my childhood dream of one day working with my dad).



Publications

  1. Zero-Shot Belief: A Hard Problem for LLMs
    Submitted to ACL 2025. J. Murzaku, O. Rambow. [PDF]
  2. Projecting Knowledge and Common Ground from Characters’ Utterances in Narratives: A Psycholinguistic Baseline for LLMs
    ToM4AI Workshop (AAAI 2025). A. Soubki, A. Paige, J. Murzaku, O. Rambow, S. Brennan. Psychology Dept. Collaboration
  3. ECLAIR: Enhanced Clarification for Interactive Responses in an Enterprise AI Assistant
    To appear at AAAI 2025 (Demo Track). J. Murzaku, Z. Liu, V. Muppala, M. Tanjim, X. Chen, Y. Li. Adobe Internship
  4. ECLAIR: Enhanced Clarification for Interactive Responses
    To appear at IAAI 2025. J. Murzaku, Z. Liu, M. Tanjim, V. Muppala, X. Chen, Y. Li. 15.7% Acceptance Rate. Adobe Internship
  5. Synthetic Audio Helps for Cognitive State Tasks
    Findings of NAACL 2025. A. Soubki*, J. Murzaku*, P. Zeng, O. Rambow. [PDF]
  6. Training LLMs to Recognize Hedges in Spontaneous Narratives
    SIGDIAL 2024. A. Paige*, A. Soubki*, J. Murzaku*, O. Rambow, S. Brennan. [PDF] Psychology Dept. Collaboration
  7. Views Are My Own, but Also Yours: Benchmarking Theory of Mind Using Common Ground
    Findings of ACL 2024. A. Soubki, J. Murzaku, A. Y. Jordehi, P. Zeng, M. Markowska, S. A. Mirroshandel, O. Rambow. [PDF]
  8. Multimodal Belief Prediction
    Interspeech 2024. J. Murzaku*, A. Soubki*, O. Rambow. [PDF]
  9. BeLeaf: Belief Prediction as Tree Generation
    NAACL 2024 (Demo). J. Murzaku, O. Rambow. [PDF]
  10. Towards Generative Event Factuality Prediction
    Findings of ACL 2023. J. Murzaku, T. Osborne, A. Aviram, O. Rambow. [PDF]
  11. Re-Examining FactBank: Predicting the Author's Presentation of Factuality
    COLING 2022. J. Murzaku, P. Zeng, M. Markowska, O. Rambow. [PDF]
* denotes equal contribution