I am a master's student in the LTI at Carnegie Mellon University, advised by Prof. Mona Diab, and I work with Profs. Maarten Sap, Daniel Fried, and Niloofar Mireshghallah. My research explores AI safety, LLM reasoning, socially responsible AI, interpretability, agentic systems, and human-AI interaction.

Before coming to CMU, I spent four years at Northeastern University studying Software Engineering. During my undergraduate studies, I completed projects in distributed systems and microservice development.

I’m always enthusiastic about collaborating with researchers from diverse fields. If you’re interested in working together, please don’t hesitate to reach out to me.

Research Interests

Social Learning: Modeling social intelligence and social reasoning grounded in human cognition; understanding how computational models of human cognition differ from current AI systems, and how insights from human cognitive processes can guide AI systems toward deeper social reasoning.

AI Safety: Measuring and mitigating unsafe behaviors in AI systems, with the goal of developing AI that is safe, controllable, trustworthy, and aligned with human values, behaviors, and safety needs.

Reasoning and Continual Learning: Studying how social and causal reasoning abilities emerge through post-training and reinforcement learning, and exploring meta-abilities in reasoning via internal representations and model behavior.

πŸ”₯ News

  • 2025.07: Β πŸŽ‰πŸŽ‰ Joined Fujitsu as a Research Scientist Intern!