About me
I am a fourth-year PhD student (2021 - present) in the Department of Computer Science and Engineering at Shanghai Jiao Tong University (SJTU), where I am fortunate to be advised by Prof. Rui Wang. Before that, I received my bachelor's degree in Software Engineering from South China University of Technology (SCUT). I am currently a research intern at Tencent AI Lab, co-advised by Dr. Xing Wang and Dr. Zhaopeng Tu. I also work closely with Zhuosheng Zhang.
🔬 Research
Large and Efficient Reasoning Models
- Underthinking issue in o1-like models [Preprint]
- Overthinking issue in o1-like models [Preprint]
- Rank-sharing LoRA [ICLR 2025]
Autonomous Agent powered by Large Language Models
- Multi-agent debate [EMNLP 2024]
- Evaluating and improving agent safety [EMNLP 2024 (Findings)]
Multilinguality & Machine Translation
- Bridging the gap between training signal and real user input [ACL 2022]
- Human-like translation strategy [TACL 2024]
- Improving translation with human feedback [NAACL 2024]
- Cross-lingual consistency for text watermark [ACL 2024 (Oral)]
🔥 News
- 2025.01: 🤯🤯 Revealed underthinking issue in o1-like models (preprint).
- 2024.12: 🎉🎉 One paper about parameter-efficient fine-tuning accepted by ICLR 2025.
- 2024.12: 🤯🤯 Revealed overthinking issue in o1-like models (preprint).
- 2024.08: 🇹🇭🐘 Gave an oral presentation at ACL 2024 on cross-lingual text watermark.
- 2024.06: 🇲🇽🌮 Attended NAACL 2024 @ Mexico.
- 2024.05: 🎉🎉 One paper about cross-lingual text watermark accepted by ACL 2024.
- 2024.03: 🎉🎉 One paper about improving translation with human feedback accepted by NAACL 2024.
- 2023.11: 🎉🎉 One paper about human-like translation strategy accepted by TACL 2024.
- 2023.05: Introduced the MAPS framework, enabling LLMs to mimic the human translation strategy. See also the media coverage 📸.
- 2023.05: Proposed a multi-agent debate framework (MAD) with large language models (EMNLP 2024).
🖨️ Selected preprints
* denotes co-first authors

Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs
Yue Wang*, Qiuzhi Liu*, Jiahao Xu*, Tian Liang*, Xingyu Chen*, Zhiwei He*, Linfeng Song, Dian Yu, Juntao Li, Zhuosheng Zhang, Rui Wang, Zhaopeng Tu, Haitao Mi, Dong Yu
o1-like models suffer from underthinking, which:
- Occurs more frequently on harder problems,
- Leads to frequent switching between thoughts without reaching a conclusion,
- Correlates with incorrect responses due to insufficient exploration.

Do NOT Think That Much for 2+3=? On the Overthinking of o1-Like LLMs
Xingyu Chen*, Jiahao Xu*, Tian Liang*, Zhiwei He*, Jianhui Pang, Dian Yu, Linfeng Song, Qiuzhi Liu, Mengfei Zhou, Zhuosheng Zhang, Rui Wang, Zhaopeng Tu, Haitao Mi, Dong Yu
o1-like models suffer from overthinking, which:
- Contributes minimally to accuracy.
- Lacks diversity in reasoning strategies.
- Occurs more frequently with simple problems.
📝 Selected publications
* denotes co-first authors
[ICLR 2025] RaSA: Rank-Sharing Low-Rank Adaptation
Zhiwei He, Zhaopeng Tu, Xing Wang, Xingyu Chen, Zhijie Wang, Jiahao Xu, Tian Liang, Wenxiang Jiao, Zhuosheng Zhang, Rui Wang
[EMNLP 2024 (Findings)] R-Judge: Benchmarking Safety Risk Awareness for LLM Agents
Tongxin Yuan*, Zhiwei He*, Lingzhong Dong, Yiming Wang, Ruijie Zhao, Tian Xia, Lizhen Xu, Binglin Zhou, Fangqi Li, Zhuosheng Zhang, Rui Wang, Gongshen Liu
- Are LLM agents aware of safety risks in real-world applications? Let’s find out with R-Judge!
[EMNLP 2024] Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate
Tian Liang*, Zhiwei He*, Wenxiang Jiao*, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi
- We propose a multi-agent debate framework with large language models.
[ACL 2024 (Oral)] Can Watermarks Survive Translation? On the Cross-lingual Consistency of Text Watermark for Large Language Models
Zhiwei He*, Binglin Zhou*, Hongkun Hao, Aiwei Liu, Xing Wang, Zhaopeng Tu, Zhuosheng Zhang, Rui Wang
- Text watermarks can be easily removed by translation.
- We analyze and improve the cross-lingual consistency of text watermarks.
[NAACL 2024] Improving Machine Translation with Human Feedback: An Exploration of Quality Estimation as a Reward Model
Zhiwei He, Xing Wang, Wenxiang Jiao, Zhuosheng Zhang, Rui Wang, Shuming Shi, Zhaopeng Tu
- We identify the overoptimization problem when using QE-based reward models for training translation models.
- We address it with a simple yet effective method.
[TACL 2024] Exploring Human-Like Translation Strategy with Large Language Models
Zhiwei He*, Tian Liang*, Wenxiang Jiao, Zhuosheng Zhang, Yujiu Yang, Rui Wang, Zhaopeng Tu, Shuming Shi, Xing Wang
- We propose MAPS, the first machine translation system that mimics human translation strategies.
- Outperforms WMT22 winners in 5 out of 11 translation directions.
- Media coverage
[WMT 2022] Tencent AI Lab - Shanghai Jiao Tong University Low-Resource Translation System for the WMT22 Translation Task
Zhiwei He, Xing Wang, Zhaopeng Tu, Shuming Shi, Rui Wang
- Machine translation system for Livonian
- 🥇 1st place for English⇒Livonian (unconstrained system)
- 🥈 2nd place for Livonian⇒English (unconstrained system)
[ACL 2022] Bridging the Data Gap between Training and Inference for Unsupervised Neural Machine Translation
Zhiwei He, Xing Wang, Rui Wang, Shuming Shi, Zhaopeng Tu
🎖 Honors and Awards
- 2022.8: 1st place in the WMT22 General Translation Task, English to Livonian (Unconstrained System).
- 2022.8: 2nd place in the WMT22 General Translation Task, Livonian to English (Unconstrained System).
- 2018, 2019: First Class Scholarship.
💬 Invited Talks
- 2024.07: Can Watermarks Survive Translation, AITIME | [video] [slide]
- 2023.11: Improving Machine Translation with Human Strategy and Feedback, CJNLP | [slide]
- 2022.08: Unsupervised Neural Machine Translation, CCKS 2022
💻 Internships
- 2021 - present: Tencent AI Lab, Shenzhen, Mentors: Dr. Xing Wang and Dr. Zhaopeng Tu.