I am a PhD candidate advised by Prof. Philippe Langlais. My research spans efficient architectures for LLMs, LLM calibration and reliability, and length generalization.
🔥 News
- 2026.01: 🎉🎉 Two papers are accepted by the WWW 2026 industry track.
- 2026.01: 🎉🎉 One paper is accepted by EACL 2026 (main).
- 2025.09: 🎉🎉 One paper is accepted by NeurIPS 2025.
- 2025.07: 🎉🎉 Two papers are accepted by ECAI 2025.
- 2025.07: 🎉🎉 One paper is accepted by COLM 2025.
- 2025.05: 🎉🎉 One paper is accepted by ICML 2025.
📝 Selected Publications
- **NeurIPS 2025** Mamba Modulation: On the Length Generalization of Mamba Models. Peng Lu, Jerry Huang, Qiuhao Zeng, Xinyu Wang, Boxing Chen, Philippe Langlais, Yufei Cui.
- **ICML 2025** Calibrated Language Models and How to Find Them with Label Smoothing. Peng Lu, Jerry Huang, Qiuhao Zeng.
- **COLM 2025** Resona: Improving Context Copying in Linear Recurrence Models with Retrieval. Xinyu Wang, Linrui Ma, Jerry Huang, Peng Lu, Prasanna Parthasarathi, Xiao-Wen Chang, Boxing Chen, Yufei Cui.
- **ECAI 2025** An Interpretable Quantum-Inspired Model for Multi-Task Natural Language Understanding. Peng Lu, Jerry Huang, Xinyu Wang, Philippe Langlais.
- **ECAI 2025** PoT-PTQ: Two-Step Power-of-Two Post-Training for LLMs. Xinyu Wang, Vahid Partovi Nia, Peng Lu, Jerry Huang, Xiao-Wen Chang, Boxing Chen, Yufei Cui.
- **ICLR 2025** ZETA: Leveraging Z-order Curves for Efficient Top-k Attention. Qiuhao Zeng, Jerry Huang, Peng Lu, Gezheng Xu, Boxing Chen, Charles Ling, Boyu Wang.
- **NAACL 2025** ReGLA: Refining Gated Linear Attention. Peng Lu, Ivan Kobyzev, Mehdi Rezagholizadeh, Boxing Chen, Philippe Langlais.
- **CIKM 2025 industry track** FinSage: A Multi-Aspect RAG System for Financial Filings Question Answering. Xinyu Wang, Jijun Chi, Zhenghan Tai, Tung Sum Thomas Kwok, Muzhi Li, Zhuhong Li, Hailin He, Yuchen Hua, Peng Lu, et al.
- **EMNLP 2024 Findings** Draft on the Fly: Adaptive Self-Speculative Decoding Using Cosine Similarity. Michael R. Metel, Peng Lu, Boxing Chen, Mehdi Rezagholizadeh, Ivan Kobyzev.
- **ACL 2024 Findings** Resonance RoPE: Improving Context Length Generalization of Large Language Models. Suyuchen Wang, Ivan Kobyzev, Peng Lu, Mehdi Rezagholizadeh, Bang Liu.
- **ACL 2023 Findings** LABO: Towards Learning Optimal Label Regularization via Bi-level Optimization. Peng Lu, Ahmad Rashid, Ivan Kobyzev, Mehdi Rezagholizadeh, Philippe Langlais.
- **EMNLP 2023** Efficient Classification of Long Documents via State-Space Models. Peng Lu, Suyuchen Wang, Mehdi Rezagholizadeh, Bang Liu, Ivan Kobyzev.
- **EMNLP 2022 Findings** Improving Generalization of Pre-trained Language Models via Stochastic Weight Averaging. Peng Lu, Ivan Kobyzev, Mehdi Rezagholizadeh, Ahmad Rashid, Ali Ghodsi, Philippe Langlais.
- **EMNLP 2021 Findings** RW-KD: Sample-wise Loss Terms Re-weighting for Knowledge Distillation. Peng Lu, Abbas Ghaddar, Ahmad Rashid, Mehdi Rezagholizadeh, Ali Ghodsi, Philippe Langlais.
- **NAACL 2019** SC-LSTM: Learning Task-Specific Representations in Multi-Task Learning for Sequence Labeling. Peng Lu, Ting Bai, Philippe Langlais.
💻 Internships
- 2020.09 - 2025.12, Huawei Noah’s Ark Lab, Canada.