Qingfeng Lan

I’m on the job market and actively looking for a Research Scientist position starting in Summer 2025! Feel free to reach out!

CV / Google Scholar / GitHub / LinkedIn / Twitter

Research Interests: Reinforcement Learning, Large Language Models with Reasoning and Planning, Continual Learning, Meta-Learning, Embodied Agents.

I’m a final-year PhD student at the University of Alberta, advised by Prof. A. Rupam Mahmood.
My long-term research objective is to develop an intelligent agent that can extract, accumulate, and exploit knowledge continually and efficiently in the real world.
In particular, my PhD research focuses on designing efficient reinforcement learning algorithms by reducing forgetting and maintaining plasticity. I have also worked on meta-learning, exploration, language modeling, and quantum reinforcement learning. Recently, I’ve been interested in large language models and embodied agents.

News

  • 2024.12: I will attend NeurIPS 2024 in Vancouver. Feel free to reach out to chat in person.
  • 2024.11: I started an internship at Huawei Noah’s Ark Lab, working on LLM × RL.
  • 2024.08: After many years of hard work, the “loss of plasticity” paper was finally accepted by Nature! Kudos to all co-authors!
  • 2024.06: I started an internship at Meta Reality Labs. See you in California!
  • 2024.05: Three papers were accepted at the Reinforcement Learning Conference (RLC) 2024.
  • 2024.01: One paper was accepted at the International Conference on Learning Representations (ICLR) 2024.

Contact

  • Email: qlan3 [AT] ualberta [DOT] ca
  • WeChat/微信: Lancelqf

Publications

  • Loss of Plasticity in Deep Continual Learning
    Shibhansh Dohare, J. Fernando Hernandez-Garcia, Qingfeng Lan, Parash Rahman, A. Rupam Mahmood, Richard S. Sutton
    Nature 2024, Article. [paper] [code] [podcast] [news]

  • Learning to Optimize for Reinforcement Learning
    Qingfeng Lan, A. Rupam Mahmood, Shuicheng Yan, Zhongwen Xu
    RLC 2024, Oral. [paper] [code]

  • Weight Clipping for Deep Continual and Reinforcement Learning
    Mohamed Elsayed, Qingfeng Lan, Clare Lyle, A. Rupam Mahmood
    RLC 2024, Oral. [paper] [code]

  • More Efficient Randomized Exploration for Reinforcement Learning via Approximate Sampling
    Haque Ishfaq, Yixin Tan, Yu Yang, Qingfeng Lan, Jianfeng Lu, A. Rupam Mahmood, Doina Precup, Pan Xu
    RLC 2024, Oral. [paper] [code]

  • Provable and Practical: Efficient Exploration in Reinforcement Learning via Langevin Monte Carlo
    Haque Ishfaq*, Qingfeng Lan*, Pan Xu, A. Rupam Mahmood, Doina Precup, Anima Anandkumar, Kamyar Azizzadenesheli
    ICLR 2024, Poster. [paper] [code]

  • Elephant Neural Networks: Born to Be a Continual Learner
    Qingfeng Lan, A. Rupam Mahmood
    ICML Workshop on High-dimensional Learning Dynamics 2023, Poster. [paper]

  • Overcoming Policy Collapse in Deep Reinforcement Learning
    Shibhansh Dohare, Qingfeng Lan, A. Rupam Mahmood
    EWRL 2023. [paper]

  • Memory-efficient Reinforcement Learning with Value-based Knowledge Consolidation
    Qingfeng Lan, Yangchen Pan, Jun Luo, A. Rupam Mahmood
    TMLR 2023, CoLLAs certification. [paper] [code]

  • Model-free Policy Learning with Reward Gradients
    Qingfeng Lan, Samuele Tosatto, Homayoon Farrahi, A. Rupam Mahmood
    AISTATS 2022, Poster. [paper] [code]

  • Variational Quantum Soft Actor-Critic
    Qingfeng Lan
    arXiv preprint 2021. [paper] [code]

  • Predictive Representation Learning for Language Modeling
    Qingfeng Lan, Luke Kumar, Martha White, Alona Fyshe
    arXiv preprint 2021. [paper]

  • Maxmin Q-learning: Controlling the Estimation Bias of Q-learning
    Qingfeng Lan, Yangchen Pan, Alona Fyshe, Martha White
    ICLR 2020, Poster. [paper] [code] [video]

  • Reducing Selection Bias in Counterfactual Reasoning for Individual Treatment Effects Estimation
    Zichen Zhang, Qingfeng Lan, Lei Ding, Yue Wang, Negar Hassanpour, Russell Greiner
    NeurIPS Workshop on Causal Machine Learning 2019, Poster Spotlight. [paper]

  • A Deep Top-K Relevance Matching Model for Ad-hoc Retrieval
    Zhou Yang, Qingfeng Lan, Jiafeng Guo, Yixing Fan, Xiaofei Zhu, Yanyan Lan, Yue Wang, Xueqi Cheng
    CCIR 2018, Best Paper Award Candidate. [paper] [code]

Open-Source Code

  • Jaxplorer: A JAX reinforcement learning framework for exploring new ideas.
  • Optim4RL: A JAX framework for learning to optimize for reinforcement learning.
  • Explorer: A PyTorch reinforcement learning framework for exploring new ideas.
  • Gym Games: A collection of Gymnasium-compatible games for reinforcement learning.
  • Quantum Explorer: A quantum reinforcement learning framework based on PyTorch and PennyLane.
  • Loss of Plasticity: The implementation of continual backpropagation, which maintains network plasticity.

Education

  • University of Alberta, Sep 2020-Present

    • PhD in Computing Science
    • Supervisor: A. Rupam Mahmood
  • University of Alberta, Sep 2018-Aug 2020

    • Master of Science (Thesis-based) in Computing Science
    • Supervisor: Alona Fyshe
  • University of Chinese Academy of Sciences, Sep 2014-Jul 2018

    • Bachelor of Engineering, major in Computer Science and Technology, minor in Physics
    • Supervisor: Yanyan Lan; Tutor: Guojie Li
  • St Edmund Hall, University of Oxford, Oct 2017-Mar 2018