Lingkai Kong

Postdoctoral Fellow

Harvard University

About Me

Welcome to Lingkai Kong (孔令恺)’s homepage! I am a postdoctoral fellow at Harvard, advised by Prof. Milind Tambe. I obtained my Ph.D. in Computational Science and Engineering from Georgia Institute of Technology, advised by Prof. Chao Zhang.

My research focuses on developing reliable data-driven solutions for high-stakes decision-making. On the methodological side, I work on uncertainty quantification, decision-focused learning, and generative models.

On the practical side, I am passionate about interdisciplinary applications: public health [NeurIPS'21, WWW'22, KDD'23] and scientific discovery [ICML'23, TMLR, AISTATS'24].

Email: lingkaikong [at] g [dot] harvard [dot] edu

Education


  • Georgia Institute of Technology, Atlanta, USA
    Ph.D. in Computational Science and Engineering, 2019 - 2024
  • Georgia Institute of Technology, Atlanta, USA
    Ph.D. in Electrical and Computer Engineering (transferred to CSE), 2017 - 2019
  • Southeast University, Nanjing, China
    B.E. in Information Engineering, 2013 - 2017

Preprints


  • Aligning Large Language Models with Representation Editing: A Control Perspective
    Lingkai Kong*, Haorui Wang*, Wenhao Mu*, Yuanqi Du, Yuchen Zhuang, Yifei Zhou, Yue Song, Rongzhi Zhang, Kai Wang, Chao Zhang
    [arXiv]

  • Diffusion Models as Constrained Samplers for Optimization with Unknown Constraints
    Lingkai Kong*, Yuanqi Du*, Wenhao Mu*, Kirill Neklyudov, Valentin De Bortoli, Haorui Wang,
    Dongxia Wu, Aaron Ferber, Yi-An Ma, Carla P. Gomes, Chao Zhang
    [arXiv]

  • DF²: Distribution-Free Decision-Focused Learning
    Lingkai Kong, Wenhao Mu, Jiaming Cui, Yuchen Zhuang, B Aditya Prakash, Bo Dai and Chao Zhang
    [arXiv]

  • Efficient Evolutionary Search Over Chemical Space with Large Language Models
    Haorui Wang, Marta Skreta, Cher-Tian Ser, Wenhao Gao, Lingkai Kong, Felix Strieth-Kalthoff, Chenru Duan, Yuchen Zhuang, Yue Yu, Yanqiao Zhu, Yuanqi Du, Alán Aspuru-Guzik, Kirill Neklyudov and Chao Zhang
    [arXiv]

  • Time-MMD: A New Multi-Domain Multimodal Dataset for Time Series Analysis
    Haoxin Liu, Shangqing Xu, Zhiyuan Zhao, Lingkai Kong, Harshavardhan Kamarthi, Aditya B Sasanur, Megha Sharma, Jiaming Cui, Qingsong Wen, Chao Zhang and B Aditya Prakash
    [arXiv]

Publications


  • TPD: Enhancing Student Language Model Reasoning via Principle Discovery and Guidance
    Haorui Wang, Rongzhi Zhang, Yinghao Li, Lingkai Kong, Yuchen Zhuang, Xiusi Chen, Chao Zhang
    Conference on Language Modeling (COLM), 2024
    [Paper]

  • Time-Series Forecasting for Out-of-Distribution Generalization Using Invariant Learning
    Haoxin Liu, Harshavardhan Kamarthi, Lingkai Kong, Zhiyuan Zhao, Chao Zhang and B. Aditya Prakash
    International Conference on Machine Learning (ICML), 2024
    [Paper]

  • Two Birds with One Stone: Enhancing Uncertainty Quantification and Interpretability with Graph Functional Neural Process
    Lingkai Kong*, Haotian Sun*, Yuchen Zhuang, Haorui Wang, Wenhao Mu and Chao Zhang
    International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
    [Paper]

  • AdaPlanner: Adaptive Planning from Feedback with Language Models
    Haotian Sun, Yuchen Zhuang, Lingkai Kong, Bo Dai and Chao Zhang
    Advances in Neural Information Processing Systems (NeurIPS), 2023
    [Paper]

  • MUBen: Benchmarking the Uncertainty of Pre-Trained Models for Molecular Property Prediction
    Yinghao Li, Lingkai Kong, Yuanqi Du, Yue Yu, Yuchen Zhuang, Wenhao Mu and Chao Zhang
    Transactions on Machine Learning Research (TMLR), NeurIPS Workshop on AI4Science, 2023
    [Paper]

  • Uncertainty Quantification in Deep Learning
    Lingkai Kong, Harshavardhan Kamarthi, Peng Chen, B Aditya Prakash and Chao Zhang
    ACM SIGKDD Conference on Knowledge Discovery and Data Mining (SIGKDD), 2023 (Conference Tutorial)
    [Tutorial page]

  • When Rigidity Hurts: Soft Consistency Regularization for Probabilistic Hierarchical Time Series Forecasting
    Harshavardhan Kamarthi, Lingkai Kong, Alexander Rodríguez, Chao Zhang and B Aditya Prakash
    ACM SIGKDD Conference on Knowledge Discovery and Data Mining (SIGKDD), 2023
    [Paper]

  • DyGen: Fine-Tuning Language Models with Noisy Labels by Dynamics-Enhanced Generative Modeling
    Yuchen Zhuang, Yue Yu, Lingkai Kong, Xiang Chen and Chao Zhang
    ACM SIGKDD Conference on Knowledge Discovery and Data Mining (SIGKDD), 2023
    [Paper]

  • Autoregressive Diffusion Model for Graph Generation
    Lingkai Kong, Jiaming Cui, Haotian Sun, Yuchen Zhuang, B Aditya Prakash and Chao Zhang
    International Conference on Machine Learning (ICML), 2023
    [Paper]

  • End-to-End Stochastic Optimization with Energy-based Model
    Lingkai Kong, Jiaming Cui, Yuchen Zhuang, Rui Feng, B Aditya Prakash and Chao Zhang
    Advances in Neural Information Processing Systems (NeurIPS), 2022 (Selected as Oral)
    [Paper] [Code]

  • AcTune: Uncertainty-Aware Active Self-Training for Active Fine-Tuning of Pretrained Language Models
    Yue Yu, Lingkai Kong, Jieyu Zhang, Rongzhi Zhang, Chao Zhang
    Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2022
    [Paper] [Code]

  • CAMul: Calibrated and Accurate Multi-view Time-Series Forecasting
    Harshavardhan Kamarthi, Lingkai Kong, Alexander Rodríguez, Chao Zhang and B Aditya Prakash
    The Web Conference (WWW), 2022
    [Paper] [Code]

  • When in Doubt: Neural Non-Parametric Uncertainty Quantification for Epidemic Forecasting
    Harshavardhan Kamarthi, Lingkai Kong, Alexander Rodríguez, Chao Zhang and B Aditya Prakash
    Advances in Neural Information Processing Systems (NeurIPS), 2021
    [Paper] [Code]

  • Data Efficient Estimation for Quality of Transmission Through Active Learning in Fiber-Wireless Integrated Network
    Shuang Yao, Chin-Wei Hsu, Lingkai Kong, Qi Zhou, Shuyi Shen, Rui Zhang, Shang-Jen Su, Yahya Alfadhli, and Gee-Kung Chang
    Journal of Lightwave Technology, Vol. 39, No. 18, pp. 5691-5698, Sept. 2021
    [Paper]

  • Calibrated Language Model Fine-Tuning for In- and Out-of-Distribution Data
    Lingkai Kong, Haoming Jiang, Yuchen Zhuang, Jie Lyu, Tuo Zhao and Chao Zhang
    Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020
    [Paper] [Code]

  • SDE-Net: Equipping Deep Neural Networks with Uncertainty Estimates
    Lingkai Kong, Jimeng Sun and Chao Zhang
    International Conference on Machine Learning (ICML), 2020
    [Paper] [Code] [Video]

  • Learning Deep Hidden Nonlinear Dynamics from Aggregate Data
    Yisen Wang, Bo Dai, Lingkai Kong, Sarah Erfani, James Bailey and Hongyuan Zha
    Conference on Uncertainty in Artificial Intelligence (UAI), 2018
    [Paper]

  • Wide-range Dimmable Clipped Flip-OFDM For Indoor Visible Light Communication
    Liang Wu, Lingkai Kong, Zaichen Zhang, Jian Dang and Huaping Liu
    IEEE/CIC International Conference on Communications in China (ICCC), 2018
    [Paper]

  • A Novel OFDM Scheme for VLC Systems under LED Nonlinear Constraints
    Lingkai Kong, Congcong Cao, Siyuan Zhang, Mengchao Li and Liang Wu
    EAI International Conference on Communications and Networking in China (ChinaCom), 2016
    [Paper]

Experience


  • Amazon, Seattle, May 2021 - Nov 2021
    Applied Scientist Intern, Product Graph Team
    Mentor: Xiang He, Chenwei Zhang; Manager: Xin Luna Dong
  • IQVIA, Cambridge, June 2020 - Aug 2020
    Research Intern, Analytics Center of Excellence
    Mentor: Danica (Cao) Xiao

Awards


  • ICML Travel Award, 2023
  • NeurIPS Scholar Award, 2022
  • Otto & Jenny Krauss Fellowship, Georgia Tech, 2017
  • Outstanding Undergraduate Thesis, Southeast University (Top 5%), 2017

Academic Services


Program Committee/Reviewer: NeurIPS 2023, KDD 2021-2023, ACL 2021-2023, EMNLP 2020-2023, NAACL 2021-2022

Teaching


  • Teaching Assistant, CSE8803 Deep Learning for Text Data, Fall 2020
  • Teaching Assistant, CSE8803 Deep Learning for Text Data, Fall 2019
  • Teaching Assistant, CS7641 Machine Learning (Online), Spring 2019
  • Teaching Assistant, CS7641 Machine Learning (Online), Fall 2018