Lingkai Kong's website
Postdoctoral Fellow at Harvard University
Welcome to Lingkai Kong (孔令恺)’s homepage! I am a postdoctoral fellow at Harvard, advised by Prof. Milind Tambe. I obtained my Ph.D. in Computational Science and Engineering from Georgia Institute of Technology, advised by Prof. Chao Zhang.
I am on the academic job market. Please feel free to reach out if you think my background and experience could be a good fit for your institution.
Many real-world decisions are high stakes, time sensitive, and made under uncertainty. Traditional methods can struggle at large spatial and temporal scales, where data are limited and noisy. My work integrates generative AI with optimization and reinforcement learning to model uncertainty, capture complex patterns, and enable reliable, efficient decisions that deliver practical impact in public health and environmental sustainability.

Selected papers on the technical side of this work:
- Composite Flow Matching for Reinforcement Learning with Shifted-Dynamics Data, NeurIPS’25 (Spotlight)
- Robust Optimization with Diffusion Models for Green Security, UAI’25
- Diffusion Models as Constrained Samplers for Optimization with Unknown Constraints, AISTATS’25
- Aligning Large Language Models with Representation Editing: A Control Perspective, NeurIPS’24
- AdaPlanner: Adaptive Planning from Feedback with Language Models, NeurIPS’23
- End-to-End Stochastic Optimization with Energy-Based Model, NeurIPS’22 (Oral)
- SDE-Net: Equipping Deep Neural Networks with Uncertainty Estimates, ICML’20
I also work closely with domain experts to ensure that the methods I design address domain-specific challenges and deliver meaningful impact in practice. Selected papers on the applied side of this work:
- Generative AI Against Poaching: Latent Composite Flow Matching for Wildlife Conservation, IAAI’26
- LLM-based Agent Simulation for Maternal Health Interventions: Uncertainty Estimation and Decision-focused Evaluation, AAMAS-AASG’25
- PRIORITY2REWARD: Incorporating Healthworker Preferences for Resource Allocation Planning, AAAI’25
- When in Doubt: Neural Non-Parametric Uncertainty Quantification for Epidemic Forecasting, NeurIPS’21