Deep neural networks (DNNs) have achieved enormous success across a wide range of domains, including computer vision, natural language processing, and the sciences. However, one key bottleneck of DNNs is that they are often unaware of the uncertainty in their predictions. They can produce wildly wrong predictions without realizing it, and can even be confident in their mistakes. Such mistakes can lead to misguided decisions, which can be catastrophic in critical applications ranging from self-driving cars to cybersecurity to automated medical diagnosis. In this tutorial, we present recent advances in uncertainty quantification for DNNs and their applications across various domains. We first provide an overview of the motivation behind uncertainty quantification, the different sources of uncertainty, and evaluation metrics. Then, we delve into several representative uncertainty quantification methods for predictive models, including ensembles, Bayesian neural networks, conformal prediction, and others. We go on to discuss how uncertainty can be utilized for label-efficient learning, continual learning, robust decision-making, and experimental design. Furthermore, we showcase examples of uncertainty-aware DNNs in various domains, such as health, robotics, and scientific machine learning. Finally, we summarize open challenges and future directions in this area.
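To make one of the listed methods concrete, the following is a minimal sketch of split conformal prediction for regression, one of the techniques the tutorial covers. The toy data, the least-squares stand-in predictor, and the 90% coverage level are illustrative assumptions, not material from the tutorial itself; in practice the point predictor would be any trained DNN.

```python
import numpy as np

# Minimal sketch of split conformal prediction for regression.
# Assumes only a pre-trained point predictor; here a toy least-squares
# fit stands in for a DNN so the example is self-contained and runnable.

rng = np.random.default_rng(0)

# Toy data: y = 2x + noise.
x = rng.uniform(-1, 1, size=500)
y = 2.0 * x + rng.normal(scale=0.3, size=500)

# Split into a proper training set and a held-out calibration set.
x_train, y_train = x[:300], y[:300]
x_cal, y_cal = x[300:], y[300:]

# "Train" a simple least-squares predictor (stand-in for any model).
slope = np.dot(x_train, y_train) / np.dot(x_train, x_train)
predict = lambda inp: slope * inp

# Nonconformity scores on the calibration set: absolute residuals.
scores = np.abs(y_cal - predict(x_cal))

# Finite-sample-corrected quantile for 90% target coverage.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction interval for a new input: at least 90% marginal coverage
# holds under exchangeability of calibration and test points.
x_new = 0.5
lo, hi = predict(x_new) - q, predict(x_new) + q
print(f"90% prediction interval at x={x_new}: [{lo:.2f}, {hi:.2f}]")
```

The appeal of this recipe, and a reason it features alongside ensembles and Bayesian neural networks in the tutorial, is that it wraps around any black-box predictor and provides distribution-free coverage guarantees at the cost of a held-out calibration set.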
Lingkai Kong is a Ph.D. student in the School of Computational Science and Engineering at Georgia Tech. His research focuses on uncertainty quantification for deep learning and decision-making under uncertainty, with a particular emphasis on applications to public health and natural language processing. His work has been published in NeurIPS, ICML, EMNLP, WWW, and NAACL.
Harshavardhan Kamarthi is a Ph.D. student in the School of Computational Science and Engineering at Georgia Tech. He received a B.Tech and an M.Tech in CSE from the Indian Institute of Technology (IIT) Madras in 2020. His research interests include time-series forecasting, deep probabilistic and generative modeling, and deep learning. His work has been published in NeurIPS, ICLR, WWW, and AAMAS, and was nominated for the best student paper award at AAMAS 2020. He also received the Alumni Association Award for best academic performance and the Lakshmi Ravi Award for the best Master's thesis at IIT Madras.
Peng Chen is an Assistant Professor in the School of Computational Science and Engineering at Georgia Tech. He obtained his Ph.D. in Computational Mathematics from EPFL. His research focuses on Scientific Machine Learning (SciML) and Uncertainty Quantification (UQ), driven by grand challenge problems in science and engineering that involve data-driven modeling, learning, and optimization of complex systems under uncertainty. He has published 28 journal papers and 5 conference papers in venues such as SIAM JUQ, SISC, SIOPT, JCP, CMAME, and NeurIPS.
B. Aditya Prakash is an Associate Professor in Georgia Tech's College of Computing, with a Ph.D. from Carnegie Mellon and a B.Tech from IIT Bombay. He has published a book and over 95 papers, and holds two U.S. patents. Aditya's research focuses on Data Science, Machine Learning, and AI, specifically on big-data problems in large real-world networks and time series, with applications to computational epidemiology/public health, urban computing, security, and the Web. His work has been highlighted by several media outlets and popular press, e.g., FiveThirtyEight.com, and has won several awards (e.g., the CMU/Facebook COVID-19 Symptom Challenge and the NSF CAREER Award). He has given several tutorials at leading conferences. He was a track chair for the AI for COVID track at AAAI 2021, a Proceedings co-chair for SIGKDD 2020, the Tutorial Chair for SDM 2019, and a PC vice-chair for IEEE BigData 2019. He was also the invited lead organizer of the NSF National PREVENT symposium on pandemic prevention and prediction.
Chao Zhang is an Assistant Professor in Georgia Tech's College of Computing, with a Ph.D. from UIUC. He specializes in machine learning, data mining, and natural language processing, with a particular focus on text mining, spatiotemporal data analysis, uncertainty quantification, and decision-making. His work has been recognized by the ACM SIGKDD Dissertation Award Runner-up (2019), the UbiComp Distinguished Paper Award (2018), the ECML/PKDD Best Student Paper Runner-up Award (2015), and the ML4H Outstanding Paper Award (2022). He is also a recipient of the NSF CAREER Award and has received faculty awards from Google, Facebook, and Amazon for his contributions to the field. He has authored over 100 papers in top-tier conferences such as KDD, ICML, NeurIPS, ACL, EMNLP, and NAACL. Additionally, he has delivered tutorials at data mining venues including KDD, CIKM, and ICDE.