beat365 Lecture Series, Elite Forum No. 31: Empowering Large Language Models with Faithful Reasoning
Title: Empowering Large Language Models with Faithful Reasoning
Date & Time: July 5, 2024, 10:00-11:30
Location: Room 1453, Science Building #1 (Yanyuan Campus)
Speaker: Liangming Pan
Host: Houfeng Wang
Abstract:
Despite the remarkable advances made by large language models (LLMs) in a variety of applications, they still struggle to perform consistent and reliable reasoning when faced with highly complex tasks, such as solving logical problems and answering deep questions. In this talk, I will discuss our research on empowering large language models with human-like reasoning strategies for more reliable reasoning, including problem formulation, planning and tool use, and learning from feedback. I will introduce three lines of our work that reflect these strategies: 1) integrating symbolic formulations for reliable logical reasoning, 2) utilizing reasoning programs for explicit planning and tool use, and 3) analyzing the self-bias of LLMs in self-correction. I will conclude by reflecting on the challenges we have faced and mapping out prospective future directions.
Bio:

Liangming Pan is a Postdoctoral Scholar at the University of California, Santa Barbara (UCSB), working with Prof. William Wang. He obtained his Ph.D. from the National University of Singapore in 2022, supervised by Prof. Min-Yen Kan. His research interests lie in natural language processing, with a main focus on building reliable generative AI models capable of handling complex reasoning scenarios such as deep question answering. He has published more than 30 papers at leading NLP/AI/ML conferences and journals, with 1700+ Google Scholar citations. During his Ph.D., he received the NUS Research Achievement Award and the Dean's Graduate Research Excellence Award. His paper received the Area Chair Award at IJCNLP-AACL 2023.
Follow the beat365 WeChat official account for more lecture information!
beat365 official website
