AI Seminar: "Efficient Sequential Decision Making with Large Language Models" by Yinglun Zhu

MRB Seminar Room
Abstract:

This presentation focuses on extending the success of large language models (LLMs) to sequential decision making. Existing efforts either (i) re-train or fine-tune LLMs for decision making, or (ii) design prompts for pretrained LLMs. The former approach suffers from the computational burden of gradient updates, and the latter has not shown promising results. In this presentation, I'll talk about a new approach that leverages online model selection algorithms to efficiently incorporate LLM agents into sequential decision making. Statistically, our approach significantly outperforms both traditional decision making algorithms and vanilla LLM agents. Computationally, our approach avoids the need for expensive gradient updates of LLMs and, throughout the decision making process, requires only a small number of LLM calls. We conduct extensive experiments to verify the effectiveness of our proposed approach. As an example, on a large-scale Amazon dataset, our approach achieves more than a 6x performance gain over baselines while calling LLMs in only 1.5% of the time steps.
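The abstract does not spell out the algorithm, but the general idea of online model selection over a cheap traditional policy and an expensive LLM agent can be illustrated with a minimal Python sketch. Everything below is hypothetical: the class names (UCBBandit, LLMAgent, OnlineModelSelector), the interface, and the stubbed LLM call are assumptions for illustration only, not the speaker's actual method.

import math
import random

class UCBBandit:
    """Cheap traditional decision-maker: UCB1 over a fixed set of arms."""
    def __init__(self, n_arms):
        self.n_arms = n_arms
        self.counts = [0] * n_arms
        self.means = [0.0] * n_arms
        self.t = 0

    def act(self, context=None):
        self.t += 1
        # Play every arm once, then pick the arm with the highest UCB index.
        for a in range(self.n_arms):
            if self.counts[a] == 0:
                return a
        ucb = [self.means[a] + math.sqrt(2 * math.log(self.t) / self.counts[a])
               for a in range(self.n_arms)]
        return max(range(self.n_arms), key=lambda a: ucb[a])

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]

class LLMAgent:
    """Expensive policy: would prompt a pretrained LLM to pick an arm.
    Stubbed with a random choice here; each act() costs one LLM call."""
    def __init__(self, n_arms):
        self.n_arms = n_arms
        self.calls = 0

    def act(self, context=None):
        self.calls += 1
        return random.randrange(self.n_arms)  # stand-in for an actual LLM query

    def update(self, arm, reward):
        pass  # a purely prompted agent may not learn from feedback

class OnlineModelSelector:
    """UCB-style selection over base policies: route each round to the policy
    with the highest optimistic reward estimate, so the costly LLM agent is
    queried only while it still looks competitive."""
    def __init__(self, policies):
        self.policies = policies
        self.counts = [0] * len(policies)
        self.means = [0.0] * len(policies)
        self.t = 0

    def step(self, env_reward_fn, context=None):
        self.t += 1
        for i in range(len(self.policies)):
            if self.counts[i] == 0:
                choice = i
                break
        else:
            idx = [self.means[i] + math.sqrt(2 * math.log(self.t) / self.counts[i])
                   for i in range(len(self.policies))]
            choice = max(range(len(self.policies)), key=lambda i: idx[i])
        arm = self.policies[choice].act(context)
        reward = env_reward_fn(arm)
        self.policies[choice].update(arm, reward)
        self.counts[choice] += 1
        self.means[choice] += (reward - self.means[choice]) / self.counts[choice]
        return reward

# Toy usage: a 3-armed Bernoulli environment with made-up success rates.
arm_means = [0.2, 0.5, 0.8]
env = lambda a: 1.0 if random.random() < arm_means[a] else 0.0
selector = OnlineModelSelector([UCBBandit(3), LLMAgent(3)])
total = sum(selector.step(env) for _ in range(10_000))

In this toy run the selector drifts toward whichever policy earns more reward, so a strong baseline naturally crowds out the costly LLM calls over time; that is the flavor of the statistical/computational trade-off the abstract describes.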

Bio:

Yinglun Zhu is an assistant professor in the ECE department at the University of California, Riverside; he is also affiliated with the CSE department, the RAISE@UCR Institute, and the Center for Robotics and Intelligent Systems. Yinglun's research interest is in interactive machine learning, which includes learning paradigms such as active learning, bandits, and reinforcement learning. Recently, Yinglun has focused on connecting interactive machine learning to large AI models (e.g., LLMs), from both algorithmic and systems perspectives. Yinglun's research has been integrated into leading machine learning libraries and commercial products.

Type: Seminars
Target Audience: Students, Faculty, Staff
Admission: Free