Xuanlin (Simon) Li

I am an incoming PhD student at UCSD CSE, advised by Prof. Hao Su. Previously, I was an undergraduate majoring in Mathematics and Computer Science at UC Berkeley (2017-2021), where I was also an undergraduate research assistant at Berkeley Artificial Intelligence Research, advised by Prof. Trevor Darrell.

GitHub  /  Google Scholar  /  LinkedIn



I currently focus on embodied AI, which combines perspectives from computer vision, deep reinforcement learning, and natural language processing to enable robots to acquire concepts and generalizable skills. I am also interested in self-supervised and unsupervised representation learning, neural network architecture learning, and optimization.


Discovering Autoregressive Orderings with Variational Inference

Xuanlin Li*, Brandon Trabucco*, Dong Huk Park, Yang Gao, Michael Luo, Sheng Shen, Trevor Darrell
International Conference on Learning Representations (ICLR) 2021
paper / video transcripts / poster / slides

We propose the first domain-independent unsupervised / self-supervised learner that discovers high-quality autoregressive orders through fully parallelizable, end-to-end training without domain-specific tuning. Empirical results on vision and language tasks suggest that, with similar hyperparameters, our algorithm recovers autoregressive orders that outperform fixed orders. Case studies suggest that the learned orders adapt to content and resemble a best-first generation order, which decodes focal objects and names first.


Regularization Matters in Policy Optimization - An Empirical Study on Continuous Control

Zhuang Liu*, Xuanlin Li*, Bingyi Kang, Trevor Darrell
International Conference on Learning Representations (ICLR) 2021 (Spotlight)
arxiv / video / code / poster / slides

We present the first comprehensive study of regularization techniques across multiple policy optimization algorithms on continuous control tasks. We show that conventional regularization methods from supervised learning, which have been largely ignored in RL, can be very effective in policy optimization on continuous control tasks, and that this finding is robust to variations in training hyperparameters. We also analyze why regularization helps policy generalization, from the perspectives of sample complexity, return distribution, weight norm, and noise robustness.

Other Projects

These include coursework, side projects, and unpublished research.



Zhuang Liu*, Xuanlin Li*, Xiaolong Wang, Trevor Darrell

Research project on neural network pruning; to be submitted to ICCV 2021.


Inferring the Optimal Policy using Markov Chain Monte Carlo

Brandon Trabucco, Albert Qu, Xuanlin Li, Ganeshkumar Ashokavardhanan
Berkeley EECS 126 (Probability and Random Processes)
arxiv

Final course project for EECS 126 (Probability and Random Processes) in Fall 2018.

Honors and Awards

Jacobs School of Engineering PhD Fellowship, UC San Diego CSE, 2021
Arthur M. Hopkin Award, UC Berkeley EECS, 2021
EECS Honors Program & Mathematics Honors Program, UC Berkeley

Design and source code adapted from Jon Barron's website.