Jingyun Yang

I'm a third-year PhD student in the Department of Computer Science at Stanford University. My research focuses on developing generalizable, data-efficient, and deployable methods for robot policy learning. I am advised by Jeannette Bohg as part of the Interactive Perception and Robot Learning Lab.

Previously, I received my Master's degree in Machine Learning at CMU, where I was co-advised by Katerina Fragkiadaki and Christopher G. Atkeson. Before that, I was an undergraduate student at USC, where I worked with Joseph J. Lim.

Publications

Mobi-π: Mobilizing Your Robot Learning Policy

Jingyun Yang, Isabella Huang*, Brandon Vu*, Max Bajracharya, Rika Antonova, Jeannette Bohg

Preprint

EquiBot: SIM(3)-Equivariant Diffusion Policy for Generalizable and Data Efficient Learning

Jingyun Yang*, Zi-ang Cao*, Congyue Deng, Rika Antonova, Shuran Song, Jeannette Bohg

CoRL 2024
CoRL 2024 Workshop on Whole-body Control and Bimanual Manipulation, Spotlight Presentation

Unpacking Failure Modes of Generative Policies: Runtime Monitoring of Consistency and Progress

Christopher Agia, Rohan Sinha, Jingyun Yang, Zi-ang Cao, Rika Antonova, Marco Pavone, Jeannette Bohg

CoRL 2024

DROID: A Large-Scale In-the-Wild Robot Manipulation Dataset

Alexander Khazatsky, Karl Pertsch, et al.

RSS 2024

Open X-Embodiment: Robotic Learning Datasets and RT-X Models

Open X-Embodiment Collaboration

ICRA 2024, Best Conference Paper Award

EquivAct: SIM(3)-Equivariant Visuomotor Policies beyond Rigid Object Manipulation

Jingyun Yang*, Congyue Deng*, Jimmy Wu, Rika Antonova, Leonidas Guibas, Jeannette Bohg

ICRA 2024

Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for Autonomous Real-World Reinforcement Learning

Jingyun Yang*, Max Sobol Mark*, Brandon Vu, Archit Sharma, Jeannette Bohg, Chelsea Finn

ICRA 2024

Rethinking Optimization with Differentiable Simulation from a Global Perspective

Rika Antonova*, Jingyun Yang*, Krishna Murthy Jatavallabhula, Jeannette Bohg

CoRL 2022, Oral (6.5% acceptance rate)

Learning Periodic Tasks from Human Demonstrations

Jingyun Yang, Junwu Zhang, Connor Settle, Akshara Rai, Rika Antonova, Jeannette Bohg

ICRA 2022

Visually-Grounded Library of Behaviors for Manipulating Diverse Objects across Diverse Configurations and Views

Jingyun Yang*, Hsiao-Yu Fish Tung*, Yunchu Zhang*, Gaurav Pathak, Ashwini Pokle, Christopher G. Atkeson, Katerina Fragkiadaki

CoRL 2021

Learning to Coordinate Manipulation Skills via Skill Behavior Diversification

Youngwoon Lee, Jingyun Yang, Joseph J. Lim

ICLR 2020

Keyframing the Future: Keyframe Discovery for Visual Prediction and Planning

Karl Pertsch*, Jingyun Yang, Shenghao Zhou, Konstantinos G. Derpanis, Kostas Daniilidis, Joseph J. Lim, Andrew Jaegle

L4DC 2020