Jingyun Yang
I am a first-year master's student studying Machine Learning at Carnegie Mellon University (CMU). Since June 2020, I have been working with Prof. Katerina Fragkiadaki on deep learning, computer vision, and robotics. Previously, I was a member of the USC Cognitive Learning for Vision and Robotics Lab (CLVR), advised by Prof. Joseph J. Lim.

Publications

Visually-Grounded Library of Behaviours for Generalizing Manipulation Across Objects, Configurations, and Views
In Submission
We propose a visually-grounded library of behaviors approach for manipulating diverse objects across a wide range of initial and goal configurations and camera placements. We disentangle the standard image-to-action or image-to-state-to-action process into two separate modules: (1) a behavior selector built with affordance-aware and view-invariant visual feature representations that conditions on the invariant object properties to select a behavior, and (2) a library of behaviors, each of which conditions on the variable object properties to act in the environment. Our framework outperforms various learning-based and non-learning baselines in grasping and pushing tasks.
Learning to Coordinate Manipulation Skills via Skill Behavior Diversification
International Conference on Learning Representations (ICLR), 2020
Autonomous agents with multiple end-effectors can perform complex tasks by coordinating the sub-skills of each end-effector. To realize temporal and behavioral coordination of skills, we propose a hierarchical framework that first individually trains the sub-skills of each end-effector with skill behavior diversification, and then learns to coordinate the end-effectors using the diverse behaviors of these skills. We demonstrate that our proposed framework efficiently learns sub-skills with diverse behaviors and coordinates them to solve challenging collaborative control tasks.
KeyIn: Keyframing for Visual Planning
Learning for Dynamics and Control (L4DC), 2020
Flexible and efficient reasoning about temporal sequences requires abstract representations that compactly capture the important information in the sequence. We propose to construct such representations using a hierarchical Keyframe-Inpainter (KeyIn) model that first generates keyframes and their temporal placement and then inpaints the sequences between keyframes. We propose a fully differentiable formulation for efficiently learning the keyframe placement. We show that KeyIn finds informative keyframes in several datasets with diverse dynamics. When evaluated on a planning task, KeyIn outperforms other recent proposals for learning hierarchical representations.

Awards and Honors

USC University Trustees Award
Spring 2020
USC Provost's Undergraduate Research Fellowship
Fall 2019, Summer 2019
Tau Beta Pi, the Engineering Honor Society
Spring 2018
USC Academic Achievement Award
Fall 2017, Spring 2017

Teaching

CSCI 356 - Introduction to Computer Systems
USC
Course Producer
Spring 2020
CSCI 170 - Discrete Methods in Computer Science
USC
Course Producer
Fall 2019, Spring 2019
CSCI 270 - Introduction to Algorithms and Theory of Computing
USC
Course Producer
Fall 2018