About Me

I'm currently a Google Brain Resident. Before joining Google, I studied computer science and statistics at the University of California, Berkeley. During my undergraduate studies, I worked with Professor Pieter Abbeel, Professor Sergey Levine, and Professor Alexei Efros as a research assistant in the Berkeley Artificial Intelligence Research (BAIR) Lab.



Publications

Automatic Goal Generation for Reinforcement Learning Agents

David Held*, Xinyang Geng*, Carlos Florensa*, Pieter Abbeel


We propose a method that allows an agent to automatically discover the range of tasks that it is capable of performing in its environment. We use a generator network to propose tasks for the agent to try to achieve, specified as sets of goal states.
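The core idea can be sketched in a few lines: keep only the proposed goals the agent sometimes, but not always, achieves. This is a minimal toy illustration; the 1-D goal space and the `agent_success_rate` oracle are stand-ins for the paper's learned generator network and the real agent.

```python
import random

def agent_success_rate(goal, skill):
    # Hypothetical stand-in for evaluating the agent: it succeeds more
    # often on goals that are close relative to its skill level.
    return max(0.0, min(1.0, 1.0 - abs(goal) / (skill + 1e-6)))

def goals_of_intermediate_difficulty(candidates, skill, lo=0.1, hi=0.9):
    # Keep goals the agent sometimes, but not always, achieves -- these
    # form the training frontier the generator should propose.
    return [g for g in candidates if lo <= agent_success_rate(g, skill) <= hi]

random.seed(0)
candidates = [random.uniform(-2.0, 2.0) for _ in range(100)]
frontier = goals_of_intermediate_difficulty(candidates, skill=1.0)
print(len(frontier), "goals at intermediate difficulty")
```

In the paper the filtering is implicit in the generator's training signal; here it is written out as an explicit filter to make the idea concrete.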

Real-Time User-Guided Image Colorization with Learned Deep Priors

Richard Zhang*, Jun-Yan Zhu*, Phillip Isola, Xinyang Geng, Angela S. Lin, Tianhe Yu, Alexei A. Efros

In SIGGRAPH 2017.


We propose a deep learning approach for user-guided image colorization. Our system directly maps a grayscale image, along with sparse, local user “hints”, to an output colorization with a deep convolutional neural network.
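A way to picture the input to such a network: stack the grayscale channel with the sparse user hints and a mask marking where hints exist. This is an illustrative sketch; the shapes and channel layout are assumptions, not the paper's exact architecture.

```python
import numpy as np

def build_network_input(gray, hint_ab, hint_mask):
    """Stack grayscale, sparse ab hints, and the hint mask into one tensor.

    gray:      (H, W)    lightness channel
    hint_ab:   (H, W, 2) user-provided color hints (zero where absent)
    hint_mask: (H, W)    1 where the user clicked, 0 elsewhere
    """
    return np.concatenate(
        [gray[..., None], hint_ab, hint_mask[..., None]], axis=-1
    )  # (H, W, 4): the CNN maps this to a full (H, W, 2) color image

H, W = 32, 32
gray = np.random.rand(H, W)
hint_ab = np.zeros((H, W, 2))
hint_mask = np.zeros((H, W))
hint_ab[10, 10] = [0.3, -0.2]   # one user "hint" at pixel (10, 10)
hint_mask[10, 10] = 1.0
x = build_network_input(gray, hint_ab, hint_mask)
print(x.shape)  # (32, 32, 4)
```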

Deep Reinforcement Learning for Tensegrity Robot Locomotion

Marvin Zhang*, Xinyang Geng*, Jonathan Bruce*, Ken Caluwaerts, Massimo Vespignani, Vytas SunSpiral, Pieter Abbeel, Sergey Levine

In ICRA, 2017.

Also, in NIPS Deep Reinforcement Learning Workshop, 2016.


We collaborated with NASA Ames to explore the challenges associated with learning locomotion strategies for tensegrity robots, a class of compliant robots that hold promise for future planetary exploration missions. We devised a novel extension of mirror descent guided policy search to learn locomotion gaits for the SUPERball tensegrity robot, both in simulation and on the physical robot.
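To give a flavor of the mirror-descent idea underlying the method: a policy update can be written as an exponentiated reweighting of actions, which corresponds to a KL-regularized improvement step. This toy sketch assumes a discrete action set with known advantages; the actual algorithm alternates optimizing local controllers with supervised projection onto a global policy.

```python
import math

def mirror_descent_step(policy, advantages, step_size=1.0):
    # Exponentiated-gradient update: reweight each action's probability
    # by exp(step_size * advantage), then renormalize. This is the
    # mirror-descent step under a KL divergence geometry.
    weights = [p * math.exp(step_size * a) for p, a in zip(policy, advantages)]
    total = sum(weights)
    return [w / total for w in weights]

policy = [0.25, 0.25, 0.25, 0.25]
advantages = [1.0, 0.0, -1.0, 0.0]
new_policy = mirror_descent_step(policy, advantages)
print(new_policy)  # probability shifts toward the high-advantage action
```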