Yikai Wang 王毅楷

I am a PhD candidate at Carnegie Mellon University, advised by Prof. Ding Zhao.

I was a research assistant at the ISR Lab, IIIS, Tsinghua University, and a student intern at Carnegie Mellon University. I am honored to have been advised by Prof. Jianyu Chen, Prof. Guanya Shi, and Prof. Ding Zhao.

Email  /  Google Scholar

profile photo

Research

project image

APEX: Learning Adaptive High-Platform Traversal for Humanoid Robots


Yikai Wang*, Tingxuan Leng*, Changyi Lin*, Shiqi Liu, Shir Simon, Bingqing Chen, Jonathan Francis, Ding Zhao
Under Review
arxiv / video / website

APEX enables adaptive humanoid traversal of high platforms via contact-rich climbing and a unified multi-skill policy. Leveraging a ratchet progress reward and LiDAR-based perception, the system adapts its behaviors to terrain geometry and approach conditions, achieving zero-shot sim-to-real traversal of 0.8 m (114% of leg length) platforms on a 29-DoF humanoid.

project image

Guardians as You Fall: Active Mode Transition for Safe Falling


Yikai Wang, Mengdi Xu, Guanya Shi, Ding Zhao
IEEE International Automated Vehicle Validation Conference (IAVVC), 2024
Best Paper Award – Innovation
arxiv / video / code / website

We propose Guardians as You Fall (GYF), a safe falling and recovery framework that actively tumbles to stable modes to reduce damage in highly dynamic scenarios. The key idea of GYF is to adaptively traverse different stable modes via active tumbling before the robot reaches irrecoverable poses. GYF offers a new perspective on safe falling and recovery in locomotion tasks, potentially enabling much more aggressive exploration of existing agile locomotion skills.

project image

Learning Robust, Agile, Natural Legged Locomotion Skills in the Wild


Yikai Wang*, Zheyuan Jiang*, Jianyu Chen
CoRL 2023 Workshop on Robot Learning in Athletics
arxiv / video / website

We propose a new framework for learning robust, agile, and natural legged locomotion skills over challenging terrain with only proprioceptive perception. We incorporate an adversarial training branch, based on real animal locomotion data, into a teacher-student training pipeline for robust sim-to-real transfer. To the best of our knowledge, this is the first learning-based method enabling quadrupedal robots to gallop in the wild.




Projects





Design and source code from Jon Barron's website