Last update January 11, 2025


Preface

Switching from autonomous vehicles to robotics wasn’t just a career move—it felt like stepping into the future. After five years in AVs, I thought I understood the frontier of technology, but robotics brought a fresh mix of excitement and a steep learning curve. The overlap between the two fields was reassuring, but navigating the rapid AI advancements in robotics without a guide felt like wandering through a dense forest.

Half a year later, I’ve built a mental map of the field—one that helps me see the big picture, identify the main challenges, and explore the most promising directions (all while making real robots walk and pick things up!).

Robotics has undergone a significant transformation over the last couple of years: from being confined to academic labs and narrow industrial applications to being ready to tackle the complexities of the real world. But to achieve the goal of truly intelligent robots, the field must expand to include input from far more minds than it does today.

That’s why I decided to write this lightweight guide to Robot Learning. I hope it helps you quickly build your understanding of the landscape so you can start making your own contributions to this exciting field!

Locomotion

Reinforcement Learning

Two talks by Vladlen Koltun inspired my career switch from autonomous vehicles to robotics:

  1. Drone Racing with RL

    Key takeaway: ML-powered edge systems can achieve remarkable speeds (100 km/h!) while processing all sensor data onboard—a fascinating real-world challenge.

    https://youtu.be/HGULBBAo5lA?si=L-nVWR5qD4ob-NhP

  2. Quadruped Locomotion

    Key takeaway: Even from a simple simulated environment (rigid surfaces only), RL can train robust policies that handle diverse real-world terrains, including snow, vegetation, crumbling earth, and shallow streams. The training and deployment environments differ radically, yet the policy bridges the gap: magic!

    https://www.youtube.com/embed/9j2a1oAHDL8


If you’re curious about training locomotion policies in simulation, read the ETH-NVIDIA paper Learning to Walk in Minutes Using Massively Parallel Deep Reinforcement Learning. It shows how massively parallel, GPU-accelerated simulation (Isaac Gym) lets you train a walking policy in minutes with a standard RL algorithm (PPO): thousands of environments are stepped in lockstep, so a huge batch of experience is collected per wall-clock second.
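To build intuition for why stepping thousands of environments at once is so effective, here is a toy sketch of batched rollout collection. This is not Isaac Gym's API; it's just NumPy standing in for a vectorized physics step, with made-up dynamics and a made-up linear policy, to show how one array operation advances every environment simultaneously.

```python
import numpy as np

# Toy sketch of massively parallel rollout collection, in the spirit of
# GPU-vectorized simulators like Isaac Gym. All names (batched_step, W_dyn,
# W_pol) are invented for illustration.

NUM_ENVS = 4096          # thousands of environments stepped in lockstep
OBS_DIM, ACT_DIM = 8, 2
HORIZON = 16             # short rollout, as in on-policy RL

rng = np.random.default_rng(0)
W_dyn = rng.normal(size=(ACT_DIM, OBS_DIM))            # fake dynamics params
W_pol = rng.normal(scale=0.1, size=(OBS_DIM, ACT_DIM)) # toy linear policy

def batched_step(obs, act):
    """Fake batched dynamics: one array op advances ALL envs at once."""
    next_obs = 0.9 * obs + 0.1 * np.tanh(act @ W_dyn)  # (NUM_ENVS, OBS_DIM)
    reward = -np.sum(next_obs ** 2, axis=-1)           # (NUM_ENVS,)
    return next_obs, reward

obs = rng.normal(size=(NUM_ENVS, OBS_DIM))
rewards = []
for _ in range(HORIZON):
    act = obs @ W_pol            # batched policy: one matmul for all envs
    obs, r = batched_step(obs, act)
    rewards.append(r)

# HORIZON steps x NUM_ENVS envs of experience from one short loop
total_transitions = HORIZON * NUM_ENVS
print(total_transitions)  # 65536
```

On a GPU the same pattern runs the physics and the policy as batched tensor ops, which is what collapses training time from days to minutes.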
