Context
Learning-based manipulation depends heavily on high-quality demonstrations.
Manual teleoperation alone was too slow and inconsistent to produce data at scale, so I built a pipeline to automate and scale demonstration generation.
My Contribution
- Designed automated trajectory generation via motion planning
- Integrated human teleoperation for complex edge cases
- Built logging and dataset structuring tools
- Unified real and simulated data formats
Technical Details
The pipeline combines:
- OMPL-based motion planning for structured trajectories
- Teleoperation for contact-rich and ambiguous tasks
- Automated trajectory validation and filtering
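The validation and filtering step can be sketched as simple per-trajectory checks. The limits, thresholds, and function names below are illustrative assumptions, not the pipeline's actual code.

```python
# Illustrative trajectory filter: limits and names are hypothetical,
# not the real pipeline's API.

JOINT_LIMIT = 3.14   # rad, assumed symmetric limit per joint
MAX_STEP = 0.2       # rad, assumed max change between consecutive waypoints

def within_limits(traj):
    """Reject trajectories that leave the joint workspace."""
    return all(abs(q) <= JOINT_LIMIT for waypoint in traj for q in waypoint)

def smooth_enough(traj):
    """Reject trajectories with large jumps between consecutive waypoints."""
    return all(
        abs(b - a) <= MAX_STEP
        for w0, w1 in zip(traj, traj[1:])
        for a, b in zip(w0, w1)
    )

def filter_trajectories(trajs):
    """Keep only trajectories passing every check."""
    return [t for t in trajs if within_limits(t) and smooth_enough(t)]

good = [[0.0, 0.1], [0.1, 0.2]]    # small steps, inside limits
jumpy = [[0.0, 0.0], [1.0, 0.0]]   # 1.0 rad jump between waypoints: rejected
out_of_range = [[4.0, 0.0]]        # exceeds joint limit: rejected
kept = filter_trajectories([good, jumpy, out_of_range])
# kept == [good]
```

In practice, checks like these run before an episode is logged, so obviously invalid plans never enter the dataset.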
Each episode is stored with:
- Observations
- Actions
- Task metadata
- Success labels
This structure supports clean training for reinforcement learning (RL), behavioral cloning (BC), and inverse reinforcement learning (IRL).
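As a minimal sketch, each episode could be represented as a small record and serialized to JSON. The field names mirror the list above, but the concrete schema and helpers are assumptions for illustration.

```python
# Hypothetical episode record; field names follow the list above,
# the real pipeline's schema may differ.
import json
from dataclasses import dataclass, asdict

@dataclass
class Episode:
    observations: list   # per-step observations (e.g. joint states)
    actions: list        # per-step commands
    metadata: dict       # task name, data source (planned / teleop), etc.
    success: bool        # outcome label used for filtering and weighting

def save_episode(ep, path):
    """Write one episode to disk as JSON."""
    with open(path, "w") as f:
        json.dump(asdict(ep), f)

def load_episode(path):
    """Read an episode back into the same record type."""
    with open(path) as f:
        return Episode(**json.load(f))

ep = Episode(
    observations=[[0.0, 0.1], [0.1, 0.2]],
    actions=[[0.05], [0.05]],
    metadata={"task": "pick_place", "source": "teleop"},
    success=True,
)
```

Keeping real and simulated episodes in one schema like this is what lets downstream RL, BC, and IRL code consume both without special cases.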
Impact
- Doubled data acquisition throughput
- Improved demonstration consistency
- Directly supported downstream learning experiments
Media
TODO: Replace with your captions (task, environment, and what “validation/filtering” rejected).