Overview

In recent years, robot learning has made significant progress, yielding policies that are more reliable and deployable across a wide range of scenes, tasks, and robot embodiments. This increase in capability calls for a rethinking of the robot development lifecycle of design, evaluation, and deployment. The traditional lifecycle centers on designing a method that improves evaluation scores on a handful of tasks at the researcher's own institution; as policies grow more capable, we instead need evaluation frameworks that are scalable, comprehensive, and reproducible. Rethinking this lifecycle should be treated as a first-class problem alongside policy design. This workshop aims to address this gap by opening discussion on:

  • What are good evaluation protocols and methods for robot learning?
  • How can we make robot evaluation more reproducible and scalable, and less expensive?
  • How do we monitor robot status during deployment and ensure safety and performance?
  • How can research on safety and evaluation outside of robotics inform similar efforts within robotics?

Speakers

Panelists

Call for Papers

Submission Portal: OpenReview

Submission Format: Submissions should be 4-8 pages, with unlimited pages for references and appendices, and must be anonymized.

We encourage submissions on robot evaluation, deployment, safety, and related topics. Both long and short papers will be accepted.

Submission Deadline: August 30, 2025 AOE
Acceptance Notification: September 5, 2025
Camera-ready Deadline: September 12, 2025
Workshop Date: September 27, 2025

There will be a best paper award sponsored by Dyna Robotics.

Organizers

Advisory Committee

Schedule

Contact

Please feel free to send any queries via email to abrar.anwar@usc.edu.