RSS Workshop on Planning and Control with Imperfect Sensors and Perception



Overview

Autonomous robotic systems are increasingly deployed in unstructured, open-world, and safety-critical environments, where sensing and perception are inherently imperfect. Classical modular autonomy pipelines often assume that perception provides a sufficiently accurate state estimate for planning and control. In practice, however, robots must operate under partial observability, uncertain semantic information, limited fields of view, noisy localization, and perception modules that may degrade under distribution shift.

At the same time, modern perception systems are shifting from geometric state estimation toward richer semantic and language-conditioned representations. Advances in vision-language models (VLMs), vision-language-action (VLA) systems, and large-scale multimodal learning enable robots not only to recognize objects, but to reason about context, relationships, and the implications of their actions. These advances open opportunities for more general and context-aware autonomy, but also raise fundamental challenges: How should such high-dimensional, semantic, and often uncertain representations interface with planning and control? How can semantic reasoning be translated into actionable decisions with reliability and safety guarantees? By blurring the boundary between perception and decision-making, these developments make the design of the perception-decision interface a central challenge.

This workshop examines emerging challenges in tightly coupled perception, planning, and control under these new conditions. In particular, we focus on settings where decision-making must operate over uncertain, high-dimensional, and semantically structured representations, rather than well-defined state estimates. We are interested in both modular and end-to-end approaches, and the trade-offs between explicit modeling of uncertainty and implicit reasoning in learned systems. The workshop will bring together researchers from robotics, machine learning, controls, formal methods, and field robotics to foster interdisciplinary discussion on perception-aware autonomy. Our goal is to identify key open problems at the interface of perception and decision-making, evaluate emerging paradigms enabled by modern learned perception, and outline principled directions for building reliable autonomous systems in the real world.


Discussion Questions

  • How should uncertainty from perception—especially from learned and semantic models—be represented and incorporated into planning and control?

  • How can high-dimensional perception outputs be translated into representations, constraints, and objectives for decision-making?

  • What representations best support both reasoning and control (e.g., belief states, scene graphs, latent/world models), and how should they be constructed?

  • How do we reason about context, interactions, and temporal dynamics in perception-aware planning?

  • How do learned planning and control methods interact with learned perception, and what new challenges arise when both perception and decision-making are data-driven?

  • What new failure modes arise from modern learned perception systems (e.g., hallucination, distribution shift), and how should decision-making and control systems account for them?

  • What are the trade-offs between modular pipelines and end-to-end learning approaches in perception, planning, and control?

  • What is the role of formal guarantees and verification when perception is uncertain, learned, and semantically rich?

  • How should we evaluate perception-aware autonomy? What benchmarks, datasets, and metrics are needed to measure reliability and safety in real-world deployment?

Call for Papers

We invite submissions of extended abstracts presenting novel ideas on topics relevant to the workshop themes, including but not limited to:

  • Planning and control under sensing uncertainty
  • Vision-based and learned perception (including VLMs and VLAs)
  • Active perception and information gathering
  • Context-aware and semantic planning and control
  • Classification, object, and semantic uncertainty
  • Partially observable Markov decision processes (POMDPs)
  • Sim-to-real transfer and uncertainty quantification
  • Multi-agent interaction with semantic information
  • Learned planning and control

We welcome both ongoing work and recently published results. Accepted contributions will be presented as posters during the workshop, with selected submissions invited for spotlight talks. All accepted abstracts and posters will be made publicly available on the workshop website. The workshop is non-archival, and submissions may be published elsewhere. Abstracts should be at most 2 pages in RSS paper format. Submission link: https://openreview.net/group?id=roboticsfoundation.org/RSS/2026/Workshop/WPCIS.

Important Dates

  • Abstract submission: June 5, 2026
  • Notification: June 12, 2026
  • Workshop: July 17, 2026

Organizers

For inquiries, please contact: rss2026wpcis@gmail.com