Robotic and automation systems are expected to carry out increasingly sophisticated tasks in complex environments. This places stringent demands on their autonomy stacks to enforce operational constraints and safety guarantees. Safe control synthesis methods, including model predictive control, reachability analysis, and control barrier functions, commonly rely on explicit mathematical definitions of safety constraints together with known system and environment models. However, many current applications incorporate perception uncertainty or semantic concepts into their task specifications, and safety requirements may be specified implicitly, abstractly, or incompletely.
This workshop aims to investigate the concept and formulation of such “intangible” safety constraints. Rather than being specified explicitly as mathematical expressions, these constraints may be described in natural language or through semantic concepts, observed as anomalies in sensor data, or inferred from demonstrations and prior experience. The techniques required either to identify such safety constraints or to propagate uncertainty through them remain largely unexplored.
Our workshop is interested in various facets of these techniques, including: identification of safety constraints from demonstrations, compatibility of learned constraints with control design methodologies, interaction with humans or vision-language models to elicit safety information, and the associated uncertainty quantification. The workshop invites experts in the field to discuss formulation, optimization, and guarantees in the presence of intangible safety constraints.