Friday, July 12, 2024

Human-robot interaction risk analysis

What level of risk aversion is observed among humans when interacting with robots?
What are the preferred methods for human-robot interaction in crowded environments?
Which algorithms should roboticists employ to program robots for effective human interaction?
These questions were the focus of a study conducted by mechanical engineers and computer scientists at the University of California San Diego, recently presented at the International Conference on Robotics and Automation (ICRA) 2024 in Japan.
"This study represents the first known investigation into robots that infer human risk perception for intelligent decision-making in everyday contexts," stated Aamodh Suresh, the study's first author, who earned his Ph.D. under Professor Sonia Martinez Diaz in UC San Diego's Department of Mechanical and Aerospace Engineering. He now serves as a postdoctoral researcher at the U.S. Army Research Lab.
"Our goal was to develop a framework to understand human risk aversion in interactions with robots," said Angelique Taylor, the study's second author, who earned her Ph.D. in Computer Science and Engineering under Professor Laurel Riek at UC San Diego. Taylor is now a faculty member at Cornell Tech in New York.
The team turned to models from behavioral economics, but first had to decide which one to use. Because the study took place during the pandemic, the researchers designed an online experiment to find the answer.
In the study, a group of STEM undergraduate and graduate students role-played as Instacart shoppers in a simulation. They had to choose among three routes to reach the milk aisle of a grocery store, each taking between five and 20 minutes. Some routes took them past people infected with COVID-19, including one with a severe case, so each route carried a different risk of exposure to coughing, infected shoppers. The shortest route put participants closest to the most severely ill people, but shoppers were rewarded for reaching their goal quickly.
To the researchers' surprise, participants' survey responses consistently underestimated how willing they actually were to risk getting close to COVID-19-infected shoppers. "When there's a reward involved, people seem more willing to take risks," remarked Suresh.
As a result, when programming robots to interact with humans, the researchers chose to rely on prospect theory, a behavioral economics model developed by Daniel Kahneman, winner of the 2002 Nobel Prize in economics, together with Amos Tversky. The theory holds that people evaluate potential losses and gains relative to a reference point.

Under this framing, people weigh losses more heavily than gains. For example, people will generally choose to receive a guaranteed $450 rather than take a 50% chance of winning $1100. In the study, subjects likewise focused on the certain reward of reaching their goal quickly rather than weighing the uncertain risk of contracting COVID-19.
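Prospect theory is usually formalized with a value function that is concave for gains and steeper for losses, combined with a probability weighting function. Below is a minimal sketch of that valuation, assuming the standard Kahneman-Tversky functional forms and commonly cited parameter estimates (alpha ≈ beta ≈ 0.88, lambda ≈ 2.25, gamma ≈ 0.61); it is an illustration of the theory, not the decision model actually used in the study.

```python
# Minimal sketch of a prospect-theory valuation (Kahneman & Tversky).
# Parameter values are commonly cited estimates, not figures from the study.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of an outcome x relative to a reference point of 0.
    Losses (x < 0) loom larger than gains of the same size (loss aversion)."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

def weight(p, gamma=0.61):
    """Probability weighting: small probabilities are overweighted,
    moderate and large probabilities are underweighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect_value(prospects):
    """Overall value of a gamble given as (probability, outcome) pairs."""
    return sum(weight(p) * value(x) for p, x in prospects)

# The $450-versus-gamble example from the text:
sure_thing = prospect_value([(1.0, 450)])          # guaranteed $450
gamble = prospect_value([(0.5, 1100), (0.5, 0)])   # 50% chance of $1100
print(round(sure_thing, 1), round(gamble, 1))      # ~216 vs ~200
```

With these parameters the guaranteed $450 is valued more highly than the gamble, even though the gamble's expected value ($550) is larger, which mirrors the pattern described above: a certain, immediate reward outweighs an uncertain one.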

The researchers also asked participants how they would prefer robots to communicate their intentions. Responses included speech, gestures, and touch screens.
