Probabilistically Safe Robot Planning with Confidence-Based Human Predictions

The navigation framework FaSTrack has been shown, both theoretically and experimentally, to work well in cluttered static environments. Humans, however, are able to move safely and efficiently through dense, dynamic crowds of other pedestrians. For a robot this is a very challenging task in general: prediction in the joint state space of multiple humans scales exponentially with the number of humans.

Research on the psychology of human prediction suggests that people use simple noisily rational models of others to make fast predictions of their future movement. We have high confidence in these predictions when other humans match our simplified models (for example, when they are walking in straight lines along the sidewalk). When a nearby human acts unexpectedly, our prediction of their future motion becomes less confident and we act more conservatively. We can apply the same reasoning in the algorithms our autonomous systems use to navigate safely around humans.

Video summaries of the original project and of its extension to multi-human, multi-robot scenarios are below.


The Thorough Explanation

We developed a confidence-aware prediction framework that lets us employ simple models of human motion while reasoning about the mismatch between these models and the observed human behavior. The framework is built on a probabilistic Boltzmann model of human behavior, in which the human's reward function and dynamics can be learned or hand-encoded.
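
Below is a minimal Python sketch of such a noisily rational (Boltzmann) action model, assuming a discrete set of candidate actions with known Q-values; the Q-values and the temperature beta used here are hypothetical placeholders, not those from the papers.

```python
import numpy as np

def boltzmann_policy(q_values, beta):
    """P(u | x; beta) is proportional to exp(beta * Q(x, u)).

    Large beta -> a nearly rational human who almost always picks the
    highest-value action; beta near 0 -> a nearly uniform, unpredictable human.
    """
    logits = beta * q_values
    logits -= logits.max()               # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Hypothetical Q-values for five candidate headings at the current human state.
q_x = np.array([1.0, 0.8, 0.2, -0.5, -1.0])
print(boltzmann_policy(q_x, beta=5.0))   # peaked: the model is trusted
print(boltzmann_policy(q_x, beta=0.1))   # near-uniform: the model says little
```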

noisily rational.gif

We can treat this probability distribution as an obstacle for the robot. When the low-level motion planner samples a state and time, we integrate the predicted distribution over the tracking error bound (TEB) centered at that state. This gives the probability that the human will be inside the tracking error bound at that state and time. If this probability is above a set threshold, the sample is considered unsafe and the robot must find a new trajectory.
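
As a rough illustration (not the exact implementation from the papers), the check below assumes the prediction is a discrete occupancy distribution over a planar grid and approximates the TEB as a disc of radius teb_radius around the sampled robot position; the grid, radius, and 1% threshold are all hypothetical.

```python
import numpy as np

def prob_human_in_teb(occupancy, grid_xy, robot_xy, teb_radius):
    """Probability mass of the predicted human position lying inside the TEB."""
    dists = np.linalg.norm(grid_xy - robot_xy, axis=-1)
    return float(occupancy[dists <= teb_radius].sum())

def is_sample_safe(occupancy, grid_xy, robot_xy, teb_radius, threshold=0.01):
    """Reject a sampled (state, time) if the collision probability is too high."""
    return prob_human_in_teb(occupancy, grid_xy, robot_xy, teb_radius) < threshold

# Example: a 50x50 grid over a 5 m x 5 m room with a placeholder prediction.
xs, ys = np.meshgrid(np.linspace(0, 5, 50), np.linspace(0, 5, 50))
grid_xy = np.stack([xs, ys], axis=-1).reshape(-1, 2)
occupancy = np.full(len(grid_xy), 1.0 / len(grid_xy))   # uniform, for illustration
print(is_sample_safe(occupancy, grid_xy, np.array([2.5, 2.5]), teb_radius=0.5))
```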

collision.png

However, we can’t assume our predictions will always be accurate. Real humans will not always match the models we use to predict their behavior. For example, if a bee flies into the room, we may not be able to accurately predict the human’s next actions.

bee.gif

By maintaining a Bayesian belief over the scalar temperature parameter β, our framework automatically adjusts the variance of the distribution based on measurements. When the observed human behavior is well explained by the model, our framework produces a tight, “confident” distribution over future human motion. When our model does not explain the human behavior well, the probability distribution over future human states increases in variance, and thus in uncertainty. This lets us take advantage of known structure in human motion when it exists, while maintaining safety when the assumed structure is wrong.
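
Here is a minimal sketch of this belief update, assuming a small discrete set of candidate β values and the Boltzmann likelihood sketched above; the candidate βs, Q-values, and observed action are hypothetical.

```python
import numpy as np

BETAS = np.array([0.1, 1.0, 10.0])        # candidate model-confidence levels

def boltzmann_policy(q_values, beta):
    logits = beta * q_values - np.max(beta * q_values)
    probs = np.exp(logits)
    return probs / probs.sum()

def update_belief(belief, q_values, observed_action):
    """Bayes rule: b'(beta) is proportional to P(u_obs | x; beta) * b(beta)."""
    likelihoods = np.array([boltzmann_policy(q_values, b)[observed_action]
                            for b in BETAS])
    posterior = likelihoods * belief
    return posterior / posterior.sum()

belief = np.ones(len(BETAS)) / len(BETAS)        # uniform prior over beta
q_x = np.array([1.0, 0.5, -1.0])                 # hypothetical Q-values at state x
belief = update_belief(belief, q_x, observed_action=0)   # model-consistent action
print(belief)   # mass shifts toward large beta -> tighter, more confident predictions
```

Observing actions the model explains well shifts the belief toward large β (tight predictions), while surprising actions shift it toward small β (high-variance predictions).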

bayes.gif

Demonstrations

Hardware demonstrations of our probabilistically safe robot planning with confidence-based human predictions can be seen in the videos above.

Challenges and Next Steps

An immediate challenge is how to adapt the model we use for human prediction based on computational needs. For example, we would like to use cheap predictive models when possible, but may need more sophisticated models for complicated interactions and environments. How much can we simplify human models, and what cost is associated with that simplification?

An additional next step is to perform the prediction in continuous space (for example, using sampling-based methods) rather than on a grid.

If you have more ideas or thoughts, please don't hesitate to contact me!

Related papers

[1] Andrea Bajcsy*, Sylvia Herbert*, David Fridovich-Keil, Jaime F. Fisac, Sampada Deglurkar, Anca D. Dragan, and Claire J. Tomlin, “A Scalable Framework for Real-Time Multi-Robot, Multi-Human Collision Avoidance.” IEEE International Conference on Robotics and Automation (ICRA), 2019.

[2] Jaime F. Fisac*, Andrea Bajcsy*, Sylvia Herbert, David Fridovich-Keil, Steven Wang, Claire J. Tomlin, and Anca D. Dragan, “Probabilistically Safe Robot Planning with Confidence-Based Human Predictions.” Robotics: Science and Systems (RSS), 2018.

[3] David Fridovich-Keil*, Andrea Bajcsy*, Jaime F. Fisac, Sylvia Herbert, Steven Wang, Anca D. Dragan, and Claire J. Tomlin, “Confidence-aware motion prediction for real-time collision avoidance.” International Journal of Robotics Research (invited paper, accepted).