Research

Tractable approximate Bayesian inference for introspective autonomy

One of the most socially impactful applications of robotics is replacing or safeguarding humans in so-called “3D” (dull, dirty, or dangerous) tasks.  Many of these tasks require robots to make safety-critical decisions while operating in complex environments with limited information.  For example, an autonomous car approaching a partially obscured intersection must account for the possibility of other vehicles or pedestrians that it cannot yet perceive in order to plan a safe path through that intersection.  Similarly, an explosive ordnance disposal (EOD) robot must reason over the set of potential locations of explosive devices in large, complex, and unexplored environments in order to locate and disarm them safely.  In general, this class of applications requires robots that can accurately and explicitly model their own uncertainty about the state of the world in order to devise safe and effective plans.

Unfortunately, current state-of-the-art robotic perception systems lack the ability to tractably model such complex beliefs.  Specifically, these systems formulate the perception problem as maximum likelihood estimation (or, more generally, M-estimation), and then apply nonlinear optimization to recover a single point estimate.  While this approach enables fast (i.e., real-time) operation, it provides information about only a single possible state of the world.  As a result, existing perception systems can dramatically underestimate their own uncertainty about the true state of the world, and can even entirely miss the existence of alternative but equally plausible hypotheses.  The practical consequence is that current state-of-the-art robotic perception systems will often report high confidence in completely erroneous estimates.
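
To make this concrete, the sketch below works through a hypothetical one-dimensional range-only localization problem (an illustration, not drawn from our publications): a single range measurement to a landmark at the origin induces a bimodal posterior over the robot's position, but a point-estimate pipeline commits to one mode and drastically understates the remaining uncertainty.

```python
import numpy as np

def log_posterior(x, measured_range=2.0, sigma=0.1):
    # Likelihood: a range sensor reports the distance |x| to a landmark at the
    # origin as measured_range, corrupted by Gaussian noise of std dev sigma.
    log_lik = -0.5 * ((np.abs(x) - measured_range) / sigma) ** 2
    # Broad Gaussian prior on the robot's position.
    log_prior = -0.5 * (x / 10.0) ** 2
    return log_lik + log_prior

xs = np.linspace(-5.0, 5.0, 2001)
dx = xs[1] - xs[0]
p = np.exp(log_posterior(xs) - log_posterior(xs).max())
p /= p.sum() * dx  # normalize to a proper density on the grid

x_map = xs[np.argmax(p)]                              # what a point-estimate pipeline reports
mean = np.sum(xs * p) * dx                            # full-posterior summary statistics
std = np.sqrt(np.sum((xs - mean) ** 2 * p) * dx)

print(f"MAP estimate:        {x_map:+.2f}")
print(f"posterior mean/std:  {mean:+.2f} / {std:.2f}")
# The posterior has two equally plausible modes near x = -2 and x = +2; the point
# estimate commits to one of them and gives no indication that the other exists.
```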

NEURAL is addressing these limitations by developing tractable algorithms for approximating full Bayesian posterior distributions in robotic perception tasks, thereby enabling accurate modeling of uncertainty.  In particular, we are exploring the design of nonparametric, sample-based inference methods that exploit the conditional independence relations exposed by probabilistic graphical models to achieve efficient computation on complex, high-dimensional problems.  These algorithms can be applied to the same factor graph models already used to build state-of-the-art robotic perception systems, while delivering substantially more accurate posterior estimates and thereby significantly improving overall system safety and reliability.
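
As a simplified illustration of the sample-based idea, the sketch below runs plain importance sampling on a hypothetical two-variable chain factor graph (variable names and factors are illustrative only, not our actual system).  Because the graph is a chain, the joint posterior factorizes and weighted samples can be propagated through one factor at a time rather than collapsed into a single optimized point estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5000  # number of posterior samples

# Toy chain factor graph:  prior(x0) -- odometry(x0, x1) -- range(x1)

# Factor 1: Gaussian prior on the initial position x0.
x0 = rng.normal(loc=0.0, scale=2.0, size=N)

# Factor 2: odometry constraint (the robot believes it moved +1.0 m, noise 1.0 m).
x1 = x0 + rng.normal(loc=1.0, scale=1.0, size=N)

# Factor 3: range measurement to a landmark at 3.0 m.  The likelihood
# |x1 - landmark| ~ N(2.0, 0.1^2) is bimodal in x1, so the posterior is non-Gaussian.
landmark, measured_range, sigma = 3.0, 2.0, 0.1
log_w = -0.5 * ((np.abs(x1 - landmark) - measured_range) / sigma) ** 2

# Importance weights and resampling yield approximate draws from the joint posterior.
w = np.exp(log_w - log_w.max())
w /= w.sum()
idx = rng.choice(N, size=N, p=w)
x0_post, x1_post = x0[idx], x1[idx]

near_mode = x1_post < 3.0  # samples in the mode near x1 = 1 (vs. the mode near x1 = 5)
print(f"x1 posterior mean/std: {x1_post.mean():+.2f} / {x1_post.std():.2f}")
print(f"fraction of samples in the x1 = 1 mode: {near_mode.mean():.2f}")
# The sample set retains both posterior modes (weighted by the prior and odometry),
# whereas a point estimate or Gaussian approximation would collapse onto one of them.
```

In our published work on this topic (e.g., the T-RO paper listed below), normalizing flows take the place of this simple importance sampler, representing the non-Gaussian posteriors that arise in SLAM while remaining compatible with incremental factor-graph inference.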

Research themes

Nonparametric Bayesian inference, normalizing flows, probabilistic graphical models, uncertainty quantification

Selected Publications

Q. Huang, C. Pu, K. Khosoussi, D.M. Rosen, D. Fourie, J.P. How, and J.J. Leonard. “Incremental Non-Gaussian Inference for SLAM using Normalizing Flows”. IEEE Transactions on Robotics (2022), in press.