Bayesian inverse reinforcement learning for collective animal movement

June 14, 2022

Agent-based methods allow for defining simple rules that generate complex group behaviors. The governing rules of such models are typically set a priori, and parameters are tuned from observed behavior trajectories. Instead of making simplifying assumptions across all anticipated scenarios, inverse reinforcement learning provides inference on the short-term (local) rules governing long-term behavior policies by using properties of a Markov decision process. We use the computationally efficient linearly-solvable Markov decision process to learn the local rules governing collective movement for a simulation of the self-propelled particle (SPP) model and a data application for a captive guppy population. The estimation of the behavioral decision costs is done in a Bayesian framework with basis function smoothing. We recover the true costs in the SPP simulation and find that the guppies value collective movement more than targeted movement toward shelter.
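The computational appeal of the linearly-solvable Markov decision process (LMDP) mentioned in the abstract is that, after the change of variables z(s) = exp(-v(s)), the Bellman equation becomes linear in the desirability function z. The toy sketch below illustrates this on a five-state chain with an absorbing "shelter" state; the chain, costs, and passive dynamics are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Illustrative first-exit LMDP (Todorov-style formulation).
# State 4 is terminal (e.g. shelter); all numbers are made up.
n = 5
q = np.full(n, 0.1)        # per-step state costs
q[4] = 0.0                 # terminal cost

# Passive (uncontrolled) dynamics: a lazy random walk on the chain.
P = np.zeros((n, n))
for s in range(n - 1):
    P[s, max(s - 1, 0)] += 0.5
    P[s, s + 1] += 0.5
P[4, 4] = 1.0              # terminal state is absorbing

# Desirability z = exp(-v) satisfies the LINEAR fixed point
#   z(s) = exp(-q(s)) * sum_{s'} P(s'|s) z(s')  for non-terminal s,
# with z pinned to exp(-q) at terminal states.
z = np.ones(n)
for _ in range(500):
    z = np.exp(-q) * (P @ z)
    z[4] = np.exp(-q[4])   # boundary condition at the terminal state

v = -np.log(z)             # cost-to-go (value) function

# Optimal controlled dynamics reweight the passive ones:
#   u*(s'|s) proportional to P(s'|s) z(s')
U = P * z[None, :]
U = U / U.sum(axis=1, keepdims=True)
```

Because the fixed point is linear in z, solving it reduces to a (sparse) linear-algebra problem rather than a full nonlinear Bellman iteration, which is what makes the LMDP attractive for embedding inside a Bayesian inference loop over the costs q.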

Publication Year 2022
Title Bayesian inverse reinforcement learning for collective animal movement
DOI 10.1214/21-AOAS1529
Authors Toryn L. J. Schafer, Christopher K. Wikle, Mevin Hooten
Publication Type Article
Publication Subtype Journal Article
Series Title Annals of Applied Statistics
Index ID 70255287
Record Source USGS Publications Warehouse
USGS Organization Coop Res Unit Seattle