Calibrating a Motion Model Based on Reinforcement Learning for Pedestrian Simulation
Francisco Martinez-Gil, Miguel Lozano, Fernando Fernández
Motion in Games, November 2012, pp. 302--313.
Abstract: In this paper, the calibration of a framework based on Multi-agent Reinforcement Learning (RL) for generating motion simulations of pedestrian groups is presented. The framework sets up a group of autonomous embodied agents that learn to individually control their instantaneous velocity vectors in scenarios with collisions and friction forces. The result of the process is a different learned motion controller for each agent. The calibration of both the physical properties involved in the motion of the embodied agents and the corresponding dynamics is an important issue for a realistic simulation. The physics engine used has been calibrated with values taken from real pedestrian dynamics. Two experiments have been carried out to test this approach, and their results are compared with databases of real pedestrians in similar scenarios. As a comparison tool, the diagram of speed versus density, known in the literature as the fundamental diagram, is used.
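As an illustration of the comparison tool mentioned in the abstract, the following Python sketch estimates a speed-density fundamental diagram from simulated pedestrian trajectories. It is only a minimal sketch under stated assumptions: the (T, N, 2) trajectory array layout, the rectangular measurement region, and the function name fundamental_diagram are hypothetical and not taken from the paper.

import numpy as np

def fundamental_diagram(positions, dt, region, n_bins=10):
    """Estimate a speed-density fundamental diagram from trajectories.

    positions : (T, N, 2) array of pedestrian positions over T time steps.
    dt        : time step length in seconds.
    region    : (xmin, xmax, ymin, ymax) of a measurement area in metres.
    Returns density bin centres (ped/m^2) and mean speed (m/s) per bin.
    """
    xmin, xmax, ymin, ymax = region
    area = (xmax - xmin) * (ymax - ymin)

    # Instantaneous speed of every pedestrian (finite differences).
    vel = np.diff(positions, axis=0) / dt                  # (T-1, N, 2)
    speed = np.linalg.norm(vel, axis=2)                    # (T-1, N)

    # Which pedestrians lie inside the measurement area at each step.
    pos = positions[1:]                                    # align with speeds
    inside = ((pos[..., 0] >= xmin) & (pos[..., 0] <= xmax) &
              (pos[..., 1] >= ymin) & (pos[..., 1] <= ymax))

    # Per-step density and mean speed of the pedestrians inside the area.
    counts = inside.sum(axis=1)
    valid = counts > 0
    density = counts[valid] / area
    mean_speed = np.array([speed[t, inside[t]].mean()
                           for t in np.nonzero(valid)[0]])

    # Aggregate: average speed within equal-width density bins.
    edges = np.linspace(density.min(), density.max(), n_bins + 1)
    idx = np.clip(np.digitize(density, edges) - 1, 0, n_bins - 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    binned = np.array([mean_speed[idx == b].mean() if np.any(idx == b)
                       else np.nan for b in range(n_bins)])
    return centres, binned

A plot of the returned bin centres against the binned mean speeds would then be compared visually (or by error metrics) against fundamental diagrams measured from real pedestrian databases, which is the kind of comparison the abstract describes.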
Article URL: http://dx.doi.org/10.1007/978-3-642-34710-8_28
BibTeX format:
@incollection{Martinez-Gil:2012:CAM,
  author = {Francisco Martinez-Gil and Miguel Lozano and Fernando Fernández},
  title = {Calibrating a Motion Model Based on Reinforcement Learning for Pedestrian Simulation},
  booktitle = {Motion in Games},
  pages = {302--313},
  month = nov,
  year = {2012},
}