Published January 1, 2019 | Version v1 | Conference paper | Open
Partially-Observed Discrete-Time Risk-Sensitive Mean-Field Games
Creators
- 1. Ozyegin Univ, Dept Nat & Math Sci, Istanbul, Turkey
- 2. Univ Illinois, Coordinated Sci Lab, Urbana, IL 61801 USA
Description
We consider a general class of discrete-time partially observed mean-field games with Polish state, action, and measurement spaces and with risk-sensitive (exponential) cost functions, which capture the risk-averse behaviour of each agent. As is standard in mean-field game models, each agent is weakly coupled with the rest of the population through its individual cost and state dynamics via the empirical distribution of the states. We first establish the mean-field equilibrium in the infinite-population limit by transforming the risk-sensitive problem into one with a risk-neutral (additive rather than multiplicative) cost function, converting the resulting partially observed stochastic control problem into a fully observed one on the belief space, and applying the principle of dynamic programming. We then show that the mean-field equilibrium policy, when adopted by each agent, constitutes an approximate Nash equilibrium for games with sufficiently many agents.
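The risk-sensitive (exponential) criterion mentioned in the abstract is typically the multiplicative functional J = (1/λ) log E[exp(λ Σ_t c_t)], as opposed to the risk-neutral additive cost E[Σ_t c_t]. A minimal Monte Carlo sketch of this contrast — the trajectory model and parameters below are hypothetical illustrations, not taken from the paper:

```python
import numpy as np

def risk_neutral_cost(costs):
    # Additive criterion: E[sum_t c_t], estimated over sampled trajectories.
    # costs has shape (num_trajectories, horizon).
    return costs.sum(axis=1).mean()

def risk_sensitive_cost(costs, lam):
    # Exponential (multiplicative) criterion: (1/lam) * log E[exp(lam * sum_t c_t)].
    # For lam > 0 this penalizes variability in the accumulated cost,
    # which is what makes the agent risk-averse.
    total = costs.sum(axis=1)
    return np.log(np.mean(np.exp(lam * total))) / lam

# Toy data: 10000 trajectories of length 5 with random stage costs.
rng = np.random.default_rng(0)
costs = rng.exponential(scale=1.0, size=(10000, 5))

jn = risk_neutral_cost(costs)
js = risk_sensitive_cost(costs, lam=0.5)
# By Jensen's inequality, the exponential criterion upper-bounds the
# additive one for lam > 0, and recovers it in the limit lam -> 0.
assert js >= jn
```

The λ → 0 limit of the exponential criterion recovers the risk-neutral cost, which is one way to see the two criteria as members of a single family.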
Files
- bib-e61b5a49-4e9b-4816-95cf-d56b59b8fbb4.txt (165 Bytes, md5:15a5e5a97ce14c45b9ac28686c0fcf38)