thesis
Markov chain aggregation for agent-based models
Sven Banisch (author)
Philippe Blanchard (supervisor)
This thesis introduces a Markov chain approach that allows a rigorous analysis of a class of agent-based models (ABMs). It provides a general framework of aggregation in agent-based and related computational models by making use of Markov chain aggregation and lumpability theory in order to link the micro and the macro levels of observation.
The starting point is a microscopic Markov chain description of the dynamical process in complete correspondence with the dynamical behavior of the agent model, which is obtained by considering the set of all possible agent configurations as the state space of a huge Markov chain. This is referred to as the micro chain, and an explicit formal representation including microscopic transition rates can be derived for a class of models by using the random mapping representation of a Markov process. The explicit micro formulation enables the application of the theory of Markov chain aggregation -- namely, lumpability -- in order to reduce the state space of the micro chain and relate microscopic descriptions to a macroscopic formulation of interest. Well-known conditions for lumpability make it possible to establish the cases where the macro model is still Markov, and in this case we obtain a complete picture of the dynamics including the transient stage, the most interesting phase in applications.
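The construction can be sketched for a tiny example. The following is a minimal illustration (not the thesis's own code): a voter model with N = 3 agents under complete mixing, whose 2^3 = 8 configurations form the micro chain, aggregated by the collective variable k = number of agents holding opinion 1. The strong lumpability condition -- every micro state in a block must have the same total transition probability into every other block -- is then checked numerically. All names and the N = 3 setting are illustrative assumptions.

```python
import itertools
import numpy as np

N = 3  # illustrative number of agents
states = list(itertools.product([0, 1], repeat=N))
idx = {s: i for i, s in enumerate(states)}

# Micro chain for the voter model under complete mixing: pick an ordered
# pair (i, j), i != j, uniformly; agent i copies agent j's opinion.
P = np.zeros((2**N, 2**N))
for s in states:
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            t = list(s)
            t[i] = s[j]
            P[idx[s], idx[tuple(t)]] += 1.0 / (N * (N - 1))

# Aggregate by the collective variable k = number of agents in state 1.
blocks = {k: [idx[s] for s in states if sum(s) == k] for k in range(N + 1)}

# Strong lumpability check: within each block, every micro state must have
# the same total transition probability into every target block.
lumpable = all(
    np.allclose(
        P[np.ix_(blocks[k], blocks[m])].sum(axis=1),
        P[np.ix_(blocks[k], blocks[m])].sum(axis=1)[0],
    )
    for k in blocks
    for m in blocks
)
print(lumpable)  # -> True: under complete mixing, k is a lumpable macro variable
```

The check succeeds here because complete mixing makes all agents exchangeable, so the magnetization partition respects the symmetries of the update rule.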
For this purpose, a crucial role is played by the probability distribution ω used to implement the stochastic part of the model, which defines the updating rule and governs the dynamics. Namely, if we decide to remain at a Markovian level, then the partition, or equivalently, the collective variables used to build the macro model must be compatible with the symmetries of the probability distribution ω.
This underlines the theoretical importance of homogeneous or complete mixing in the analysis of »voter-like« models in use in population genetics, evolutionary game theory and social dynamics. On the other hand, if a favored level of observation is not compatible with the symmetries in ω, a certain amount of memory is introduced by the transition from the micro level to such a macro description, and this is the fingerprint of emergence in ABMs. The resulting divergence from Markovianity can be quantified using information-theoretic measures, and the thesis presents a scenario in which these measures can be explicitly computed.
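The incompatibility case can also be made concrete. As an illustrative sketch (again not the thesis's own code), the same lumpability check is run for the voter model on a three-agent line graph 0-1-2, a minimal example of a heterogeneous interaction network; the line-graph topology and N = 3 are assumptions chosen only to keep the state space small.

```python
import itertools
import numpy as np

N = 3
neighbors = {0: [1], 1: [0, 2], 2: [1]}  # illustrative line graph 0-1-2
states = list(itertools.product([0, 1], repeat=N))
idx = {s: i for i, s in enumerate(states)}

# Voter-model micro chain on the line graph: pick agent i uniformly,
# then one of its neighbors j uniformly; agent i copies j's opinion.
P = np.zeros((2**N, 2**N))
for s in states:
    for i in range(N):
        for j in neighbors[i]:
            t = list(s)
            t[i] = s[j]
            P[idx[s], idx[tuple(t)]] += 1.0 / (N * len(neighbors[i]))

blocks = {k: [idx[s] for s in states if sum(s) == k] for k in range(N + 1)}

# Same strong lumpability check as before, now on the heterogeneous network.
lumpable = all(
    np.allclose(
        P[np.ix_(blocks[k], blocks[m])].sum(axis=1),
        P[np.ix_(blocks[k], blocks[m])].sum(axis=1)[0],
    )
    for k in blocks
    for m in blocks
)
print(lumpable)  # -> False: the magnetization partition is no longer lumpable
```

Here the central agent and the peripheral agents are no longer exchangeable, so configurations with the same magnetization behave differently, and projecting onto k introduces the memory effects described above.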
Two simple models are used to illustrate these theoretical ideas: the voter model (VM) and an extension of it called the contrarian voter model (CVM). Using these examples, the thesis shows that Markov chain theory allows for a rather precise understanding of the model dynamics in the case of »simple« population structures where a tractable macro chain can be derived. Constraining the system by interaction networks with a strong local structure leads to the emergence of meta-stable states in the transient phase of the model. Constraints on the interaction behavior such as bounded confidence or assortative mating lead to the emergence of new absorbing states in the associated macro chain and are related to stable patterns of polarization (stable co-existence of different opinions or species). Constraints and heterogeneities in the microscopic system and complex social interactions are the basic characteristics of ABMs, and the Markov chain approach to link the micro chain to a macro level description (and likewise the failure of a Markovian link) highlights the crucial role played by those ingredients in the generation of complex macroscopic outcomes.
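A simulation sketch of a contrarian-style update rule may help the reader picture the CVM dynamics. The specific rule below -- with probability p the updating agent adopts the opposite of its partner's opinion, otherwise it imitates as in the plain VM -- is an assumption about one common CVM variant under complete mixing, not necessarily the exact rule used in the thesis; the parameters (100 agents, p = 0.05) are likewise illustrative.

```python
import random

def cvm_step(opinions, p):
    """One asynchronous update of a contrarian voter model sketch under
    complete mixing: agent i meets a random partner j; with probability
    1 - p it copies j's opinion (voter rule), and with probability p it
    adopts the opposite one (contrarian rule, rate p assumed)."""
    i, j = random.sample(range(len(opinions)), 2)  # distinct agents
    if random.random() < p:
        opinions[i] = 1 - opinions[j]
    else:
        opinions[i] = opinions[j]

random.seed(0)
ops = [random.randint(0, 1) for _ in range(100)]  # random initial opinions
for _ in range(10_000):
    cvm_step(ops, p=0.05)
print(sum(ops))  # number of agents currently holding opinion 1
```

For p = 0, consensus states are absorbing as in the plain VM; a positive contrarian rate p destroys these absorbing states, which is one way constraints on interaction behavior reshape the macro chain as described above.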
https://pub.uni-bielefeld.de/download/2690117/2690118/PhDThesisFINFIN.pdf
application/pdf
Universitätsbibliothek Bielefeld, 2014
eng
Markov chains, agent-based models, aggregation, lumpability, complexity, voter model, emergence
urn:nbn:de:hbz:361-26901171
2014-03-21
Banisch, S. (2014). Markov chain aggregation for agent-based models. Bielefeld: Universitätsbibliothek Bielefeld.