IBMs have to be constructed iteratively. The first model version should be very simple, for example by keeping the environment constant, considering identical individuals, or representing a given process by a single constant parameter. This first, or null, version of the model is deliberately oversimplified; its purpose is to start the iterative process of model development as soon as possible by providing a first set of tools for analyzing the model, for example graphical output and summary statistics. Once these observation tools are implemented, we can start refining the model, testing its implementation, and comparing it to observed patterns. Analyzing and developing the model is then a time-consuming and complex task, but it can be performed as rigorously as real experiments and allows us to understand the relative importance of different processes, how individual behavior is related to system-level properties, and vice versa. In the following, three main elements of analyzing IBMs are explained in more detail.
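As a minimal sketch of such a null version, the following Python example implements identical individuals in a constant environment, each performing a trivially simple behavior (an unbiased random walk), together with a first observation tool that records per-step summary statistics. All names here (`Individual`, `run_model`, the choice of a random walk) are illustrative assumptions, not part of any particular published IBM.

```python
import random
import statistics

class Individual:
    """Deliberately oversimplified individual: all start identical."""

    def __init__(self):
        self.position = 0.0

    def step(self, rng):
        # Behavior reduced to a single unbiased random move per time step.
        self.position += rng.choice([-1.0, 1.0])

def run_model(n_individuals=100, n_steps=50, seed=42):
    """Run the null model and return per-step summary statistics."""
    rng = random.Random(seed)  # fixed seed: runs are reproducible
    population = [Individual() for _ in range(n_individuals)]
    summary = []  # observation tool: one summary record per step
    for _ in range(n_steps):
        for ind in population:
            ind.step(rng)
        positions = [ind.position for ind in population]
        summary.append({
            "mean": statistics.mean(positions),
            "stdev": statistics.stdev(positions),
        })
    return summary

summary = run_model()
print(summary[-1])  # system-level statistics at the final step
```

Starting from a skeleton like this, each refinement cycle (heterogeneous individuals, a varying environment, richer behavior) can immediately be checked against the same observation output.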