The papers discussed raise some important issues about ABMs: how useful are they when models are too simple, and how can we extract real causal relationships once models become complex enough to mirror the real world? Problems of equifinality and of evaluating ABMs are also important: how are we to verify and validate an ABM that we use for predictive purposes? Even with these many problems of use and interpretation, I think it is still important to acknowledge that ABMs are cost-effective and quick forms of social experimentation that do not require large amounts of time, manpower and money to perform.

One problem we may see with models such as Schelling's segregation model is that the model reaches a final steady state. If an ABM is too simple, it can easily settle into such a state, as in the segregation case. Real systems rarely behave this way, though this depends on the temporal scale one is looking at and on how faithfully the agents are programmed: most systems in the world tend to be in flux rather than unchanging, and this is what makes the world complicated. If steady states keep arising from an ABM, that may be a sign that a more complicated model is needed. There tend to always be exogenous factors that affect the output of a real-world system, and this is part of what makes ABMs so hard to work with. Does that mean Schelling's model should take into account income levels, land values, rent values, available services and so on? I do not think so, as this would pull the focus of the model away from its question of 'individual preferences for like individuals'. Having too many variables in a model only serves to blur away any possible causality. This is a problem for all sciences, but it is especially problematic for ABMs, since they tend to deal in complexity rather than simplicity.
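To make the steady-state point concrete, here is a minimal sketch of a Schelling-style segregation model. The grid size, similarity threshold, and function names are my own illustrative choices, not Schelling's original specification; the point is only that a simple relocation rule typically halts once every agent is satisfied, i.e. the model reaches a final steady state rather than staying in flux:

```python
import random

def run_schelling(size=20, threshold=0.3, empty_frac=0.1, max_steps=200, seed=0):
    """Minimal Schelling-style model on a size x size grid.

    Two agent types; an agent is unhappy when fewer than `threshold`
    of its occupied Moore neighbours share its type, and unhappy agents
    move to a random empty cell. Returns the step at which no agent is
    unhappy (a steady state), or max_steps if it never settles.
    """
    rng = random.Random(seed)
    cells = [None] * (size * size)
    n_agents = int(size * size * (1 - empty_frac))
    agents = [i % 2 for i in range(n_agents)]          # two types: 0 and 1
    for pos, kind in zip(rng.sample(range(size * size), n_agents), agents):
        cells[pos] = kind

    def neighbours(pos):
        r, c = divmod(pos, size)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0) and 0 <= r + dr < size and 0 <= c + dc < size:
                    yield cells[(r + dr) * size + (c + dc)]

    def unhappy(pos):
        kind = cells[pos]
        occupied = [n for n in neighbours(pos) if n is not None]
        return bool(occupied) and sum(n == kind for n in occupied) / len(occupied) < threshold

    for step in range(max_steps):
        movers = [p for p in range(size * size) if cells[p] is not None and unhappy(p)]
        if not movers:
            return step  # steady state: every agent is satisfied, nothing changes again
        empties = [p for p in range(size * size) if cells[p] is None]
        for p in movers:
            dest = rng.choice(empties)
            empties.remove(dest)
            cells[dest], cells[p] = cells[p], None
            empties.append(p)
    return max_steps
```

Note that the rule is deliberately spare: once `run_schelling` returns before `max_steps`, the configuration is frozen forever, which is exactly the kind of unchanging outcome the paragraph above questions.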
Some previous comments here have noted the high demand ABMs place on computational power. In my opinion this is becoming less of an issue, and more a problem of data transfer and data structures. Raw processing power will not be the limit in the future; the limit will be the data and programming structures with which we build ABMs.
Another very limiting factor for ABMs is calculating error and uncertainty. How should this be done, especially when a model is used for 'prediction' or 'forecasting', and when we cannot truly model every possible action of real-life agents? I think this is one of the problems that holds ABMs back from mainstream science, or even from GIS. Whereas in, say, hyperspectral imaging you can attribute your error to the sensor and to factors such as calibration and correction, in an ABM it is difficult to assign any error value to the conclusions, especially those that have no real-world comparison.
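One partial answer is ensemble replication: re-run the model many times with different random seeds and report the spread of the outcome. The toy model below is entirely hypothetical (the function and parameters are my own, for illustration), and the resulting spread captures only the stochastic component of uncertainty, not the structural error relative to the real world that the paragraph above worries about:

```python
import random
import statistics

def toy_abm_outcome(n_agents=100, steps=50, seed=0):
    """Hypothetical toy ABM: agents hold an opinion in [0, 1]; each step a
    random agent copies another random agent's opinion with small noise.
    Returns the final mean opinion -- one summary outcome of one run."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n_agents)]
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        opinions[i] = min(1.0, max(0.0, opinions[j] + rng.gauss(0, 0.01)))
    return statistics.mean(opinions)

# Ensemble over 30 seeds: mean +/- standard deviation is a (partial)
# error bar for the model's internal randomness only.
outcomes = [toy_abm_outcome(seed=s) for s in range(30)]
ensemble_mean = statistics.mean(outcomes)
ensemble_sd = statistics.stdev(outcomes)
```

The design choice here is simply that each seed defines one independent replicate, so the seed plays the role the sensor plays in the imaging example: a named, controllable source of variation.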
Finally, I would like to draw us to the question: are there alternatives to ABMs? I believe the answer is no. Social experimentation involving large numbers of individuals is too difficult to control in the real world, and far more costly in time and money; a large-scale real-life social experiment is simply not as efficient. Additionally, ABMs have the important feature of being easy to re-run, whereas real-life social experiments cannot simply be 'reset', especially when the researcher does not want participants' memory of the previous experiment to influence their behaviour.
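The 'reset' property is easy to state in code: seeding the random number generator makes a stochastic run exactly reproducible, something no human-subject experiment can offer. The setup below is purely illustrative:

```python
import random

def run_once(seed, steps=5):
    """One run of a stochastic 'experiment': a sequence of coin-flip
    decisions by agents (the setup is purely illustrative)."""
    rng = random.Random(seed)
    return [rng.random() < 0.5 for _ in range(steps)]

# A seeded re-run is a perfect 'reset': the agents carry no memory of
# the previous run, unlike human participants in a repeated experiment.
assert run_once(seed=42) == run_once(seed=42)
```

Varying the seed while holding everything else fixed then gives clean counterfactual runs of the same experiment.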