Bonabeau (2002) articulates the common danger of the “improper use of ABM” (7280), which stems from the tension between the technical simplicity of building an ABM and the conceptual rigour its design demands. However, he never fully explains what he means by models being “conceptually deep” (7280), and I can see two ways for the phrase to be taken.
The first concerns the attempted replication of agent interactions: a great deal of data may be required for the model to be held as valid. Because ABM places value on heterogeneity, individual data can be associated with each agent, and a model can contain many different agents doing different things. Since agents can exhibit “learning and adaptation” (7281), this must be incorporated into the model, along with agent rationality, some knowledge of the environment, and adherence to spatial parameters. Modellers are attempting to simulate real occurrences, and human behaviour is notoriously difficult to predict and account for.
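To make this first reading concrete, the following is a minimal, hypothetical sketch of what heterogeneity plus “learning and adaptation” look like in code. The agent attributes, the reinforcement rule, and all names (`risk_tolerance`, `cooperate_prob`, `adapt`) are my own illustrative assumptions, not anything specified by Bonabeau: each agent carries its own individual data, and each updates its behaviour in response to the payoffs of interaction.

```python
import random

class Agent:
    """A heterogeneous agent: each instance carries its own data
    (risk_tolerance) and adapts a behavioural propensity over time."""

    def __init__(self, risk_tolerance):
        self.risk_tolerance = risk_tolerance  # individual-level data
        self.cooperate_prob = 0.5             # behaviour to be learned

    def act(self, rng):
        # Stochastic choice: cooperate with the current propensity.
        return rng.random() < self.cooperate_prob

    def adapt(self, payoff, rate=0.1):
        # Toy reinforcement rule: nudge the propensity toward
        # cooperation when the payoff clears the agent's own threshold.
        target = 1.0 if payoff >= self.risk_tolerance else 0.0
        self.cooperate_prob += rate * (target - self.cooperate_prob)

def run(n_agents=20, n_steps=50, seed=1):
    rng = random.Random(seed)
    # Heterogeneity: every agent gets a different risk tolerance.
    agents = [Agent(rng.uniform(0.0, 1.0)) for _ in range(n_agents)]
    for _ in range(n_steps):
        rng.shuffle(agents)
        # Pairwise interaction: a payoff of 1.0 if both cooperate.
        for a, b in zip(agents[::2], agents[1::2]):
            payoff = 1.0 if (a.act(rng) and b.act(rng)) else 0.0
            a.adapt(payoff)
            b.adapt(payoff)
    return agents
```

Even this toy shows the point about data demands: every agent-level attribute and update rule is a modelling decision that, in a serious model, would need empirical grounding.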
The other way I see models as being conceptually deep is in the analysis stage, after programming. Bonabeau queries “what constitutes an explanation of an observed social phenomenon?” (7281). ABMs capture emergent phenomena, but taking the further step of explaining those phenomena may prove more challenging for social scientists. With ABM we can make things happen from the bottom up, and we then need to find reasons for the phenomena we create, which may not always be evident.
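The gap between producing and explaining emergence can be illustrated with a classic textbook example that is not drawn from Bonabeau's article: a one-dimensional Schelling-style segregation toy. The specific rules and thresholds below are my own simplified assumptions. The striking thing is that the code contains no rule that says “cluster”, yet clustering can emerge, and the explanatory work of saying *why* is left entirely to the analyst.

```python
import random

def like_fraction(grid, i):
    """Fraction of agent i's two ring neighbours sharing its type."""
    n = len(grid)
    same = sum(grid[(i + d) % n] == grid[i] for d in (-1, 1))
    return same / 2

def would_be_content(grid, pos, agent_type, tolerance):
    """Would an agent of agent_type meet its tolerance at pos?
    (Evaluated against the pre-swap grid; a deliberate simplification.)"""
    n = len(grid)
    same = sum(grid[(pos + d) % n] == agent_type for d in (-1, 1))
    return same / 2 >= tolerance

def run_schelling(n=100, tolerance=0.5, steps=5000, seed=0):
    rng = random.Random(seed)
    grid = [rng.choice("AB") for _ in range(n)]  # two agent types on a ring
    for _ in range(steps):
        i = rng.randrange(n)
        if like_fraction(grid, i) >= tolerance:
            continue  # agent i is content; nothing happens
        j = rng.randrange(n)
        # An unhappy agent swaps places only where it would be content.
        if grid[j] != grid[i] and would_be_content(grid, j, grid[i], tolerance):
            grid[i], grid[j] = grid[j], grid[i]
    return grid

def segregation(grid):
    """Mean like-neighbour fraction: ~0.5 when random, 1.0 when clustered."""
    return sum(like_fraction(grid, i) for i in range(len(grid))) / len(grid)
```

A run of this model can produce a macro-level pattern (segregation) from purely local, individually mild preferences, which is precisely the kind of bottom-up outcome whose “explanation” Bonabeau suggests is not self-evident.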
Bonabeau, Eric. “Agent-based Modeling: Methods and Techniques for Simulating Human Systems.” Proceedings of the National Academy of Sciences of the United States of America. 99.10 (2002): 7280-7287. Print.