by Paul Ormerod
In this first Complexity Forum post, we’re looking at Agent-Based Modelling. Thoughts, questions, issues from the “Complexity Community” are very welcome.
There is nothing mystical or magical about Agent-Based Models; they are simply computer algorithms.
An important question is: in what sense can an ABM, or indeed any algorithm, discover things which humans previously did not know?
A simple example is the game of chess, where computers (using algorithms) now play at a level at which even the strongest human has little chance of defeating them in a one-off game. Over a series of games, the chance is effectively zero. But this is down to their ability to calculate far more variations than humans, not to any superior creativity.
They have discovered things. For example, all – all – positions with only six pieces on the board have now been solved completely. A non-trivial proportion of these positions is far beyond the ability of humans to decide unequivocally. For example, at the highest level, complete games of more than 100 moves are rare, and there are almost no examples of games lasting more than 150 moves. Yet in many of these six-piece positions the computer will discover things such as ‘White to play and win in 186 moves’. The longest known sequence involving optimal moves on both sides is 517 moves.
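The kind of "solving" described here is exhaustive search: the machine examines every continuation and labels each position won, drawn, or lost. Chess tablebases do this by retrograde analysis over billions of positions, which is far too big to sketch here, but the same principle can be shown on a toy game. The sketch below (a hypothetical illustration, not anything from the post) completely solves positions of Nim by recursion with memoisation:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(heaps):
    """True if the player to move can force a win from this Nim
    position (normal play: whoever takes the last stone wins)."""
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):
            child = tuple(sorted(heaps[:i] + (h - take,) + heaps[i + 1:]))
            if not wins(child):  # a move that leaves the opponent losing
                return True
    return False  # no winning move exists (or no stones remain)

# Classical Nim theory says the mover wins iff the XOR of heap sizes
# is nonzero; the exhaustive search rediscovers this on its own.
print(wins((1, 2, 3)))  # XOR = 0 -> False
print(wins((1, 2, 4)))  # XOR = 7 -> True
```

Like a tablebase, the program "knows" the true value of every position it has visited, with no creativity involved: it is pure calculation over the rules it was given.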
They are also capable of generalising. So, for example, certain combinations of pieces were in general thought to lead to a draw, but computers have established that, instead, the side with the material advantage in these combinations usually wins.
So, given the rules of chess, they have discovered things unknown to humans. But they have not been creative. They cannot devise new games, unless the human programmer instructs them how to do it.
Paul,
thanks for this. It led me to wonder about the use of genetic algorithms (http://en.wikipedia.org/wiki/Genetic_algorithm) as proxies for creativity (I emphasise “proxy” here).
Greg
My mental model of the supply side of the economy is a large number of genetic algorithms, each trying to approach local optima that keep shifting.
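To make the genetic-algorithm idea concrete, here is a minimal sketch (a hypothetical illustration, not anything from the discussion): a population of bitstrings evolves toward the all-ones string ("OneMax") via selection, crossover, and mutation. Swapping in a fitness function that changes over time would mimic the "shifting local optima" picture above.

```python
import random

random.seed(1)
BITS, POP, GENS, MUT = 20, 30, 60, 0.02

def fitness(s):
    return sum(s)  # number of 1-bits; the optimum is all ones

def crossover(a, b):
    cut = random.randrange(1, BITS)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(s):
    return [bit ^ (random.random() < MUT) for bit in s]  # flip bits rarely

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
initial_best = max(map(fitness, pop))
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]  # truncation selection (elitist: best survive)
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP - len(parents))]
    pop = parents + children
final_best = max(map(fitness, pop))
print(f"best fitness: {initial_best} -> {final_best} (max {BITS})")
```

Because the fittest half survives unchanged each generation, the best fitness never decreases; the "proxy for creativity" lies in mutation and crossover generating variants no one explicitly designed.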
Hi Paul, I was wondering if similar agent-based approaches have been tried for Go (the ancient Chinese board game), where there is much more fluidity – all pieces have identical characteristics, and the playing space is far less constrained. This demands much more creativity, as there is a vast number of possible routes to a win. At the same time, the rules of the game are few and very simple, and therefore could easily be programmed…
Dear Paul,
While I agree with what you say, I think the objectives you imply for simulation could do with a bit more discussion. If one sees simulation as a “research method” (for example, giving us a way to “synthesise” qualitative and quantitative data and build rich but rigorous theories) we could equally well ask whether (or in what senses) econometrics or ethnography have discovered things we “didn’t know”. Clearly they have in some senses (disconfirming things popularly believed, for example) but I’m not sure we would expect creativity to reside _in_ research methods rather than their application. After all, to be applicable, research methods need relatively standardised procedures that can be taught and applied.
All the best,
Edmund
Paul,
I agree there is nothing mystical about ABMs, and yet, perhaps because of their complexity, researchers all too often start hand-waving and making grandiose claims. So you make an important point in stating clearly that we are talking about algorithms running on computers.
I also second Edmund Chattoe-Brown’s point that ABM should perhaps be seen more in relation to methodology (or technique). I see it more as a technology (rather like a telescope or microscope) at this stage, since there is no agreed standard methodology for ABM.
My own view is that ABM can be used to find out new things in the sense of being an “aid” to creativity. In the same way that Mathematica (Wolfram’s famous software) extends the ability of professional mathematicians to explore new structures, so ABM can provide social theorists with new ways to explore complex thought experiments.
This is because the ABM approach allows assumptions about individual agent behaviours and interactions to be programmed in without knowing what will emerge when you run the model. In this context, yes, all the computer is doing is masses of complex calculations producing some emergent outcome. However, that outcome can only be obtained in this way (unless there is already some sound micro-macro theory). In this sense, when one looks at the output from such models one is learning something new about the implications of a set of assumptions.
In this way ABM becomes an aid to creativity and intuition (something that might come prior to the actual theory stage). In this sense ABM can help in the theory formulation process, i.e. the creative bit that neither pure induction nor deduction can fully explain.
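The micro-rules-to-emergent-outcome point can be made concrete with a toy model. Below is a minimal one-dimensional variant of Schelling's segregation model (a hypothetical sketch, not any model from this discussion): each agent merely prefers that at least half of its nearby neighbours share its type, yet running the rules typically produces marked clustering that nobody programmed in directly.

```python
import random

random.seed(42)
RADIUS, THRESHOLD = 2, 0.5  # neighbourhood size and tolerance

def make_line():
    # 25 type-A agents, 25 type-B agents, 10 empty cells, shuffled
    cells = ['A'] * 25 + ['B'] * 25 + [None] * 10
    random.shuffle(cells)
    return cells

def unhappy(cells, i):
    """Unhappy if under THRESHOLD of occupied neighbours share my type."""
    lo, hi = max(0, i - RADIUS), min(len(cells), i + RADIUS + 1)
    neigh = [cells[j] for j in range(lo, hi) if j != i and cells[j]]
    return bool(neigh) and neigh.count(cells[i]) / len(neigh) < THRESHOLD

def step(cells):
    """Each unhappy agent relocates to a random empty cell."""
    for i in [j for j in range(len(cells)) if cells[j]]:
        if cells[i] is None or not unhappy(cells, i):
            continue
        empties = [j for j in range(len(cells)) if cells[j] is None]
        cells[random.choice(empties)], cells[i] = cells[i], None

def segregation(cells):
    """Fraction of adjacent occupied pairs that are the same type."""
    pairs = [(a, b) for a, b in zip(cells, cells[1:]) if a and b]
    return sum(a == b for a, b in pairs) / len(pairs) if pairs else 0.0

cells = make_line()
before = segregation(cells)
for _ in range(50):
    step(cells)
after = segregation(cells)
print(f"same-type adjacency: {before:.2f} -> {after:.2f}")
```

The macro pattern has to be discovered by actually running the model: nothing in the individual rule mentions clusters, which is precisely the sense in which the simulation tells us something we did not already know about our own assumptions.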
Robert Axelrod (of Prisoner’s Dilemma fame) put it thus:
“Simulation is a third way of doing science. Like deduction, it starts with a set of explicit assumptions. But unlike deduction, it does not prove theorems… induction can be used to find patterns in data, and deduction can be used to find consequences of assumptions, simulation modelling can be used as an aid to intuition.” (Axelrod 1997)
Another valuable function of ABM is as a communication tool between researchers from different traditions.
I could go on but I try to unpack some of these thoughts in relation to methodology in ABM here:
http://cfpm.org/~david/papers/method-proof-v1.pdf
Apologies for posting a link to a paper in a blog post – I think this is probably considered bad form.
Dave.
Refs:
Axelrod, R. (1997) ‘Advancing the art of simulation in the social sciences’, in R. Conte and R. Hegselmann (eds) Simulating Social Phenomena—LNEMS 456, Berlin: Springer.
Hales, D., (2010) Mix, Chain and Replicate: Methodologies for Agent-Based Modelling of Social Systems. In Mollona, E., (ed) Computational Analysis of Firms’ Organization and Strategic Behaviour. London and New York: Routledge.