By Greg Fisher
Recently I got into a very interesting discussion, which led me to articulate to my interlocutor (and to myself!) the difference between Chaos Theory and Complexity Theory. I thought I’d write this down in the form of a blog article. I should stress that you should only read further if such technical differences blow your skirt up. If you have a life, you should probably read something else.
Chaos Theory is the study of fascinating yet deterministic systems, whereas complex systems are not deterministic. At first blush, if we were to contrast the words chaotic and complex as used in everyday language, we might be forgiven for believing that Chaos Theory is the antithesis of determinism. But it is not.
By determinism I am referring to the idea of a clockwork universe, made famous by Laplace, in which all of the rules of the universe are fixed. In this type of universe, as Laplace pointed out, if we knew the current state of the universe in sufficient detail, in addition to all of its fundamental and unchanging laws, we would be able both to calculate the entire history of the universe and to predict its entire future. There would be no room for free will, which would be seen merely as an illusion.
[Note to reader: I would draw your attention to Tony Smith’s reply to the first version of this blog below, on the subject of “computational irreducibility”. My reference to the word “determinism” here should read “classical determinism”: computational irreducibility does mean that deterministic systems can also be inherently uncertain; however, the broad thrust of this article still stands, that there are fundamental differences between chaotic and complex systems.]
Chaos Theory studies these mechanistic types of systems, but it tends to emphasise the principle of feedback, whereby two or more variables influence each other: this can lead to non-linearity and to the variables behaving in seemingly chaotic ways. Hence the name, Chaos Theory.
An important insight of Chaos Theory is the sensitivity of a chaotic system to initial conditions, due to the non-linearity of the system. What this means is that if the initial conditions of a chaotic system were changed microscopically, then over a long enough period of time the outcome of the whole system would be completely different. This is often referred to as The Butterfly Effect. However, it is important to emphasise that if the initial conditions of the chaotic system were unchanged between two simulations to an infinite degree of precision, the outcomes of the two would be identical over any period of time. So the butterfly effect really only serves to contrast the outcomes of two marginally different systems that are still deterministic, i.e. machine-like. In one simulation, the butterfly flapped its wings; in the other, it did not.
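For readers who like to see this in code, here is a minimal sketch in Python using the logistic map, a textbook example of a system that is completely deterministic yet chaotic. The map and the parameter values are my own illustrative choices, not anything drawn from a specific chaos model.

```python
# A minimal sketch of sensitivity to initial conditions using the
# logistic map x -> r * x * (1 - x), a standard example of a chaotic
# yet fully deterministic system. Parameter values are illustrative.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two starting points that differ by one part in a billion
# (the metaphorical flap of a butterfly's wings).
a = logistic_trajectory(0.400000000)
b = logistic_trajectory(0.400000001)

for step in (0, 10, 25, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")

# The early steps are almost identical; within a few dozen steps the
# two trajectories bear no resemblance to each other, even though each
# individual run is perfectly repeatable.
```

Run the same starting value twice and you get identical trajectories every time; the unpredictability comes entirely from not knowing the starting value precisely enough.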
It is also worth mentioning that the butterfly effect can be abused when, for example, a person says “a butterfly flapping its wings in Wigan causes a hurricane in Huddersfield”. This is a classical Newtonian interpretation of the butterfly effect, as if the hurricane happened because of the butterfly. It may be that in a chaotic system, the simulation in which the butterfly metaphorically didn’t flap its wings was the simulation in which there was a hurricane. In which case, naughty butterfly. The point is that the butterfly effect is meant to illustrate sensitivity to initial conditions, not causality.
In complex systems, there is a concept known as a global cascade, which is similar to what people often mean by the butterfly effect but is in fact fundamentally different. A global cascade is basically a network-wide domino effect that occurs in a dynamic network, an idea made famous by Duncan Watts in 2002. Watts showed that sometimes a complex system proved robust in the face of a modest shock (it might just wobble slightly); but in other instances, the same shock might cascade across the system, showing it to be fragile.
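To make the “robust yet fragile” point concrete, here is a toy sketch loosely in the spirit of Watts’s threshold model; it is not his actual model or code, and the network, threshold and sizes below are illustrative choices of mine. Each node activates once a sufficient fraction of its neighbours have, and the same one-node shock can either fizzle out or sweep the network, depending on the network drawn.

```python
# A toy threshold cascade on a random network, loosely inspired by
# Watts's 2002 "global cascades" model. All parameters are illustrative.
import random

def random_network(n, k):
    """Build an undirected random network with roughly n*k/2 edges."""
    neighbours = {i: set() for i in range(n)}
    for _ in range(n * k // 2):
        a, b = random.sample(range(n), 2)
        neighbours[a].add(b)
        neighbours[b].add(a)
    return neighbours

def cascade_size(neighbours, threshold=0.18):
    """Shock one random node, then let activation spread: a node switches
    on once the active fraction of its neighbours reaches the threshold.
    Returns the final number of active nodes."""
    active = {random.choice(list(neighbours))}
    changed = True
    while changed:
        changed = False
        for node, nbrs in neighbours.items():
            if node in active or not nbrs:
                continue
            if len(nbrs & active) / len(nbrs) >= threshold:
                active.add(node)
                changed = True
    return len(active)

sizes = sorted(cascade_size(random_network(1000, 5)) for _ in range(20))
print(sizes)
# With these illustrative parameters the sizes tend to fall into two
# groups: small, local cascades and near-global ones. Exact numbers
# will vary from run to run and with the parameters chosen.
```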
Dynamic networks – or complex systems – are very different to the systems studied in Chaos Theory. They contain a number of constituent parts (“agents”) that interact with and adapt to each other over time. Perhaps the most important feature of complex systems, which is a key differentiator from chaotic systems, is the concept of emergence. Emergence “breaks” the idea of determinism because it means the outcome of some interaction can be inherently unpredictable. In large systems, macro features often emerge that cannot be traced back to any particular event or agent.
To understand the concept of emergence further, we can look at water. We know the qualities of oxygen atoms and we know the qualities of hydrogen atoms, so presumably we can determine the qualities of H2O, water? Actually we cannot. Really, however freakish this might sound, we cannot. It turns out that we are familiar with the qualities of water only because we have observed them empirically. The properties of water are emergent. If any reader wished to delve further into this, I would recommend Stuart Kauffman’s book Reinventing the Sacred. In this book, Kauffman wrote that
“it is something of a quiet scandal that physicists have largely given up trying to reason ‘upward’ from the ultimate physical laws to larger-scale events in the universe”
The reason for the inability to reason ‘upwards’ from hydrogen and oxygen to water is because the properties of water are emergent and therefore indeterminable from the properties of the constituent atoms.
A useful way of distinguishing between chaotic and complex systems is to illustrate how uncertainty arises in each type of system. In chaotic systems, uncertainty is due to the practical inability to measure the initial conditions of a system with perfect precision. If the initial conditions were either ‘x’ or ‘a tiny (immeasurable) variation on x’, we know that when non-linearity kicks in, this slight variation will eventually make a large difference to the outcome. But in complex systems, uncertainty is inherent in the system because of the concept of emergence: even if we could measure the initial conditions today to an infinite degree of precision (for the serious geeks: I am ignoring Heisenberg’s uncertainty principle here), we still could not determine the future. However, as I noted in my article Patterns Amid Complexity, this does not mean the future is random, because patterns are an important – and often repeating – feature of complex systems.
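As a toy contrast (again, my own illustrative sketch rather than anything canonical), compare re-running a deterministic chaotic map with re-running a simple adaptive agent model. The random number generator below is only a stand-in for the openness of real complex systems, which need not literally contain one, but it shows where the uncertainty sits in each case.

```python
# Contrasting the two sources of uncertainty described above.
import random

def chaotic_run(x0=0.4, r=4.0, steps=100):
    """The logistic map again: deterministic, so identical inputs
    always give identical outputs."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

def agent_run(n=100, steps=2000):
    """A toy adaptive system: agents repeatedly copy the opinion of a
    randomly encountered agent. The starting configuration is identical
    on every run; only the interactions differ."""
    opinions = [i % 2 for i in range(n)]
    for _ in range(steps):
        i, j = random.randrange(n), random.randrange(n)
        opinions[i] = opinions[j]
    return sum(opinions)

# The chaotic system, re-run from exactly the same starting value,
# gives exactly the same answer: its uncertainty lies only in our
# practical inability to measure that starting value precisely enough.
print(chaotic_run() == chaotic_run())    # True

# The agent system, re-run from exactly the same starting configuration,
# almost always gives different answers: here the uncertainty sits in
# the interactions themselves.
print(agent_run(), agent_run(), agent_run())
```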
As a final note, the term “edge of chaos” gets bandied around a lot and, confusingly, this term is normally associated with complex systems. It represents a point that sits along a spectrum with determinism at one end and randomness (not chaos) at the other. The edge of chaos is where you have enough structure / patterns in the system that it is not random but also enough fluidity and emergent creativity that it’s not deterministic.
To conclude, it is tempting to believe that Chaos is a highly complex type of complex system. But it isn’t – chaos theorists have revealed some excellent insights, like sensitivity to initial conditions, but Chaos Theory is still the study of deterministic systems. To understand non-deterministic systems, like social systems, it is necessary to look at complex systems and Complexity Theory.
I’m confused about how specific your notion of “unpredictability” is. Are you implying that Laplace’s quantum demon could *not* perfectly determine any future state of the universal wavefunction, given its current state, if complex systems are involved? Or is it defined more in terms of computational feasibility, something like an algorithm being unsolvable in polynomial time? Or, is complexity just another way of saying that the probability of a single output resulting from the system doesn’t approach 1? Something completely different?
Thanks James,
I don’t think Laplace had quantum theory in mind when he proposed his demon (he wrote in the early 19th century, well before quantum mechanics emerged). But I don’t know enough about quantum theory beyond Heisenberg’s uncertainty principle to comment on your statement – my article was really only about the “classical” world of atoms and “above”.
Kind regards,
Greg
“The reason for the inability to reason ‘upwards’ from hydrogen and oxygen to water is because the properties of water are emergent and therefore indeterminable from the properties of the constituent atoms.” Whether that is true depends a lot on what you mean by “determinable”. For instance, one can imagine a computational simulation in which the agents are hydrogen and oxygen atoms, interacting in the way that they are known to do, and from which the computational equivalent of water would emerge (i.e. the model has macro-level properties that represent the properties of water, such as having different phases depending on temperature). Let’s suppose that such a simulation is possible (and I can’t see any reason why it would not be, although it might require a lot of computing resources to run any interesting numbers of agents/atoms). Then is this a counter-example to your statement? In what sense is such a computational simulation not offering a way to reason upwards?
Thank you for the clearly articulated article. Unfortunately, my mind isn’t so acute, and I still have trouble understanding key points that you raised. For example, is “emergence” a term used ex post, i.e. we can’t predict the effects of a set of relationships, therefore our understanding of those relationships is imperfect, therefore the effects are emergent and complex?
Part of my confusion stems from the implication that deterministic and non-deterministic systems are permanently distinct, as opposed to being relative to existing knowledge. Is there a method to determine that a system can NEVER be described in such a way as to allow for “perfect” predictions?
Thanks Al,
let me try to explain better – by “emergence” yes I was thinking ex post. Perhaps it’s easier if we think of a conversation. Conversations are emergent in that they are inherently creative and we cannot predict how a conversation will evolve, ex ante.
For your second paragraph, it’s my turn for my mind to be less acute as I didn’t understand what you were asking. Could you clarify your question?
Kind regards,
Greg
Thanks for the clarification. As to my second paragraph, using the conversation example, what method(s) would you employ to determine that this event [the conversation] is complex vs. chaotic? Is it something that you can identify and express prior to witnessing the conversation, or do you track the conversation and, afterwards, test the error of your modeled conversation? If the latter, then doesn’t that imply that complexity vs. chaos is a matter of current empirical data rather than methodological limits?
Thanks for this thought provoking article.
Greg – Thanks for such a clear and lucid explanation of the differences between chaos and complexity theory. Although both exhibit unpredictable behavior, the differences in their initial conditions and evolution into a totally unpredictable state is fascinating. I really enjoyed your insightful observations on this complex topic.
Greg, I intermittently curate Emergence for scoop.it and am including your item although it presents an outdated understanding of determinism. You need to take on board Wolfram’s description of computational irreducibility http://www.wolframscience.com/nksonline/section-12.6 to understand (i) that both chaos and complexity can be found in deterministic systems and (ii) that determinism almost never implies predeterminism.
Dear Tony,
thank you for a most excellent reply, you are right to raise the point. I have added “computational irreducibility” to my list of “reasons why the universe is uncertain”. I am tempted to re-write the article in light of this – I’ll include a note at the beginning & will let readers explore the link you included.
Kind regards,
Greg
Theoretically everything can be determined given enough computation power and time, so “deterministic” means only what is possible with our current knowledge and computational power.
Thanks Gigi,
Are you related to Laplace?!
Regards,
Greg
Thanks for writing this article, but I think it contains major flaws which mislead. You say:
“Perhaps the most important feature of complex systems, which is a key differentiator from chaotic systems, is the concept of emergence. Emergence “breaks” the idea of determinism because it means the outcome of some interaction can be inherently unpredictable”
This is false. Complex systems may or may not be deterministic. Furthermore, ’emergence’ has nothing to do with determinism. In a deterministic system you will be able to predict the future events through simulation of every moment of time, whereas in a non-deterministic system you will not at any time scale. The longer-term non-predictability from given initial conditions that you talk of is, strangely enough, chaos, not emergence.
It’s also a bit simplistic to completely differentiate complex systems and chaos. Take a Random Boolean Network, for example, which by any reasonable measure is a complex system. Under certain parameters the space-time behaviour will be chaotic.
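For concreteness, here is a minimal sketch of such a network in Python (illustrative parameters only, not a definitive implementation): each of N nodes reads K randomly chosen inputs through a randomly drawn Boolean function and all nodes update synchronously. Small K tends to give ordered or critical behaviour, while larger K tends to give chaotic space-time behaviour.

```python
# A minimal Random Boolean Network sketch. Parameters are illustrative.
import random

def make_rbn(n=32, k=3, seed=0):
    """Wire each node to k random inputs and give it a random Boolean
    function, represented as a truth table with 2**k entries."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronously update every node from its inputs."""
    new_state = []
    for ins, table in zip(inputs, tables):
        index = 0
        for i in ins:                      # encode the k input bits as an integer
            index = (index << 1) | state[i]
        new_state.append(table[index])
    return new_state

inputs, tables = make_rbn()
state = [random.randint(0, 1) for _ in range(32)]
for _ in range(20):                        # print a small space-time diagram
    print("".join("#" if bit else "." for bit in state))
    state = step(state, inputs, tables)
```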
Thanks Zenna,
are you sufficiently familiar with my meaning of “complex systems” to state confidently that my statement was false? or do you believe that such terms are perfectly defined and definable and that my meaning was identical to that meaning, and that that meaning is identical to yours too? or something else?
In any case, your comments suggest that you have mis-interpreted my article, for which I apologise – I agree with a number of points you made, which suggests mis-interpretation.
Curiously, though, Tony Smith’s comments on my article mean that your statement:
“In a deterministic system you will be able to predict the future events through simulation of every moment of time”
is false (assuming I understand your meaning) because of computational irreducibility.
Kind regards,
Greg
It is true that a complex system does not have a universally accepted formal definition, and may be open to some degree of interpretation and intuition. However, under all definitions of complex systems, formal or otherwise, there is no stipulation that it must be deterministic or non-deterministic.
Computational irreducibility is an informal idea put forward by Wolfram, not a theory and certainly not a law.
In any case, even if it were true, it does not suggest that my statement:
“In a deterministic system you will be able to predict the future events through simulation of every moment of time”.
is false.
Computational irreducibility suggests that given some initial conditions, for some systems, it is not possible to predict a future state without running them or, as I put it, simulation. I.e. there are systems which are not like Newtonian mechanics, where you can shortcut the computation of every point in time and just predict the final position of an object in motion.
I, however, am stating a simple property of deterministic systems: given you have a complete model, you can always predict the future through simulation of the system.
thanks Zenna, that’s a helpful reply.
kind regards,
Greg
“The edge of chaos is where you have enough structure / patterns in the system that it is not random but also enough fluidity and emergent creativity that it’s not deterministic.”
Hi Greg
I enjoy reading your blog; it is helping me to crystallise some of my thoughts on complexity theory. I’m doing a PhD on human trafficking, “mapping” human trafficking activities in my province in South Africa through the lenses of systems and complexity theory. The above quote from the blog post is (at the moment) very relevant to the emergent properties of crime networks, which is one reason they are currently always a step or two ahead of law enforcement efforts.
Kind regards,
Amanda