By Mark McKergow
The Cabinet Office Behavioural Insights team has just issued ‘Test, Learn and Adapt: Developing Public Policy with Randomised Controlled Trials’ (download from their website here).
This document is notable for several reasons. First, one of the authors is Ben Goldacre, author of Bad Science and a thorn in the side of pseudoscientists and charlatans around the world, who has long advocated actual testing, as opposed to ideological debate, as a way of finding what works. Secondly, the report mentions complex systems, which is unusual for a government report! (p 12):
Many leading thinkers have concluded that in complex systems, from biological ecosystems to modern economies, much progress – if not most – occurs through a process of trial and error. Economies and ecosystems that become too dominated by too narrow a range of practices, species or companies are more vulnerable to failure than more diverse systems. Similarly, such thinkers tend to be sceptical about the ability of even the wisest experts and leaders to offer a comprehensive strategy or masterplan detailing the best practice or answer on the ground (certainly on a universal basis). Instead they urge the deliberate nurturing of variation coupled with systems, or dynamics, that squeeze out less effective variations and reward and expand those variations that seem to work better.
The idea of testing policy initiatives seems to be surprisingly new in Government circles – indeed, Tim Harford (“Why real life needs real trials”) relates the story of a mandarin at the Department for Work and Pensions who once told Tony Blair’s chief scientific adviser that the DWP could function perfectly well without any contribution from science – what Harford calls ‘a demonstration of grotesque ignorance and arrogance’. That testing, learning and adaptation are on the agenda at last is to be applauded. If the Cabinet Office is going to take complexity seriously too, they may be interested in a couple of additional ideas about experimentation with complex systems.
In complex systems, even small differences in the way something is done can lead to large differences down the line. Strictly speaking, just as one can’t step into the same river twice, one can’t implement the same initiative twice (in two different counties, for example). What works in Kent may not automatically work in Glasgow, and vice versa. It would make sense to keep a close eye on the way an initiative is actually implemented, and to take account of local conditions when transplanting work already trialled. Allowing sensible local variations would be one way to achieve this – along with continued monitoring and adaptation.
This variation comes with time as well as space. What works in 2012 may not work in 2013. This is not a counsel of despair, but a prompt to keep monitoring and adapting services to bring results in the locale in question. This means moving away from the silver-bullet idea of ‘Evidence-Based Practice’ – that once something has been found to work, all we need to do is implement it and all will be well – towards ‘Practice-Based Evidence’ – with public services continually keeping track of their impact and enhancing it as a matter of course, rather than just doing what they are told. From a complexity perspective, this should include small experiments with new ideas as a matter of course – however well things are going. The world moves on, even if those in power wish, like Canute, that it didn’t.