Test, Learn, Adapt… and repeat: Learning and adaptation in public policy

By Mark McKergow

The Cabinet Office Behavioural Insights Team has just issued ‘Test, Learn and Adapt: Developing Public Policy with Randomised Controlled Trials’ (available to download from their website).

This document is notable for several reasons. Firstly, one of the authors is Ben Goldacre, author of Bad Science and a thorn in the side of pseudoscientists and charlatans around the world, who has long advocated actual testing, as opposed to ideological debate, as a way of finding what works. Secondly, the report mentions complex systems – unusual for a government report! (p 12):

Many leading thinkers have concluded that in complex systems, from biological ecosystems to modern economies, much progress – if not most – occurs through a process of trial and error. Economies and ecosystems that become dominated by too narrow a range of practices, species or companies are more vulnerable to failure than more diverse systems. Similarly, such thinkers tend to be sceptical about the ability of even the wisest experts and leaders to offer a comprehensive strategy or masterplan detailing the best practice or answer on the ground (certainly on a universal basis). Instead they urge the deliberate nurturing of variation coupled with systems, or dynamics, that squeeze out less effective variations and reward and expand those variations that seem to work better.

The idea of testing policy initiatives seems to be surprisingly new in Government circles – indeed, Tim Harford (“Why real life needs real trials”) relates the story of a mandarin at the Department for Work and Pensions who once told Tony Blair’s chief scientific adviser that the DWP could function perfectly well without any contribution from science – what Harford calls ‘a demonstration of grotesque ignorance and arrogance’.  That testing, learning and adaptation are on the agenda at last is to be applauded.  If the Cabinet Office is going to take complexity seriously too, they may be interested in a couple of additional ideas about experimentation with complex systems.

In complex systems, even small differences in the way something is done can lead to large differences down the line. Strictly speaking, just as one can’t step into the same river twice, one can’t implement the same initiative twice (in two different counties, for example). What works in Kent may not automatically work in Glasgow, and vice versa. It would make sense to keep a close eye on the way an initiative is actually implemented, and to take account of local conditions when transplanting work already trialled. Allowing sensible local variations would be one way to achieve this – along with continued monitoring and adaptation.

This variation comes with time as well as space.  What works in 2012 may not work in 2013.  This is not a counsel of despair, but a prompt to keep monitoring and adapting services to bring results in the locale in question.  This is about moving away from the silver bullet idea of ‘Evidence-Based Practice’ – that once something has been found to work, all we need to do is implement it and all will be well – to ‘Practice-Based Evidence’ – with public services continually keeping track of their impact and enhancing it as a matter of course, rather than just doing what they are told.  From a complexity perspective, this should include small experiments with new ideas as a matter of course – however well things are going.  The world moves on, even if those in power wished, like Canute, that it didn’t.


3 Responses to “Test, Learn, Adapt… and repeat: Learning and adaptation in public policy”

  1. Chris Davies says:

    Great post Mark – if government can move away from its obsession with “best practice”, and onto a more subtle understanding of how systems work, then we will all be better off!

    It will also be interesting what impact, if any, this kind of thinking has on the market-based ideas that have underpinned the last 30 years of public policy. I’m increasingly unconvinced of the virtue in trying to apply market-type approaches to policy areas that just aren’t going to be markets, at least in the sense that we would recognise them.

    They will be complex systems, of course, and that should prompt some different approaches to the ones that haven’t really worked until now. The best example I can think of is primary and secondary education, where consumers will never make choices frequently enough to allow effective competition to operate (that assumes they can actually make choices, of course, rather than just expressing preferences as actually happens in many cities). Moving off market dogma onto a more sophisticated understanding of the complexity of the education system is likely to be a good idea.

  2. Richard Vize says:

    Local government is increasingly clear that there is good practice but not best practice. In key areas such as care of older people and protection of vulnerable children, for example, practice is constantly evolving as the system gains new insights into causes, interventions and unintended consequences. Healthcare similarly evolves. But the public sector does not yet have as one of its core values the need to constantly measure, test, re-evaluate and develop. It is beginning to change, but too slowly.

  3. Carl Allen says:

    Now this is old stuff below, sadly …

    Best practice indicates that a good practice has been identified as particularly relevant at that point in time and place (and applied correctly).

    Were it otherwise, best practice would indicate one size fits all – or perhaps that the best practice is so perfect it leaves no room for improvement, which borders on silly. And best practice brings with it the notion of a jealously guarded monopoly that blocks the emergence of innovation and the new.

    Of course there can be better practices at that point in time and place, but since we need to make a choice of practice, and none of us are perfect or all-knowing all the time, then so be it for the choice made. The point is to recognise a better practice when it comes along and use it – and that is the best practice, i.e. seeking the better than the best.

    Evidence that it works is not evidence that it should be up-scaled. So we have tested, analysed and realised that it works well, and have the evidence neatly written up. It is time to upscale, duplicate or replicate. Yet the funder turns down the application to upscale, duplicate or replicate, and says that the evidence is ‘photocopying’.

    What the funder means is that the evidence presented is evidence of what worked at that time and place, i.e. no evidence was presented of why the upscaling, duplication or replication would work in another place and time – and for that we need to present a different type of evidence.

    An analogy is that of refining petroleum into its various products, which all have the same specifications. The thing is, petroleum, before processing in the oil refinery, can have varying impurities – such as different levels of sulphur – depending on where it was extracted. But the petroleum products after refining have the same specification. Now oil refineries, whatever their size or location, all do the same job of refining the petroleum, whatever the impurity. So the question of scale is not relevant. The question is one of processing a variable input and producing outputs which are the same. And this is the evidence the funder was talking about.
