
Randomized Controlled Trials and Agile Government

January 2, 2013


Ben Goldacre (Mr. ‘Bad Science’) has made a splash with his recent proposals that Government policy should be developed based on evidence, rather than just political conviction. His argument is that Randomized Controlled Trials (RCTs) are the best way of determining whether a policy is working and that they should be routinely used to test the effectiveness of public policy interventions.1
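
As a toy illustration of the mechanism an RCT relies on – random assignment, so that the control group estimates what would have happened anyway – here is a minimal Python sketch. The outcome model, group sizes and effect size are invented for illustration; none of this is taken from the Cabinet Office paper.

```python
# A minimal sketch of the logic behind a randomized controlled trial (RCT).
# All numbers and names here are invented for illustration only.
import random
import statistics

random.seed(42)

# 1. Take a pool of eligible units (people, schools, job centres...).
units = list(range(1000))

# 2. Randomize: assignment is the *only* systematic difference between
#    groups, so the control group estimates the counterfactual.
random.shuffle(units)
treatment, control = units[:500], units[500:]

# 3. Simulate an outcome: baseline noise plus a small true effect (+0.3)
#    for the treated group.
def outcome(unit, treated):
    return random.gauss(0.0, 1.0) + (0.3 if treated else 0.0)

treated_outcomes = [outcome(u, True) for u in treatment]
control_outcomes = [outcome(u, False) for u in control]

# 4. The estimated policy effect is the difference in group means.
effect = statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)
print(f"Estimated effect: {effect:.2f} (true effect: 0.30)")
```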

Ben Goldacre – says that government policies should be evidence based…

I like what Ben is saying. In my latest book I said:

“The central tenet of the agile government approach is that we must be scientists. When we start a project we have a hypothesis that the outcome will be beneficial. We must test that hypothesis as the project progresses. Regular delivery of testable product provides the basis for ensuring that our projects are on the right track.”2

Brian Wernham – says that government projects should be evidence based…

So I started to look for evidence on the effectiveness of trialing of policy to discuss in this blog. I examined all the National Audit Office (NAO) reports on a sample department (HM Revenue and Customs) published over the last three years, looking for any correlation between each report’s conclusions on ‘Value for Money’ (VFM) and the existence (or not) of trialing and piloting. The evidence from this sample is very patchy.

Two reports unequivocally stated that VFM had not been achieved, and that any trialing that had taken place had not been adequate – either because cost measurements had not been adequate, or because any ‘before and after’ comparison was unsafe since no ‘counterfactual’ control data had been collected.3 So far so good. I was hoping that the other, less critical reports would show some correlation between good trialing and good VFM. Unfortunately not. In some cases there were well-designed and tracked trials, in others none at all.4 Patchy evidence indeed, and not enough to satisfy either me or Mr. Goldacre…
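
To see why a ‘before and after’ comparison without counterfactual control data is unsafe, consider a toy worked example (the numbers are mine, not the NAO’s): if costs were falling anyway, a naive comparison credits the whole fall to the policy.

```python
# Why 'before and after' without a control group is unsafe: a toy example
# with invented numbers. Suppose average processing cost falls anyway due
# to a background trend (the economy, other initiatives, and so on).
pilot_before, pilot_after = 100.0, 85.0      # sites where the policy ran
control_before, control_after = 100.0, 92.0  # comparable sites without it

# A naive before/after estimate credits the whole change to the policy:
naive_effect = pilot_after - pilot_before            # -15.0

# A difference-in-differences estimate subtracts the background trend
# observed in the control (counterfactual) sites:
did_effect = (pilot_after - pilot_before) - (control_after - control_before)
print(naive_effect, did_effect)  # -15.0 vs -7.0: under half the claimed saving
```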

A note of caution needs to be exercised here – the NAO’s role is not to criticize Ministers’ policies, but to focus on the economy and efficiency of their implementation. Some aspects of the effectiveness of interventions are covered, but the NAO is circumspect in criticizing policies per se – only their implementation. Of course, there is an interaction between the decision to implement a policy and the need for feedback and improvement to make it effective…

Anyhow, I did some more digging around in the NAO archive to find advice on policy and trialing, and I came across this report from 2001. Yes – that’s over 10 years ago.5

The eleven-year-old NAO report – its recommendations still not implemented…

This venerable report places a great deal of emphasis on getting and analyzing evidence before committing to policies:

“Experimentation with different options through pilots and trials as part of policy development and implementation allows lessons to be learned, and variation and flexibility to be introduced into policies where appropriate. Examining evidence to understand the problem, including why previous policy solutions failed.”

The report warned that policy-making guidance materials create the illusion of “a structured, logical, methodical process that does not reflect reality.” In other words, policy-making should not be considered a ‘waterfall’ process of design, build, implement, but should be more iterative – using the ‘Agile Government’ approach.

A good business change project is one that not only balances economies of scale against the risks of ‘big bang’, but also recognizes the need for feedback from real-life implementation to drive changes to objectives. Such thinking is often termed empirical process control. Its application to complex business changes came out of Shewhart’s work on the continual improvement of quality in manufacturing processes. He argued that more use should be made of data about the products to adapt and improve processes. This idea was adopted by the Japanese in the search for improvements to their recovering industries after the Second World War, and it was later popularized by Deming as a four-step PDSA cycle.6

Deming’s Plan, Do, Study, Act (PDSA) Model

A ‘waterfall’ lifecycle neglects the importance of feedback and replanning. It assumes that if enough planning is done upfront, then it will never be necessary to deviate from that perfect plan. This is the defined process control model. The Deming PDSA model is an empirical process control model – it emphasizes the need to change plans regularly using an evidence-based approach. To take full advantage of this theory, we must recognize that:

•    Only an immediate project plan is required in detail – just enough to allow work to proceed to a point where evidence can be gathered on how effective progress is in real-life

•    Evidence must be collected while carrying out tasks – on the effort consumed, the qualities of the outputs, and also on the benefits that the technical solution brings

•    Effort needs to be put into studying lessons learned – could the work have been carried out more efficiently? Were any recurring problems found during testing? Did the resultant business change produce the intended benefits? Were there any disbenefits?

•    Decision makers must spend time considering the evidence.

Most importantly, policy makers need to accept that initial plans will inevitably need to change. Early feedback is needed to incrementally improve the initial overview plans. Techniques such as prototyping, piloting of the solution, parallel running alongside any existing processes, and phased implementation should all be used to provide feedback on the concepts that underlie a project.
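
As a rough sketch of what such an empirical, PDSA-style loop looks like in contrast to executing one fixed plan, here is a self-contained toy in Python. The scenario (tuning a staffing level), the benefit model, and every number in it are invented purely for illustration.

```python
# A minimal, self-contained sketch of an empirical (PDSA-style) control
# loop: each cycle delivers a small increment, measures it, and revises
# the plan, rather than executing one fixed upfront plan.
import random

random.seed(1)

def deliver_and_measure(staffing_level):
    """Do + Study: run one small increment and return the measured benefit.
    Toy model: benefit peaks at a staffing level of 7, plus noise."""
    return 1.0 - abs(staffing_level - 7) * 0.1 + random.gauss(0, 0.02)

plan = 3  # Plan: initial hypothesis - 3 staff per office will be enough
best_benefit = deliver_and_measure(plan)

for cycle in range(8):
    candidate = plan + 1                      # Plan: next small change
    benefit = deliver_and_measure(candidate)  # Do + Study: collect evidence
    if benefit > best_benefit:                # Act: keep what the data supports
        plan, best_benefit = candidate, benefit
    else:
        break  # evidence says the last change made things worse; stop

print(f"Plan after {cycle + 1} cycles: staffing level {plan}")
```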

Great leaders plan for data to be collected, make enough time to analyze it, and ensure a blame-free culture in which plans can be changed easily. This concept is at the heart of the Agile approach to Government, and Ben Goldacre’s ideas fit neatly alongside it.

Comment below…

References:

1 Haynes, Laura, Owain Service, Ben Goldacre, and David Torgerson, “Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials”, UK Cabinet Office, 2012;
and Goldacre, Ben, “Bad Evidence”, BBC Radio 4, January 1, 2013, www.bbc.co.uk/programmes/b01phhb9

2 See http://bit.ly/UjFu8n

3 NAO “Customer service performance” and “Pacesetter” reports, both 2012.

4 NAO “Tax avoidance”, “The Compliance and Enforcement Programme”, “Reducing Costs”, “Managing civil tax investigations”, “Engaging with tax agents” and “The efficiency of NI administration”, all published between 2010 and 2012.

5 UK NAO, “Modern Policy Making: Ensuring policies deliver value for money”, (HC 289 2001-02)

6 Imai, M., Kaizen: The Key to Japan’s Competitive Success, Compañía Editorial Continental, 1991

© Brian Wernham 2012 CC BY-NC-ND



7 Comments
  1. “The central tenet of the agile government approach is that we must be scientists.” Brian, I entirely agree. Even more alarming is the IfG statement that UK Government is “A less than intelligent customer. It has become unable to judge objectively whether it is getting a good deal from suppliers”. What is lacking is any benchmark measure of suppliers to UK Government, let alone of Agile versus traditional developments. QSM are benchmarking Agile developments in the States and Europe to enable the benefits to be quantified, understood and costed. Our most recent Agile benchmark results will be presented at the OOP 2013 conference in Munich. Note we do not attempt to quantify business value; that is not our game. The paper I copied you on regarding Agile benchmarking also outlines that this scientific technique applies to all categories of software. For the UK that means the MoD as well as the admin systems across Government.

  2. Jim,

    Thanks for the paper – readers of this blog can find a copy here:
    http://sqgne.org/presentations/2012-13/Mah-Oct-2012.pdf

    Where does the quote “We don’t need no stinkin’ metrics” come from? Was it Jim Highsmith who said it?

    Brian

    • Apparently it is a quote from Jim Highsmith. If you would like to put up a copy of my paper on Agile Benchmarking, please let me know and I will forward it.

      • Please do send me the paper. Also please investigate when and where Jim said this – I would like to see the context of his remark!

      • Brian – reply below from my colleague Mike Mah re Jim Highsmith’s comments. Mike is giving a keynote speech at OOP 2013 in Munich at the end of January, with updates from benchmarking recent Agile developments.

        “Haha! Yes, it’s a paraphrase of a panel debate from the Cutter Summit Conference held in Boston MA when Jim was the Director of the Agile Practice.”

        Michael Mah
        Managing Partner
        QSM Associates Inc.
        http://www.qsma.com
        blog: http://www.optimalfriction.com

        Benchmark Practice Director
        Cutter Consortium
        http://www.cutter.com

  3. Update:
    A very interesting debate is emerging on Ben Goldacre’s new book ‘Bad Pharma’ – are the results of 50% of medical trials really withheld?
    http://www.placebocontrol.com/2013/02/the-worlds-worst-coin-trick.html

Trackbacks & Pingbacks

  1. Potato logic: News media repeat zombie statistics on food wastage « brianwernham
