KT Encounters: What counts as evidence?

5 April 2018

The factors that influence policy development are numerous and complex, ranging from scientific evidence to public opinion, financial restrictions and the social and moral factors which guide individual decision-making.

Here, Dr. Paul Cairney, keynote speaker at the 4th Fuse International Conference on Knowledge Exchange in Public Health, shares his thoughts on how different types of evidence are valued, and the challenges of knowing what type of evidence to use in the messy process of policy-making.

What counts as evidence? Our choice could have a profound effect on policy-making

Since 2016, the topic I’m most often asked to present on is a variant of the question, ‘why don’t policy-makers listen to your evidence?’

There are many contributing factors, but one key answer is that policy-makers and researchers, across all levels and disciplines, have many different ideas about what counts as good evidence.

For example, many scientists follow a hierarchy of scientific evidence. At the top of this hierarchy sit the randomised controlled trial (RCT) and the systematic review of RCTs, with expertise much further down the list, followed by practitioner experience and service user feedback near the bottom.

Yet, most policy-makers – and many academics – prefer a wider range of sources of information, combining their own experience with information ranging from peer reviewed scientific evidence and the ‘grey’ literature, to public opinion and feedback from consultation.

While it may be possible to persuade some central government departments or agencies to privilege scientific evidence in decision-making, they are driven by multiple external factors and priorities, such as a desire to foster consensus-driven policy-making or a shift from a centralised approach to tailored local practices.

Consequently, policy-makers can often only recommend rather than impose a uniform evidence-based approach. If local governments or communities choose a different policy solution, the same evidence may have more or less effect in different parts of government.

Three models of evidence-based policy-making

My research on evidence use in policy – in Scotland and the UK – suggests that governments make many choices simultaneously, including decisions about what counts as ‘good’ evidence and what is ‘good’ governance.

When governments decide whether to implement or scale up an intervention, their choices typically fall into one of three general approaches (described in Table 1), with varying adherence to the hierarchy of evidence noted above and varying degrees of centralised policy-making.

Table 1: Three approaches to evidence-based policy-making

Approach 1: Implementation science
- The big picture: Interventions are highly regarded when backed by empirical data from international RCTs. The approach has relatively high status in health departments, often when addressing issues of health, social care, and social work.
- How to gather evidence of effectiveness and best practice: With reference to a hierarchy of evidence, generally with systematic reviews and RCTs at the top.
- How to 'scale up' from evidence of best practice: Introduce the same specific model in each area. Require fidelity to the intervention to allow you to measure its effectiveness with RCTs.
- What aim to prioritise: The correct administration of the same intervention / active ingredient.

Approach 2: Storytelling
- The big picture: Practitioners tell stories of policy experiences, and invite other people to learn from them. Policy is driven by governance principles based on co-producing policy with users (e.g. residents of care homes).
- How to gather evidence of effectiveness and best practice: With reference to principles of good practice, and practitioner and service user testimony. No hierarchy of evidence.
- How to 'scale up' from evidence of best practice: Tell stories based on your experience, and invite other people to learn from them.
- What aim to prioritise: Key principles, such as localism and respect for service user experiences.

Approach 3: Improvement method
- The big picture: Central governments identify promising evidence, train practitioners to use the improvement method, and experiment with local interventions. Discussion about how to 'scale up' policy combines personal reflection and gathering evidence of success.
- How to gather evidence of effectiveness and best practice: Identify promising interventions, based on a mix of evidence. Encourage trained practitioners to adapt interventions to their area, and gather comparable data on their experience.
- How to 'scale up' from evidence of best practice: A simple message to practitioners: if your practice is working, keep doing it; if it is working better elsewhere, consider learning from their experience.
- What aim to prioritise: Allow local practitioners to experiment and decide how best to turn evidence into practice.

At one extreme you have consistent centralisation, which is relatively conducive to the roll-out of uniform policy interventions driven by evidence from RCTs. At the other, you have routine delegation of policy to local communities, service users, and practitioners, which is conducive to sharing evidence personally via storytelling. This method prioritises experience and feedback: people learn from each other, then decide if any elements of that learning are applicable to their own experience and aims. For example, My Home Life – a UK initiative that promotes quality of life in care homes for older people – developed as a way to share lessons and best practice, and to co-create new ways of working to better meet the needs of people in care homes.

Between these two extremes lie many ways to combine evidence and policy, including compromise models that pair pragmatic delegation with training to encourage the systematic use of evidence. For example, the improvement method acknowledges both the benefits of structured learning and more centralised data gathering, and the limitations of an RCT-driven uniform approach. This approach is used by the Early Years Collaborative in Scotland, where the Institute for Healthcare Improvement model is used to train practitioners and encourage them to experiment with interventions to improve the lives of young children.

Of course, the real world does not map simply onto three ideal-types, nor do governments make consistent choices to deliver one approach. Governments have used the phrase 'let a thousand flowers bloom' to symbolise a desire to entertain many models of policy-making. The challenge for researchers, and for all those wanting to influence policy, is knowing what type of evidence to apply in a system of seemingly inconsistent and contradictory approaches.

The moral of this story

Consequently, we end with a two-fold moral to reflect the ever-present need to make political choices on the best forms of evidence and governance in a messy policy-making system.

  1. The first moral is conceptual: it does not make sense simply to call for 'evidence-based' policy and policy-making. The choice to prioritise certain forms of evidence is political, and it is linked inextricably to our choice of governance style.
  2. The second is cautionary: the policy process is too messy and unpredictable to allow policy-makers to produce consistent choices from the centre. Instead, expect to find – and adapt to – what seem to be contradictory choices.

In my presentation to the Fuse conference, I discuss in more depth the policy theories which help explain why the policy process can seem so messy and unpredictable that a simple, consistent 'evidence-based' choice becomes nearly impossible. That presentation will be further supported by a new special issue of Policy & Politics called 'Practical lessons from policy theories'.


The opinions expressed in this blog post are those of the author and do not necessarily reflect the views of the Michael Smith Foundation for Health Research.