The US-based Media Impact Funders has produced a ten-point list of elements of success for policy change (what I’ve labeled “advocacy”), reproduced below in this post.
Although it is written from a US perspective on influencing policy through advocacy, what struck me was how many of the points are relevant to advocacy done globally or in other countries and regions. For example:
Point 1: Solutions – when evaluating advocacy initiatives and talking with policy-makers, a common complaint I have heard is that advocacy is not “solution focused”, i.e. it makes valid points about the given issues (points that policy-makers are often also aware of) but doesn’t necessarily put forward possible solutions to these issues.
Point 3: Agility – the ability to be flexible is so important – to take advantage of opportunities that arise but were not necessarily included in the original advocacy plan. I saw this recently in an evaluation I carried out for Oxfam on global development policy, where they had enough flexibility to move resources as issues peaked and new opportunities emerged.
Point 5: Humanity – advocacy often focuses on the “facts”, but what can also make an impact is the “human factor”. One effective example of this was in the creation of the Arms Trade Treaty, where state representatives drafting the treaty were directly confronted by survivors of armed violence – certainly bringing a human face to the dry, legalistic treaty process and language.
p.s. I came out as a Constructivist Evaluator…
A new publication has been released by the Donor Committee for Enterprise Development providing practical advice on selecting sample sizes (pdf). The publication is particularly useful when considering sampling issues for online surveys, and it offers plenty of good advice and tips.
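As a rough illustration of the kind of calculation such guidance covers, here is a minimal sketch of a common approach – Cochran’s formula with a finite population correction. This is my own example, not taken from the DCED publication, and the defaults (95% confidence, ±5% margin, a conservative proportion of 0.5) are assumptions you would adjust for your own survey:

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Estimate the sample size needed to estimate a proportion.

    Uses Cochran's formula, then applies a finite population
    correction. z=1.96 corresponds to 95% confidence; p=0.5 is the
    most conservative assumption about the underlying proportion.
    """
    # Cochran's formula for an (effectively) infinite population
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    # Finite population correction: shrinks the required sample
    # when the population itself is small
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

print(sample_size(1000))  # ~278 respondents for N=1,000 at 95% / ±5%
print(sample_size(100))   # ~80 respondents for N=100
```

Note how the correction matters most for small populations – with only 100 people you still need to survey most of them, which is one reason response rates loom so large in online surveys.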
In communicating evaluation findings, the key product for doing so – the evaluation report – often poses challenges. Evaluation reports frequently suffer from being long, wordy and just plain boring! Therefore, we have to find new ways to communicate evaluation findings.
One way I find interesting is the use of a visual summary of findings – one that summarises the evaluation findings graphically and in a limited number of pages. I recently had the opportunity to use this format for an evaluation for Oxfam, as seen in the image in this post. You can view the full summary by clicking on the image (it links to a PDF file).
It’s a very comprehensive guide and I’d add it to my previous list of best resources for advocacy evaluation.
DFID have released a new paper on the practice of beneficiary feedback in evaluation (pdf).
The paper highlights five key messages (listed below), a main point being that beneficiaries are often seen only as providers of data and aren’t given a broader role in the evaluation process – a point I can confirm from having been involved in many evaluations.
Rather ironically, the DFID study on beneficiary feedback includes no feedback from beneficiaries on the study…
Key Message 1: Lack of definitional clarity has led to a situation where the term beneficiary feedback is subject to vastly differing interpretations and levels of ambition within evaluation.
Key Message 2: There is a shared, normative value that it is important to hear from those who are affected by an intervention about their experiences. However, in practice this has been translated into beneficiary as data provider, rather than beneficiary as having a role to play in design, data validation and analysis and dissemination and communication.
Key Message 3: It is possible to adopt a meaningful, appropriate and robust approach to beneficiary feedback at key stages of the evaluation process, if not in all of them.
Key Message 4: It is recommended that a minimum standard is put in place. This minimum standard would require that evaluation commissioners and evaluators give due consideration to applying a beneficiary feedback approach at each of the four key stages of the evaluation process.
Key Message 5: A beneficiary feedback approach to evaluation does not in any way negate the need to give due consideration to the best combination of methods for collecting reliable data from beneficiaries and sourcing evidence from other sources.