I’ve written previously about the challenge of evaluating communication products – websites, brochures, videos and reports – rather than a communication campaign or programme as a whole.
I’ve just been involved in carrying out an evaluation of products for ACAPS, an NGO focused on assessment in humanitarian crises – as part of their mandate they produce analytical products (mainly reports) on crisis situations globally. To carry out the evaluation, we undertook 40 interviews with users of their products and then analysed and categorised the type of use, satisfaction ratings and unmet needs. You can view the evaluation report here (pdf).
The Stanford Social Innovation Review has posted an interesting blog post on “Lessons from successful advocacy projects in the Global South”.
The post lists three lessons for successful advocacy in the South:
1. Work plans are not holy writ – you need to adapt a project as it evolves.
2. For country-level advocacy, local knowledge is critical.
3. “Think globally, act locally” – but how local? (This concerns the ability of international NGOs to work within local contexts.)
I think these are all valid points. From my own experience of evaluating advocacy projects I would add three more lessons:
4. Effective advocacy often needs a combination of tactics: it may seem obvious, but advocacy that works often relies on a diversity of tactics to reach its goals, from lobbying meetings to public events to coalition-building.
5. Achieving results doesn’t mean press coverage: a lot of effective advocacy I’ve seen was done at the local level, where people worked closely with authorities in pressing their concerns; there wasn’t a need to seek press coverage on the issue (granted, it is needed in some cases).
6. Being focused never hurts: in all the advocacy evaluations I’ve been involved in, like this one or this one, the more specific and targeted the advocacy, the easier it is to see success. Broad goals may be ambitious and noble – and may lead to significant achievements – but it is harder to identify successes related to them.
For those interested in humanitarian evaluation, ALNAP’s Humanitarian Evaluation Community of Practice is organising its first webinar:
The webinar will feature a presentation of the evaluation capacities framework from the popular ALNAP study Using Evaluation for a Change. It will also include a discussion with two members of the Community of Practice – Wartini Pramana (Canadian Red Cross) and Mikkel Nedergaard (Danish Refugee Council) – who have been exploring and working on these issues in their respective organisations and will share their experience and learning on this topic.
· Francesca Bonino, Research Fellow – Evaluation (ALNAP)
· Alistair Hallam, Director (Valid International)
· Wartini Pramana, Manager, Planning, Evaluation and Knowledge Management (Canadian Red Cross)
· Mikkel Nedergaard, Adviser, Monitoring and Evaluation (Danish Refugee Council)
More than 30 individuals have shared their expertise in 7 areas of coalition campaigning to produce some 20 resources. They give top tips, reflections and ideas on topics ranging from digital campaigning to how to structure coalition campaigns. You’ll also find interesting campaigning tools to assist with common campaigning issues.
The “Inspiration” mini-cases are particularly interesting for those working on campaigning and seeking to learn what “works” for others.
I’ve written before about using video for data collection and for reporting evaluation results – but I’ve just come across this interesting example of using video for a baseline, that is, to record the situation before a project starts.
Miki Tsukamoto of the International Federation of Red Cross and Red Crescent Societies explains this approach, which they used for a project in Uganda, on the AEA365 blog. A summarised version of the resulting video is found below. They will return in 2017 to make an “endline” video – so stand by!
I’m always interested to read other campaign evaluations, and the campaign evaluation (pdf) of the Enough Food for Everyone IF campaign on global hunger has just been published. It’s a very comprehensive report and interesting to read.
A very comprehensive new working paper on advocacy evaluation has just been published by the UK-based ODI:
A short description:
Policy influence and advocacy are often seen as ways of creating sustainable change in international development. And the more funders and organisations move towards supporting projects which seek to influence policy, the greater the need for reliable ways to monitor and evaluate those activities. This report tackles the thorny area of monitoring and evaluating policy influence and advocacy projects. It reviews key debates in the area, including the tension between establishing contribution and attribution, and different ways of understanding causes.
To evaluate policy influence we first need to understand the processes by which policy changes. To this end, the paper presents a number of options and frameworks for understanding how policy influence happens. It then sets out and describes options for monitoring and evaluating policy influence and advocacy projects at four levels: strategy and direction; management and outputs; outcomes and impact; and understanding causes.
Finally, the paper presents six case studies of how real organisations have monitored or evaluated their policy influence or advocacy projects.
This paper will be useful to anyone implementing, evaluating, funding or designing policy influence and advocacy projects.