Evaluating communication products – an example
I’ve written previously about the challenge of evaluating communication products – websites, brochures, videos and reports – rather than a communication campaign or programme as a whole.
I’ve just been involved in an evaluation of products for ACAPS, an NGO focused on assessment in humanitarian crises; as part of its mandate it produces analytical products (mainly reports) on crisis situations globally. For the evaluation, we conducted 40 interviews with users of their products and then analysed and categorised the type of use, satisfaction ratings and unmet needs. You can view the evaluation report here (pdf)>>.
Six lessons from successful advocacy projects in the Global South
The Stanford Social Innovation Review has posted an interesting blog post on “Lessons from successful advocacy projects in the Global South“.
The post lists three lessons for successful advocacy in the South:
1. Work plans are not holy writ – projects need to adapt as they evolve.
2. For country-level advocacy, local knowledge is critical.
3. “Think globally, act locally” – but how local? (This concerns the ability of international NGOs to work within local contexts.)
I think these are all valid points. From my own experience of evaluating advocacy projects I would add three more lessons:
4. Effective advocacy often needs a combination of tactics: it may seem obvious, but advocacy that works often relies on a diversity of tactics to reach its goals, from lobbying meetings to public events to coalition-building.
5. Achieving results doesn’t mean press coverage: a lot of effective advocacy I’ve seen was done at the local level, where people worked closely with authorities in pressing their concerns; there wasn’t a need to seek press coverage on the issue (granted, it is needed in some cases).
6. Being focused never hurts: in all the advocacy evaluations I’ve been involved in, like this one or this one, the more specific and targeted the advocacy, the more visible its success. Broad goals may be ambitious and noble – and may lead to significant achievements – but it is harder to identify the successes related to them.
ALNAP’s first webinar on humanitarian evaluation – 26 June 2014
For those interested in humanitarian evaluation, ALNAP’s Humanitarian Evaluation Community of Practice is organising its first webinar:
The webinar will feature a presentation of the evaluation capacities framework from the popular ALNAP study Using evaluation for a change. It will also include a discussion with two members of the Community of Practice – Wartini Pramana (Canadian Red Cross) and Mikkel Nedergaard (Danish Refugee Council) – who have been exploring these issues in their respective organisations and will share their experience and learning on this topic.
Presenters:
· Francesca Bonino, Research Fellow – Evaluation (ALNAP)
· Alistair Hallam, Director (Valid International)
Discussants:
· Wartini Pramana, Manager, Planning, Evaluation and Knowledge Management (Canadian Red Cross)
· Mikkel Nedergaard, Adviser, Monitoring and Evaluation (Danish Refugee Council)
New online hub – learnings from the IF campaign
Bond (the UK NGO network) has created an online hub of campaigning-effectiveness resources based on the lessons and recommendations of the IF campaign.
More than 30 individuals have shared their expertise in 7 areas of coalition campaigning to produce some 20 resources. They offer top tips, reflections and ideas on topics ranging from digital campaigning to how to structure coalition campaigns. You’ll also find interesting campaigning tools to assist with common campaigning issues.
The “Inspiration” mini-cases are particularly interesting for those working on campaigning and seeking to learn what “works” for others.
Using video for evaluation baseline
I’ve written before about using video for data collection and reporting evaluation results – but I’ve just come across this interesting example of using video for a baseline – that is, to record the situation before the project starts.
Miki Tsukamoto of the International Federation of Red Cross and Red Crescent Societies explains on the AEA365 blog how they used this approach for a project in Uganda. A summarised version of the resulting video is below. They will return in 2017 to make an “endline” video – so stand by!
IF campaign – evaluation report
As regular readers will know, I’m particularly interested in campaign evaluation and have written before about campaign evaluations that I have been involved with, on food justice and climate change.
So I’m always interested to read other campaign evaluations, and just published is the campaign evaluation (pdf) of the Enough Food for Everyone IF campaign on global hunger. It’s a very comprehensive and interesting report.
New resource: Monitoring and evaluation of policy influence and advocacy
A very comprehensive new working paper on advocacy evaluation has just been published by the UK-based ODI:
View the working paper here (pdf)>>
A short description:
Policy influence and advocacy is often seen as a way of creating sustainable change in international development. And the more funders and organisations move towards supporting projects which seek to influence policy, the greater the need for reliable ways to monitor and evaluate those activities. This report tackles the thorny area of monitoring and evaluating policy influence and advocacy projects. It reviews key debates in the area, including the tension between establishing contribution and attribution and different ways of understanding causes.
To evaluate policy influence we first need to understand the processes by which policy changes. To this end, the paper presents a number of options and frameworks for understanding how policy influence happens. It then sets out and describes options for monitoring and evaluating policy influence and advocacy projects at four levels: strategy and direction; management and outputs; outcomes and impact; and understanding causes.
Finally the paper presents six case studies of how real organisations have monitored or evaluated their policy influence or advocacy projects.
This paper will be useful to anyone implementing, evaluating, funding or designing policy influence and advocacy projects.
How to transform evaluation findings into infographics
I wrote recently on using infographics for evaluation – and I’ve just come across an excellent post from Joitske Hulsebosch on the BetterEvaluation blog on how to transform evaluation findings into infographics, which also provides some hints on software you can use yourself. And I love this – an infographic from Elissa Schloesser on how to create infographics! (Click on it to see it bigger.)
Outcome mapping lab 2014, Tanzania, September 2014
Outcome mapping is an evaluation technique that is growing in use and interest. The Outcome Mapping Learning Community is hosting their third annual event this year in September in Dar es Salaam, Tanzania. The OM Lab 2014 is a three-day training and knowledge sharing event to explore the value Outcome Mapping can add to monitoring and evaluation in complex programmes.
Learn more about the three-day programme>> (pdf)

