Evaluating communication products – an example

I’ve written previously about the challenge of evaluating communication products – websites, brochures, videos and reports – rather than a communication campaign or programme as a whole.

I’ve just been involved in carrying out an evaluation of products for ACAPS, an NGO focused on assessment in humanitarian crises; as part of its mandate, it produces analytical products (mainly reports) on crisis situations globally. To carry out the evaluation, we undertook 40 interviews with users of their products and then analysed and categorised the types of use, satisfaction ratings and unmet needs. You can view the evaluation report here (pdf).

July 23, 2014

Six lessons from successful advocacy projects in the Global South

The Stanford Social Innovation Review has published an interesting blog post, “Lessons from successful advocacy projects in the Global South”.

The post lists three lessons for successful advocacy in the South:

1. Work plans are not holy writ: be prepared to adapt a project as it evolves.
2. For country-level advocacy, local knowledge is critical.
3. “Think globally, act locally” – but how local? (This concerns the ability of international NGOs to work within local contexts.)

I think these are all valid points. From my own experience of evaluating advocacy projects I would add three more lessons:

4. Effective advocacy often needs a combination of tactics: it may seem obvious, but advocacy that works typically draws on a diverse range of tactics to reach its goals, from lobbying meetings to public events to coalition-building.
5. Achieving results doesn’t require press coverage: much of the effective advocacy I’ve seen was done at the local level, where people worked closely with authorities to press their concerns; there was no need to seek press coverage on the issue (granted, it is needed in some cases).
6. Being focused never hurts: in all the advocacy evaluations I’ve been involved in, like this one or this one, the more specific and targeted the advocacy, the easier it is to see success. Broad goals may be ambitious and noble – and may lead to significant achievements – but it is harder to identify the successes related to them.

July 9, 2014

ALNAP’s first webinar on humanitarian evaluation – 26 June 2014

For those interested in humanitarian evaluation, ALNAP’s Humanitarian Evaluation Community of Practice is organising its first webinar:

The webinar will feature a presentation of the evaluation capacities framework from the popular ALNAP study Using evaluation for a change. It will also include a discussion with two members of the Community of Practice – Wartini Pramana (Canadian Red Cross) and Mikkel Nedergaard (Danish Refugee Council) – who have been exploring and working on these issues in their respective organisations and will share their experience and learning on this topic.
Presenters:
· Francesca Bonino, Research Fellow – Evaluation (ALNAP)
· Alistair Hallam, Director (Valid International)

Discussants:
· Wartini Pramana, Manager, Planning, Evaluation and Knowledge Management (Canadian Red Cross)
· Mikkel Nedergaard, Adviser, Monitoring and Evaluation (Danish Refugee Council)

Register here.

June 21, 2014

New online hub – learning from the IF campaign

Bond (the UK NGO membership body) has created an online hub of campaigning effectiveness resources based on the lessons and recommendations from the IF campaign.

More than 30 individuals have shared their expertise in seven areas of coalition campaigning to produce some 20 resources. They give top tips, reflections and ideas on topics ranging from digital campaigning to how to structure coalition campaigns. You’ll also find interesting campaigning tools to assist with common campaigning issues.

The “Inspiration” mini-cases are particularly interesting for those working on campaigning and seeking to learn what “works” for others.

June 11, 2014

Using video for an evaluation baseline

I’ve written before about using video for data collection and for reporting evaluation results – but I’ve just come across this interesting example of using video for a baseline, that is, to record the situation before the project starts.

Miki Tsukamoto of the International Federation of Red Cross and Red Crescent Societies explains on the AEA365 blog how they used this approach for a project in Uganda. A summarised version of the resulting video is below. They will return in 2017 to make an “endline” video – so stand by!

May 27, 2014

IF campaign – evaluation report

As regular readers will know, I’m particularly interested in campaign evaluation and have written before about campaign evaluations I have been involved with, on food justice and climate change.

So I’m always interested to read other campaign evaluations, and the just-published campaign evaluation (pdf) of the Enough Food for Everyone IF campaign on global hunger is a very comprehensive and interesting read.

May 15, 2014

New resource: Monitoring and evaluation of policy influence and advocacy

A comprehensive new working paper on advocacy evaluation has just been published by the UK-based ODI:

View the working paper here (pdf).

A short description:

Policy influence and advocacy are often seen as a way of creating sustainable change in international development. And the more funders and organisations move towards supporting projects which seek to influence policy, the greater the need for reliable ways to monitor and evaluate those activities. This report tackles the thorny area of monitoring and evaluating policy influence and advocacy projects. It reviews key debates in the area, including the tension between establishing contribution and attribution, and different ways of understanding causes.

To evaluate policy influence we first need to understand the processes by which policy changes. To this end, the paper presents a number of options and frameworks for understanding how policy influence happens. It then sets out and describes options for monitoring and evaluating policy influence and advocacy projects at four levels: strategy and direction; management and outputs; outcomes and impact; and understanding causes.

Finally, the paper presents six case studies of how real organisations have monitored or evaluated their policy influence or advocacy projects.

This paper will be useful to anyone implementing, evaluating, funding or designing policy influence and advocacy projects.


May 6, 2014
