Posts filed under ‘Advocacy evaluation’
Two New Advocacy Evaluation Tools
Here are two new advocacy evaluation tools from the Center for Evaluation Innovation:
The Advocacy Strategy Framework (pdf): presents a simple one-page tool for thinking about theories of change that underlie policy advocacy strategies. Check out the “interim outcomes and indicators” on the last page – very good range of advocacy outcomes/indicators.
Four Tools for Assessing Grantee Contribution to Advocacy Efforts (pdf): offers funders practical guidance on how to assess a grantee’s contribution to advocacy outcomes. The four tools are:
1. A question bank
2. Structured grantee reporting
3. An external partner interview guide
4. Contribution analysis
Advocacy evaluation – new methods
The latest edition of Evaluation Connections (pdf), the newsletter of the European Evaluation Society, has an interesting article, “Advocacy evaluation: lessons from Brazil (and the internet)”, by William N. Faulkner.
The article describes some new methods the evaluation team used, such as Sankey diagrams and mind mapping for qualitative analysis.
I’ve reproduced the Sankey diagram from the article in this post; it shows the flow from “outputs” to “outcomes” for the advocacy – quite a good visualisation of the information.
View the article in this pdf, go to page 7>>
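As an aside, for anyone wondering how such a diagram could be produced, below is a minimal sketch in Python using the plotly library. This is not the evaluation team’s code; the output/outcome labels and flow values are invented placeholders purely to show the mechanics.

```python
# Illustrative Sankey diagram of advocacy "outputs" flowing into "outcomes".
# Labels and values are made up, not data from the article.
import plotly.graph_objects as go

labels = [
    "Policy briefs",         # output (index 0)
    "Stakeholder meetings",  # output (index 1)
    "Media coverage",        # outcome (index 2)
    "Policy commitments",    # outcome (index 3)
]

fig = go.Figure(go.Sankey(
    node=dict(label=labels, pad=20, thickness=15),
    link=dict(
        source=[0, 0, 1, 1],  # index of the output each flow starts from
        target=[2, 3, 2, 3],  # index of the outcome it feeds into
        value=[4, 2, 1, 5],   # width of each flow, e.g. number of coded instances
    ),
))
fig.update_layout(title_text="Outputs to outcomes (illustrative data)")
fig.show()
```

The width of each band then gives a quick visual sense of which outputs contributed most to which outcomes.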
Addressing causation in humanitarian evaluation
A new discussion paper has been released by ALNAP entitled “Addressing causation in humanitarian evaluation: A discussion on designs, approaches and examples”.
The paper discusses possible evaluation designs and approaches that can help provide credible answers to questions of causation, using examples from Oxfam, WFP, UNHCR and NRC.
The paper cites an evaluation I worked on with NRC in the realm of advocacy evaluation.
Advocacy evaluation using contribution analysis
A challenge of advocacy evaluation is analysing and interpreting data and information in a systematic and rigorous manner. For a recent advocacy evaluation I carried out with my colleague Patricia for the Norwegian Refugee Council, we used a simplified contribution analysis to assist us with this task.
In carrying out this analysis, we asked four questions:
1) What were the policy changes desired by NRC (“the asks”)?
2) What were the reactions of targeted institutions, individuals and allies to these asks?
3) What was the level/significance of policy change (if any)?
4) What was the role of NRC in any change seen?
We then summarised this in a table listing some 30 (!) policy asks of NRC’s advocacy; here is an extract:
| Ask | Reaction | Change seen | Role of NRC advocacy |
| --- | --- | --- | --- |
| UNICEF and partners need to adapt RRMP to include assessment of protection needs. | UNICEF deployed protection specialist for six months to work with RRMP. | High | High |
| Organisations need to ensure that pressure to move quickly does not marginalize commitment to longer-term work with more impact. | This and broader thinking of report taken on-board in creation of DRC DMG network. | Medium | Medium (NRC advocacy was one of many influences on DMG) |
View the full report here (pdf) (see annex 1 for the contribution analysis table, page 27)>>
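As a side note, for anyone who prefers to keep this kind of table in a form that can be filtered and tallied, here is a minimal sketch using pandas. The two rows simply mirror the extract above, and the column names are my own shorthand rather than anything from the NRC report.

```python
# Illustrative only: a contribution analysis table kept as a pandas DataFrame.
# Rows paraphrase the extract above; ratings use the same High/Medium/Low scale.
import pandas as pd

asks = pd.DataFrame([
    {
        "ask": "Adapt RRMP to include assessment of protection needs",
        "reaction": "UNICEF deployed a protection specialist for six months",
        "change_seen": "High",
        "role_of_nrc": "High",
    },
    {
        "ask": "Ensure pressure to move quickly does not marginalise longer-term work",
        "reaction": "Taken on board in the creation of the DRC DMG network",
        "change_seen": "Medium",
        "role_of_nrc": "Medium",
    },
])

# With all ~30 asks entered, a quick cross-tabulation shows how many asks
# saw significant change and how strong NRC's contribution was judged to be.
print(pd.crosstab(asks["change_seen"], asks["role_of_nrc"]))
```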
New resource: the value iceberg: weighing the benefits of advocacy and campaigning
Rhonda Schlangen and Jim Coe (independent evaluation consultants) have just published a very interesting paper “The value iceberg: weighing the benefits of advocacy and campaigning” on the BetterEvaluation website.
The paper looks at how concepts of ‘value’ and ‘results’ are being applied to advocacy and campaigning and presents some alternative strategies for assessing advocacy. You can see the “value iceberg” below.

Six lessons from successful advocacy projects in the Global South
The Stanford Social Innovation Review has posted an interesting blog post on “Lessons from successful advocacy projects in the Global South”.
The post lists three lessons for successful advocacy in the South:
1. Work plans are not holy writ – projects need to adapt as they evolve
2. For country-level advocacy, local knowledge is critical.
3. “Think globally, act locally” – but how local? (the ability of international NGOs to work within local contexts)
I think these are all valid points. From my own experience of evaluating advocacy projects I would add three more lessons:
4. Effective advocacy often needs a combination of tactics: it may seem obvious, but advocacy that works often relies on a diversity of tactics to reach its goals, from lobbying meetings to public events to coalition-building.
5. Achieving results doesn’t require press coverage: a lot of effective advocacy I’ve seen was done at the local level, where people worked closely with authorities in pressing their concerns; there wasn’t a need to seek press coverage on the issue (granted, it is needed in some cases).
6. Being focused never hurts: in all the advocacy evaluations I’ve been involved in, like this one or this one, the more specific and targeted the advocacy, the more clearly success can be seen. Broad goals may be ambitious and noble – and may lead to significant achievements – but it is harder to identify the successes related to them.
New resource: Monitoring and evaluation of policy influence and advocacy
A very comprehensive new working paper on advocacy evaluation has just been published by the UK-based ODI:
View the working paper here (pdf)>>
A short description:
Policy influence and advocacy is often seen as a way of creating sustainable change in international development. And the more funders and organisations move towards supporting projects which seek to influence policy, the greater the need for reliable ways to monitor and evaluate those activities. This report tackles the thorny area of monitoring and evaluating policy influence and advocacy projects. It reviews key debates in the area, including the tension between establishing contribution and attribution and different ways of understanding causes.
To evaluate policy influence we first need to understand the processes by which policy changes. To this end, the paper presents a number of options and frameworks for understanding how policy influence happens. It then sets out and describes options for monitoring and evaluating policy influence and advocacy projects at four levels: strategy and direction; management and outputs; outcomes and impact; and understanding causes.
Finally, the paper presents six case studies of how real organisations have monitored or evaluated their policy influence or advocacy projects.
This paper will be useful to anyone implementing, evaluating, funding or designing policy influence and advocacy projects.
Insights into global advocacy: Oxfam’s GROW campaign
I recently spoke at the Graduate Institute in Geneva to the students of the Certificate in Advocacy in International Affairs, presenting a case study on Oxfam’s GROW campaign and drawing insights on global advocacy campaigns. My presentation is below:
Advocacy and Policy Influencing Blended Learning programme – March to April 2014
Here is an interesting course from INTRAC on advocacy and policy influencing – online blended learning that can be taken from anywhere in the world:
“Is developing and implementing an advocacy strategy critical to success in your project or programme? Do your staff and partners need support to achieve your advocacy objectives? In this capacity building programme, you will have the opportunity to develop and troubleshoot the implementation of an advocacy strategy as well as build your knowledge and confidence.
This programme will give you the knowledge and skills to influence policy and practice in your own context. You will learn skills to help you plan and deliver an effective advocacy strategy; enhance your ability to lobby decision makers; and gain confidence in the ways in which you relate to different audiences. You will also have the skills to analyse power dynamics and choose your advocacy activities so they have maximum impact.”
Evaluation report – Oxfam’s GROW campaign
For readers interested in campaign evaluation, Oxfam has just published a mid-point external evaluation report (pdf) of their GROW campaign, for which I was part of the evaluation team.
Often organisations will not make their campaign evaluations publicly available – but Oxfam has a progressive policy on this, so I’m happy to be able to share the report with all interested…
The GROW campaign set out in 2011 to tackle food justice and build a better food system. Challenging to evaluate, the GROW campaign is broad and diverse, operating at national, regional and international levels across four thematic areas – land, investment in small-scale agriculture, climate change and food price volatility.
In our evaluation report we look at the initial Theory of Change and endeavour to track the changes seen over the first two years and the possible intervening factors, positive and negative, using a variety of methods including five case studies (found at the end of the report).
As the campaign had a broad set of activities at a range of levels, the challenge for the evaluation team was to capture all significant changes seen to date and draw out learnings for the future.
Oxfam has also produced a summary infographic that you can view below.
View the executive summary (pdf)>>
The executive summary is also available in French (pdf) and Spanish (pdf) – and you can also read Oxfam’s management response to the evaluation (pdf).
