New resources on advocacy evaluation
Here are some new resources on evaluating advocacy campaigns:
Practical Guide to Advocacy Evaluation from Innovation Network (pdf) >>
For further resources and information on advocacy and campaign evaluation, please consult past posts on these subjects.
Online training course on results-based Monitoring and Evaluation
Here’s an interesting online course offered five times in 2010:
“Results-based Monitoring and Evaluation
This 10-hour live online course will enable you to prepare the ground for undertaking monitoring and evaluation and analysing the results. Results-Based Monitoring and Evaluation will help participants understand the key components of effective M&E in order to improve management performance and evidence-based decision making. The course explores how to establish the necessary framework for good M&E based on clarifying stakeholders, bringing the Logical Framework up to date, and generating a set of criteria against which to measure progress.”
Note: this blog has no commercial association with the course organisers – it just seems an interesting course!
A new evaluation method: The Evaluation Café
We are always on the lookout for different methods and approaches for evaluation. Here is a new method that we haven’t come across before: “the evaluation café”.
Following is a brief description:
The Evaluation Café is a method for group facilitation that allows stakeholders of a project or programme to evaluate its impact in a brief, informal session. The purpose of the Evaluation Café is to build and document stakeholders’ views on success and impacts after a planned activity.
More resources on network mapping
As I’ve written about before, I’m very interested in how network mapping can be used in evaluation.
Here are two excellent resources for people wanting to learn more about this research technique:
1) A training course conducted by Steeve Ebener of WHO on “Social Network Analysis, Mapping social relations”. You can view the training slides for eight sessions – and there are some really excellent examples of how network mapping can be used.
2) A manual, “Network Mapping as a Diagnostic Tool” by Louise Clark (pdf): a “how to” guide on network mapping and an explanation of how to use UCINET, a popular network mapping software package (I use it too – it’s the best I’ve found).
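The manual itself covers UCINET, but for readers who prefer open-source tools, here is a rough sketch of the same kind of analysis in Python using the networkx library. To be clear, this is my own illustration, not from the manual – the organisations and ties below are entirely invented:

```python
# A minimal network mapping sketch using networkx (an open-source
# alternative to UCINET). The organisations and ties are invented
# purely for illustration.
import networkx as nx

# Each tuple is a reported information-sharing tie between two actors
ties = [
    ("Ministry", "NGO A"),
    ("Ministry", "NGO B"),
    ("NGO A", "NGO B"),
    ("NGO B", "Donor"),
    ("Donor", "Research Institute"),
]

G = nx.Graph()
G.add_edges_from(ties)

# Degree centrality: who has the most direct connections?
for actor, score in sorted(nx.degree_centrality(G).items(),
                           key=lambda item: item[1], reverse=True):
    print(f"{actor}: {score:.2f}")

# Betweenness centrality flags potential "brokers" between subgroups
print(nx.betweenness_centrality(G))
```

In an evaluation, the actors with high betweenness are often the most interesting: they are the bridges through which information has to pass, and losing them can fragment the network.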
Evaluating Strategic Communication for Avian Influenza/Pandemic Influenza
As the world focuses on the current flu pandemic, we have seen many efforts to communicate prevention and treatment approaches.
But how can the results of such communication efforts be measured? Here is an interesting set of guidelines from UNICEF on this issue:
Although it’s a technical document, it provides interesting insight into sampling and interviewing techniques for evaluating communication campaigns.
Summarizing evaluation reports
As I’ve written about previously, evaluation reports are notoriously under-read and underutilized. Aside from the executive summary, evaluators need to find ways of presenting their key findings in summarized formats that make them attractive to their publics.
Aside from the predictable PowerPoint summary (which can still serve a purpose), some of the techniques I have used – and that were well received by publics – are as follows:
Multimedia video: using interviews, graphs and quotes in a video to bring the evaluation results “to life” (see this post for an example)
Scorecard or “snapshot”: highlighting the key findings graphically on one page. See this example:
Summary sheet: summarizing the main findings, conclusions and recommendations in a fact sheet of 2-4 pages. See this example: Summary Sheet (pdf)
Findings table: summarizing the main findings, particularly useful where the evaluation is responding to pre-set objectives and indicators, as per this example:
I’m always interested to learn of new methods to summarize evaluation findings, so if you have any more ideas, please share them!
Evaluation conferences in Europe for 2010
The 2010 calendar is starting to fill up with evaluation conferences in Europe relevant to communications. Here are three that have come to my attention so far:
International Conference on Online Media Measurement
10-13 March 2010, Lisbon, Portugal
Recommended for: those interested in all aspects of online measurement of websites, campaigns and advertising
European Summit on measurement of communications (pdf)
16-18 June 2010, Barcelona, Spain
Recommended for: communicators interested in the latest best practices in communication evaluation
The 9th European Evaluation Society International Conference
6-8 October 2010, Prague, Czech Republic
Recommended for: those interested in a broader and development-focused view of evaluation. Limited sessions directly on communications evaluation.
Workshop on communications evaluation
I recently conducted a one-day training workshop for the staff of Gellis Communications on communications evaluation. We looked at several aspects including:
- How to evaluate communication programmes, products and campaigns;
- How to use the “theory of change” concept;
- Methods specific to communication evaluation including expert reviews, network mapping and tracking mechanisms;
- Options for reporting evaluation findings;
- Case studies and examples on all of the above.
Gellis Communications and I are happy to share the presentation slides used during the workshop – just see below (these were combined with practical exercises – write to me if you would like copies).
Evaluating online communication tools
Online tools, such as corporate websites, members’ directories or portals, increasingly play an important role in communication strategies. And of course, they are increasingly important to evaluate.
I just concluded an evaluation of an online tool, created to facilitate the exchange of information amongst a specific community. The tool in question, the Central Register of Disaster Management Capacities is managed by the United Nations Office for the Coordination of Humanitarian Affairs.
The evaluation methodology I used for this online tool is interesting, as it combines:
- Content analysis
- Network mapping
- Online survey
- Interviews
- Expert review
- Web metrics
And for once, you can dig into the methodology and findings as the evaluation report is available publicly: View the full report here (pdf) >>
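As an aside for the quantitatively minded: the web metrics component of an evaluation like this often starts from something as simple as an exported page-view log. Here is a minimal sketch in Python of the kind of tabulation involved – the file name and column layout are invented for illustration and are not from the actual evaluation:

```python
# A minimal sketch of summarising web metrics from an exported log.
# The CSV file name and its columns (visitor_id, page) are invented
# for illustration; real analytics exports will differ.
import csv
from collections import Counter

page_views = Counter()
visitors = set()

with open("page_views.csv", newline="") as f:
    for row in csv.DictReader(f):
        page_views[row["page"]] += 1   # count views per page
        visitors.add(row["visitor_id"])  # track unique visitors

print(f"Unique visitors: {len(visitors)}")
print("Most viewed pages:")
for page, count in page_views.most_common(5):
    print(f"  {page}: {count}")
```

Numbers like these only become meaningful when triangulated with the other methods – which is precisely why the mixed methodology above combines them with surveys, interviews and expert review.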
Likert scale & surveys – more discussion
I’m currently in Brussels for some evaluation training with Gellis Communications, and in our discussions the use of the Likert scale in surveys came up. As I’ve written about before, the Likert scale (named after its creator, Rensis Likert) is a widely used response scale in surveys. My earlier post spoke about the importance of labelling the points on the scale and of not using too many points (most people can’t place their opinion on a scale of more than seven). Here are several other issues that have come up recently:
To use an even or odd scale: there is an ongoing debate on the Likert scale as to whether you should use an odd number of points (five, for example) or an even number (four, for example). Some advocate an odd scale, where respondents have a “neutral” middle point, whereas others prefer to “force” people to select a negative or positive position with an even scale (e.g. four points). A related question is whether to include a “don’t know” option. I personally believe that a “don’t know” option is essential on some scales, as people may simply not have an opinion; however, studies are inconclusive as to whether such an option increases the accuracy of responses.
Left to right or right to left: I always advocate displaying scales from the negative to the positive, left to right. It seems more logical to me, and some automated survey software marks your answers and calculates the responses for graphs on this basis, i.e. that the first point is the lowest. But I’ve heard others argue that it should be the opposite way around – positive to negative, left to right – because people will click on the first point by default in online surveys, which I personally don’t believe. I’ve not yet found any academic reference supporting either way, but looking at the examples in academic articles, 95% are written as negative to positive, left to right – some evidence in itself!
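To make the coding point concrete, here is a small sketch (my own illustration, with invented data) of how the direction of a five-point scale determines the numeric codes, and hence the mean score, in analysis:

```python
# Illustration (invented data): how scale direction affects coding.
# A five-point scale displayed negative-to-positive, left to right,
# is typically coded 1..5; reversing the display reverses the codes.
labels = ["Strongly disagree", "Disagree", "Neutral",
          "Agree", "Strongly agree"]
codes = {label: i + 1 for i, label in enumerate(labels)}  # codes 1..5

responses = ["Agree", "Agree", "Neutral", "Strongly agree", "Disagree"]
scores = [codes[r] for r in responses]
print(f"Mean (negative-to-positive coding): {sum(scores) / len(scores):.2f}")

# If the scale were displayed (and coded) positive-to-negative instead,
# the same answers would produce a mirrored mean: 6 - x for each code.
reversed_scores = [6 - s for s in scores]
print(f"Mean (positive-to-negative coding): "
      f"{sum(reversed_scores) / len(reversed_scores):.2f}")
```

The same five answers yield a mean of 3.60 one way and 2.40 the other – a reminder to check which direction your survey software assumes before comparing results across surveys.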
Mr Likert, you have a lot to answer for!

