Workshop on Contribution Analysis and Evaluation
For those readers who are in Switzerland, here is a very interesting course on contribution analysis and evaluation planned for 2011:
Contribution Analysis: A tool for planning and evaluating cause-effect and impact of interventions
Who is the Course for?
• evaluation practitioners
• evaluation managers and commissioners
• teachers of policy and programme evaluation
• staff within organisations delivering interventions with a responsibility for evaluation
Course Overview
This workshop will look at planning for interventions and evaluating their outcomes and impact from the perspective of contribution analysis. Contribution analysis is a way to make contribution claims about cause and effect in the absence of counterfactuals. It can also be used to better plan interventions and design better data gathering for evaluations. Contribution analysis will be situated within theory-based evaluation approaches and within approaches to evaluate complex interventions. New approaches to developing and representing theories of change—as distinct from results chains and logic models—will be presented. Examples of theories of change and contribution analysis will be used from several countries.
Course Aims and Objectives
This course aims to provide participants with an understanding of the ideas behind contribution analysis, the benefits of adopting this theory-based approach, and insight into recent developments and experience of working with a contribution analysis framework.
Course Directors
Prof. Marc-Henry Soulet, Department of Sociology, Social Policy and Social Work, University of Fribourg
Dr Marlène M. Läubli Loud (DPhil), Lecturer in Public Sector Evaluation, Department of Sociology, Social Policy and Social Work, University of Fribourg.
Course Facilitator
The course will be led by Dr John Mayne, currently an independent advisor and consultant on public sector performance. Until 2004, he was with the Canadian Federal Government, where he worked in the Office of the Comptroller General, the Treasury Board Secretariat and the Office of the Auditor General. Dr Mayne was instrumental in developing the federal government’s approach to evaluating the performance of programs. He was involved in developing government-wide accountability and reporting regimes, as well as effective approaches to managing for results and performance measurement. In the national audit office, he was responsible for the audit areas of accountability, governance, alternative service delivery, and performance measurement and reporting.
Date and Place
19th-20th May, 2011, Swiss Federal Office of Personnel, Eigerstrasse 71, Berne
Language
The course language is English. However, participants are of course encouraged to contribute in their native language (French/German).
Course Fees
CHF 650 for SEVAL members, CHF 700 for non-members
Enrolment deadline
1 April 2011 (number of participants is limited)
Presenting evaluation results in photostories
I am always interested in new ways to present evaluation results.
Here is a very engaging and accessible format to present evaluation results – photostories.
This photostory (PDF) tells the story of an evaluation of a reconciliation programme in Kenya.
What makes for the perfect internal conference?

As regular readers will know I am interested in evaluating conferences and events. So this report caught my eye: What makes for the perfect internal conference.
The report details a discussion among some 40 private and public sector communication professionals in the UK, and their recommendations for the *perfect internal conference*.
I particularly like what they had to say on “Outcomes and effectiveness”:
“If the conference is part of a wider initiative, this must be clear to participants. Stakeholders need to define and agree on the desired outcomes in advance. This doesn’t preclude unexpected benefits, but creates a frame on which to measure success.”
Indices, Benchmarks, and Indicators: Planning and Evaluating Human Rights Dialogues
For those interested in human rights and evaluation, an interesting publication has recently appeared:
Indices, Benchmarks, and Indicators: Planning and Evaluating Human Rights Dialogues (PDF)
The publication provides guidance and advice on evaluating human rights dialogues and makes the point that:
“Ratification of treaties as a goal should be distinguished from the goal of improvements in the overall human rights record.”
In other words, countries agreeing to treaties is not the ultimate end goal; rather, evaluation needs to monitor the actual application of human rights in-country.
Involving stakeholders in the evaluation process
An issue evaluators often face is the extent to which stakeholders should be involved in the evaluation process: How much input should stakeholders have into designing evaluation questions? When and what feedback should be given to stakeholders during the evaluation? How can the perspectives of all stakeholders be reflected in the evaluation questions and criteria?
An interesting guide has been put together that helps in answering some of these questions: “A Practical Guide for Engaging Stakeholders in Developing Evaluation Questions”(PDF) by the Robert Wood Johnson Foundation.
The guide proposes five steps to engaging with stakeholders:
Step 1: Prepare for stakeholder engagement
Step 2: Identify potential stakeholders
Step 3: Prioritize the list of stakeholders
Step 4: Consider potential stakeholders’ motivations for participating
Step 5: Select a stakeholder engagement strategy
Measurement Matters – internal communications: A one-day workshop with Angela Sinickas in London
Angela Sinickas, a US-based communications evaluation expert, is conducting a one-day workshop on measuring communications (with a focus on internal communications) in London.
I’ve had the good fortune to participate in a workshop with Angela, and she has immense knowledge and experience in communications evaluation.
Date: 30 November 2010
Cost: £545–£595
Location: Broadway House, London.
For more information and registration>>
This event is organised by Melcrum Publishing.
(This blog has no commercial connections to Melcrum or Angela – it just looks like an excellent workshop!).
Projects, feedback and communications
Here is a post from this blog’s co-author, Richard, on projects, feedback and communication failure, featured on the “How to manage a camel” blog.
It makes some very interesting points for those interested in communication failures and effective monitoring.
Going beyond standard training evaluation
During the recent European Evaluation Conference, I saw a very interesting presentation on going beyond the standard approach to training evaluation.
Dr. Jan Ulrich Hense of LMU München presented his research on “Kirkpatrick and beyond: A comprehensive methodology for influential training evaluations” (view Dr Hense’s full presentation here).
As I’ve written about before (well, in a post four years ago…), Donald Kirkpatrick developed a model for training evaluation that focused on evaluating four levels of impact:
1. Reaction
2. Learning
3. Behavior
4. Results
Dr Hense offers a new perspective on this model, which we could call an updated approach. Moreover, he has tested his ideas in a real training evaluation in a corporate setting.
I particularly like how he considers the “input” aspect (e.g. participants’ motivation) and the context of the training (which can strongly influence its outcomes).
View Dr Hense’s presentation on his website.
Traditional surveys – how reliable are they?
Here is an interesting article from the Economist about political polling in the US. The article discusses the increasing difficulties in conducting polls or surveys that assess voting intentions in the US.
Most polling companies, in the US and elsewhere, conduct their surveys by calling phone landlines (fixed lines). But fewer and fewer people have landlines – the article states that some 25% of US residents now only have a mobile phone. Polling companies often don’t call mobile phones, for various reasons mostly related to cost. So the conclusion is: be careful when looking at survey results based on this traditional approach.
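To make the coverage problem concrete, here is a minimal sketch in Python of how a landline-only sampling frame can bias a poll. The 25% mobile-only share echoes the article, but the support rates are entirely made-up numbers for illustration:

```python
import random

random.seed(42)

# Hypothetical population: 25% of people are mobile-only, 75% reachable
# by landline. Purely for illustration, assume mobile-only voters
# support candidate A more often than landline voters do.
N = 100_000
population = []
for _ in range(N):
    mobile_only = random.random() < 0.25
    p_support = 0.60 if mobile_only else 0.45  # made-up support rates
    population.append((mobile_only, random.random() < p_support))

true_share = sum(supports for _, supports in population) / N

# A "traditional" poll samples only people reachable by landline.
landline_frame = [supports for mobile, supports in population if not mobile]
poll = random.sample(landline_frame, 1_000)
poll_share = sum(poll) / len(poll)

print(f"True support for A:   {true_share:.1%}")  # roughly 49%
print(f"Landline-only poll:   {poll_share:.1%}")  # roughly 45%
```

In this toy example the poll understates support by several percentage points, not because the sample is too small but because the sampling frame systematically misses the mobile-only group; this is exactly why the article urges caution with landline-based results.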
Interestingly, the article did not mention the growth of Internet surveying – or the possibility of surveying via smartphones.
This article from the FiveThirtyEight blog provides more insight into the issue – it mentions the growth of Internet polling and is less pessimistic about the future of traditional surveys.
For evaluation, the debate is interesting because we often use surveys as a tool – and many of the points discussed are relevant to the surveying undertaken for large-scale evaluations.
At the EEC conference in Prague
I am currently at the European Evaluation Society International Conference in Prague, Czech Republic. With over 600 evaluators attending, there has been no shortage of interesting discussions and debates on evaluation. The topics I’ve found most interesting so far:
– How independent are evaluators?
– Networks and partnerships evaluation
– Evaluating policy influence
– Going beyond standard training evaluation
– New approaches to evaluating non-formal learning
I’ll be blogging further on some of these subjects in the next few weeks, so stay tuned…