Posts filed under ‘Training evaluation’

New book: Monitoring and Evaluation Training: A Systematic Approach

A systematic approach to monitoring and evaluation (M&E) training for programs and projects has long been an under-appreciated area. That gap has now been filled by a new book from Scott Chaplowe and J. Bradley Cousins:

Monitoring and Evaluation Training: A Systematic Approach

“Bridging theoretical concepts with practical, how-to knowledge, authors Scott Chaplowe and J. Bradley Cousins draw upon the scholarly literature, applied resources, and over 50 years of combined experience to provide expert guidance for M&E training that can be tailored to different training needs and contexts, from training for professionals or non-professionals, to organization staff, community members, and other groups with a desire to learn and sustain sound M&E practices.”

April 25, 2017 at 10:38 am 2 comments

2 day Course in Most Significant Change Technique, London, June 10-11, 2013

For those who are looking for insights into a participatory M&E method, this may be of interest:

A two day course on the Most Significant Change (MSC) technique – a participatory monitoring & evaluation technique ideally suited to providing qualitative information on project/programme impact. The course offers several practical exercises and provides insights into other real-world examples of MSC use.

Date: June 10-11, 2013
Venue: NCVO (National Council for Voluntary Organisations), Society Building, 8 All Saints Street, London (near Kings Cross)
Presenters: Theo Nabben with secondary input from Rick Davies

Costs: £450 per person for 2 days, inclusive of meals. Please note: no scholarships are available.
Accommodation and transport are participants' own responsibility. Course notes and electronic files will be provided. A 10% discount is available to members of the German and UK evaluation societies.

For course information and details, contact Theo through their website>>

June 1, 2013 at 9:53 am Leave a comment

Going beyond standard training evaluation

During the recent European Evaluation Conference, I saw a very interesting presentation on going beyond the standard approach to training evaluation.

Dr. Jan Ulrich Hense of LMU München presented his research on “Kirkpatrick and beyond: A comprehensive methodology for influential training evaluations” (view Dr Hense’s full presentation here).

As I’ve written about before (well, in a post four years ago…), Donald Kirkpatrick developed a model for training evaluation that focused on evaluating four levels of impact:

1. Reaction
2. Learning
3. Behavior
4. Results
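To make the four levels concrete, here is a purely illustrative sketch (names and numbers are my own, not from Kirkpatrick or Dr Hense) of how post-training survey ratings could be tallied by level:

```python
# Hypothetical sketch: averaging 1-5 survey ratings by Kirkpatrick level.
from statistics import mean

KIRKPATRICK_LEVELS = ("reaction", "learning", "behavior", "results")

def summarise(responses):
    """Return the average rating collected for each of the four levels."""
    return {level: round(mean(r[level] for r in responses), 2)
            for level in KIRKPATRICK_LEVELS}

# Two made-up participant responses, one rating per level.
responses = [
    {"reaction": 4, "learning": 3, "behavior": 2, "results": 3},
    {"reaction": 5, "learning": 4, "behavior": 3, "results": 2},
]
print(summarise(responses))
# {'reaction': 4.5, 'learning': 3.5, 'behavior': 2.5, 'results': 2.5}
```

Note how the averages typically decline from level 1 to level 4 in practice: reactions are easy to capture, while behavior change and results are harder to observe and attribute.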

Dr Hense provides a new perspective – we could say an updated approach – to this model. Better still, he has tested his ideas in a real training evaluation in a corporate setting.

I particularly like how he considers the “input” aspect (e.g. participants’ motivation) and the context of the training (which can strongly influence its outcomes).

View Dr Hense’s presentation on his website.


October 31, 2010 at 9:37 pm Leave a comment

Measuring long term impact of conferences

Often we evaluate conferences with their participants just after the conferences, measuring mostly reactions and learnings, as I’ve written about previously.

Wouldn’t it be more interesting to actually try to measure the longer-term impact of a conference? This is what the International AIDS Society has done for one of its international conferences – measuring impact 14 months after the event – you can view the report (pdf) here.

Their overall assessment of impact was as follows:

“AIDS 2008 had a clear impact on delegates’ work and on their organizations, and that the conference influence has extended far beyond those who attended, thanks to networking, collaboration, knowledge sharing and advocacy at all levels.”

July 8, 2010 at 7:52 pm Leave a comment

network mapping tool

As regular readers will know, I am interested in network mapping and have undertaken some projects where I used network mapping to assess networks that emerged as a result of conferences.

Here is quite an interesting tool, Net-Map, an interview-based mapping tool. The creators of this tool state that it is a “tool that helps people understand, visualize, discuss, and improve situations in which many different actors influence outcomes”.
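The core idea – actors, the links between them, and each actor’s influence on an outcome – could be sketched as a simple data structure. This is purely illustrative (my own invented actors and numbers), not the Net-Map tool or its method:

```python
# Illustrative sketch of an influence network, in the spirit of an
# interview-based map: who is linked to whom, and how much influence
# ("influence tower" height) each actor has. All data is hypothetical.
links = [("donor", "NGO"), ("NGO", "community"), ("ministry", "NGO")]
influence = {"donor": 3, "NGO": 2, "community": 1, "ministry": 3}

def incoming(actor):
    """Actors with a direct link pointing toward the given actor."""
    return [src for src, dst in links if dst == actor]

print(incoming("NGO"))  # ['donor', 'ministry']
```

Even this toy version shows why such maps are useful in interviews: asking “who links to whom, and how tall is their influence tower?” makes implicit assumptions about power and connection visible and discussable.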

Read further about the tool and view many of the illustrative images here>>


May 20, 2008 at 12:57 pm Leave a comment

Event scorecard

In the work I do to evaluate conferences and events, I have put together what I believe is a “neat” way of displaying the main results of an evaluation: an event scorecard. In the evaluation of a conference that occurs every year in Geneva, Switzerland, the LIFT conference, the scorecard summarises both qualitative and quantitative results taken from the survey of attendees. Above you can see a snapshot of the scorecard.

As I have evaluated the conference for three years now, we were also able to show some comparative data as you can see here:

If you are interested, you can view the full scorecard by clicking on the thumbnail image below:

And for the really keen, you can read the full LIFT08 evaluation report (pdf).
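The comparative, quantitative side of such a scorecard could be sketched as follows – with entirely made-up ratings, not the actual LIFT survey data:

```python
# Hypothetical sketch of the comparative part of an event scorecard:
# average attendee ratings (1-5 scale) tracked across yearly editions.
yearly_ratings = {
    2006: [4, 5, 3, 4],
    2007: [4, 4, 5, 4],
    2008: [5, 4, 4, 5],
}

# One headline number per year makes year-on-year comparison easy.
scorecard = {year: sum(r) / len(r) for year, r in yearly_ratings.items()}
for year, score in sorted(scorecard.items()):
    print(f"{year}: {score:.2f}")
```

In a real scorecard the quantitative averages would sit alongside selected qualitative quotes from the attendee survey, so readers get both the number and the story behind it.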

Greetings from Tashkent, Uzbekistan from where I write this post. I’m here for an evaluation project and off to Bishkek, Kyrgyzstan now.


May 11, 2008 at 10:55 am 7 comments

Perceptions of evaluation

I’ve just spent a week in Armenia and Georgia (pictured above) for an evaluation project where I interviewed people from a cross section of society. These are both fascinating countries, if you ever get the chance to visit… During my work there, I was wondering – what do people think about evaluators? For this type of on-site evaluation, we show up, ask some questions – and leave – and they may never see us again.

From this experience and others I’ve tried to interpret how people see evaluators – and I believe people see us in multiple ways including:

The auditor: you are here to check and control how things are running. Your findings will mean drastic changes for the organisation. Many people see us in this light.

The fixer: you are here to listen to the problems and come up with solutions. You will be instrumental in changing the organisation.

The messenger: you are simply channelling what you hear back to your commissioning organisation. But this is an effective way to pass a message or an opinion to the organisation via a third party.

The researcher: you are interested in knowing what works and what doesn’t. You are looking at what causes what. This is for the greater science and not for anyone in particular.

The tourist: you are simply visiting on a “meet and greet” tour. People don’t really understand why you are visiting and talking to them.

The teacher: you are here to tell people how to do things better. You listen and then tell them how they can improve.

We may have a clear idea of what we are trying to do as evaluators (e.g. to assess results of programmes and see how they can be improved), but we also have to be aware that people will see us in many different ways and from varied perspectives – which just makes the work more interesting….


April 21, 2008 at 8:46 pm 1 comment
