Posts filed under ‘Training evaluation’
For those who are looking for insights into a participatory M&E method, this may be of interest:
A two-day course on the Most Significant Change (MSC) technique – a participatory monitoring and evaluation technique ideally suited to providing qualitative information on project/programme impact. The course offers several practical exercises and provides insights into real-world examples of MSC use.
Date: June 10-11, 2013
Venue: London, NCVO National Council for Voluntary Organisations
Society Building, 8 All Saints Street, London (near Kings Cross)
Presenters: Theo Nabben with secondary input from Rick Davies
Costs: £450 per person for the two days, inclusive of meals. Please note: no scholarships are available.
Accommodation and transport are participants' own responsibility. Course notes and electronic files will be provided. A 10% discount is available to members of the German and UK evaluation societies.
For course details, contact Theo at firstname.lastname@example.org or through his website>>
During the recent European Evaluation Conference, I saw a very interesting presentation on going beyond the standard approach to training evaluation.
Dr. Jan Ulrich Hense of LMU München presented his research on “Kirkpatrick and beyond: A comprehensive methodology for influential training evaluations” (view Dr Hense’s full presentation here).
As I’ve written about before (well, in a post four years ago…), Donald Kirkpatrick developed a model for training evaluation that focuses on evaluating four levels of impact:
Dr Hense provides a new perspective – we could say an updated approach – on this model. Better still, he has tested his ideas in a real training evaluation in a corporate setting.
I particularly like how he considers the “input” aspect (e.g. participants’ motivation) and the context of the training (which can be very important to influence its outcomes).
View Dr Hense’s presentation on his website.
Often we evaluate conferences with their participants just after the event, measuring mostly reactions and learning, as I’ve written about previously.
Wouldn’t it be more interesting to actually try to measure the longer-term impact of a conference? This is what the International AIDS Society has done for one of its international conferences – measuring impact 14 months after the event – you can view the report (pdf) here.
Their overall assessment of impact was as follows:
“AIDS 2008 had a clear impact on delegates’ work and on their organizations, and that the conference influence has extended far beyond those who attended, thanks to networking, collaboration, knowledge sharing and advocacy at all levels.”
As regular readers will know, I am interested in network mapping and have undertaken some projects where I have used network mapping to assess networks that emerged as a result of conferences.
Here is quite an interesting tool, Net-Map, an interview-based mapping tool. The creators of this tool state that it is a “tool that helps people understand, visualize, discuss, and improve situations in which many different actors influence outcomes”.
Read further about the tool and view many of the illustrative images here>>
In the work I do to evaluate conferences and events, I have put together what I believe is a “neat” way of displaying the main results of an evaluation: an event scorecard. In the evaluation of a conference that occurs every year in Geneva, Switzerland, the LIFT conference, the scorecard summarises both qualitative and quantitative results taken from the survey of attendees. Above you can see a snapshot of the scorecard.
If you are interested, you can view the full scorecard by clicking on the thumbnail image below:
And for the really keen, you can read the full evaluation report of the LIFT08 evaluation report (pdf).
Greetings from Tashkent, Uzbekistan from where I write this post. I’m here for an evaluation project and off to Bishkek, Kyrgyzstan now.
I’ve just spent a week in Armenia and Georgia (pictured above) for an evaluation project where I interviewed people from a cross section of society. These are both fascinating countries, well worth a visit if you ever get the chance… During my work there, I found myself wondering: what do people think about evaluators? For this type of on-site evaluation, we show up, ask some questions – and leave – and they may never see us again.
From this experience and others I’ve tried to interpret how people see evaluators – and I believe people see us in multiple ways including:
The auditor: you are here to check and control how things are running. Your findings will mean drastic changes for the organisation. Many people see us in this light.
The fixer: you are here to listen to the problems and come up with solutions. You will be instrumental in changing the organisation.
The messenger: you are simply channelling what you hear back to your commissioning organisation. But this is an effective way to pass a message or an opinion to the organisation via a third party.
The researcher: you are interested in knowing what works and what doesn’t. You are looking at what causes what. This is for the greater science and not for anyone in particular.
The tourist: you are simply visiting on a “meet and greet” tour. People don’t really understand why you are visiting and talking to them.
The teacher: you are here to tell people how to do things better. You listen and then tell them how they can improve.
We may have a clear idea of what we are trying to do as evaluators (e.g. to assess results of programmes and see how they can be improved), but we also have to be aware that people will see us in many different ways and from varied perspectives – which just makes the work more interesting….
Evaluators often use interviews as a primary tool to collect information. Many guides and books exist on interviewing – but not so many for evaluation projects in particular. Here are some hints on interviewing based on my own experiences:
1. Be prepared: No matter how wide-ranging you would like an interview to be, you should as a minimum note down some subjects you would like to cover or particular questions to be answered. A little bit of structure will make the analysis easier.
2. Determine what is key for you to know: Before starting the interview, you might have a number of subjects to cover. It may be wise to determine what is key for you to know – what are the three to four things you would like to know from every person interviewed? Often you will get side-tracked during an interview and later on going through your notes you may discover that you forgot to ask about a key piece of information.
3. Explain the purpose: Before launching into questions, explain in broad terms the nature of the evaluation project and how the information from the discussion will be used.
5. Take notes as you discuss: Even if it is just the main points. Do not rely on your memory: after you have done several interviews you may mix up some of the responses. Once the interview has concluded, try to expand on the main points raised. Of course, recording and then transcribing interviews is recommended, but it is not always possible.
5. Take notes about other matters: It’s important also to note down not only what a person says but how they say it – you need to look out for body language, signs of frustration, enthusiasm, etc. Any points of this nature I would normally note down at the end of my interview notes. This is also important if someone else reads your notes in order for them to understand the context.
6. Don’t offer your own opinion or indicate a bias: Your main role is to gather information and you shouldn’t try to defend a project or enter into a debate with an interviewee. Remember, listening is key!
7. Have interviewees define terms: If someone says “I’m not happy with the situation”, you have understood that they are not happy – but not much more. Have them define what they are not happy about. The same applies if an interviewee says “we need more support”: ask them to define what they mean by “support”.
8. Ask for clarification, details and examples: Such as “why is that so?”, “can you provide me with an example?”, “can you take me through the steps of that?” etc.
Hope these hints are of use…