Posts filed under ‘Training evaluation’
Methodology and misuse of research – part 2
As I wrote in a previous post, research results are sometimes misused (that’s nothing new…) and we are often given scant details on how the results were gathered and analysed.
I came across a study undertaken by a bank in Geneva, Switzerland (where I am living) that makes a series of claims about e-banking, web surfing habits and computer use in general. I was surprised to learn that these claims were based on a sample of 300 residents. Now Geneva has some 440,000 residents, and I seem to recall from Statistics 101 that 300 people don’t really make a representative sample of a population that size (you would need closer to 600 people, depending upon the confidence level and interval you are aiming for).
I’m not usually such a stickler about sample sizes, given that the audiences we look at can often be broken down into sub-populations that are relatively small in number (so we aim for the highest participation possible). But if you do have a uniform finite population, try using this online sample size calculator to estimate the sample you need – it’s quite useful.
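For those curious about the arithmetic behind such calculators, here is a minimal Python sketch using Cochran’s formula with a finite population correction. The 95% confidence level and the 4–5% margins of error are illustrative assumptions, not recommendations.

```python
import math

def sample_size(population: int, z: float = 1.96,
                margin_of_error: float = 0.05, proportion: float = 0.5) -> int:
    """Estimate the sample needed for a finite population (Cochran's formula)."""
    # Sample size for an effectively infinite population
    n0 = (z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    # Finite population correction
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# The Geneva example from the post: ~440,000 residents
print(sample_size(440_000, margin_of_error=0.05))  # ~384 at 95% confidence, +/-5%
print(sample_size(440_000, margin_of_error=0.04))  # ~600 at 95% confidence, +/-4%
```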
Glenn
Methodology and misuse of research
I’m always surprised at the number of published research results that fail to explain how the results being promoted were gathered and analysed. Related to this, there is the issue of results being misused, embellished or taken out of context. Constantin Basturea tells the interesting tale of how results from a quick poll of 50 website visitors became a poll of “300 business communicators” in a later publication. He only found this out after becoming curious about the poll results and requesting details of the methodology.
Personally, I think it’s always wise to publish information about your methodology for evaluation projects, particularly if the results are published and freely available. That’s what we did for the evaluation of the LIFT06 conference, publishing both the results and the methodology. That way, your results are hopefully not taken out of context, and the methodology is available for review and criticism.
Glenn
Combining Qualitative and Quantitative Methods for Evaluation – Part 2
Further to my earlier post on combining qualitative and quantitative methods for evaluation, I came across some interesting resources on this subject:
An article “Methodological Triangulation, or How To Get Lost Without Being Found Out” – with an interesting review of common errors in triangulation.
“User-Friendly Handbook for Mixed Method Evaluations” – good practical advice on mixing evaluation methods.
Glenn
Common Myths of Evaluation
Looking further into outcome evaluation in different fields, I came across a very good resource with a rather quaint title: The Basic Guide to Outcomes-Based Evaluation for Nonprofit Organizations with Very Limited Resources. It is interesting to note the similarities between my own top ten excuses for not evaluating and the common myths about evaluation that the author lists:
Myth: Evaluation is a complex science. I don’t have time to learn it! No! It’s a practical activity. If you can run an organization, you can surely implement an evaluation process!
Myth: It’s an event to get over with and then move on! No! Outcomes evaluation is an ongoing process. It takes months to develop, test and polish — however, many of the activities required to carry out outcomes evaluation are activities that you're either already doing or you should be doing. Read on …
Myth: Evaluation is a whole new set of activities – we don’t have the resources! No! Most of the activities in the outcomes evaluation process are normal management activities that need to be carried out anyway in order to evolve your organization to the next level.
Myth: There’s a "right" way to do outcomes evaluation. What if I don’t get it right? No! Each outcomes evaluation process is somewhat different, depending on the needs and nature of the nonprofit organization and its programs. Consequently, each nonprofit is the "expert" at their outcomes plan. Therefore, start simple, but start and learn as you go along in your outcomes planning and implementation.
Myth: Funders will accept or reject my outcomes plan. No! Enlightened funders will (or at least should) work with you, for example, to polish your outcomes, indicators and outcomes targets. Especially if yours is a new nonprofit and/or a new program, you will very likely need some help — and time — to develop and polish your outcomes plan.
Myth: I always know what my clients need – I don’t need outcomes evaluation to tell me whether I’m really meeting the needs of my clients. No! You don’t always know what you don’t know about the needs of your clients – outcomes evaluation helps ensure that you do. It sets up structures in your organization so that you and your organization are very likely to remain focused on the current needs of your clients. Also, you won’t always be around – outcomes evaluation helps ensure that your organization stays focused on the most appropriate, current needs of clients even after you’ve left.
Glenn
Evaluation of LIFT06: can we measure the impact of conferences?
I’ve just finished a very interesting project – the evaluation of the impact of the LIFT06 conference that took place in Geneva in February 2006. In a true open source spirit, the evaluation report is available for everyone to consult. With this evaluation, we tried to go beyond the standard assessment of reactions to a conference: we looked at changes to knowledge, attitudes and behaviours. Using a triangulation approach combining quantitative and qualitative research methods, I believe we were able to identify the influence of LIFT06 on these variables. We were aware of the limitations of the evaluation, given that it was a one-off exercise based largely on self-assessment of attitudinal and behavioural changes, which I explain here in the report.
What sort of changes could we identify?
Changes to awareness and attitudes: Through an online survey, the majority of attendees (82%) agreed that LIFT06 provided them with interesting information on the usage of emerging technologies, and 70% agreed that LIFT06 influenced what they thought about the subject. This quote taken from an attendee’s blog illustrates the point:
“And just think; if I had never gone to Lift06 I would not be feeling anything like this strongly about the issue”
Changes to behaviour: Evaluations of conferences are rarely able to show a direct relationship between the event and changes in the behaviour of attendees. With LIFT06, some attendees indicated a change in behaviour, such as starting a blog or forming a new partnership. Another key objective of LIFT06 was to “connect” people – 94% of attendees reported that they met new people at LIFT06.
Evaluation – going beyond your own focus
If your focus is on evaluating PR, training or another business competency, it is sometimes helpful to learn more about evaluation by looking beyond your particular field. Take international aid, for example. I just read a review of a new book, The White Man’s Burden – Why the West’s Efforts to Aid the Rest Have Done So Much Ill and So Little Good by William Easterly. Based on the book review, he raises two interesting points about evaluation:
Planners vs. Searchers: he says most aid projects take one of these two approaches. He writes, “A Planner thinks he already knows the answer. A Searcher admits he doesn’t know the answers in advance”. He explains that Searchers treat problem-solving as an incremental discovery process, relying on competition and feedback to figure out what works.
Measuring Success: Easterly argues that aid projects rarely get enough feedback, whether from competition or complaint. Instead of introducing outcome evaluation where results are relatively easy to measure (e.g. public health, school attendance, etc.), advocates of aid measure success by looking at how much money rich countries spend. He says this is like reviewing movies based on their budgets.
You can read the full review of Easterly’s book on the IHT website.
These are some excellent points that we can apply to evaluation across the board. Evaluators can probably classify many projects they have evaluated as being of either a Planner or a Searcher nature. The Searcher approach integrates feedback and evaluation throughout the whole process. Take PR activities: we may not know at the outset what communication works best with our target audience, but by integrating feedback and evaluation we can soon find out.
His analogy about reviewing movies also strikes a chord. How many projects are evaluated on expenditure alone?
Aside from the evaluation aspects, his thoughts on international aid are interesting. I spent my formative years as an aid worker in Africa, Eastern Europe and Asia and a lot of what he says confirms my own experiences and conclusions.
Glenn
Combining Qualitative and Quantitative Methods for Evaluation
In evaluation, we often choose between using qualitative (e.g. focus group) and quantitative (e.g. survey) methods. In fact, we should always try to use both approaches. This is what is referred to as triangulation: the combination of several research methods in the study of the same phenomenon. My experience has been that a combination of research methods provides more data to work with and ultimately a more accurate evaluation. In a recent project, I was able to use interviews combined with surveys to assess participant reaction to training. I found that the information we could draw from the interviews was complementary – and of added value – to what we discovered through the surveys.
Even if you are only conducting online surveys, the inclusion of open questions (where respondents enter comments in a free-text field) is not quite triangulation, but it will provide you with insight into the phenomenon being evaluated. In a recent online survey project, we were able to clarify important issues by sorting and classifying the comments made in response to open questions. This information proved invaluable and gave the evaluation heightened status within the organisation.
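As an illustration of that sorting-and-classifying step, here is a minimal Python sketch that tags free-text comments with themes and tallies them. The keyword-to-theme mapping is entirely hypothetical – in practice the coding is usually done by hand or with proper qualitative analysis tools.

```python
from collections import Counter

# Hypothetical keyword-to-theme mapping, used only for this illustration
THEMES = {
    "trainer": "delivery",
    "room": "logistics",
    "schedule": "logistics",
    "exercise": "content",
    "example": "content",
}

def code_comment(comment: str) -> set:
    """Return the set of themes whose keywords appear in the comment."""
    text = comment.lower()
    return {theme for keyword, theme in THEMES.items() if keyword in text}

def tally(comments):
    """Count how often each theme appears across all comments."""
    counts = Counter()
    for comment in comments:
        counts.update(code_comment(comment))
    return counts

comments = [
    "The trainer was excellent but the room was too small",
    "More practical exercises please",
    "The schedule left no time for questions",
]
print(tally(comments).most_common())
```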
Glenn
Evaluation, Proof and the Kylie effect
A question often asked by those commissioning an evaluation is: how can we “prove” that a program or activity has caused a change we are observing? How can we be sure that a training program is responsible for a rise in productivity? That an awareness campaign has changed attitudes about a company? In most cases you simply cannot get 100% proof. But what you can do is collect evidence indicating that a program or activity did play a major role in the change we are seeing. As one pundit put it:
“The key to winning a trial is evidence not proof”
Following are some strategies to tackle this issue:
- Set up a control group that was not exposed to the program or activity
- Use pre- and post-measures to show the changes occurring over time (see the sketch after this list)
- Don’t only rely on survey or quantitative data – testimonies and anecdotes can be convincing evidence
- Identify any other possible factors that could have caused the change being observed.
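As an illustration of the first two strategies combined, here is a minimal Python sketch of a pre/post comparison against a control group (a difference-in-differences style calculation). The productivity scores are made-up numbers for illustration only, and this is no substitute for proper statistical testing.

```python
from statistics import mean

def average_change(pre, post):
    """Average change from the pre-measurement to the post-measurement."""
    return mean(post) - mean(pre)

# Hypothetical productivity scores before and after a training program
trained_pre, trained_post = [62, 58, 70, 65], [74, 69, 78, 73]
control_pre, control_post = [60, 63, 68, 64], [63, 65, 70, 66]

trained_change = average_change(trained_pre, trained_post)
control_change = average_change(control_pre, control_post)

# The gap between the two changes is the evidence we attribute to the program,
# bearing in mind other possible influences (like the "Kylie effect" below).
print(f"Trained group change:     {trained_change:+.2f}")
print(f"Control group change:     {control_change:+.2f}")
print(f"Estimated program effect: {trained_change - control_change:+.2f}")
```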
Of course, setting up a control group is always difficult in a real-world environment. But my experience has shown that it can produce very useful results if we are honest about the limitations and other possible influences.
It is important to be transparent and recognise any other factors that could have caused the change being observed. Take, for example, breast cancer awareness in Australia. Health educators have been working hard for years to get more young women to have a mammogram (breast screening), since if detected early, the disease can be treated successfully. So for health educators, a clear impact indicator would be the number of appointments made for mammograms. In August 2005, appointments for women aged 40 to 69 in Australia jumped by 101%. Was this the result of a very successful awareness campaign? No – what we were seeing is what has been labelled the “Kylie effect”. In May 2005, Australian pop singer Kylie Minogue was diagnosed with breast cancer, resulting in mass media coverage of the issue – and consequent awareness of breast cancer and its detection. Studies have shown a direct link between the jump in screening appointments and Kylie Minogue’s illness. If interested, you can read more about the “Kylie effect” on the BBC website.
Glenn
Highlights from interviews at LIFT06
As part of my project to evaluate the impact of LIFT06, I spoke with some 20 people over the last two days about their initial reactions to the LIFT06 conference. This helps me gain further insight into the feedback we will receive when we survey all LIFT06 attendees next week. Here are some highlights from my interviews:
Why come to LIFT06?
“I need to know what is coming in the web field – in the near future. We need to know what services and features we can propose to our clients” [webmaster for large organisation]
“In my workplace people are using this technology. For me what is interesting is the impact of technology on people” [education worker]
“I’m here to exchange ideas with people working in similar positions – that’s the added value for me” [webmaster for NGO]
“I have nothing to do with technology. I’m in business intelligence. But this is the future I am told” [financial analyst]
What are the benefits?
“Ideas, ideas, ideas. I need the futuristic stuff like spimes but I also need the bread and butter stuff like communication channels” [web consultant]
“Concrete proposals. Mash-ups for example. That’s an area we’ve got to explore” [webmaster for large organisation]
“The communication aspect. I’m here for the marketing and communication perspective and how these new technologies can be applied” [communication manager]
“Just before leaving for LIFT06, the Head of Media asked me “What do you know about blogs?” “Huh!” I said “Let’s speak when I get back on Monday” [Information Manager]
Are you connecting?
“It seems a bit geekish – I don’t know anyone. Could you introduce me to some people?” [consultant]
“Over breakfast and lunch, I’ve chatted with plenty of interesting people, of most interest were people working in the same field as me” [webmaster for NGO]
“Hey, that’s the UNAIDS cocktail taking place over there. Let’s go and network with them as I’ve got a job application pending there” [web editor]
Admittedly, these highlights are not all representative of the views expressed, but provide a flavour of what people thought – and said.
For some random quotes from LIFT06, check out Nicolas Nova’s collection on pasta and vinegar.
Glenn
Context and Evaluation
In a recent evaluation, I noticed the influence that context can have on the findings. Interviewing and surveying people from different countries, we could see how their responses were shaped by their different frames of reference.
If we are aware of context issues (e.g. the setting for a training seminar, the relationship between the respondents and the commissioning organisation, the political climate, etc.), we can better estimate their impact on the findings. It also helps us see how far we can apply our findings to other contexts.
Most of the recognised standards on evaluation speak of the importance of analysing the influence of context as a question of “accuracy”. Consult the Program Evaluation Standards of The Evaluation Center at Western Michigan University if you are interested in learning more about context and accuracy in evaluation.
Glenn