Posts filed under ‘Evaluation methodology’
Combining Qualitative and Quantitative Methods for Evaluation – Part 2
Further to my earlier post on combining qualitative and quantitative methods for evaluation, I came across some interesting resources on this subject:
An article “Methodological Triangulation, or How To Get Lost Without Being Found Out” – with an interesting review of common errors in triangulation.
“User-Friendly Handbook for Mixed Method Evaluations” – good practical advice on mixing evaluation methods.
Glenn
Combining Qualitative and Quantitative Methods for Evaluation
In evaluation, we often choose between qualitative (e.g. focus group) and quantitative (e.g. survey) methods. In fact, we should always try to use both approaches. This is what is referred to as triangulation: the combination of several research methods in the study of the same phenomenon. My experience has been that a combination of research methods provides more data to work with and ultimately a more accurate evaluation. In a recent project, I used interviews combined with surveys to assess participant reaction to training. I found that the information we could draw from the interviews was complementary – and of added value – to what we discovered through the surveys.
Even if you are only conducting online surveys, the inclusion of open questions (where respondents enter comments in a free text field) is not quite triangulation, but it will provide you with insight into the phenomenon being evaluated. In a recent online survey project, we were able to clarify important issues by sorting and classifying the comments made in the open questions. This proved to be invaluable information and gave the evaluation heightened status within the organisation.
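To illustrate the kind of sorting and classifying of open-question comments described above, here is a minimal sketch in Python. The comments, the category names and the keyword lists are all hypothetical – in practice you would build the coding scheme from the responses you actually receive.

```python
from collections import Counter

# Hypothetical free-text comments from an open survey question
comments = [
    "The course materials were excellent but the room was too small",
    "More hands-on exercises would have helped",
    "Trainer was knowledgeable; pacing felt rushed",
    "Venue was hard to reach by public transport",
]

# Hypothetical coding scheme: category -> keywords that signal it
coding_scheme = {
    "content": ["materials", "exercises", "content"],
    "delivery": ["trainer", "pacing", "presentation"],
    "logistics": ["room", "venue", "transport"],
}

def classify(comment: str) -> list[str]:
    """Return every category whose keywords appear in the comment."""
    text = comment.lower()
    matches = [cat for cat, keywords in coding_scheme.items()
               if any(word in text for word in keywords)]
    return matches or ["uncategorised"]

# Tally how often each theme comes up across all the comments
theme_counts = Counter(cat for c in comments for cat in classify(c))
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

Even a rough tally like this makes it easier to report which issues respondents raise most often, alongside the quantitative survey results.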
Glenn
Evaluation, Proof and the Kylie effect
A question often asked by those commissioning an evaluation is how we can “prove” that a program or activity has caused a change we are observing. How can we be sure that a training program is responsible for a rise in productivity? That an awareness campaign has changed attitudes towards a company? In most cases you simply cannot get 100% proof. But what you can do is collect evidence that indicates that a program or activity played a major role in the change we are seeing. As one pundit put it:
“The key to winning a trial is evidence not proof”
Following are some strategies to tackle this issue:
- Set up a control group that was not exposed to the program or activity
- Use pre- and post-measures to show the changes occurring over time
- Don’t rely only on survey or quantitative data – testimonies and anecdotes can be convincing evidence
- Identify any other possible factors that could have caused the change being observed.
Of course, setting up a control group is always difficult in a real-world environment. But my experience has shown that it can yield very useful results, provided we are honest about its limitations and about other possible influences.
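To show how pre- and post-measures and a control group can be combined, here is a minimal sketch in Python. The productivity scores and group sizes are hypothetical; the idea is simply to compare the change in the trained group with the change in the control group over the same period.

```python
from statistics import mean

# Hypothetical pre/post productivity scores, same scale for both groups
trained_pre  = [62, 58, 70, 65, 61]
trained_post = [74, 69, 80, 77, 70]
control_pre  = [60, 63, 59, 66, 62]
control_post = [63, 65, 60, 68, 64]

# Average change within each group over time
trained_change = mean(trained_post) - mean(trained_pre)
control_change = mean(control_post) - mean(control_pre)

# The difference between the two changes is the part of the improvement
# not explained by whatever affected both groups (a difference in differences)
programme_effect = trained_change - control_change

print(f"Trained group change: {trained_change:.1f}")
print(f"Control group change: {control_change:.1f}")
print(f"Estimated programme effect: {programme_effect:.1f}")
```

A comparison like this does not give 100% proof, but it is much stronger evidence than a single post-training measurement, because the control group picks up changes that would have happened anyway.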
It is important to be transparent and to recognise any other factors that could have caused the change being observed. Take, for example, breast cancer awareness in Australia. Health educators have been working hard for years to get more young women to have a mammogram (breast screening), because if detected early, the disease can be treated successfully. So for health educators, a clear impact indicator would be the number of appointments made for mammograms. In August 2005, appointments for women aged 40 to 69 in Australia jumped by 101%. Was this the result of a very successful awareness campaign? No; in fact, what we were seeing is what has been labelled the “Kylie effect”. In May 2005, Australian pop singer Kylie Minogue was diagnosed with breast cancer, resulting in mass media coverage of the issue – and consequent awareness of breast cancer and its detection. Studies have shown a direct link between the jump in screening appointments and Kylie Minogue’s illness. If interested, you can read further about the “Kylie effect” on the BBC website.
Glenn