Posts filed under ‘Evaluation methodology’
Impact or results?

When speaking of achieving objectives for a project, I’ve heard a lot of people talk about the “intended impact” and I’ve read quite a few “impact reports”. I know it’s a question of language, but often people use the word “impact” when in fact they should use the word “results”. In the evaluation field, impact refers specifically to the long-term effects of a project. The DAC Glossary of Key Terms in Evaluation and Results Based Management (pdf), produced by the OECD, contains the most widely accepted definitions in this field. Impact is defined as:
“Positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended”.
And “results” is defined as:
“The output, outcome or impact (intended or unintended, positive and/or negative) of a development intervention”.
Consequently, I believe that when we produce a report showing the media visibility generated by a project, this is a short-term output and should be called “results” rather than “impact”, which applies to longer-term effects.
Glenn
Output or outcome?

I did appreciate the following quote from Alberto Gonzales, the US Attorney General, who said when defending the work of his department:
“Good, if you look at the output”
Regardless of what you think of Mr Gonzales and his department’s performance, I find his use of the word output interesting – it has sneaked in from management-by-objectives speak… but output is usually a poor measure of performance, as it only represents the products or services produced. It is just like…
A press officer judging her performance by the number of press releases she writes
A training officer judging his performance by the number of people who attend his training sessions
Far more important are the outcomes – the effects and changes that result from the outputs:
A press officer should judge her performance by how her press activities change the knowledge and attitudes of audiences
A training officer should judge his performance by how the people he trains use what they have learnt
Like Mr Gonzales, most people prefer to look at outputs to judge performance, as they are much easier to control and monitor than outcomes – something I’ve written about previously. But increasingly, activities are assessed on what they achieve (outcomes) rather than what they produce (outputs).
Glenn
Kidneys, Kylie and effects

This month, the Dutch authorities reported that the number of people registering as organ donors had tripled compared to previous months. What caused this sudden jump in registrations – a fantastic awareness programme?
In fact, they traced the increase to the now infamous “Dutch TV Kidney Hoax”, a reality TV show where real patients in need of a kidney “competed” for one.
From a communications evaluation point of view, it is an interesting example of how a communications activity can bring about a rapid change in behaviour (in this case, donor registration) – and perhaps one that was not intended.
In evaluating our own communication activities, we should try to identify other factors that could have influenced the change being seen – in the kidney TV hoax the cause was obvious, but it will not be for many of the more day-to-day communication activities that we run.
Which reminds me of another example – in August 2005, the number of appointments made for mammograms (to detect breast cancer) jumped by 101% in Australia. Was this the result of a successful communications campaign? No: that month, pop singer and fellow Melburnian Kylie Minogue was diagnosed with breast cancer, resulting in mass media coverage of the issue, which I’ve written about previously.
Identifying other possible explanations for the changes being observed (rather than just saying “look, our communications campaign worked”) is important in maintaining a credible and balanced approach to evaluation.
Glenn
Online polling – legitimate or not?

Here is something from the mainstream media: the International Herald Tribune has published an interesting article about the legitimacy (or not) of online polling – basically, the use of online survey tools to conduct polling with broad audiences.
The article pits two of the main players in the field against each other – YouGov and TNS.
The representative from TNS says about internet polling:
“Internet polling is like the Far West, with no rules, no sheriff and no reference points.”
Of course, he has a point, although the counter-argument is that *offline* polling – usually done by calling people on their fixed phone lines – is fast becoming obsolete as countries create “no call” lists and people increasingly use their mobile phones.
From my perspective, I would see the debate from a slightly different angle:
- We shouldn’t forget the approach of triangulation, that is, the combination of different research methods to bring us closer to the *truth*. Internet polling, if combined with other research methods (interviews, focus groups, observations), becomes more useful.
- The whole article focuses only on surveying *unknown* audiences – that is, members of the public as a whole. However, most of the evaluation that I see undertaken is done with *known* audiences, e.g. staff, partners, customers, members, etc. For *known* audiences, the use of internet polling is efficient and reliable, assuming of course that your audience does have access to the Internet. If you are trying to gauge the opinions of audiences that are obviously not using the Internet, then another approach would be appropriate.
Glenn
Accountability and Outcomes – the Penny Drops…

Over the past months I’ve had many discussions with project managers about evaluation in which they clearly feel uncomfortable linking their objectives to outcomes – and now, after reading a recent article in the Evaluation journal, the penny has dropped (i.e. I am able to connect the dots and understand a little more…).
In an article entitled “Challenges and Lessons in Implementing Results-Based Management”, John Mayne discusses the issue of accountability and outcomes:
“People are generally comfortable with being accountable for things they can control. Thus, managers can see themselves as being accountable for the outputs produced by the activities they control. When the focus turns to outcomes, they are considerably less comfortable, since the outcomes to be achieved are affected by many factors not under the control of the manager.”
And that’s it – a communications manager prefers to be accountable for the number of press releases s/he publishes rather than the changes in an audience’s knowledge or attitudes; a training manager prefers to be accountable for the number of courses s/he organises rather than the impact on an organisation’s efficiency; and so on.
So is there a solution? John Mayne speaks of a more sophisticated approach to accountability, notably looking at the extent to which a programme has influenced and contributed to the outcomes observed. And that leads us to the next question – how much influence is good enough?
Food for thought… you can read an earlier version of the article here (pdf).
Glenn
Macro or Micro Approach to Evaluation?

When I’m asked to take on an evaluation project, I usually categorise it in my own mind as “micro” or “macro”. Let me explain. I see evaluation projects falling into these two categories:
Macro: evaluation of an overall project or programme (e.g. training programme, communications project)
Micro: evaluation of an element that is part of a larger programme (e.g. evaluation of online communications as part of a larger communications project, or evaluation of an event that is part of a larger campaign).
I find that a lot of the evaluation projects I get involved with are at the “micro” level. And I’ve been wondering why this is so.
I believe that evaluation is more often approached at the “micro” level because it is easier and less daunting for an organisation to cope with. Many people do not have the resources, time or political authority to launch a “macro” evaluation of projects/programmes.
A lot of the literature on evaluation also recommends implementing evaluation in small steps. And there’s certainly some merit in this – starting at the “micro” level and building up to the “macro” level. In this way, people can hopefully see the benefits of evaluation and will support larger evaluation efforts when needed.
Glenn
The “Before” Aspect of evaluation
Evaluation is often thought of as a “concluding” activity – something that is done once a programme or project is finished. But evaluation also has a role to play “before” and “during” an activity. A recent experience highlighted for me the importance that evaluation can have in the “before” phase.
I have been involved in setting up a pan-European e-learning platform, and prior to its launch we decided to test the platform with a select group of users. In the learning or communications fields this would be standard procedure – pre-testing material before it is used with its target audiences. But I am amazed at how many organisations don’t pre-test material – a “before” evaluation activity.
The feedback we received from the test users was incredibly informative – they identified issues that we had not even thought about: access, usability, and broader issues around motivation and incentives for using the platform. User tests for websites and online platforms do not have to be complicated or costly – Jakob Nielsen, the specialist in this field, explains well why usability does not have to be expensive.
The “before” evaluation phase is much broader than simply pre-testing material. The establishment of baseline data (e.g. attitude levels on issues), the gathering of existing research on a subject, benchmarking with comparable projects and ensuring that a project’s objectives are clear and measurable are some of the components of this phase.
Glenn
New Methodologies in Evaluation
The UK-based Overseas Development Institute has published a very comprehensive guide, “Tools for Knowledge and Learning: A guide for development and humanitarian organisations” (pdf), which contains descriptions of 30 knowledge and learning tools and techniques.
It contains guidelines for several relatively new methodologies useful for evaluation, notably Social Network Analysis, Most Significant Change and Outcome Mapping. I believe these methodologies could be useful in a lot of fields, not only the development/humanitarian sector.
Glenn
Methodology and misuse of research – part 2
As I wrote in a previous post, research results are sometimes misused (that’s nothing new…) and we are often given scant details on how the results were gathered and analysed.
I came across a study undertaken by a bank in Geneva, Switzerland (where I live) that makes a series of claims about e-banking, web surfing habits and computer use in general. I was surprised to learn that these claims were based on a sample of 300 residents. Now, Geneva has some 440,000 residents, and I seem to recall from Statistics 101 that 300 people don’t really make a representative sample of 440,000 (it would be closer to 600 people, depending on the confidence level and interval you are aiming for).
I’m not usually such a stickler on sample sizes, given that the audiences we look at can often be broken down into sub-populations that are relatively small in number (so we aim for the highest participation possible) – but if you do have a uniform finite population, try using this online sample size calculator to estimate the sample needed; it’s quite useful.
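For anyone curious where the figure of roughly 600 comes from, here is a minimal sketch of the standard calculation (Cochran’s formula with a finite-population correction), written in Python. The 95% confidence level and ±4% margin of error used in the example are my assumptions for illustration, not figures quoted in the bank’s study.

import math

def sample_size(population, z=1.96, margin_of_error=0.04, proportion=0.5):
    """Estimate the sample size for a simple random sample of a finite
    population (Cochran's formula plus a finite-population correction).
    proportion=0.5 gives the most conservative, i.e. largest, estimate."""
    # Sample size needed for an effectively infinite population
    n0 = (z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    # Adjust downwards for the finite population
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Geneva example from the post: roughly 440,000 residents
print(sample_size(440_000))                          # about 600 at 95% confidence, +/-4%
print(sample_size(440_000, margin_of_error=0.05))    # about 384 at 95% confidence, +/-5%

Notice that the required sample barely shrinks even for a city-sized population; by the same arithmetic, 300 respondents corresponds to a margin of error of about ±5.7% at 95% confidence.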
Glenn
Methodology and misuse of research
I’m always surprised at the number of published research results that fail to explain how the results being promoted were gathered and analysed. Related to this, there is the issue of results being misused, embellished or taken out of context. Constantin Basturea tells the interesting tale of how the results of a quick poll of 50 website visitors became a poll of “300 business communicators” in a later publication. He only found this out after becoming curious about the poll results and requesting details of the methodology.
Personally, I think it’s always wise to publish information about your methodology for evaluation projects, particularly if the results are published and freely available. That’s what we did for the evaluation of the LIFT06 conference, publishing both the results and the methodology. That way, hopefully, your results are not taken out of context and the methodology is available for review and criticism.
Glenn