Posts filed under ‘Evaluation tools (surveys, interviews..)’
Checklists and evaluation

Often in evaluation, we are asked to evaluate projects and programmes from several different perspectives: that of the end user, the implementer, or an external specialist or “expert”. I always favour the idea that evaluation represents the *target audience’s* point of view – as is often the case in evaluating training or communications programmes, we are trying to explain the effects of a given programme or project on target audiences. However, a complementary point of view from an “expert” can often be useful. A simple example: imagine you are making an assessment of a company website – a useful exercise would be to compare the feedback from site visitors with that of an “expert” who examines the website and gives his/her opinion.
However, the opinions of “experts” are often mixed in with feedback from audiences and come across as unstructured opinions and impressions. A way of avoiding this is for “experts” to use checklists – a structured way to assess the overall merit, worth or importance of something.
Now many would consider checklists a simple tool not worthy of discussion. But a checklist is actually often a representation of a huge body of knowledge or experience: e.g. how do you determine and describe the key criteria for a successful website?
Most checklists used in evaluation are criteria of merit checklists – a series of criteria are established, each one is rated on a standard scale (e.g. very poor to excellent), and the criteria are weighted equally or not (i.e. one criterion may count the same as, or more than, the next). Here are several examples where checklists could be useful in evaluation (a small scoring sketch follows the examples):
- Evaluating an event: you determine “success criteria” for the event and have several experts use a checklist and then compare results.
- Project implementation: a team of evaluators are interviewing staff/partners on how a project is being implemented. The evaluators use a checklist to assess the progress themselves.
- Evaluating services/products: commonly used, where a checklist is used by a selection panel to determine the most appropriate product/services for their needs.
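To make the scoring idea concrete, here is a minimal sketch in Python – the criteria, weights and ratings are entirely hypothetical, just to show how a weighted criteria-of-merit checklist could be rolled up into an overall score:

```python
# A minimal sketch of scoring a criteria-of-merit checklist.
# The criteria, weights and ratings are hypothetical, not a recommended set.

# Each criterion maps to (weight, rating); ratings are on a 1-5 scale
# (1 = very poor, 5 = excellent).
checklist = {
    "Ease of navigation": (3, 4),
    "Clarity of content": (3, 5),
    "Visual design": (2, 3),
    "Accessibility": (2, 2),
}

def weighted_score(checklist):
    """Return the weighted average rating across all criteria."""
    total_weight = sum(weight for weight, _ in checklist.values())
    weighted_sum = sum(weight * rating for weight, rating in checklist.values())
    return weighted_sum / total_weight

print(f"Overall score: {weighted_score(checklist):.2f} out of 5")
```

Several “experts” can then complete the same checklist, and their scores can be compared criterion by criterion – which keeps the discussion structured rather than a loose collection of impressions.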
This post by Rick Davies, which discusses the use of checklists in assessing the functioning of health centres, is actually what got me thinking about this subject.
Glenn
Impact or results?

When speaking of achieving objectives for a project, I’ve heard a lot of people speak of the “intended impact”, and I’ve read quite a few “impact reports”. I know it’s a question of language, but people often use the word “impact” when in fact they should use the word “results”. In the evaluation field, impact refers specifically to the long-term effects of a project. The DAC Glossary of Key Terms in Evaluation and Results Based Management (pdf) produced by the OECD contains the most widely accepted definitions in this field. Impact is defined as:
“Positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended”.
And “results” is defined as:
“The output, outcome or impact (intended or unintended, positive and/or negative) of a development intervention”.
Consequently, I believe that when we produce a report showing the media visibility generated by a project, this is a short-term output and should be called “results” rather than “impact”, which applies to longer-term effects.
Glenn
The EvaluationWiki

I came across the EvaluationWiki recently which provides interesting background information and resources in all fields of, what else, evaluation. I particularly enjoyed reading the Origins of Evaluation page.
And while speaking of wikis, we shouldn’t forget the newPRwiki’s page on PR measurement for those interested in communications and evaluation.
Glenn
Network Mapping – Commercial application?

I read with interest this post about a network mapping service that has created relational maps of investors, companies and people in Silicon Valley.
For example, they made a map of the capital links between the three big social networking sites (Facebook, Friendster & LinkedIn). You can view more examples here.
What’s interesting is that this is a paid service – you pay to fully access and use the maps – which underlines the value that network mapping can have for analysing complex situations.
Thanks to Kushtrim Xhakli for bringing this to my attention.
Glenn
Beware: dodgy Blog ROI in circulation

Forrester Research have published a new report on the “ROI of blogging” (at USD 379 a pop). And I’ve seen that many bloggers have jumped on this with utmost enthusiasm.
Well hold on….
Although Charlene Li of Forrester explains the ROI model well, there are some fundamental flaws in the ROI calculation that KD Paine and David Phillips explain further. As KD Paine put it:
“The false assumptions and inaccuracies in this report are scary”
What is the main flaw? Well, the whole blog ROI calculation falls down because it is based on comparing purchased advertising to editorial content, which is a highly discredited way of measuring PR value (read more about comparing advertising to editorial content in this report (pdf) by some leading scholars).
The report does have some interesting points in that it attempts to pull out some of the benefits of blogging (such as customer insights) and compare them to the cost of market research. Certainly the idea of showing how visibility grows from a blog post (through generating comments, thoughts and referrals) to changes in attitudes and behaviours is heading in the right direction, as I’ve written about before.
And as for blogging ROI, I would look more at the cost of the working hours spent blogging, compare it to the working hours needed to mount a traditional campaign, and then compare the changes in behaviour and attitude achieved with each method (admittedly easier said than done). That would be more a measure of “efficiency” than anything else.
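To illustrate the kind of comparison I have in mind, here is a very rough sketch in Python. Every number in it (the hourly rate, the hours, the measured attitude change) is a made-up placeholder; the point is simply the “cost per unit of change” logic:

```python
# Rough sketch of comparing the "efficiency" of blogging vs a traditional
# campaign. Every figure below is a made-up placeholder.
HOURLY_RATE = 80.0  # assumed fully loaded cost of one working hour

def cost_per_point_of_change(hours_spent, attitude_change_points):
    """Cost of achieving one percentage point of measured attitude change."""
    return hours_spent * HOURLY_RATE / attitude_change_points

# Hypothetical inputs: hours invested and change measured in a tracking survey.
blogging = cost_per_point_of_change(hours_spent=120, attitude_change_points=4.0)
traditional = cost_per_point_of_change(hours_spent=400, attitude_change_points=6.0)

print(f"Blogging:    {blogging:,.0f} per point of attitude change")
print(f"Traditional: {traditional:,.0f} per point of attitude change")
```

The hard part, of course, is the denominator – attributing a measured change in attitude or behaviour to each method – which is why I say it is easier said than done.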
Glenn
Using employee opinion surveys to guide HR policies

In a previous post Glenn wrote that evaluation is frustrating when no changes result.
Quite so. I have seen many an employee survey gather dust or be quietly forgotten.
Why should employees bother? Often the survey will avoid touchy issues – the “elephant in the room”. There is often a huge delay in making the survey results public. And nothing changes anyway.
Well, here’s a little case history to cast a ray of hope.
A Benchpoint client in the City of London’s financial sector (often characterised by macho management cultures) wanted to run an anonymous poll of employee attitudes on just about everything relating to the job and the relationship with the employer.
The company had very enlightened HR policies by traditional City standards. A “listening culture”, personal development goals, work/life balance, continuous personal feedback etc. But were the policies working? What needed to change?
More than “just a survey”, the Benchpoint poll enabled important decisions to be taken quickly with the benefit of employee input. The company announced operational changes just 5 days after the survey closed, and immediately after a presentation of the results to all employees.
Employee response and feedback was positive; just one employee was unable to take part. For the first time they had an opportunity to voice their true opinions anonymously and confidentially. And they were impressed by the speed of their management’s reaction.
For management, this was a relatively low-cost exercise, and it was particularly economical with senior people’s time. It was also an excellent way of “walking the talk” and demonstrating that HR policies regarding inclusivity, development and teamwork were real.
The data collected provided an insight into the behaviour drivers of groups of employees, and a dynamic tool for future goal setting and benchmarking.
Conclusions
- Don’t do a survey unless you are prepared to communicate the results and make changes.
- Don’t make organisation or operational changes unless you have surveyed and understood the real issues.
- Do it quickly, share the results quickly and make your announcements quickly.
- Don’t assume that everything is OK in your organisation. Measure intelligently to find out what the real issues are.
Richard
New Methodologies in Evaluation
The UK-based Overseas Development Institute has published a very comprehensive guide, “Tools for Knowledge and Learning: A guide for development and humanitarian organisations” (pdf), which contains descriptions of 30 knowledge and learning tools and techniques.
It contains guidelines for several relatively new methodologies useful for evaluation, notably Social Network Analysis, Most Significant Change and Outcome Mapping. I believe these new methodologies could be useful in a lot of fields, not only in the development/humanitarian sector.
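To give a flavour of one of them, here is a minimal Social Network Analysis sketch in Python using the networkx library – the organisations and links are invented purely for illustration:

```python
# Minimal Social Network Analysis sketch; the organisations and links
# are invented purely for illustration. Requires: pip install networkx
import networkx as nx

# Who exchanges information with whom in a hypothetical project network.
links = [
    ("NGO A", "Ministry"),
    ("NGO A", "NGO B"),
    ("NGO B", "Local partner"),
    ("Ministry", "Donor"),
    ("Donor", "NGO A"),
]

graph = nx.Graph(links)

# Degree centrality: which actors sit at the centre of the information flow?
centrality = nx.degree_centrality(graph)
for actor, score in sorted(centrality.items(), key=lambda item: item[1], reverse=True):
    print(f"{actor:15s} {score:.2f}")
```

Even this tiny example shows the kind of question the method answers: which actors are central to the flow of information in a project, and which sit on the periphery.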
Glenn
Measuring Online Behaviour – Part 2
Further to my earlier post on measuring online behaviour, I would recommend this article in Brandweek. The article, which I read about on K D Paine’s blog, explains well the current practices of many companies in tracking online behaviour (particularly linked to online campaigns). It goes in the direction I’ve been thinking – that is, in the online environment, we can measure the behaviour of publics to supplement “offline” measurement.
I encourage companies to focus on performance indicators that move away from visit statistics and towards the actions undertaken by a user when visiting a website, for example: referral (referring a page/issue to a friend), commitment (signing up for or endorsing a given activity) or task completion (completing an action online – e.g. playing a game, requesting information, etc.).
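As a simple illustration of what such indicators could look like in practice, here is a sketch in Python – the log entries and action names are invented, not taken from any real analytics package:

```python
# Sketch of turning a web activity log into action-based indicators.
# The visitor IDs, actions and counts are invented for illustration.
from collections import Counter

# Each log entry: (visitor_id, action)
activity_log = [
    ("v1", "visit"), ("v1", "referral"),
    ("v2", "visit"), ("v2", "sign_up"),
    ("v3", "visit"),
    ("v4", "visit"), ("v4", "task_completed"),
]

actions = Counter(action for _, action in activity_log)
visits = actions["visit"]

# Report each action-based indicator as a share of visits.
for indicator in ("referral", "sign_up", "task_completed"):
    rate = actions[indicator] / visits * 100
    print(f"{indicator:15s} {actions[indicator]} ({rate:.0f}% of visits)")
```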
Some points of interest I noted from this article:
– Time spent looking at a web feature is an important measure for some campaigns
– IBM looks at registrations and opt-ins as success measures for campaigns
– The pharmaceutical industry is increasingly turning to online measurement as more and more patients seek medical information online.
Glenn
Methodology and misuse of research – part 2
As I wrote in a previous post, research results are sometimes misused (that’s nothing new…) and we are often given scant details on how the results were gathered and analysed.
I came across a study undertaken by a bank in Geneva, Switzerland (where I am living) that makes a series of claims about e-banking, web surfing habits and computer use in general. I was surprised to learn that these claims were based on a sample of 300 residents. Now Geneva has some 440,000 residents and I seem to recall from Statistics 101 that 300 people doesn’t really make a representative sample of 440,000 (it would be closer to 600 people depending upon the confidence level and intervals you are aiming at).
I’m not such a stickler on sample sizes, given that the audiences we are looking at can often be broken down into sub-populations that are relatively small in number (so we aim for the highest participation possible) – but if you do have a uniform finite population, try using this online sample size calculator to estimate the sample needed; it’s quite useful.
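For what it’s worth, the arithmetic behind those calculators is straightforward. Here is a small Python sketch using the standard formula with a finite population correction (the 95% confidence level and the ±5% and ±4% margins are just example settings):

```python
# Sample size for a simple random sample from a finite population,
# using the standard formula with a finite population correction.
import math

def sample_size(population, margin_of_error, z=1.96, p=0.5):
    """z=1.96 corresponds to 95% confidence; p=0.5 is the most conservative assumption."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Geneva example: roughly 440,000 residents.
print(sample_size(440000, margin_of_error=0.05))  # about 384 at +/-5% margin
print(sample_size(440000, margin_of_error=0.04))  # about 600 at +/-4% margin
```

On these standard assumptions, a sample of roughly 600 corresponds to a ±4% margin of error at 95% confidence for a population the size of Geneva, while 300 respondents would give a margin of roughly ±5.7% on the same assumptions.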
Glenn
Methodology and misuse of research
I’m always surprised at the number of published research results that fail to explain how they came to gather/analyse the results they are promoting. Related to this, you have the issue of results being misused, embellished or taken out of context. Constantin Basturea tells the interesting tale of how results from a quick poll of 50 website visitors became a poll of “300 business communicators” in a later publication. He only found this out after being curious about the poll results and requesting details of the methodology.
Personally, I think it’s always wise to publish information about your methodology for evaluation projects, particularly if the results are published and freely available. That’s what we did for the evaluation of the LIFT06 conference, publishing the results and the methodology. Then hopefully your results are not taken out of context and the methodology is available for review and criticism.
Glenn