Granularity – part 2 – still no one cares!
It seems that people creating surveys don’t always pay attention to granularity issues, as I’ve written about before. Here is another example, from a survey by people who should know better (The Guardian newspaper, no less):
Now what’s wrong with this? It’s highly unusual to place an MBA at the same level as a PhD. An MBA is at the same educational level as an MA or MSc: a master’s degree. This makes analysis difficult afterwards, because you cannot correctly separate out the different levels of education. That’s a granularity issue: placing items at the correct level. Not to mention that the scale above gives the impression that an MBA has the same value as a PhD.
Glenn
95 theses on evaluation
Disturbed by the state of affairs in evaluation, Professor Cronbach and colleagues wrote 95 theses on reform in evaluation (inspired by Martin Luther’s 95 theses of 1517). They speak of the need for:
“A thoroughgoing transformation. Its priests and patrons have sought from evaluation what it cannot, probably should not, give.”
Although written 28 years ago, the 95 theses (pdf) make many pertinent points that are still valid today.
Here are several favourites that have stood the test of time (no. 75 is my favourite):
9. Commissioners of evaluations complain that the messages from evaluations are not useful, while evaluators complain that the messages are not used.
35. “Evaluate this program” is often a vague charge because a program or a system frequently has no clear boundaries.
49. Communication overload is a common fault; many an evaluation is reported with self-defeating thoroughness.
75. Though the information from an evaluation is typically not used at a foreseeable moment to make a foreseen choice, in many evaluations a deadline set at the start of the study dominates the effort.
95. Scientific quality is not the principal standard; an evaluation should aim to be comprehensible, correct and complete, and credible to partisans on all sides.
Read the full 95 theses (pdf) – despite the poor quality of this copy, it’s well worth a read. The 95 theses originally appeared in the book “Toward Reform of Program Evaluation”.
Glenn
Content analysis and word clouds
Content analysis is a research method to analyse and categorise all sorts of texts and images (from photos to interview transcripts to newspaper articles) with the aim of identifying trends and patterns.
This is usually a labour-intensive task, but it has recently been made easier by smart software that appears to be getting better and better. Here is one tool I came across, Wordle, which searches through a text and gives more prominence to words that appear more frequently, creating a “word cloud”. A simple idea that, graphically, can be quite revealing. Here is a wordle of this very blog you are reading (click on it to see a larger version):
It’s quite interesting to see the main words that emerge: “communications”, “audience”, “survey”, “analysis” and “materials” – which, for the most part, are accurate descriptions of the focus of the blog.
Try creating your own word cloud on wordle>>
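The core idea behind a word cloud – counting word frequencies and surfacing the most common terms – can be sketched in a few lines of Python. This is a minimal illustration, not Wordle’s actual algorithm (its layout and stop-word handling are more sophisticated), and the small stop-word list here is just for demonstration:

```python
import re
from collections import Counter

# A tiny illustrative stop-word list; real tools use much larger ones.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "that", "for", "on"}

def word_frequencies(text, top_n=10):
    """Count word occurrences, ignoring case, punctuation and stop words."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return counts.most_common(top_n)

sample = "Survey analysis and audience analysis: the survey informs the analysis."
print(word_frequencies(sample, top_n=3))
# → [('analysis', 3), ('survey', 2), ('audience', 1)]
```

A word-cloud tool then simply scales each word’s display size in proportion to its count.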
I discovered Wordle via the Many Eyes website, an interesting site about visualisation.
Glenn
Survey responses – do the “don’t know” really know?
I’ve written before about survey responses and the use of “don’t know” as an option on a Likert scale. What I said was that in some situations a person may not have an opinion on a subject – and cannot say whether they agree or disagree – so it may be wise to include a “don’t know” option. Well, I just read an interesting article suggesting that people who respond “don’t know” may actually have an opinion – it’s just that they may need more time to develop confidence in, or awareness of, their choice. The article gives an example of how the opinion of undecided people can be accurately predicted by creative means:
In a recent study, 33 residents of an Italian town initially told interviewers that they were undecided about their attitude toward a controversial expansion of a nearby American military base. But researchers found that those people’s opinions could be predicted by measuring how quickly they made automatic associations between photographs of the military base with positive or negative words.
Research in communication projects
I came across this useful table from the Devcom blog which explains how research can be used at different stages of communication projects. There are many elements that will be familiar to readers, but what caught my eye was the first method “audience analysis” – which is often ignored by communicators in their rush to create materials and campaigns. The blog also has an example of an audience analysis (pdf) for readers. And method 3 – pretesting of prototype material – is another step often skipped over.
| Method | Purpose |
| --- | --- |
| 1. Audience analysis | To characterize the audience (demographics, communication environment) in order to develop the content of materials and set campaign targets |
| 2. Baseline survey | To assess knowledge, beliefs and behavior – to document the current scenario |
| 3. Pretesting of prototype materials | To determine the appeal and understandability of materials (radio drama, campaign materials) |
| 4. Management monitoring survey | To track implementation plans and make adjustments as needed |
| 5. Content analysis | To analyze the content of audience feedback |
| 6. Post-test survey | To determine whether the project has achieved its objectives |
Evaluation and communications for development

If, like me, you are interested in how communications can support development programmes – and consequently how they can be evaluated – then you might want to check out the evaluation page of the Communication Initiative Network website, where you can find hundreds of evaluation studies and reports on communication for development, ranging from reproductive health to tobacco control to conflict. View the range of subjects here>>
Open source movement measuring social media

I’ve written previously about measuring social media – now here is an interesting initiative: an open source movement for measuring social media. They have a useful resources page and a “what we are trying to measure” page. It’s also great that it’s an open source initiative, as there are companies promoting proprietary solutions that claim to measure social media but aren’t willing to share the methodology behind them.
Glenn
Evaluating Public Information and Advocacy Campaigns

Last week, I was at the European Evaluation Society Biennial Conference in Lisbon, Portugal and presented a paper on “Evaluating public information and advocacy campaigns”.
Here is a summary of the paper:
Increasingly non-governmental organisations and international organisations use public information and advocacy campaigns to support their goals. Existing methodologies are rarely applied to evaluate campaigns. However, meaningful evaluation of campaigns is possible by taking into account the specific nature of campaigns while meeting minimum requirements of evaluation. This paper discusses “lessons learnt” in evaluating campaigns and particular challenges faced in assessing international campaigns. Although a standard methodology is yet to emerge, this paper describes the desired outcomes that many campaigns share and the appropriate evaluation methods that have been successfully used.
Read the full paper here (pdf)>>
Glenn
Blog on event evaluation
This is the first blog I’ve seen that focuses 100% on event evaluation: Constellation Communication blog. Many interesting posts on event evaluation and ROI for events.
Similar to what I do with an “event scorecard”, here is an interesting post with visuals on how to represent event evaluation – the ROI measures are particularly useful.
Glenn
Evaluating events and conferences

Like me, you might be surprised to learn that businesses spend an estimated 150 billion US dollars per year organising meetings and events in the US alone.
But in my experience, organisations rarely measure the impact of meetings and events: what did attendance at meetings or events change in the performance of individuals and organisations as a whole?
I’ve done several event evaluation projects in past years and have developed an “event scorecard”. To summarise and share my experiences, I have just created a fact sheet, “Evaluating events and conferences (pdf)” – good luck with it!
Glenn
(above photo from Lift conference 2007, Geneva, Switzerland – photo by noneck).