Posts filed under ‘Campaign evaluation’
Kidneys, Kylie and effects

This month, the Dutch authorities reported that the number of people registering as organ donors had tripled compared to previous months. What caused this sudden jump in registrations – a fantastic awareness programme?
In fact, the increase is traced to the now infamous “Dutch TV Kidney Hoax”, a reality TV show in which real patients in need of a kidney “competed” for one.
From the communications evaluation point of view, it is an interesting example of how a communications activity can bring about a rapid change in behaviour (in this case donor registration) and perhaps one that was not intended.
When evaluating our own communication activities, we should try to identify other factors that could have influenced the change being observed – in the kidney TV hoax the cause was obvious, but it will not be for many of the more day-to-day communication activities that we run.
Which reminds me of another example – in August 2005, the number of appointments made for mammograms (to detect breast cancer) jumped by 101% in Australia. Was this the result of a successful communications campaign? No – that month, pop singer and fellow Melburnian Kylie Minogue was diagnosed with breast cancer, resulting in mass media coverage of the issue, which I’ve written about previously.
Identifying other possible explanations for the changes being observed (rather than just saying “look, our communications campaign worked”) is important in maintaining a credible and balanced approach to evaluation.
Glenn
Fact Sheets on Communications Evaluation
As part of a breakfast meeting on evaluation and communications recently held in Geneva (at which Tom Watson spoke), I put together a series of fact sheets that some of you may find of interest:
- Introduction to communications evaluation (pdf)
- Evaluating networks (pdf)
- Evaluating communication campaigns (pdf)
Glenn
Evaluating Advocacy Campaigns

I’ve written previously about work that others and I have done on evaluating communication campaigns, particularly campaigns that aim for changes in both individual behaviour and government/private sector policies. Along the same lines, a post from the Mission Measurement blog caught my eye on evaluating advocacy campaigns. They make the very relevant point that although evaluating the impact of advocacy campaigns is difficult – trying to isolate their precise influence on the changes being observed – what certainly can be measured is progress towards the desired change.
They go on to provide some further insight into this issue, by looking at various measurements undertaken, such as:
- Number of contacts established
- Intermediate changes to knowledge/attitudes
- Measuring progress of change on a continuum
- Bellwether ratings
In the same vein, what I recommend to organisations is to start by setting clear objectives – precisely what is expected from the advocacy/campaigning – and then to establish relatively simple “tracking mechanisms” to follow “progress” on an issue, at the policy level (e.g. the number of governments that publicly commit to a given issue) or at the individual level (e.g. the number of people who pledge to undertake a given action). Often this information is “known” within an organisation but is not centralised or analysed – making any conclusion on a campaign’s impact difficult.
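To make the idea of a simple “tracking mechanism” concrete, here is a minimal sketch in Python. The record format, the policy/individual split and all of the entries are invented for illustration – the point is only that centralising commitments in one place makes a progress count trivial to produce:

```python
from collections import defaultdict

# Hypothetical advocacy-tracking records: (month, level, entity), where
# "policy" marks a government publicly committing to the issue and
# "individual" marks a person pledging to undertake the action.
records = [
    ("2007-01", "policy", "Government A"),
    ("2007-02", "individual", "pledge-0001"),
    ("2007-02", "individual", "pledge-0002"),
    ("2007-03", "policy", "Government B"),
]

def progress_by_level(records):
    """Count distinct committed entities per level - a simple progress metric."""
    seen = defaultdict(set)
    for _month, level, entity in records:
        seen[level].add(entity)
    return {level: len(entities) for level, entities in seen.items()}

print(progress_by_level(records))  # {'policy': 2, 'individual': 2}
```

Even a spreadsheet can serve the same purpose – what matters is that the data is in one place, so the counts can be reported over time rather than reconstructed after the fact.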
Glenn
Does Creativity Equal Results?

Last week I gave a presentation on evaluation at the First ISO and IEC Marketing and Communication Forum which took place in Geneva. The forum gathered communications and marketing professionals from all over the world working in the field of standards development.
My presentation focused on some of my favourite topics of evaluation, notably:
- Why marketing/communication professionals don’t evaluate
- The need for clarity in setting marketing/communication objectives
- How low cost evaluation can be undertaken
- The risks of being over-creative in communications
On the last point, I used the example of the Got Milk campaign, which has been lauded (notably by the advertising industry) as one of the most visible and creative ad campaigns of all time. However, did the highly creative ads actually help achieve the campaign objective – getting people to drink more milk? Well, milk consumption continues to decline, and the ads have been criticised for not addressing a key concern for teenagers – that they consider milk fattening.
And that’s the point I tried to make: creativity is all well and good – but it has to help communicators achieve their campaign goals – and be measurable.
My full presentation can be downloaded here:
Presentation: Effective Marketing & Communications through Evaluation (pdf – 1 MB)
Glenn
Acknowledgement: the example of the Got Milk campaign comes from the book “The Fall of Advertising and the Rise of PR”, A & L Ries.
Linking Media Coverage to Business Outcomes

Can we show a link between media coverage and desired business outcomes? A new study (pdf) from the US-based Institute for Public Relations has some interesting case studies that in several instances illustrate corresponding trends between increased media coverage and a desired business outcome occurring.
For example, they describe a campaign on the importance of mammograms, with the desired business outcome being an increase in the number of relevant medical procedures undertaken. Comparing the number of articles published on the issue with the number of medical procedures, a correlation seems to exist – the study includes a graph plotting, in blue, the number of press articles on the issue and, in red, the number of medical procedures undertaken (over two years in the US).
The authors of the study readily admit that they are making a jump in assuming “cause and effect” but what they are looking for is a “preponderance of evidence” that supports a correlation between media coverage and business outcomes.
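As an aside, a correlation of this kind is straightforward to compute once the two monthly series are side by side. The sketch below uses invented numbers (not the study’s data) to show a plain Pearson coefficient – and the comment makes the authors’ caveat explicit: a high coefficient only says the series move together, not that one caused the other:

```python
import math

# Hypothetical monthly series (invented for illustration, not the study's
# data): press clips on mammography, and mammogram procedures, same months.
clips      = [12, 15, 22, 30, 28, 35, 40, 38]
procedures = [900, 950, 1100, 1300, 1250, 1400, 1500, 1480]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(clips, procedures)
print(f"r = {r:.2f}")  # high r: the series move together;
                       # it does not show the coverage caused the procedures
```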
What I find interesting is the jump from an output measure (clips) to a business outcome. It is also notable that they were able to find communication campaigns where a clear link was made between communication objectives and business objectives – as there is often a large gap between the two.
Read the full study, “Exploring the Link Between Volume of Media Coverage and Business Outcomes” (pdf), by Angela Jeffrey, APR, Dr. David Michaelson and Dr. Don W. Stacks.
Glenn
Measuring Online Behaviour – Part 2
Further to my earlier post on measuring online behaviour, I would recommend this article in Brandweek. The article (which I read about on K. D. Paine’s blog) explains well the current practices of many companies in tracking online behaviour (particularly linked to online campaigns). It points in the direction I believe measurement is heading – that is, in the online environment, we can measure the behaviour of publics to supplement “offline” measurement.
I encourage companies to focus on performance indicators that move away from visit statistics and towards the actions undertaken by a user when visiting a website, for example: referral (referring a page/issue to a friend), commitment (signing up for or endorsing a given activity) or task completion (completing an action online – e.g. playing a game, requesting information, etc.).
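A rough sketch of what such indicators look like in practice, assuming a site logs one event per tracked action (the event names and log entries below are invented to match the indicators above, not any particular analytics tool):

```python
from collections import Counter

# Hypothetical web-analytics event log: one entry per tracked action.
# "referral", "commitment" and "task_completion" are the performance
# indicators; "visit" is a plain page visit with no further action.
events = [
    "visit", "visit", "referral", "visit", "commitment",
    "task_completion", "visit", "referral", "visit", "visit",
]

def indicator_rates(events):
    """Share of each action type among all logged events."""
    counts = Counter(events)
    total = len(events)
    return {action: counts[action] / total for action in counts}

rates = indicator_rates(events)
print(rates["referral"])  # 0.2
```

The useful shift is in what gets divided: actions over visits, rather than visits over time – which is exactly the move away from raw traffic statistics.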
Some points of interest I noted from this article:
– Time spent looking at a web feature is an important measure for some campaigns
– IBM looks at registrations and opt-ins as success measures for campaigns
– The pharmaceutical industry is increasingly turning to online measurement as more and more patients seek medical information online.
Glenn
Evaluating Communication Campaigns
As the development and humanitarian sector as a whole focuses more on accountability and performance, there has been a push for more evaluation of the sector’s communication activities.
Most methodologies and tools can be adapted from those used in the private sector. However, the communication campaigns of NGOs and international organisations often have dual outcomes they wish to achieve: individual behaviour change (e.g. persuading individuals to adopt a healthier lifestyle) and policy change (e.g. pushing governments to change policy on food labelling).
An excellent study, “Lessons in Evaluating Communication Campaigns: Five Case Studies” from the Harvard Family Research Project, looks at evaluating campaigns ranging from gun safety to emissions (ozone) reduction.
If you are interested in reading more about standards and practices of evaluation in the development and humanitarian sector, a good starting point is the Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP), an interagency forum that focuses on evaluation and accountability issues in this sector.
Glenn