Kidneys, Kylie and effects

This month, the Dutch authorities reported that the number of people registering as organ donors had tripled compared with previous months. What caused this sudden jump in registrations – a fantastic awareness programme?
In fact, they trace the increase to the now infamous “Dutch TV Kidney Hoax”, a reality TV show in which real patients in need of a kidney “competed” for one.
From a communications evaluation point of view, it is an interesting example of how a communications activity can bring about a rapid change in behaviour (in this case, donor registration), and perhaps one that was not intended.
In evaluating our own communication activities, we should try to identify other factors that could have influenced the change being seen – in the kidney TV hoax the external influence was obvious, but it will not be for many of the more day-to-day communication activities that we run.
Which reminds me of another example – in August 2005, the number of appointments made for mammograms (to detect breast cancer) jumped by 101% in Australia. Was this the result of a successful communications campaign? No: that month, pop singer and fellow Melburnian Kylie Minogue was diagnosed with breast cancer, resulting in mass media coverage of the issue, which I’ve written about previously.
Identifying other possible explanations for the changes being observed (rather than just saying “look, our communications campaign worked”) is important in maintaining a credible and balanced approach to evaluation.
Glenn
Measuring online behaviour – statistics to indicators

I’ve written previously about measuring online behaviour and how it can be linked to overall PR evaluation. I was therefore interested in the recent news from Nielsen that it will now rank websites by time spent on a site rather than by the number of pages viewed. This is a recognition that the amount of time spent on a website – watching a video, clicking through a slide presentation, reading a text, etc. – is an indirect indication of “interest” or “engagement”.
When looking at measuring online behaviour, I’ve seen quite a few organisations simply drowning in data from web metrics software packages, unable to pull out a real analysis of what they have – or have not – achieved through the web.
Ultimately, indicators should be set against which success can be measured (a minimal sketch of how such indicators might be computed follows the list below). These could be:
- “engagement” (average time spent on the website),
- “interest” (number of podcasts downloaded),
- “conversion” (number of sign-ups for a sales offer),
- “preference” (growth in visits to a new language version),
- and so on.
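As an illustration only – the record layout and field names below are entirely hypothetical, not taken from any particular web metrics package – here is a minimal Python sketch of how raw visit data might be distilled into the kinds of indicators listed above:

```python
from statistics import mean

# Hypothetical raw visit records exported from a web metrics package;
# the field names are illustrative, not from any specific tool.
visits = [
    {"seconds_on_site": 420, "podcast_downloads": 1, "signed_up": True, "language": "fr"},
    {"seconds_on_site": 35, "podcast_downloads": 0, "signed_up": False, "language": "en"},
    {"seconds_on_site": 180, "podcast_downloads": 2, "signed_up": False, "language": "fr"},
]

indicators = {
    # "engagement": average time spent on the website
    "engagement_avg_seconds": mean(v["seconds_on_site"] for v in visits),
    # "interest": total number of podcasts downloaded
    "interest_podcast_downloads": sum(v["podcast_downloads"] for v in visits),
    # "conversion": number of sign-ups for a sales offer
    "conversion_signups": sum(v["signed_up"] for v in visits),
    # "preference": visits to the new (here: French) language version
    "preference_fr_visits": sum(v["language"] == "fr" for v in visits),
}

for name, value in indicators.items():
    print(f"{name}: {value}")
```

The point is not the code itself but the reduction: a handful of pre-agreed indicators tied to objectives, rather than every statistic the package can export.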
On a related note, when thinking about how to measure online social networking, the Measurement Standard blog provides an interesting list of suggested indicators to measure.
Glenn
New blogs on PR measurement

Two new blogs focusing on public relations and evaluation/measurement have come to my attention recently:
Evaluating the media from Michael Blowers in the UK
Measurement PRoponent / PRomulgator from Alan Chumley in Canada
Glenn
Download Free PR Measurement Book Now!

No, that’s not a spam title… PR measurement guru KD Paine has put her thoughts down on paper for us in a 177-page book – you can download a draft here in pdf format (1.5 mb). I’ve just been skimming through it and there are certainly some interesting chapters – on measuring relationships with communities, investors and others. KD Paine welcomes your feedback on her book blog.
Glenn
Evaluation of LIFT07 – can we measure the long term impact of conferences?
I’ve just finished an interesting evaluation study on LIFT07, an international conference on emerging technology and communications that was held in Geneva during February 2007 – you can view the complete evaluation report here (pdf, 339 kb). Our main evaluation tool was a survey of the conference attendees (48% of participants completed the survey).
Apart from providing useful feedback that will assist the conference organisers in improving future conferences, the study set out to determine the longer-term impact of the first LIFT conference (held in February 2006). By surveying attendees who participated in both the 2006 and 2007 conferences, we were able to “track” some key changes in attitudes and behaviours, and the extent to which they could be attributed to the LIFT conference. My findings are summarised in this diagram (pdf).
What I found very interesting is that one year after the conference, 28% of attendees (of a 50-person sample) said they had started new activities partly due to LIFT06, such as forming a new partnership or creating a blog. Further, 90% of attendees said that the conference influenced how they find and exchange information.
Of course, we have to recognise the limitations of the study, notably that it is self-reported (and not backed up by independent confirmation) and based on a relatively small sample (17.5%, or 50 people out of 285 participants). Nevertheless, we can conclude that the conference did have a longer-term impact in quite precise areas with some participants: establishing new contacts, inspiring new ideas, and new ways to find and exchange information.
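To make the small-sample caveat concrete, here is a minimal Python sketch – not part of the original study – that puts a 95% Wilson confidence interval around the headline 28% figure (14 of 50 respondents):

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# 28% of the 50-person sample (14 people) reported starting new activities
low, high = wilson_interval(successes=14, n=50)
print(f"point estimate: 28%, 95% CI: {low:.0%} to {high:.0%}")
# prints roughly 17% to 42% - a wide range, reflecting the small sample
```

A plausible range of roughly 17% to 42% is a useful reminder of how much uncertainty a 50-person sample carries, even when the point estimate sounds precise.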
Glenn
The EvaluationWiki

I came across the EvaluationWiki recently which provides interesting background information and resources in all fields of, what else, evaluation. I particularly enjoyed reading the Origins of Evaluation page.
And while speaking of wikis, we shouldn’t forget the newPRwiki’s page on PR measurement for those interested in communications and evaluation.
Glenn
Online polling – legitimate or not?

Here is something from the mainstream media: the International Herald Tribune has published an interesting article about the legitimacy (or not) of online polling – basically, the use of online survey tools to conduct polling with broad audiences.
The article pits two main players in the field against each other: YouGov and TNS.
The representative from TNS says about internet polling:
“Internet polling is like the Far West, with no rules, no sheriff and no reference points.”
Of course, he has a point, although the counter-argument is that *offline* polling – usually done by calling people on their fixed phone lines – is fast becoming obsolete as countries create “no call” lists and people increasingly use their mobile phones.
From my perspective, I would see the debate from a slightly different angle:
- We shouldn’t forget the approach of triangulation, that is, the combination of different research methods to bring us closer to the *truth*. Internet polling, if combined with other research methods (interviews, focus groups, observations), becomes more useful.
- The whole article focuses only on surveying *unknown* audiences – that is, members of the public as a whole. However, most evaluation that I see undertaken is done with *known* audiences, e.g. staff, partners, customers, members, etc. For *known* audiences, internet polling is efficient and reliable – assuming, of course, that your audience has access to the Internet. If you are trying to gauge the opinions of audiences that are clearly not using the Internet, then another approach would be appropriate.
Glenn
The problem with ROI

I’ve written previously about Return on Investment (ROI) and particularly its application to blogging (which I consider flawed).
At a broader level, there has been discussion for some time on ROI for public relations/communications programmes. Tom Watson of the dummyspit blog has written about this issue and the difficulty of applying what is essentially a financial concept to a non-financial activity, as is the case for marketing and PR.
As he states:
“For marketers, the application of ROI limits their role to sales support and ignores the brand and reputational issues. In PR, I’ve long argued that the use of business language is a fundamental sign of insecurity and a lack of confidence. It seems that marketing has the same affliction.”
Glenn
Ambitious but Achievable

I was recently at a presentation where the speaker commented on the SMART approach to writing objectives and said that the “A” stood for “Ambitious”. The person next to me nudged me and whispered, “I think he means ‘Achievable’”. And that got me thinking that the A should stand for “Ambitious but Achievable”.
For a programme or project to be evaluated, it is necessary to set SMART objectives that allow the identification of indicators to facilitate that evaluation. What do SMART objectives look like? I’ve created several examples (pdf) if you are interested in my interpretation of SMART.
But there is a potential conflict in the concept of “Ambitious but Achievable”: we need to set realistic objectives that can be evaluated, but at the same time we want our objectives to be ambitious enough to contribute significantly to the overall aims of a project or programme.
Glenn
Accountability and Outcomes – the Penny Drops…

In the past months I’ve had many discussions with project managers about evaluation in which they clearly feel uncomfortable linking their objectives to outcomes – and now, after reading a recent article in the Evaluation journal, the penny has dropped (i.e. I can connect the dots and understand a little more…).
In an article by John Mayne, “Challenges and Lessons in Implementing Results-Based Management”, he discusses the issue of accountability and outcomes:
“People are generally comfortable with being accountable for things they can control. Thus, managers can see themselves as being accountable for the outputs produced by the activities they control. When the focus turns to outcomes, they are considerably less comfortable, since the outcomes to be achieved are affected by many factors not under the control of the manager.”
And that’s it – a communications manager prefers to be accountable for the number of press releases s/he publishes rather than the changes in an audience’s knowledge or attitudes; a training manager prefers to be accountable for the number of courses s/he organises rather than the impact on an organisation’s efficiency; and so on.
So is there a solution? John Mayne speaks of a more sophisticated approach to accountability, notably to look at the extent to which a programme has influenced and contributed to the outcomes observed. And that leads us to the next question – how much influence is good enough?
Food for thought…you can read an earlier version of the article here (pdf).
Glenn