Client Perspective of Evaluation

The Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP) has recently published the results (pdf) of a survey of evaluation managers from humanitarian organisations.

ALNAP received responses from 46 evaluation managers who commission and manage evaluation projects for humanitarian organisations. This provides an interesting insight into the “client” perspective of evaluation. Some highlights:

What helps promote an evaluation?
Ownership or “buy-in” of an evaluation was the most frequently cited promoting factor. The quality of the evaluation report and its recommendations was also important.

What inhibits the use of an evaluation?
The most frequently mentioned factor was the imposition of an evaluation by HQ and/or a donor. An arrogant, fault-finding or “ivory tower” approach from evaluators, and insufficient time for the evaluation (leading to superficial results), were also important factors.

What other factors induce changes in an organisation?
A very interesting question – what factors, apart from evaluation, do they believe drive change in their organisations? Respondents mentioned two important influences: the media and donors, and, to a lesser extent, the influence of peers (exchanges and discussions between agencies).

Why do organisations evaluate?
Formal accountability (obligation to donors, trustees, etc.)
Improving the programme
Learning for the organisation
Legitimising (to add credence or challenge existing agenda)

How to increase the use of evaluation?
Most respondents mentioned changing the attitude of senior managers and the culture of learning within their organisations. Respondents spoke of a culture of defensiveness rather than one of learning and reflection.

Some very interesting results. They also confirm what I have seen in the humanitarian field: communications professionals are slowly coming around to recognising that evaluation is necessary and important – but this is being prompted by pressure from donors and by the monitoring and evaluation units sprouting up in their organisations.

Glenn

September 6, 2006 at 2:17 pm Leave a comment

Measuring Online Behaviour – Part 2

Further to my earlier post on measuring online behaviour, I would recommend this article in Brandweek. The article (which I read about on K D Paine’s blog) explains well the current practices of many companies in tracking online behaviour, particularly linked to online campaigns. It points in the direction I believe measurement is heading – that is, in the online environment we can measure the behaviour of publics to supplement “offline” measurement.

I encourage companies to focus on performance indicators that move away from visit statistics and towards the actions a user undertakes when visiting a website, for example: referral (referring a page/issue to a friend), commitment (signing up for or endorsing a given activity) or task completion (completing an action online – e.g. playing a game, requesting information, etc.).
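As a rough illustration of the idea (not any vendor’s actual tool), here is a minimal Python sketch that tallies such action-based indicators from visit events; the field and action names are invented for the example:

```python
# Hypothetical visit events; "visitor_id" and "action" are invented field names.
from collections import Counter

events = [
    {"visitor_id": "v1", "action": "page_view"},
    {"visitor_id": "v1", "action": "refer_to_friend"},  # referral
    {"visitor_id": "v2", "action": "sign_up"},          # commitment
    {"visitor_id": "v3", "action": "page_view"},
    {"visitor_id": "v3", "action": "request_info"},     # task completion
]

visitors = {e["visitor_id"] for e in events}
actions = Counter(e["action"] for e in events if e["action"] != "page_view")

# Report each indicator as a share of unique visitors rather than raw hits,
# which shifts the focus from visit statistics to what users actually did.
for action, count in actions.most_common():
    print(f"{action}: {count} ({count / len(visitors):.0%} of visitors)")
```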

Some points of interest I noted from this article:

– Time spent looking at a web feature is an important measure for some campaigns

– IBM looks at registrations and opt-ins as success measures for campaigns

– The pharmaceutical industry is increasingly turning to online measurement as more and more patients seek medical information online.

Glenn

August 27, 2006 at 8:36 pm Leave a comment

Measuring Online Behaviour

A lot has already been written about how we can measure online behaviour by looking at indicators from website statistics, or “web metrics”. As part of PR measurement, web metrics can provide an interesting complement to other measures being taken. For example, in campaigning, online behaviours such as signing a petition, referring a page to a friend or uploading a message of support can be measures of behaviour change and can supplement “offline” measures. In advertising, the use of web metrics is making advertising more “measurable” and changing the business model in general. This article in The Economist sums up the change well.

I found this explanation from a Google representative, quoted in the article, of interest:

Old way of “offline” advertising:
“Advertisers are always trying to block the stream of information to the user in order to blast their message to him.”

New way of “online” advertising:
“On the internet, by contrast, advertisers have no choice but to go with the user, the information coming back from the users is more important than the messages going out.”

The interactive nature of the internet makes this possible: it makes the medium more measurable, and a two-way symmetrical approach to communications feasible.

Glenn

August 3, 2006 at 8:00 pm 2 comments

Methodology and misuse of research – part 2

As I wrote in a previous post, research results are sometimes misused (that’s nothing new…) and we are often given scant details on how the results were gathered and analysed.

I came across a study undertaken by a bank in Geneva, Switzerland (where I am living) that makes a series of claims about e-banking, web-surfing habits and computer use in general. I was surprised to learn that these claims were based on a sample of 300 residents. Now, Geneva has some 440,000 residents, and I seem to recall from Statistics 101 that 300 people do not make a representative sample of 440,000 (the required sample would be closer to 600, depending on the confidence level and interval you are aiming for).

I’m not such a stickler on sample sizes, given that the audiences we are looking at can often be broken down into sub-populations that are relatively small in number (so we aim for the highest participation possible) – but if you do have a uniform finite population, try using this online sample size calculator to estimate the sample needed; it’s quite useful.
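For the curious, here is a minimal sketch of the standard calculation behind such calculators – Cochran’s formula with a finite-population correction – in Python. The confidence and margin settings are my assumptions; 95% confidence with a ±4% margin is what yields the roughly 600 figure mentioned above:

```python
import math

def sample_size(population: int, z: float = 1.96, margin: float = 0.05, p: float = 0.5) -> int:
    """Cochran's formula with a finite-population correction.

    z=1.96 corresponds to 95% confidence; p=0.5 is the most
    conservative assumption about the population proportion.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)       # finite-population correction
    return math.ceil(n)

# Geneva example from the post: some 440,000 residents.
print(sample_size(440_000, margin=0.05))  # ~384 at 95% confidence, +/-5%
print(sample_size(440_000, margin=0.04))  # ~600 at 95% confidence, +/-4%
```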

Glenn

July 26, 2006 at 6:42 am 5 comments

Methodology and misuse of research

I’m always surprised at the number of published research results that fail to explain how they came to gather/analyse the results they are promoting. Related to this, you have the issue of results being misused, embellished or taken out of context. Constantin Basturea tells the interesting tale of how results from a quick poll of 50 website visitors became a poll of “300 business communicators” in a later publication. He only found this out after being curious about the poll results and requesting details of the methodology.

Personally, I think it’s always wise to publish information about your methodology for evaluation projects, particularly if the results are published and freely available. That’s what we did for the evaluation of the LIFT06 conference, publishing the results and the methodology. Then hopefully your results are not taken out of context and the methodology is available for review and criticism.

Glenn

July 24, 2006 at 10:09 am 3 comments

Measurement Summit, September 2006

I see that the program of the 4th annual Measurement Summit (to be held in New Hampshire, US) has been announced by the Institute for Public Relations. This is an important conference for people interested in communications and evaluation. There are certainly some interesting topics being addressed: Shell Corporation on reputation tracking, Procter and Gamble on ROI and PR, and a panel discussion featuring three of my favourite “scholars” in the field of evaluation: Larissa Grunig, James Grunig and Jim MacNamara. The full program is available online (pdf).

For those of us who will not be able to make it, let’s hope they publish the main presentations and conclusions (they are usually kind enough to do so).

Glenn

July 19, 2006 at 6:56 am Leave a comment

From media monitoring to behaviour change

Can we make the logical step from “output” to “outcome” with media monitoring – measuring changes to knowledge, behaviour or attitudes? With traditional media monitoring we cannot. And that’s the missing link of most media monitoring: how can we tell whether media exposure led to a change in a given audience? Polling audiences and making an informed assumption linking their media use with the changes observed is possible – but cost and complexity are the main deterrents for many organisations.

But in the online environment there are some interesting developments in the ability to link media exposure with actual behaviour change in an audience. Take a simple example: people who read an article online and then link to it in their blogs have made a behaviour change. If we can show the path from media exposure to the triggering of thoughts, comments, actions and ideas, we are heading in the “outcome” direction. David Phillips of the Leaverwealth blog is working in this area and is developing software to summarise the content of RSS feeds under subject headings and show the path back to the original stories and posts. It uses a statistical technique, Latent Semantic Analysis (LSA), which extracts and represents the similarity of meaning of words and passages. Now, that’s much more valuable than clip counting.
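Phillips’ software isn’t something I can show here, but as a rough illustration of what LSA itself does, here is a minimal Python sketch using the scikit-learn library (my choice of tool; the example texts and parameters are invented):

```python
# Minimal LSA sketch: a TF-IDF term-document matrix reduced by truncated SVD,
# then cosine similarity measured in the reduced "semantic" space.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Invented feed items: the first two are related, the third is not.
posts = [
    "Bloggers link to the original news article and add commentary",
    "Readers link to the news article from their own blog posts",
    "Quarterly earnings were announced by the retail company",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(posts)
reduced = TruncatedSVD(n_components=2).fit_transform(tfidf)

# High off-diagonal values suggest passages with similar meaning, which is
# what allows related posts to be grouped under a common subject heading.
print(cosine_similarity(reduced))
```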

Glenn

July 11, 2006 at 5:39 am 4 comments

Media monitoring – what is it worth? Part 2

Some further thoughts on media monitoring:

Michael Blower at the Media evaluation blog writes about audience reaction to the media as an “outtake” indicator with the new “BBC most popular stories” tool:

“For too long media content analysis has been driven by media output. This new tool makes it possible to do something which, up to now has been an expensive luxury – see an exact measure of media out-take.”

I find this a refreshing point of view from a media evaluator, moving the focus from “output” to “outtake”. If we take “outtake” to mean understanding of, reaction to and favourability towards a message, such website tools can provide feedback – albeit incomplete – on this level of measurement. I imagine there must be a web-based tool that can collate and rank the popularity of news stories (based on visitor traffic) across the main news sites. Google Trends does this with search results and links them to news stories, as I wrote about previously – a step in this direction. Of course, we have to factor in the limitations of web metrics, including password-protected content on news sites.

Glenn

July 7, 2006 at 12:23 pm Leave a comment

Media monitoring – what is it worth?

Prompted by a client asking me to look at how they evaluate their media relations, I’ve been taking a closer look at the whole media monitoring subject. I’ve followed the debate on the new media monitoring system proposed in Canada (called “MRP” – though it’s really PRess monitoring, not Public Relations measurement) and I see that K D Paine has some wise comments in this area. What do I think? The amount of time, budget and resources spent on media monitoring is completely out of proportion. For me, media monitoring is an indication of efficiency (number of messages placed, supported and received), not an indication of effectiveness (did our message reach our audiences and change their knowledge, attitudes or behaviour?).

Instead of spending time manually analysing clips, or budgets on monitoring software, I would prefer to see media relations professionals tackling the harder question: what was the effect of a given media campaign on its target audiences? Media monitoring conducted jointly with surveys of target audiences is an interesting solution.

This can work particularly well with targeted campaigns: my colleagues at Benchpoint in the UK recently did a joint media analysis (with Mantra International) of the retail sector, linked with surveys of the target audience (customers). The problem of isolating the effect of a given media campaign, event or activity is not simple, as I have written previously, but at a minimum you will have indications of the influence of the different media and be able to make reasonable assumptions – with supporting evidence – about your media work and its impact.

There are interesting developments in this area which I will write further about in the coming days.

Glenn

July 5, 2006 at 8:11 am 3 comments

Special issue focusing on public relations measurement and evaluation

A new special issue of PRism focusing on public relations measurement and evaluation has just been published. PRism is a free-access, online public relations and communication research journal.  

In his editorial (pdf), Tom Watson quotes Jon White as saying that the PR evaluation discussion is like:

“A car stuck in mud with its wheels spinning”

In other words, the wheels go around but there is no traction. Let’s hope that this interesting collection of articles will put some more tread on the tyres…

And I’m obliged to mention my own case study in the issue:

Blogs, mash-ups and wikis – new tools for evaluating event objectives: A case study on the LIFT06 conference in Geneva (pdf)

Glenn

June 27, 2006 at 11:03 am Leave a comment
