Posts filed under ‘PR evaluation’

Linking Media Coverage to Business Outcomes

Can we show a link between media coverage and desired business outcomes? A new study (pdf) from the US-based Institute for Public Relations presents some interesting case studies that, in several instances, illustrate corresponding trends between increased media coverage and a desired business outcome.

For example, they describe a campaign on the importance of mammograms, where the desired business outcome was an increase in the number of relevant medical procedures undertaken. Comparing the number of articles published on the issue to the number of medical procedures, a correlation seems to exist. This can be seen in the study's graph, which plots in blue the number of press articles on the issue and in red the number of medical procedures undertaken (over two years in the US).

The authors of the study readily admit that they are making a jump in assuming “cause and effect” but what they are looking for is a “preponderance of evidence” that supports a correlation between media coverage and business outcomes.
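
As a rough illustration of the kind of check involved, here is a minimal Python sketch that computes a Pearson correlation between two monthly series. The clip and procedure figures below are made up for the example, not taken from the study, and a high coefficient still says nothing about cause and effect:

```python
import statistics

# Hypothetical monthly figures: press clips vs. procedures performed.
clips      = [120, 150, 90, 200, 260, 310, 280, 330]
procedures = [1000, 1100, 950, 1300, 1500, 1700, 1650, 1800]

# Pearson correlation coefficient (statistics.correlation needs Python 3.10+).
r = statistics.correlation(clips, procedures)
print(f"r = {r:.2f}")  # close to 1.0 means the two series move together
```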

What I find interesting is the jump from an output measure (clips) to a business outcome. Also interesting is that they were able to find communication campaigns where a clear link was made between communication objectives and business objectives – as there is often a large gap between these two elements.

Read the full study, “Exploring the Link Between Volume of Media Coverage and Business Outcomes” (pdf), by Angela Jeffrey, APR, Dr. David Michaelson, and Dr. Don W. Stacks.

Glenn

November 15, 2006 at 9:50 pm 4 comments

Metrics: You are what you measure!

An interesting post from the Metrics Man on the “Media Measurement Catch-22”, as he puts it. His main point is that “you are what you measure” – in other words, you will focus your efforts on achieving the metrics you set. Further:

If all you measure is media relations (primarily clip tonnage), that is how the PR profession will be valued.

Read the full post, and if you are interested in learning more about the concept of metrics and how they influence our work, consult the Hauser and Katz paper “Metrics: You Are What You Measure!” (pdf).

Glenn

November 1, 2006 at 9:45 pm Leave a comment

Why aren’t we measuring?

A real gem of a paper here (.doc) by Jim Macnamara, a well-known PR evaluation specialist from Australia. He provides an interesting answer to the question “why don’t communication professionals measure more?”:

This is the real reason for lack of commitment to measurement. Most PR practitioners do not proactively use research to measure, either for planning or for evaluation, because in their worldview, it is not relevant. When one focuses on and sees one’s job as producing outputs such as publicity, publications and events, measurement of effects that those outputs might or might not cause is an inconsequential downstream issue – it’s someone else’s concern.

A very interesting conclusion. The focus on production is something I’ve seen a lot, and I think there is certainly some truth in what he says.

Read the full article here (.doc).

And thanks to K D Paine for sharing this paper with us.

Glenn

October 19, 2006 at 9:27 pm Leave a comment

It’s Official: Harold Burson says lack of PR Measurement is no. 1 Obstacle

Last night, I attended a communications forum in Geneva where Harold Burson, the founder of the PR agency Burson-Marsteller, spoke (in the photo, he is on the right and Keith Rockwell from the WTO is on the left).

Responding to a question from a member of the audience (none other than the public affairs representative from the US Mission) as to how communicators can measure the effectiveness of their programmes, Mr Burson said:

“The lack of research by communication professionals is the number one obstacle in the PR field today – people don’t do enough research to evaluate the impact of their activities.”

I agree fully. He then went on to explain the reason why. For Mr Burson, the reason is cost: PR research and measurement is too expensive, and he mentioned that evaluation research can often cost as much as the activities themselves. That’s where I disagree – PR measurement does not have to be expensive. Most capable communication managers should be able to handle measurement tasks themselves, using low-cost media monitoring services, easy-to-use online surveys and innovative methods such as case studies and tracking mechanisms. To get started, check out the guidelines from the Institute for PR. And there are certainly other reasons why communication professionals don’t evaluate.

You can read more about the forum on the Geneva Communicators blog, and more about Mr Burson’s thoughts on his blog (is he the oldest PR blogger, at 84 years old?).

Glenn

October 12, 2006 at 7:46 pm 2 comments

The “Before” Aspect of Evaluation

Evaluation is often thought of as a “concluding” activity – something done once a programme or project is finished. But evaluation also has a role “before” and “during” an activity. A recent experience highlighted for me the importance that evaluation can play in the “before” phase.

I have been involved in setting up a pan-European e-learning platform, and prior to its launch we decided to test the platform with a select group of users. In the learning or communications field, that would be standard procedure – pre-testing material before it is used with its target audiences. But I am amazed at how many organisations don’t pre-test material – a “before” evaluation activity.

The feedback we received from the test users was incredibly informative – they identified issues that we had not even thought about: access, usability, and broader issues of motivation and incentives for using the platform. User tests for websites and online platforms do not have to be complicated and costly – Jakob Nielsen, the specialist in this field, explains well why usability testing is not necessarily expensive.

The “before” evaluation phase is much broader than simply pre-testing material. The establishment of baseline data (e.g. attitude levels on issues), the gathering of existing research on a subject, benchmarking with comparable projects and ensuring that a project’s objectives are clear and measurable are some of the components of this phase.

Glenn

September 30, 2006 at 12:44 pm 1 comment

A Post-Modern Tale on Evaluation

A tale of an organisation concerned with “perception”: due to some negative press, the organisation was convinced that its reputation was dropping and that this was affecting its relationship with key government and political stakeholders. Consequently, the organisation was pushing to re-orientate its communication activities towards lobbying and campaigning aimed at government and political circles.

But before going ahead, the organisation did have some input from others (no, not me – but a “good friend”) who suggested evaluating the perception of the organisation amongst its stakeholders. A methodology was drawn up and a survey of the major stakeholders conducted. Lo and behold, the organisation was shocked by the findings. The results showed that government officials and politicians actually had a very good perception of the organisation, but that other important target groups, notably key partners and staff, had a negative perception. So now the organisation is re-re-orientating its activities towards staff and partners.

This simple but true tale illustrates two points that I consider important for image and evaluation:

1) Your view of how your organisation is perceived is probably false. Your stakeholders do not necessarily have access to, nor are they influenced by, the same media as you.

2) The only way to determine how stakeholders perceive your organisation is by asking them. Don’t base your ideas on “feelings” or on what the media are reporting. Go to the source.

And like all tales, it has a moral: Our intuition can often be wrong. We base our decisions on biased information formed by our own “world view”. An objective evaluation can be a solution and can alter, sometimes radically, what we thought of as the “truth”.

Glenn

September 21, 2006 at 6:05 pm 1 comment

Client Perspective of Evaluation

The Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP) has recently published the results (pdf) of a survey of evaluation managers from humanitarian organisations.

ALNAP received responses from 46 evaluation managers who commission and manage evaluation projects for humanitarian organisations. This provides an interesting insight into the “client” perspective on evaluation. Some highlights:

What helps promote an evaluation?
Ownership or “buy-in” of an evaluation was the most frequently cited promoting factor. The quality of the evaluation report and its recommendations was also important.

What inhibits the use of an evaluation?
The most frequently mentioned factor was the imposition of an evaluation by HQ and/or a donor. An arrogant, fault-finding or ivory-tower approach by evaluators, and insufficient time for the evaluation leading to superficial results, were also important factors.

What other factors induce changes in an organisation?
A very interesting question – what factors, apart from evaluation, do they believe drive change? Respondents mentioned two important influences: the media and donors, and to a lesser extent the influence of peers (exchanges and discussions between agencies).

Why do organisations evaluate?
Formal accountability (obligation to donors, trustees, etc.)
Improving the programme
Learning for the organisation
Legitimising (to add credence or challenge existing agenda)

How to increase the use of evaluation?
Most respondents mentioned changing the attitude of senior managers and the culture of learning within their organisations. Respondents spoke of a culture of defensiveness rather than one of learning and reflection.

Some very interesting results. They also confirm what I have seen in the humanitarian field: communications professionals are slowly coming to recognise that evaluation is necessary and important – but this is being prompted by pressure from donors and the monitoring and evaluation units sprouting up in their organisations.

Glenn

September 6, 2006 at 2:17 pm Leave a comment

Measuring Online Behaviour – Part 2

Further to my earlier post on measuring online behaviour, I would recommend this article in Brandweek. The article (which I read about on K D Paine’s blog) explains well the current practices of many companies in tracking online behaviour (particularly linked to online campaigns). It points in the direction I believe in: in the online environment, we can measure the behaviour of publics to supplement “offline” measurement.

I encourage companies to focus on performance indicators that move away from visit statistics and towards the actions a user undertakes when visiting a website, for example: referral (referring a page/issue to a friend), commitment (signing up for or endorsing a given activity) or task completion (completing an action online – e.g. playing a game, requesting information, etc.).
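
To make this concrete, here is a minimal sketch of how such action-based indicators might be computed from a site’s event log. The event names and data are hypothetical, purely for illustration:

```python
from collections import Counter

# Hypothetical event log: (visitor_id, action) pairs captured by the site.
events = [
    ("v1", "visit"), ("v1", "refer_friend"),
    ("v2", "visit"), ("v2", "sign_up"),
    ("v3", "visit"),
    ("v4", "visit"), ("v4", "request_info"),
]

counts = Counter(action for _, action in events)
visits = counts["visit"]

# Action-based indicators rather than raw traffic figures.
referral_rate   = counts["refer_friend"] / visits  # referral
commitment_rate = counts["sign_up"] / visits       # commitment
completion_rate = counts["request_info"] / visits  # task completion

print(f"referral {referral_rate:.0%}, commitment {commitment_rate:.0%}, "
      f"completion {completion_rate:.0%}")
```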

Some points of interest I noted from this article:

– Time spent looking at a web feature is an important measure for some campaigns

– IBM looks at registrations and opt-ins as success measures for campaigns

– The pharmaceutical industry is increasingly turning to online measurement as more and more patients seek medical information online.

Glenn

August 27, 2006 at 8:36 pm Leave a comment

Methodology and misuse of research – Part 2

As I wrote in a previous post, research results are sometimes misused (that’s nothing new…) and we are often given scant details on how the results were gathered and analysed.

I came across a study undertaken by a bank in Geneva, Switzerland (where I live) that makes a series of claims about e-banking, web-surfing habits and computer use in general. I was surprised to learn that these claims were based on a sample of 300 residents. Now, Geneva has some 440,000 residents, and I seem to recall from Statistics 101 that 300 people don’t really make a representative sample of 440,000 (it would be closer to 600, depending on the confidence level and interval you are aiming at).

I’m not such a stickler on sample sizes, given that the audiences we look at can often be broken down into sub-populations that are relatively small in number (so we aim for the highest participation possible) – but if you do have a uniform finite population, try using this online sample size calculator to estimate the sample needed; it’s quite useful.
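
If you would rather compute it yourself, here is a minimal sketch of the standard sample-size formula for estimating a proportion, with the finite-population correction applied. The 95% confidence level (z = 1.96) and the margins of error below are my assumptions for the example:

```python
import math

def sample_size(population: int, z: float = 1.96,
                margin_of_error: float = 0.05, proportion: float = 0.5) -> int:
    """Sample size needed to estimate a proportion in a finite population.

    Standard formula: n0 = z^2 * p * (1 - p) / e^2, followed by the
    finite-population correction n = n0 / (1 + (n0 - 1) / N).
    """
    n0 = (z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Geneva example from the post: ~440,000 residents, 95% confidence.
print(sample_size(440_000, margin_of_error=0.05))  # ~384 for a +/-5% margin
print(sample_size(440_000, margin_of_error=0.04))  # ~600 for a +/-4% margin
```

Note that it is the tighter ±4% margin of error that pushes the requirement towards the 600 figure mentioned above.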

Glenn

July 26, 2006 at 6:42 am 5 comments

Methodology and misuse of research

I’m always surprised at the number of published research results that fail to explain how they came to gather/analyse the results they are promoting. Related to this, you have the issue of results being misused, embellished or taken out of context. Constantin Basturea tells the interesting tale of how results from a quick poll of 50 website visitors became a poll of “300 business communicators” in a later publication. He only found this out after being curious about the poll results and requesting details of the methodology.

Personally, I think it’s always wise to publish information about your methodology for evaluation projects, particularly if the results are published and freely available. That’s what we did for the evaluation of the LIFT06 conference, publishing the results and the methodology. Then hopefully your results are not taken out of context and the methodology is available for review and criticism.

Glenn

July 24, 2006 at 10:09 am 3 comments
