Posts filed under ‘PR evaluation’

Intelligent Measurement – what and how?

Intelligent Measurement is about using innovative methodologies and technologies to measure things which are (a) important to measure, and (b) notoriously difficult to measure – usually because the subject is intangible and the measurers are subjective.

Don’t get me wrong. As a declared enemy of bureaucracy, I have an aversion to all forms of obsessive micro-management and control freakery.

But I have seen many organisations blunder along in the dark as they attempt to improve their communications or embark on change management or environmental performance programmes. Had they taken some objective measurements at the start of the programme, their strategy would have been informed by actualities, not anecdote. They could have measured outcomes, not just outputs. They could have used the data to frame a sensible budget, and to identify which parts of that budget were wasted and which returned the investment.

I have been to a number of conferences on PR measurement, notably Katie Paine’s “Measurement Summits”. The cry from the floor is “how can I obtain evidence to justify/keep my job/budget/team?” One solution is measurement. But what? And how?

Over the coming months I will be posting examples of good “Intelligent Measurement” on this blog (please feel free to post your own suggestions).

Richard

February 28, 2006 at 9:46 pm Leave a comment

PR Measurement and Budgeting

Here is an interesting article from measurement guru KD Paine on how to make sure PR measurement stays in the budget. I liked her main point: 

“Measurement… a necessary tool for making better decisions and advancing the organization’s best interest.”

Like any program or project, measurement should account for around 10% of the budget – which, as we all know, rarely happens. But today it is possible to do simple, effective PR measurement – success stories, blog and media monitoring, testimonials, brief surveys – that is affordable and manageable for any PR professional.

If you can show the results of your work through effective measurement, it will certainly help you keep your measurement budget – and, more importantly, your overall budget.

Glenn 

February 26, 2006 at 5:35 pm 2 comments

Measuring Networks

This is an interesting tool from trackingthethreat.com that provides a graphical overview of the Al Qaeda network. The data are drawn from thousands of open-source reports, documents and news stories, which are pieced together to establish the network linkages.

I write about this tool because it is one of the first I have seen that attempts to measure a network. In communications, it is interesting for an organisation to assess the links between its key stakeholders. The theory is that a stakeholder group has more influence over an organisation if it has multiple links with other stakeholders. The structure of the stakeholder network is a good indicator of where power and influence are centred – and this helps organisations prioritise their communication and relationship-building activities with stakeholders (the sketch below shows one simple way to score this).
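To make the idea concrete, here is a minimal sketch in Python, using the networkx library, of how an organisation might score a hypothetical stakeholder network. The stakeholder names and links are invented for illustration – this is not the method used by trackingthethreat.com.

import networkx as nx

# Hypothetical stakeholder relationships (who is linked to whom) – illustrative only
links = [
    ("Regulator", "Industry body"),
    ("Regulator", "Media"),
    ("Industry body", "Investors"),
    ("Media", "Investors"),
    ("Media", "Local community"),
    ("NGO", "Media"),
    ("NGO", "Local community"),
]

G = nx.Graph()
G.add_edges_from(links)

# Degree centrality: the share of other stakeholders a group is directly linked to.
# Betweenness centrality: how often a group sits on the shortest path between two
# others, i.e. how much it can broker information between them.
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

for name in sorted(G.nodes, key=betweenness.get, reverse=True):
    print(f"{name:16s} degree={degree[name]:.2f} betweenness={betweenness[name]:.2f}")

In this toy network the “Media” node scores highest on both measures, which is one rough way of spotting where influence is concentrated and which relationships to prioritise.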

Theory and practice around stakeholder networks are growing. If you are interested, read this article (pdf) from Ann Svendsen of the Collaborative Learning and Innovation Group (Simon Fraser University, Canada), in which she explains how organisations actively assess and work with their stakeholder networks.

I learnt about the trackingthethreat tool from the information aesthetics blog, which looks at novel ways of representing data visually.

Glenn

February 15, 2006 at 9:25 am Leave a comment

The Ninth Trap of PR Measurement

The people over at Cymfony have put together a list of the “Eight Traps to Avoid in PR Measurement”, as follows:

1.) Not doing a media audit or assessment before starting a measurement program.

2.) Not defining standard metrics across your organization.

3.) Limiting metrics and analysis to a small number of key publications or simple messages.

4.) Treating all mentions equally.

5.) Not slicing metrics by different audience segments.

6.) Delayed measurement and reporting.

7.) Not taking blogs and online discussions seriously.

8.) Not demonstrating your success, often.

Some of these traps are relevant to all PR and communication measurement, notably 2, 5, 6 and 8. However, the focus is clearly on measuring PR outputs and not PR outcomes.

So I would add a ninth trap:

“9. Focusing only on outputs and not outcomes”

PR outputs measure the amount of exposure an organisation receives. More important is to measure PR outcomes – did the PR activities result in any opinion, attitude or behaviour change amongst the targeted audiences?

Given the focus of the list, it’s not surprising to learn that Cymfony offer products to measure PR output. But they should recognise that good PR measurement goes beyond monitoring the number of press articles, blog posts or online discussions you generate.

Glenn

February 10, 2006 at 1:48 pm 2 comments

Evaluation, Proof and the Kylie effect

A question often asked by those commissioning an evaluation is: how can we “prove” that a program or activity caused the change we are observing? How can we be sure that a training program is responsible for a rise in productivity? That an awareness campaign has changed attitudes about a company? In most cases you simply cannot get 100% proof. But what you can do is collect evidence indicating that a program or activity played a major role in the change we are seeing. As one pundit put it:

“The key to winning a trial is evidence not proof”

Following are some strategies to tackle this issue:

  • Set up a control group that was not exposed to the program or activity
  • Use pre- and post-measures to show the changes occurring over time
  • Don’t rely only on survey or quantitative data – testimonials and anecdotes can be convincing evidence
  • Identify any other possible factors that could have caused the change being observed.

Of course, setting up a control group is always difficult in a real-world environment. But my experience has shown that it can yield very useful results, provided we are honest about its limitations and about other possible influences. The sketch below illustrates the basic pre/post comparison against a control group.
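As a back-of-the-envelope illustration of the first two strategies in the list above, here is a minimal sketch in Python of a pre/post comparison against a control group. The survey scores are invented for illustration.

# Invented mean awareness scores (0–100 scale) from a pre- and post-campaign survey
exposed = {"pre": 42.0, "post": 61.0}   # group that saw the campaign
control = {"pre": 44.0, "post": 49.0}   # comparable group that did not

change_exposed = exposed["post"] - exposed["pre"]   # +19 points
change_control = control["post"] - control["pre"]   # +5 points

# The control group's change estimates what would have happened anyway
# (seasonality, news coverage, a "Kylie effect"). The difference between the two
# changes is the part of the shift we can reasonably attribute to the campaign.
attributable = change_exposed - change_control      # +14 points

print(f"Change in exposed group:  {change_exposed:+.1f}")
print(f"Change in control group:  {change_control:+.1f}")
print(f"Attributable to campaign: {attributable:+.1f}")

Even this simple comparison is not 100% proof, but it turns a single before/after number into evidence that accounts for at least some of the other factors at play.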

It is important to be transparent and to recognise any other factors that could have caused the change being observed. Take, for example, breast cancer awareness in Australia. Health educators have been working hard for years to get more young women to undertake a mammogram (breast screening), because the disease can be treated successfully if detected early. For health educators, a clear impact indicator would therefore be the number of appointments made for mammograms. In August 2005, mammogram appointments for women aged 40 to 69 in Australia jumped by 101%. Was this the result of a very successful awareness campaign?

No – what we were seeing has been labelled the “Kylie effect”. In May 2005, Australian pop singer Kylie Minogue was diagnosed with breast cancer, resulting in mass media coverage of the issue – and consequent awareness of breast cancer and its detection. Studies have shown a direct link between the jump in screening appointments and Kylie Minogue’s illness. If you are interested, you can read more about the “Kylie effect” on the BBC website.

Glenn

February 6, 2006 at 10:24 pm 5 comments

Evaluating Communication Campaigns

As the development and humanitarian sector focuses more on accountability and performance, there has been a push for greater evaluation of the sector’s communication activities.

Most methodologies and tools can be adapted from those used in the private sector. However, the communication campaigns of NGOs and international organisations often pursue dual outcomes – individual behaviour change (e.g. persuading people to adopt a healthier lifestyle) and policy change (e.g. pushing governments to change policy on food labelling).

An excellent study, “Lessons in Evaluating Communication Campaigns: Five Case Studies” from the Harvard Family Research Project, looks at evaluating campaigns ranging from gun safety to emissions (ozone) reduction.

If you are interested in reading more about evaluation standards and practices in the development and humanitarian sector, a good starting point is the Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP), an interagency forum focused on evaluation and accountability issues in this sector.

Glenn

January 18, 2006 at 2:26 pm 1 comment

New to PR Evaluation?

Evaluation of public relations and communications programs has been an interest of mine for some time. For those who want to learn more, there are many excellent resources available online. Here are a few I highly recommend:

Glenn

January 12, 2006 at 1:19 pm Leave a comment
