PR Measurement – Generally Accepted Practices Study
An interesting study has been released by the USC Annenberg Strategic Public Relations Center based in California, USA. The “Generally Accepted Practices” study focuses on budgeting, staffing, evaluation and the use of agencies by communication professionals. Some 500 US-based communication professionals responded to the study survey.
An interesting result of the study is its ranking of the top three evaluation methodologies used by professionals:
1. Influence on corporate reputation
2. Content analysis of clips
3. Influence on employee attitudes
The first result is interesting because, although methodologies have been developed for measuring corporate reputation (notably by the Reputation Institute), there is no generally accepted metric for how communication activities “influence” reputation, as the authors of the study point out. What would be interesting to know is how exactly professionals assess the influence of their activities on corporate reputation: active monitoring, comparison studies (against other potential influences), research/analysis or simple intuition?
The reliance on clips is not surprising, but it is worrying, as the authors point out:
“Despite much discussion and debate, evaluation methods have not advanced beyond various forms of content analysis, which is another way of measuring outputs rather than outcomes. The authors suggest that while content analysis is the state of the media measurement art, it ignores all other public relations functions, thereby reinforcing the notion that PR is nothing more than publicity and media relations. This does a disservice to the increasingly sophisticated and complex nature of the profession. Clearly much more work remains to be done in the field of evaluation.”
Thanks to the metricsman blog for bringing this study to my attention; in this post he also offers an interesting analysis of the results.
Glenn
Combining Qualitative and Quantitative Methods for Evaluation – Part 2
Further to my earlier post on combining qualitative and quantitative methods for evaluation, I came across some interesting resources on this subject:
An article, “Methodological Triangulation, or How To Get Lost Without Being Found Out” – with an interesting review of common errors in triangulation.
“User-Friendly Handbook for Mixed Method Evaluations” – good practical advice on mixing evaluation methods.
Glenn
Mapping Stakeholder Networks

I’ve written previously about measuring networks and how useful it can be for an organisation to assess the different links between its key stakeholders. I came across another approach to mapping stakeholders which has some very good elements: the Clarity concept identifies stakeholders and maps relationships based on the attitude, importance and influence of publics, and on the strength of the relationships (see the example above). You can read further about this concept on the stakeholder relationship management blog.
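To make the idea more concrete, here is a rough sketch of the kind of data such a map captures – a generic representation of stakeholders with attitude, importance, influence and relationship-strength attributes, not Clarity CS’s actual model; the names and scales below are purely illustrative:

```python
# A generic sketch of a stakeholder map; attribute names and the 1-5 scales
# are assumptions for illustration, not Clarity CS's actual model.
stakeholders = {
    "Regulator":   {"attitude": "neutral",    "importance": 5, "influence": 4},
    "Local media": {"attitude": "favourable", "importance": 3, "influence": 5},
    "Suppliers":   {"attitude": "favourable", "importance": 4, "influence": 2},
}

# Strength of the organisation's relationship with each stakeholder,
# on a hypothetical 1 (weak) to 5 (strong) scale.
relationship_strength = {"Regulator": 2, "Local media": 4, "Suppliers": 5}

# A simple priority view: important, influential stakeholders with weak
# relationships are the ones a communication plan might address first.
for name, attrs in stakeholders.items():
    gap = attrs["importance"] + attrs["influence"] - relationship_strength[name]
    print(f"{name}: attitude={attrs['attitude']}, priority score={gap}")
```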
Although this is essentially a proprietary solution (offered by Clarity CS), what I appreciate is that the people behind the solution offer all their thoughts, theories and ideas on the subject so you can learn a lot from their approach – at no cost.
Disclaimer: there are no links between the authors of this blog and Clarity CS (although one of the founders of this solution, Jon White, was a lecturer on my masters programme).
Glenn
Common Myths of Evaluation
Looking further into outcome evaluation in different fields, I came across a very good resource with a rather quaint title: The Basic Guide to Outcomes-Based Evaluation for Nonprofit Organizations with Very Limited Resources. It is interesting to note the similarities between my own top ten excuses for not evaluating and the common myths about evaluation that the author lists, as follows:
Myth: Evaluation is a complex science. I don’t have time to learn it! No! It’s a practical activity. If you can run an organization, you can surely implement an evaluation process!
Myth: It’s an event to get over with and then move on! No! Outcomes evaluation is an ongoing process. It takes months to develop, test and polish — however, many of the activities required to carry out outcomes evaluation are activities that you're either already doing or you should be doing. Read on …
Myth: Evaluation is a whole new set of activities – we don’t have the resources! No! Most of the activities in the outcomes evaluation process are normal management activities that need to be carried out anyway in order to evolve your organization to the next level.
Myth: There’s a "right" way to do outcomes evaluation. What if I don’t get it right? No! Each outcomes evaluation process is somewhat different, depending on the needs and nature of the nonprofit organization and its programs. Consequently, each nonprofit is the "expert" at their outcomes plan. Therefore, start simple, but start and learn as you go along in your outcomes planning and implementation.
Myth: Funders will accept or reject my outcomes plan. No! Enlightened funders will (at least, should?) work with you, for example, to polish your outcomes, indicators and outcomes targets. Especially if yours is a new nonprofit and/or new program, you very likely will need some help — and time — to develop and polish your outcomes plan.
Myth: I always know what my clients need – I don't need outcomes evaluation to tell me if I'm really meeting the needs of my clients or not. You don’t always know what you don’t know about the needs of your clients – outcomes evaluation helps ensure that you always know them. It sets up structures in your organization so that you and your organization are very likely always focused on the current needs of your clients. Also, you won’t always be around – outcomes help ensure that your organization stays focused on the most appropriate, current needs of clients even after you've left.
Glenn
PR Measurement and Google Trends
The new “Google Trends” product is a nice complementary tool for monitoring the “noise” out there on an issue. It may prove useful for organisations that want to track general interest on a trend, or on related issues to see if there is any correlation.
What is interesting is that we can get an idea of the impact of events and policy announcements on issues – how they spike interest in an issue and thus lead to increased search queries and news stories.
In the graph below, I have charted WWF (blue line) and climate change (red line). The letters (A, B, C, etc.) indicate major news stories on events and announcements. What is interesting is the peak around certain events (like D, a climate change conference), where searches on both WWF and climate change rise slightly. There are also unexplained peaks in the search and news volumes.
What can we conclude from this? Firstly, the search and news volumes on WWF and climate change often mirror each other in terms of peaks and troughs, which could be reassuring to the organisation as climate change is one of its key campaigns. Secondly, it supports the notion that events and policy announcements influence a public’s awareness of and interest in an issue, and may help track and explain these spikes. I wrote about this before in what I termed the “Kylie effect“. Thirdly, it shows that there are still spikes in public interest that are not traceable to news or policy announcements (look at the spike in early 2004 for both WWF and climate change). Could this be partly explained by the fact that Google Trends does not include blogs in its analysis? They could be a possible source of some peaks (i.e. a blogger writes about an issue and links to a story or the WWF site, inciting interest in the issue). Steve Rubel points out this weakness in the analytical power of the new tool. KD Paine has also written about the tool and PR measurement.
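For those who want to go a step beyond eyeballing the graph, here is a minimal sketch (Python with pandas) of how one could correlate two search-interest series exported from Google Trends and flag the unexplained spikes; the file name, column names and spike threshold are assumptions for illustration, not part of the tool itself:

```python
import pandas as pd

# Hypothetical CSV export with weekly search-interest scores for two terms;
# the file name and column names ("week", "wwf", "climate_change") are assumed.
trends = pd.read_csv("trends_wwf_climate_change.csv", parse_dates=["week"])

# Overall co-movement between the two series (Pearson correlation).
correlation = trends["wwf"].corr(trends["climate_change"])
print(f"Correlation between WWF and climate change interest: {correlation:.2f}")

# Flag weeks where interest rises well above its recent average, as a
# starting point for matching peaks to news stories or policy announcements.
for term in ["wwf", "climate_change"]:
    rolling_mean = trends[term].rolling(window=8, min_periods=1).mean()
    spikes = trends[trends[term] > 1.5 * rolling_mean]
    print(f"\nPossible spikes in '{term}':")
    print(spikes[["week", term]].to_string(index=False))
```

A high correlation would support the “mirroring” observation above, while flagged weeks that do not match any known announcement are the ones worth investigating elsewhere, such as in the blogosphere.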
Glenn
Evaluation of events and conferences
I’ve written in previous posts about my work in evaluating the impact of events. A very interesting paper on this subject, “A Guide to Measuring Event Sponsorship”, has been published by the US-based Institute for Public Relations. The title is misleading, as the paper focuses on how to measure the effectiveness of an event and not on sponsorship evaluation (a separate subject – don’t get me started on it…).
The guide states:
“There are four central questions to keep in mind concerning event evaluation:
1. How effective was the event? To what extent did the event impact the target public in the desired manner?
2. Did the event change the targeted public in unexpected ways, whether desirable or undesirable?
3. How cost effective was the event?
4. What was learned that will help improve future events?”
The Guide goes further than I have done in event evaluation by looking at calculating ROI and at the impact on sales (applicable for a commercially focused event). It also confirms my general opinion on event evaluation – we have to go further than simply counting attendees, general reactions and press coverage – we have to look at the impact on attendees’ knowledge, attitudes, behaviours and anticipated behaviour (e.g. intention to purchase a product).
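To make the ROI point concrete, here is what a back-of-the-envelope calculation for a commercially focused event might look like, using the standard formula of net benefit divided by cost; the figures are purely hypothetical and the guide’s own method may well be more sophisticated:

```python
# Purely hypothetical figures to illustrate the standard ROI formula;
# the guide's own approach may differ.
event_cost = 50_000          # venue, speakers, catering, promotion
attributable_sales = 80_000  # sales that can be traced back to attendees

net_benefit = attributable_sales - event_cost
roi = net_benefit / event_cost
print(f"Event ROI: {roi:.0%}")  # 60% on these illustrative numbers
```

The hard part, of course, is the attribution: deciding which sales (or intentions to purchase) can credibly be traced back to the event, which is exactly why we have to look beyond attendance counts and press coverage.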
Glenn
Training and innovative evaluation
If you are based in Europe and interested in learning more about evaluation, the European Evaluation Society is organising its biennial conference in London from 4-6 October 2006. Even more interesting is the series of Professional Development Training Workshops being organised around the event. One workshop looks particularly interesting – "innovative evaluation approaches" – covering the following subjects:
- Theory based approaches
- Participative evaluation and ‘social inclusion’
- Integrating qualitative and quantitative methods
- Monitoring and Evaluation in international development settings
- Synthesis reviews and meta-evaluations
- Experimental methods and RCTs
- Modelling and risk analysis
- Self evaluations
- Cost benefit analysis
Glenn
Evaluation of LIFT06: can we measure the impact of conferences?
I’ve just finished a very interesting project – the evaluation of the impact of the LIFT06 conference that took place in Geneva in February 2006. In a true open source spirit, the evaluation report is available for everyone to consult. With this evaluation, we tried to go beyond the standard assessment of reactions to a conference and looked at changes to knowledge, attitudes and behaviours. Using a triangulation approach combining quantitative and qualitative research methods, I believe we could identify the influence of LIFT06 on these variables. We were aware of the limitations of the evaluation, given that it was a one-off exercise based largely on self-assessment of attitudinal and behavioural changes, which I explain here in the report.
What sort of changes could we identify?
Changes to awareness and attitudes: Through an online survey, the majority of attendees (82%) agreed that LIFT06 provided them with interesting information on the usage of emerging technologies, and 70% agreed that LIFT06 influenced what they thought about the subject. This quote taken from an attendee’s blog illustrates the point:
“And just think; if I had never gone to Lift06 I would not be feeling anything like this strongly about the issue”
Changes to behaviour: Evaluations of conferences are rarely able to show a direct relation between the event and changes in the behaviour of attendees. With LIFT06, some attendees indicated a change in behaviour, such as starting a blog or forming a new partnership. Another key objective of LIFT06 was to “connect” people – 94% of attendees reported that they met new people at LIFT06.
Online Surveys and Best Practices
As I’m setting up and running online surveys using the Benchpoint system (commercial plug), I am always interested to see examples of best practice in online surveying. Sometimes we come across examples of worst practice too. Below I’ve copied in a screenshot of a survey I was asked to complete. It breaks a simple rule of surveying that dates back to phone and street surveying: personal/demographic information should normally be requested at the end of a survey, as people are more comfortable answering such questions once they are familiar with the survey theme and, if the survey is done in person, with the interviewer. The amount of personal information this survey asks for before you even “start survey” is excessive, and it was the reason why I didn’t complete it. Hopefully general standards on online surveying are emerging to avoid this type of issue.
Glenn

Measurement – The Business case for communication in an organisation
Chris Mykrantz of Watson Wyatt writes about his company’s latest communication ROI study on the Simply-Communicate website.
For years internal communications people have been arguing for a seat at the top table where decisions are taken, and several have made it. But, warns Chris,
“Be careful what you wish for. Senior leadership is now asking what value they’re contributing by sitting there”.
He mentions large companies that are actively looking for a financially based ROI measurement of communication, a statement which may have a few communicators who measure by instinct quaking in their shoes!
One of his key points, with which we heartily agree, is:
“Start at the end and think hard cash – the first step in developing a communication strategy that includes measurement should be to envision the successful outcome. What would be different in your organization if the communication was on target? How would it change the business? How would it change employees? Don't settle for awareness or satisfaction. Find a desired state that you can define in dollar terms and design your strategy around that.”
Read the full article here.
Richard