Posts filed under ‘PR evaluation’
Evaluation of events and conferences
I’ve written in previous posts about my work in evaluating the impact of events. A very interesting paper on this subject, “A Guide to Measuring Event Sponsorship”, has been published by the US-based Institute for Public Relations. The title is misleading, as the paper focuses on how to measure the effectiveness of an event and not on sponsorship evaluation (a separate subject, don’t get me started on it…).
The guide states:
“There are four central questions to keep in mind concerning event evaluation:
1. How effective was the event? To what extent did the event impact the target public in the desired manner?
2. Did the event change the targeted public in unexpected ways, whether desirable or undesirable?
3. How cost effective was the event?
4. What was learned that will help improve future events?”
The Guide goes further than I have done in event evaluation by looking at calculating ROI and at the impact on sales (applicable for a commercially focused event). It also confirms my general opinion on event evaluation – we have to go further than simply counting attendees, general reactions and press coverage – we have to look at the impact on attendees’ knowledge, attitudes, behaviours and anticipated behaviour (e.g. intention to purchase a product).
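To make the ROI and cost-effectiveness questions concrete, here is a minimal sketch of the underlying arithmetic. It is only an illustration of the idea, not the Guide’s own method; all figures and function names below are hypothetical:

```python
# Minimal sketch of the ROI and cost-effectiveness arithmetic for a
# commercially focused event. All figures below are hypothetical.

def event_roi(attributable_revenue: float, total_cost: float) -> float:
    """ROI as a percentage: net return relative to what the event cost."""
    return (attributable_revenue - total_cost) / total_cost * 100

def cost_per_outcome(total_cost: float, outcomes: int) -> float:
    """Cost effectiveness, e.g. cost per attendee intending to purchase."""
    return total_cost / outcomes

cost = 50_000.0      # hypothetical total event cost
revenue = 65_000.0   # hypothetical revenue traced back to event leads
intenders = 120      # hypothetical attendees reporting intent to purchase

print(f"ROI: {event_roi(revenue, cost):.1f}%")                        # 30.0%
print(f"Cost per intender: {cost_per_outcome(cost, intenders):.2f}")  # 416.67
```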
Glenn
Evaluation of LIFT06: can we measure the impact of conferences?
I’ve just finished a very interesting project – the evaluation of the impact of the LIFT06 conference that took place in Geneva in February 2006. In a true open source spirit, the evaluation report is available for everyone to consult. With this evaluation, we tried to go beyond the standard assessment of reactions to a conference. We looked at changes to knowledge, attitudes and behaviours. Using a triangulation approach combining quantitative and qualitative research methods, I believe we could identify the influence of LIFT06 on these variables. We were aware of the limitations of the evaluation, given that it was a one-off exercise based largely on self-assessment of attitudinal and behavioural changes, as I explain in the report.
What sort of changes could we identify?
Changes to awareness and attitudes: Through an online survey, the majority of attendees (82%) agreed that LIFT06 provided them with interesting information on the usage of emerging technologies, and 70% agreed that LIFT06 influenced what they thought about the subject. This quote taken from an attendee’s blog illustrates the point:
“And just think; if I had never gone to Lift06 I would not be feeling anything like this strongly about the issue”
Changes to behaviour: Evaluations of conferences are rarely able to show a direct relation between the event and changes in the behaviour of attendees. With LIFT06, some attendees indicated a change in behaviour, such as starting a blog or forming a new partnership. Another key objective of LIFT06 was to “connect” people – 94% of attendees reported that they met new people at LIFT06.
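A caveat worth making concrete: percentages from a survey of self-selected respondents carry sampling uncertainty. Below is a rough sketch of a normal-approximation confidence interval around a reported proportion; the sample size used is an assumption for illustration, as the actual respondent count is not quoted in this post:

```python
import math

# Rough normal-approximation 95% confidence interval around a reported
# survey proportion. The sample size n = 150 is an assumption for
# illustration; the LIFT06 report's actual respondent count is not
# quoted in this post.

def proportion_ci(p: float, n: int, z: float = 1.96):
    margin = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

low, high = proportion_ci(0.70, 150)
print(f"70% agreement, n=150: roughly {low:.0%} to {high:.0%}")  # 63% to 77%
```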
Top Ten Excuses for not Evaluating
This post at the IABC measurement blog caught my attention, as its author asks the question:
“So why don’t we measure more? Is it budget, competencies, time or the risk of accountability?”
People usually don’t evaluate for various reasons, but the most common excuses I’ve heard are the following:
- “It’s too expensive”. With the amount of free advice, excellent guidelines and cheap research solutions available, this excuse doesn’t pass anymore.
- “I don’t know how to”. Fair enough, but you can learn a lot yourself without having to engage expensive consultants.
- “I’m too busy ‘doing’ to be bothered with measuring”. Frightening. People love doing things; it’s natural. But sometimes you have to stop and take a step back to see what you have achieved.
- “What I’m doing couldn’t possibly be measured”. Often heard from the Creative Type: people who create their own fonts, too-clever campaigns and beautiful artwork that impresses other Creative Types. But my question is: what did you change?
- “I don’t see the value of it”. How else can you judge the value of your work if you don’t attempt to analyse and assess it?
- “I’m scared of what I will find out”. But I think it will be scarier for you if you don’t evaluate and someone else does.
- “People are fed up with giving their opinion”. I don’t think people are – as I’ve written about before.
- “My gut feeling tells me I’m doing a good job”. There is a certain vogue that says our intuition is often our best call. But research often brings out issues that were not even on your radar.
- “All my work is vetted by the CEO; if s/he’s happy, so am I”. The CEO sees the organisation through the same rose-coloured glasses as you do. In PR, it’s your public’s perception of your communication that counts.
- “You can’t prove anything anyway”. You can rarely obtain 100% proof that your programme caused the change seen. But what you can do is collect evidence that indicates the role your activity played, as I further explained in this post.
Glenn
PR Guru Grunig on intensive research PR
Professor James Grunig, a leading thinker in the field of communications/PR evaluation, was in Switzerland last week. While here, he gave an interview in which he spoke about the notion of PR being "research intensive", amongst a wide range of other subjects in the PR field.
Glenn
PR Measurement: are publics fed up?
People (read: potential clients) often tell me that their publics are fed up with filling out online surveys and being interviewed or queried about their opinions. Rubbish, I say. People are willing to spend time giving their opinion about a website, service or issue that is important to them.
In a recent online survey of external audiences for an international organisation, 25% of the potential audience responded to the survey. And of those who responded, 44% requested a copy of the results. In an evaluation of an international event, 55% of the participants responded to the survey. And of those who responded, 60% requested a copy of the results.
In the survey for the international organisation, 80% of respondents said that the organisation’s website was very important or important for their work. Wouldn’t you like to give an opinion on something that important to you? I would.
My experience is that people are willing to participate in research if it is something that is important to them and if they believe something will be done with the results.
In this article on the simply-communicate.com site, the author points out that:
“In a recent survey of reasons for non-response to employee questionnaires, the biggest driver of non-participation was found to be ‘Nothing would happen as a result'”.
So people do want to see that their opinion is valued. The fact that such a large percentage of people want to receive the evaluation results indicates that they are interested in the subject in question. In addition, I imagine that people are interested to see whether what they think corresponds to the norm – and to judge what the organisation will do as a consequence of the evaluation.
Glenn
Evaluation – going beyond your own focus
If your focus is on evaluating PR, training or another business competency, it is sometimes helpful to learn more about evaluation by looking further than your particular field. Look at international aid. I just read a review of a new book, The White Man’s Burden – Why the West’s Efforts to Aid the Rest Have Done So Much Ill and So Little Good, by William Easterly. Based on the review, he raises two interesting points about evaluation:
Planners vs. Searchers: he says most aid projects take one of these two approaches. He writes: “A Planner thinks he already knows the answer. A Searcher admits he doesn’t know the answers in advance”. He explains that Searchers treat problem-solving as an incremental discovery process, relying on competition and feedback to figure out what works.
Measuring Success: Easterly argues that aid projects rarely get enough feedback, whether from competition or complaint. Instead of introducing outcome evaluation where results are relatively easy to measure (e.g. public health, school attendance), advocates of aid measure success by looking at how much money rich countries spend. He says this is like reviewing movies based on their budgets.
You can read the full review of Easterly’s book on the IHT website.
These are some excellent points that we can apply to evaluation across the board. Evaluators can probably classify many of the projects they have evaluated as being of either a Planner or a Searcher nature. The Searcher approach integrates feedback and evaluation throughout the whole process. Take PR activities: we may not know at the outset what communication works best with our target audience, but by integrating feedback and evaluation we can soon find out.
His analogy about reviewing movies also strikes a chord. How many projects are evaluated on expenditure alone?
Aside from the evaluation aspects, his thoughts on international aid are interesting. I spent my formative years as an aid worker in Africa, Eastern Europe and Asia and a lot of what he says confirms my own experiences and conclusions.
Glenn
Combining Qualitative and Quantitative Methods for Evaluation
In evaluation, we often choose between using qualitative (e.g. focus group) and quantitative (e.g. survey) methods. In fact, we should always try to use both approaches. This is what is referred to as triangulation: the combination of several research methods in the study of the same phenomenon. My experience has been that a combination of research methods provides more data to work with and ultimately a more accurate evaluation. In a recent project, I was able to use interviews combined with surveys to assess participant reaction to training. I found that the information we could draw from the interviews was complementary – and of added value – to what we discovered through the surveys.
Even if you are only conducting online surveys, the inclusion of open questions (where respondents enter comments in a free-text field) is not quite triangulation, but it will provide you with insight into the phenomenon being evaluated. In a recent online survey project, we were able to clarify important issues by sorting and classifying the comments made in open questions. This information proved invaluable and gave the evaluation heightened status within the organisation.
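For those curious what that sorting and classifying can look like in practice, here is a toy sketch of keyword-based coding of open-question comments. The categories and keywords are hypothetical; in a real project the coding scheme emerges from reading the comments themselves:

```python
# Illustrative sketch of keyword-based coding of open-question comments.
# Categories and keywords are hypothetical examples only.
from collections import Counter

CATEGORIES = {
    "navigation": ("navigate", "menu", "find", "search"),
    "content":    ("article", "information", "content", "outdated"),
    "design":     ("layout", "font", "colour", "design"),
}

def classify(comment: str) -> str:
    """Assign a comment to the first category whose keywords it mentions."""
    text = comment.lower()
    for category, keywords in CATEGORIES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "other"

comments = [
    "I could not find the annual report from the menu",
    "Some of the information is outdated",
    "Love the new layout",
]
print(Counter(classify(c) for c in comments))
# Counter({'navigation': 1, 'content': 1, 'design': 1})
```

In practice you would code a sample of comments manually first and use something like this only as a first pass, checking the "other" bucket for categories you missed.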
Glenn
PR Measurement – New Dictionary
The USA’s Institute for Public Relations, which offers all of its research and publications free on the web, has published a new edition of the Dictionary of Public Relations Measurement and Research (pdf – 198kb), edited by Dr. Don W. Stacks of the University of Miami.
We all measure different things and we measure them all differently (and not always intelligently). Not a good idea if you want to benchmark or compare data.
Don Stacks and his colleagues have done a magnificent job in assembling, codifying and defining most of the common terms we come across when talking about measurement, from “Algorithm” to “Z-score”.
It’s a giant step forward in defining some standard methodologies which everyone can use.
The Institute’s website, http://www.instituteforpr.org, also contains a large number of downloadable papers on every aspect of communications measurement and evaluation. Well worth a visit.
Richard
Measuring the “Soft Issues” for Investors
One company that is a good example of “intelligent measurement” is Innovest Strategic Value Advisors, Inc., an investment research and advisory firm (disclosure: there are no links between the authors of this blog and Innovest). The company specializes in analyzing “non-traditional” drivers of risk and shareholder value, including companies’ performance on environmental, social and strategic governance issues.
Innovest’s research is focused on those factors which contribute most heavily to financial performance. Environmental and social performance measures are used as leading indicators for management quality and long-term financial performance, not as commentaries on the intrinsic ethical worth of the companies. At the heart of Innovest’s analytical model is the attempt to balance the level of environmentally and socially driven investment risk with the companies’ managerial and financial capacity to manage that risk successfully and profitably into the future.
Environmental assessment criteria:
In total, the Innovest EcoValue’21™ model synthesizes over 60 data points and performance metrics, grouped together under six key value drivers – Historical contingent liabilities, Financial risk assessment, Operating risk exposure, Sustainability risk, Strategic management capability and Sustainable profit opportunities.
Social assessment criteria:
Over 50 individual performance indicators are addressed in Innovest’s IVA™ rating model. The principal value drivers are Sustainable governance, Stakeholder management, Human capital management, Products and services, and behaviour in relation to oppressive regimes or exploitative labour markets in emerging markets.
The measurement process is complex, intensive and time-consuming. Once the interview and data-gathering process is completed, each company is rated relative to its industry competitors. Companies are rated against the Innovest performance criteria and given a weighted score as well as a letter grade (AAA, BB, etc.). Each of the factors has an industry-specific weighting, based in part on a regression-based factor attribution analysis examining recent (5-year) stock market performance.
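To illustrate just the weighting-and-grading step, here is a hedged sketch in Python. The driver names, weights and grade cut-offs are invented for illustration; Innovest’s actual indicators and weightings are proprietary:

```python
# Hedged sketch of a weighted-scoring-to-letter-grade step. The driver
# names, weights and grade cut-offs below are invented for illustration;
# Innovest's actual model and weightings are proprietary.

def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-driver scores (0-100) using industry-specific weights."""
    total_weight = sum(weights.values())
    return sum(scores[d] * weights[d] for d in scores) / total_weight

def letter_grade(score: float) -> str:
    """Map a weighted score onto a letter grade (cut-offs hypothetical)."""
    bands = [(90, "AAA"), (80, "AA"), (70, "A"), (60, "BBB"), (50, "BB")]
    for cutoff, grade in bands:
        if score >= cutoff:
            return grade
    return "B"

scores  = {"governance": 85, "human_capital": 70, "products": 60}
weights = {"governance": 0.4, "human_capital": 0.35, "products": 0.25}
print(letter_grade(weighted_score(scores, weights)))  # score 73.5 -> "A"
```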
This approach is intelligent measurement at its finest. The methodology is rigorous, and the outputs have enormous strategic value to investors and to the management of any capital-intensive company trying to understand the correlation between investment in management issues traditionally regarded as “soft” and shareholder value.
Which is why Innovest commands a high price for its reports and is generally regarded as the leader in its field.
Richard
New PR Measurement Blog from IABC
The International Association of Business Communicators (IABC) has launched a new blog with three orientations: branding, employee and measurement. Interesting to note that PR measurement has been given a high profile by the IABC – an indication of its importance to communicators today.
The measurement blog already has some interesting posts and comments on measuring employee communications, and on context and evaluation, something I’ve written about before.
Glenn