Posts filed under ‘Communication evaluation’
I’ve been reviewing the handbooks and guides available on communication evaluation; so far I’ve located five. Here they are (all links to PDFs):
Communication Network (2008) Are we there yet? A Communications Evaluation Guide
DFID (2005). Monitoring and Evaluating Information and Communication for Development (ICD) Programmes – Guidelines. Department for International Development, London.
Slightly more specialised, but still interesting:
The Coalition for Public Relations Research Standards has just released its interim metrics (pdf) on PR research and measurement. They are split into traditional media, social media, communication lifecycle and Return on Investment (ROI). For each metric, there is also more information available. I’ve yet to go through all of it, but it seems like a comprehensive list – even if not everyone will agree on the various definitions. Of interest, four major US corporations – GE, GM, McDonald’s USA and Southwest Airlines – reportedly have already adopted these metrics.
Here is a brand new article (it’s a chapter from a book*) by Fraser Likely and Tom Watson entitled “Measuring the Edifice - PR Measurement and Evaluation Practices Over the Course of 40 Years”.
It provides an excellent overview of developments in the last 40 years and the challenges currently faced in PR measurement and evaluation. A summary from the authors:
“Public relations measurement and evaluation practices have been major subjects for practitioner and academic research from the late 1970s onwards. This chapter will commence with a brief survey of the historical evolution of the research into these practices. Then, we will discuss James E. Grunig’s enduring contribution to their theorization, particularly with financial and non-financial indicators of public relations value. Next, we will consider the current debate on financial indicators, focusing on Return on Investment and alternative methods of financial valuation. Finally, we will look to the future at the measurement and evaluation practices that will attract academic and practitioner research interest.”
*Note: Fraser and Tom’s chapter, “Measuring the Edifice: Public Relations Measurement and Evaluation Practice Over the Course of 40 Years (pp. 143-162)” comes from a “festschrift” (a celebratory book) for Professors Jim and Lauri Grunig – two renowned PR Gurus – which was edited by Professors Krishnamurthy Sriramesh and Ansgar Zerfass and Dr Jeong-Nam Kim. The book’s title is Public Relations and Communication Management: Current Trends and Emerging Topics. It is published by Routledge.
I’ve just had an article published in the journal PR Review. It’s the first article from my ongoing PhD on communication evaluation in intergovernmental organizations and NGOs. The abstract is below; if you are really keen, you can also download the full article.
Evaluation of international and non-governmental organizations’ communication activities: A 15 year systematic review
The purpose of this paper is to understand how intergovernmental organizations and international non-governmental organizations have evaluated their communication activities and adhered to principles of evaluation methodology from 1995–2010 based on a systematic review of available evaluation reports (N = 46) and guidelines (N = 9). Most evaluations were compliant with principle 1 (defining communication objectives), principle 2 (combining evaluation methods), principle 4 (focusing on outcomes) and principle 5 (evaluating for continued improvement). Compliance was least with principle 3 (using a rigorous design) and principle 6 (linking to organizational goals). Evaluation was found not to be integrated, adopted widely or rigorously in these organizations.
For those interested in PR measurement, what is reassuring is the focus he puts on the need for better use of data and measurement by agencies. I’m always surprised by how little PR agencies do in measurement – so any greater uptake of evaluation and measurement would be welcome.
Here is a summary of some key points:
- Big data at the center: Sufficient evidence suggests data and analytics can have a powerful effect on communications. There has been an incremental increase in the use of data to drive PR efforts, but progress remains minimal.
- Insight to drive meaningful creativity: Strong data will lead to better insights, giving way to creative PR ideas that effectively solve real world problems. Don’t assume your experience is enough to make a good campaign – use data.
- Understanding the human brain: To better understand how to change behaviors and attitudes, PR pros should read and listen to neuroscientists like David Eagleman. After all, PR is a social science.
- Recruiting differently: Practitioners who understand and even love data exist, but firms need to recruit from a broader, more diverse range of people to find them. Seemingly unrelated disciplines should not be ruled out.
- Make it matter: To ensure communications efforts pay off in business terms, every campaign, every stakeholder group, and every advance in how we apply data and science can and should be measured.
Increasingly, communicators need the ability to evaluate their activities – being able to design and set up online surveys is a key skill for soliciting feedback and interacting with audiences. Here are the slides from a practical workshop that I conducted last Friday for the Geneva Communicators Network, which covers surveys for communicators from concept to analysis – hope it’s of use!
The UK Government Communication Network have produced a new publication: “Evaluating Government Communication Activity – standards and guidance” (pdf).
The publication neatly sums up an approach to evaluating communication activities for government departments (and one applicable to others as well). The annex on Recommended Metrics provides some interesting indicators for measuring communication activities, in government and beyond.
Evaluation of communication activities of international and non-governmental organisations: A 15 year systematic review
As part of my PhD studies, I have undertaken a systematic review of how international and non-governmental organisations are evaluating their communication activities. I’m presenting a summary of this today at the European Evaluation Society Conference in Helsinki, Finland. Below are the slides, hope you find them interesting.
Much has been written and researched on the impact of communications – but little thought has been given to measuring the impact of journalism. How can the media measure the impact of their work?
Two recent posts explore this issue:
Ethan Zuckerman writes about how to measure the civic impact of journalism; one of his conclusions:
“A possible metric – the efficacy of a story in connecting people to community organizations, volunteering opportunities, and other forms of civic engagement.”
He goes on to conclude:
“If we measure only how many people view, like or tweet, but not how many people learn more, act or engage, we run the risk of serving only the market and forsaking our civic responsibilities, whether we’re editing a newspaper or writing a blog.”
Jonathan Stray writes about the metrics of journalism and says:
“The first challenge may be a shift in thinking, as measuring the effect of journalism is a radical idea. The dominant professional ethos has often been uncomfortable with the idea of having any effect at all, fearing “advocacy” or “activism.” While it’s sometimes relevant to ask about the political choices in an act of journalism, the idea of complete neutrality is a blatant contradiction if journalism is important to democracy. Then there is the assumption, long invisible, that news organizations have done their job when a story is published. That stops far short of the user, and confuses output with effect.”
Both posts make interesting reading and propose useful ideas. They come to similar conclusions: the need to go beyond output metrics and look at the impact of journalism on events, individuals and policies. There are also some interesting parallels with advocacy evaluation – food for thought!
Here is a fascinating research paper on Understanding public attitudes to aid and development (pdf) from the UK-based ODI and IPPR.
Relevant to monitoring and evaluation, it recommends:
“Campaigns should do more to communicate how change can and does happen in developing countries, including the role aid can play in catalysing or facilitating this change. Process and progress stories about how development actually happens may be more effective communication tools than campaigns focused straightforwardly on either inputs (such as pounds spent) or outputs (such as children educated).”
This is a weakness of campaigning about development and aid: the steps towards change – the so-called “theory of change” (“if we do this, it will lead to that”) – are left a mystery to the public, and often those running the programmes have not thought them through either…
Thanks to the Thoughtful Campaigner blog for bringing this to my attention.