Posts filed under ‘Communication evaluation’
Surveys for communicators
Increasingly, communicators need to be able to evaluate their activities, and designing and setting up online surveys is a key skill for soliciting feedback and interacting with audiences. Here are the slides from a practical workshop I ran last Friday for the Geneva Communicators Network, covering surveys for communicators from concept to analysis – hope they're of use!
Evaluating Government Communication Activity
The UK Government Communication Network have produced a new publication: “Evaluating Government Communication Activity – standards and guidance” (pdf).
The publication neatly summarises an approach for government departments (also applicable to other sectors) to evaluating communication activities. The annex on Recommended Metrics provides some interesting indicators for measuring communication activities.
Evaluation of communication activities of international and non-governmental organisations: A 15 year systematic review
As part of my PhD studies, I have undertaken a systematic review of how international and non-governmental organisations are evaluating their communication activities. I’m presenting a summary of this today at the European Evaluation Society Conference in Helsinki, Finland. Below are the slides, hope you find them interesting.
Measuring the impact of journalism
There has been a lot written and researched on the impact of communications, but little thought given to how to measure the impact of journalism: how can the media measure the impact of their work?
Two recent posts explore this issue:
Ethan Zuckerman writes about how to measure the civic impact of journalism and one conclusion is:
“A possible metric – the efficacy of a story in connecting people to community organizations, volunteering opportunities, and other forms of civic engagement.”
He goes on to conclude:
“If we measure only how many people view, like or tweet, but not how many people learn more, act or engage, we run the risk of serving only the market and forsaking our civic responsibilities, whether we’re editing a newspaper or writing a blog.”
Jonathan Stray writes about the metrics of journalism and says:
“The first challenge may be a shift in thinking, as measuring the effect of journalism is a radical idea. The dominant professional ethos has often been uncomfortable with the idea of having any effect at all, fearing “advocacy” or “activism.” While it’s sometimes relevant to ask about the political choices in an act of journalism, the idea of complete neutrality is a blatant contradiction if journalism is important to democracy. Then there is the assumption, long invisible, that news organizations have done their job when a story is published. That stops far short of the user, and confuses output with effect.”
Both posts make interesting reading, propose useful ideas, and come to similar conclusions: the need to go beyond output metrics and look at the impact of journalism on events, individuals and policies. There are also some interesting parallels with advocacy evaluation – food for thought!
Understanding public attitudes to aid and development
Here is a fascinating research paper on Understanding public attitudes to aid and development (pdf) from the UK-based ODI and IPPR.
Relevant to monitoring and evaluation, it recommends:
“Campaigns should do more to communicate how change can and does happen in developing countries, including the role aid can play in catalysing or facilitating this change. Process and progress stories about how development actually happens may be more effective communication tools than campaigns focused straightforwardly on either inputs (such as pounds spent) or outputs (such as children educated).”
This is a weakness of campaigning about development and aid: the steps towards change – the so-called "theory of change" ("if we do this, it will lead to that") – are rarely explained. They remain a mystery to the public, and often those running the programmes have not fully thought them through either.
Thanks to the Thoughtful Campaigner blog for bringing this to my attention.
Standards for social media measurement?
At the recent AMEC Measurement Summit there was an interesting discussion on setting standards for social media measurement. A group of specialists has been working on this for the past few years, and as more and more companies use social media and wonder how to measure the outcomes, some common standards would be useful. View the presentation below for an update on the latest developments:
Overall spending on PR flat but evaluation up by 5%

Spending on PR/communications in companies and organisations is flat, but spending on communication evaluation is up by 5%, according to a new study of senior-level PR/communication practitioners in the USA.
The USC Annenberg Generally Accepted Practices (GAP) for Public Relations study found that the share of PR/communication budgets devoted to evaluation jumped from 4% in 2009 to 9% in 2012 – even though some 80% of practitioners reported overall PR/communication budgets flat or decreasing.
The study also found a shift in focus towards “outcome” measures, such as influence on reputation, attitudes and awareness – and away from “output” measures such as clip counting/media coverage.
Advocacy evaluation: influencing climate change policy
Often I don't get to share the findings of the evaluations I undertake, but in the case of this advocacy evaluation – an area I've written about before – the findings are public and can be shared.
I was part of a team that evaluated phase 1 of an advocacy/research project – the Africa Climate Change Resilience Alliance (ACCRA). ACCRA aims to increase governments' and development actors' use of evidence in designing and implementing interventions that increase communities' capacity to adapt to climate hazards, variability and change. Advocacy plays a large role in trying to influence governments and development actors in this project. You can read more in the Executive Summary (pdf) of the evaluation findings.
The evaluation also produced five case studies highlighting successful advocacy strategies:
- Capacity building and district planning
- Secondment to a government ministry
- Reaching out to government and civil society in Uganda
- Disaster risk profiling in Ethiopia
- Exchanging views and know-how between ACCRA countries
The case studies can be viewed on the ACCRA Eldis community blog (n.b. you have to join the Eldis community to view them; membership is free of charge).
To disseminate the evaluation findings widely we also produced a multimedia clip, as featured below.
2012 European Summit on Communication Measurement announced
The International Association for Measurement and Evaluation of Communication (AMEC) has announced the programme for the 4th European Summit on Measurement, to be held in Dublin from 13 to 15 June 2012.
The Summit will include a day of workshops followed by two days of plenary sessions with guest speakers and panels.
Guide to evaluating communication products
I've written before about the challenges of evaluating communication products, i.e. brochures, videos, magazines and websites. Little systematic follow-up is done on these products, which often form key parts of larger communication programmes. Here is a very interesting guide in this area from the health sector: "Guide to Monitoring and Evaluating Health Information Products and Services" (pdf). Although focused on health, the guide offers insights into evaluating communication products at different levels, from reach through use to impact on an organisation.
Thanks to Jeff Knezovich, writing on the On Think Tanks blog, for bringing this to my attention.