Posts filed under ‘Evaluation methodology’
Standardization in Public Relations Measurement and Evaluation
David Michaelson and Don W. Stacks have published a new article on the need for standardization in PR measurement and evaluation. Here is a summary:
As the public relations profession focuses more and more on the outcomes associated with campaigns or public relations initiatives, the question of standards has shifted to the forefront of discussions among professionals, academics, and research providers. Making this shift even more important to establishing impact on business goals and objectives is the fact that standardized measures for public relations activities have never been recognized. Unlike other marketing communications disciplines, public relations practitioners have consistently failed to achieve consensus on what the basic evaluative measures are or how to conduct the underlying research for evaluating and measuring public relations performance.
New US policy on evaluation
USAID, the US government body responsible for foreign aid programs, has issued a new policy on evaluation. According to USAID itself, the new policy “seeks to redress the decline in the quantity and quality of USAID’s recent evaluation practice”. They highlight six key points of this policy:
1. Defining impact evaluation and performance evaluation, requiring at least one performance evaluation for each major program and any untested and innovative interventions, and encouraging impact evaluation for each major development objective in a country program, especially for new or untested approaches and interventions;
2. Calling for evaluation to be integrated into programs when they are designed;
3. Requiring sufficient resources be dedicated to evaluation, estimated at approximately three percent of total program dollars;
4. Requiring that evaluations use methods, whether qualitative or quantitative, that generate the highest quality evidence linked to the evaluation questions and that can reasonably be expected to be reproducible, yielding similar findings if applied by a different team of qualified evaluators;
5. Building local capacity by including local evaluators on evaluation teams and supporting partner government and civil society capacity to undertake evaluations; and
6. Insisting on transparency of findings with the presumption of full and active disclosure barring principled and rare exceptions.
A Guide to Actionable Measurement
The Bill & Melinda Gates Foundation has produced a new publication “A Guide to Actionable Measurement” (pdf).
To paraphrase, what they mean is evaluation and monitoring activities that can be used and acted upon.
Following are some excerpts from the guide that are well worth considering:
7 points on what actionable measurement is:
1. Consider measurement needs during strategy development and review
2. Prioritize intended audiences
3. Do not privilege a particular evaluation design or method
4. Focus on a limited set of clearly articulated questions
5. Align results across strategy, initiatives, and grants
6. Obtain information needed to inform decisions in a timely way
7. Allow time for reflection and the development of insight
4 points on evaluation at the strategy level:
1. Measure outcomes more frequently than impact
2. Measure for contribution, not attribution
3. Harmonize and collaborate
4. Limit the tracking of inputs, activities, and outputs at the strategy level
Indices, Benchmarks, and Indicators: Planning and Evaluating Human Rights Dialogues
For those interested in human rights and evaluation, an interesting publication has recently been released:
Indices, Benchmarks, and Indicators: Planning and Evaluating Human Rights Dialogues (pdf)
The publication provides guidance and advice on evaluating human rights dialogues and makes the point that:
“Ratification of treaties as a goal should be distinguished from the goal of improvements in the overall human rights record.”
In other words, the fact that countries agree to treaties does not mean the ultimate goal has been reached; rather, evaluation needs to monitor the actual application of human rights in-country.
Involving stakeholders in the evaluation process
An issue evaluators often come across is to what extent stakeholders should be involved in the evaluation process: How much input should stakeholders have into designing evaluation questions? When and what feedback should be given to stakeholders during the evaluation? How can the perspectives of all stakeholders be reflected in the evaluation questions and criteria?
An interesting guide has been put together that helps in answering some of these questions: “A Practical Guide for Engaging Stakeholders in Developing Evaluation Questions” (PDF) by the Robert Wood Johnson Foundation.
The guide proposes five steps to engaging with stakeholders:
Step 1: Prepare for stakeholder engagement
Step 2: Identify potential stakeholders
Step 3: Prioritize the list of stakeholders
Step 4: Consider potential stakeholders’ motivations for participating
Step 5: Select a stakeholder engagement strategy
A new evaluation method: The Evaluation Café
We are always on the lookout for different methods and approaches for evaluation. Here is a new method that we haven’t come across before: “the evaluation café”.
Following is a brief description:
The Evaluation Café is a group facilitation method that allows stakeholders of a project or programme to evaluate its impact in a brief, informal session. The purpose of the Evaluation Café is to build and document stakeholders’ views on success and impacts after a planned activity.
Workshop on communications evaluation
I recently conducted a one day training workshop for the staff of Gellis Communications on communications evaluation. We looked at several aspects including:
- How to evaluate communication programmes, products and campaigns;
- How to use the “theory of change” concept;
- Methods specific to communication evaluation including expert reviews, network mapping and tracking mechanisms;
- Options for reporting evaluation findings;
- Case studies and examples on all of the above.
Gellis Communications and I are happy to share the presentation slides used during the workshop – just see below. (These were combined with practical exercises – write to me if you would like copies.)
Evaluating online communication tools
Online tools, such as corporate websites, members’ directories or portals, increasingly play an important role in communications strategies. And of course, they are increasingly important to evaluate.
I just concluded an evaluation of an online tool, created to facilitate the exchange of information amongst a specific community. The tool in question, the Central Register of Disaster Management Capacities is managed by the United Nations Office for the Coordination of Humanitarian Affairs.
The methodology I used to evaluate this online tool is interesting as it combines:
- Content analysis
- Network mapping
- Online survey
- Interviews
- Expert review
- Web metrics
And for once, you can dig into the methodology and findings as the evaluation report is available publicly: View the full report here (pdf) >>
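Purely to illustrate what one of these components – network mapping – can look like in practice, here is a minimal sketch in Python using the networkx library. The members and information exchanges below are invented for illustration and are not taken from the evaluation itself.

```python
# A minimal, hypothetical sketch of a "network mapping" step: who exchanges
# information with whom, and which members act as hubs. The member names and
# exchanges below are invented purely for illustration.
import networkx as nx

# Each tuple is (sender, receiver) for one recorded information exchange.
exchanges = [
    ("Agency A", "Agency B"),
    ("Agency A", "Agency C"),
    ("Agency B", "Agency C"),
    ("Agency D", "Agency A"),
    ("Agency D", "Agency B"),
]

graph = nx.DiGraph()
graph.add_edges_from(exchanges)

# Degree centrality gives a rough indication of which members are the most
# connected within the community.
centrality = nx.degree_centrality(graph)
for member, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{member}: {score:.2f}")
```

In a real evaluation the edges would come from survey or usage data rather than a hard-coded list, but the idea is the same: the map shows which members sit at the centre of the exchange and which remain at the margins.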
Communications evaluation – 2009 trends

Last week I gave a presentation on evaluation for communicators (pdf) at the International Federation of Red Cross and Red Crescent Societies. A communicator asked me what trends I had seen in communications evaluation, particularly those relevant to the non-profit sector. This got me thinking, and here are some of the trends I saw in 2008 that I believe indicate some directions for 2009:
Measuring web & social media: as websites and social media grow in importance for communication programmes, so too does the need to measure their impact. Web analytics has grown in importance, as will the ability to measure social media (a simple sketch of basic web metrics follows after these trends).
Media monitoring not the be-all and end-all: after many years of organisations focusing only on media monitoring as the means of measuring communications, there is finally some realisation that media monitoring is an interesting gauge of visibility but not more. Organisations are now more and more interested in some qualitative analysis of the data collected (such as looking at how influential the media are, the tone of coverage and its importance).
Use of non-intrusive or natural data: organisations are also now considering “non-intrusive” or “natural” data – information that already exists – e.g. blog / video posts, customer comments, attendance records, conference papers, etc. As I’ve written about before, this data is underrated by evaluators as everyone rushes to survey and interview people.
Belated arrival of results-based management: despite existing for over 50 years, results-based management or management by objectives is only just arriving in many organisations. What does this mean for communicators? It means that at a minimum they have to set measurable objectives for their activities – which is starting to happen. They have no more excuses (pdf) for not evaluating!
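As a footnote to the first trend above, here is a minimal sketch of what very basic web metrics can look like. It is a hypothetical Python example: the log format, the file name and the choice of metrics (page views, unique visitors, top pages) are assumptions for illustration only, not a description of any particular analytics tool.

```python
# A minimal, hypothetical sketch of basic web metrics: page views, unique
# visitors and top pages, computed from a simplified access log. The log
# format and file name are assumptions for illustration only.
from collections import Counter

# Each line is assumed to be "<visitor_id> <path>", e.g. "203.0.113.7 /news/appeal"
def summarise_log(path="access.log"):
    visitors = set()
    pages = Counter()
    with open(path) as log:
        for line in log:
            parts = line.split()
            if len(parts) < 2:
                continue  # skip malformed lines
            visitor_id, page = parts[0], parts[1]
            visitors.add(visitor_id)
            pages[page] += 1

    print(f"Page views: {sum(pages.values())}")
    print(f"Unique visitors: {len(visitors)}")
    print("Top pages:")
    for page, views in pages.most_common(5):
        print(f"  {page}: {views}")

if __name__ == "__main__":
    summarise_log()
```

Counts like these only say how much a site is used, not whether it is achieving anything – which is exactly why they need to be combined with the more qualitative methods discussed above.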
Glenn
Survey responses – do the “don’t know” really know?
I’ve written before about survey responses and the use of “don’t know” as an option on a Likert scale. What I said was that in some situations a person may not have an opinion on a subject – and cannot say whether they agree or disagree – so it may be wise to include a “don’t know” option. Well, I just read an interesting article that suggests that people who respond “don’t know” may actually have an opinion – it’s just that they may require more time to develop confidence in, or awareness of, their choice. The article gives an example of how the opinion of undecided people can be accurately predicted by creative means:
In a recent study, 33 residents of an Italian town initially told interviewers that they were undecided about their attitude toward a controversial expansion of a nearby American military base. But researchers found that those people’s opinions could be predicted by measuring how quickly they made automatic associations between photographs of the military base and positive or negative words.
