Understanding and use of evaluation – new report
Here is an interesting paper from ALNAP looking at how the understanding and use of evaluation in humanitarian action can be improved:
Harnessing the Power of Evaluation in Humanitarian Action: An initiative to improve understanding and use of evaluation (pdf)
The paper sets out a framework for improving the understanding and use of evaluation in four key areas:
Capacity Area 1: Leadership, culture and structure
• Ensure leadership is supportive of evaluation and monitoring
• Promote an evaluation culture
• Increase the internal demand for evaluation information
• Create organisational structures that promote evaluation
Capacity Area 2: Evaluation purpose and policy
• Clarify the purpose of evaluation (accountability, audit, learning)
• Clearly articulate evaluation policy
• Ensure evaluation processes are timely and form an integral part of the decision-making cycle
• Emphasise quality, not quantity
Capacity Area 3: Evaluation processes and systems
• Develop a strategic approach to selecting what should be evaluated
• Involve key stakeholders throughout the process
• Use both internal and external personnel to encourage a culture of evaluation
• Improve the technical quality of the evaluation process
• Assign high priority to effective dissemination of findings, including through new media (video, web)
• Ensure there is a management response to evaluations
• Carry out periodic meta-evaluations and evaluation syntheses, and review recommendations
Capacity Area 4: Supporting processes and mechanisms
• Improve monitoring throughout the programme cycle
• Provide the necessary human resources and incentive structures
• Secure adequate financial resources
• Understand and take advantage of the external environment:
– Use peer networks to encourage change
– Engage with media demands for information
– Engage with donors on their evaluation needs
The Elusive Craft of Evaluating Advocacy
Here is an interesting article, “The Elusive Craft of Evaluating Advocacy” (pdf), that examines some of the challenges of undertaking advocacy evaluation, mostly from a US perspective.
The authors consider advocacy evaluation more of a “craft” than an exact science:
“The real art of advocacy evaluation, which is beyond the reach of quantitative methods, is assessing influence… Advocacy evaluation is a craft—an exercise in trained judgment—one in which tacit knowledge, skill, and networks are more useful than the application of an all-purpose methodology.”
North American Summit on Measurement, September 18-20, 2011
The annual North American Summit on communication/PR measurement is coming up in September 2011:
Since it began in 2003, the North American Summit on Public Relations Measurement has enjoyed an international reputation for being one of the world’s leading annual conferences about research, measurement and evaluation in communications and public relations.
Each year this event features a number of unique, hands-on pre-conference workshops along with a day and a half of superb program sessions focusing on how measurement is being used effectively throughout the communications industry. This measurement summit is also noted for having several superb networking events where attendees have opportunities to exchange insights with international experts.
Through lectures, case studies and interactive discussions led by some of the world’s most noted measurement experts, the North American Summit on Public Relations Measurement annually exposes conference delegates to innovations, methodologies and best practices from some of the world’s most successful public relations measurement programs.
How much is publicity worth?
For those interested in learning more about the debate on whether we can – or should – put a monetary value on media coverage, here is an excellent article from Carl Bialik, “the numbers guy” at the WSJ. His blog post on the same subject provides further insights.
As he points out, a recent study (pdf) indicates that editorial/media coverage is not necessarily more valuable to the reader than a paid ad – which is the crutch all media evaluation costing rests upon…
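To make the arithmetic behind that costing explicit: the conventional approach is the advertising value equivalent (AVE), which prices editorial coverage at what the same space or airtime would have cost as paid advertising, often inflated by an “editorial multiplier” on the assumption that editorial coverage is more credible than an ad. A rough sketch of the calculation:

\[ \text{AVE} = \text{coverage volume} \times \text{equivalent advertising rate} \times k \]

where the coverage volume is measured in column inches or seconds of airtime, the rate comes from the outlet’s advertising rate card, and k ≥ 1 is the editorial multiplier (often quoted in the 2–3 range). The multiplier is precisely the assumption the study undermines: if editorial coverage is no more valuable to the reader than a paid ad, there is no basis for setting k above 1.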
The communication evaluation challenges for 2020?
At the recent AMEC Measurement Summit participants voted on what they believed should be the top priorities for communications evaluation until 2020. The top five priorities (ranked) were:
1. How to measure the return on investment (ROI) of public relations
2. Create and adopt global standards for social media measurement
3. Measurement of PR campaigns and programmes needs to become an intrinsic part of the PR toolkit
4. Institute a client education program such that clients insist on measurement of outputs, outcomes and business results from PR programs
5. Define approaches that show how corporate reputation builds/creates value.
Well, No. 1 I feel is not going to be an easy one, given the diverse opinions on the issue. No. 2 merits attention, but I believe “standards” for measuring social media may be a pipe dream (for me, it all depends on what you want out of social media, which in turn determines what you measure). No. 3 certainly makes sense, although we heard at the AMEC summit that the main obstacle to having evaluation in the PR toolkit is the fear of PR agencies losing part of their budgets to evaluation… No. 4 I would fully support, and on No. 5 I believe interesting work is already being done.
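On No. 1, it is worth spelling out where the difficulty actually sits: the ROI arithmetic itself is trivial, and the hard part is entirely in the numerator. A minimal sketch, using the standard ROI definition:

\[ \text{ROI} = \frac{\text{financial return attributable to PR} - \text{cost of the PR programme}}{\text{cost of the PR programme}} \]

Isolating the return attributable to PR from everything else that moves sales and reputation is the attribution problem – which is why opinions on measuring PR ROI remain so diverse.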
Read more on the AMEC website>>
Social media measurement – standards or ritual measurement
I am currently at AMEC’s 3rd European Summit on Measurement in Lisbon, where a major discussion about social media measurement was held today – the Dummy Spit blog by Tom Watson provides a good summary of the debate.
New thinking on conference evaluation
Here is an interesting fact sheet from the Evaluation Uncertainty blog:
How to evaluate a conference (pdf)
The author looks at the perspectives of attendees, exhibitors and sponsors.
For example, for attendees, the focus is placed on knowledge transfer, networking and expectations.
Why are evaluation results not used…?
I’ve written previously on the issue of how to make sure that evaluation results are used (or at least considered…). Here is a new publication Making Evaluations Matter: A Practical Guide for Evaluators (pdf) from the Centre for Development Innovation that goes into much depth about this issue.
They state four general reasons why evaluation results are often not used – evaluations often:
- Fail to focus on intended use by intended users and are not designed to fit the context and situation
- Do not focus on the most important issues – resulting in low relevance
- Are poorly understood by stakeholders
- Fail to keep stakeholders informed and involved during the process and when design alterations are necessary.
I think the first and last reasons are particularly pertinent. We often don’t have enough insight into how evaluation results will be used – and we also fail to inform and involve stakeholders during the actual evaluation.
Advocacy Impact Assessment Guidelines
Here is an interesting fact sheet from CABI.org – “Advocacy Impact Assessment Guidelines” (pdf).
The fact sheet provides a very good summary of evaluating advocacy actions – the “how” and “what” to evaluate. It also highlights some key points to keep in mind, summarised here:
- Different stakeholders will have different views on what success is;
- If you cannot prove impact, be satisfied with a critically informed assessment of change;
- Include subjective criteria (i.e. successes people feel have taken place but cannot substantiate with evidence);
- Break down your advocacy intervention into manageable components;
- Be practical, yet flexible. The external environment in which your advocacy takes place will be changing all the time;
- Monitor changes in your strategy itself;
- Collaborative advocacy means that individual contributions cannot be separated from the success of the whole effort;
- Share evaluation results with a wide range of people to show the disbelievers that advocacy can work and to motivate those who have been involved.
View the fact sheet here (pdf) >>
How do evaluation results influence policy?
Here is an interesting paper from the International Initiative for Impact Evaluation that focuses on how the results of impact evaluations influence policy:
“Sound expectations: from impact evaluations to policy change” (pdf)
A main conclusion of the paper is as follows:
“The paper concludes that, ultimately, the fulfillment of policy change based on the results of impact evaluations is determined by the interplay of the policy influence objectives with the factors that affect the supply and demand of research in the policymaking process.”