Posts filed under ‘Evaluation methodology’

New resource: Evaluation of Humanitarian Action Guide

ALNAP has recently released their Evaluation of Humanitarian Action Guide.

The guide was six years in the making and contains detailed advice and tips on evaluating humanitarian action. Even if your focus is not on evaluating humanitarian activities, Chapter 17 on Communicating and Reporting Findings and Results is well worth a read.

View the guide here>>

December 7, 2016 at 7:16 am Leave a comment

8 golden rules for communication evaluation

The UK government’s Communication Service has produced a framework for evaluating communications (pdf). 

The framework provides an overview of an integrated approach to evaluating communication activities and sets out eight golden rules for communication evaluation:

1. Set SMART objectives well before the start of your activity

2. Think carefully about who your target audience is when selecting relevant metrics from each of the five disciplines*

3. Ensure you adopt an integrated channel approach when evaluating your communications activity

4. Collect baselines and benchmarks where possible

5. Include a mix of qualitative and quantitative evidence

6. Regularly review performance

7. Act on any insight to drive continuous improvement and inform future planning

8. Make the link between your activity and its impact on your organisational goals or KPIs

*Media, digital, marketing, stakeholder engagement, internal communications

Are there any more to add? I would add the need to integrate evaluation into the daily work of communication professionals – so that it is considered before activities start and while they are under way…

View the complete guide here (pdf)>>

December 1, 2016 at 7:10 am Leave a comment

Useful tool: checklist for quality of evidence

I came across this checklist tool (pdf) from BOND, the UK NGO network, on quality of evidence in evaluation. I find the checklist a useful way of…well…checking…an evaluation report to assess its quality of evidence. It’s based on five principles: voice and inclusion, appropriateness, triangulation, contribution and transparency. As an evaluator, I will try using it myself to “check” the evaluation reports I author…

View the checklist here (pdf)>>

November 8, 2016 at 10:43 am 1 comment

Monitoring and Evaluation in a Complex Organisation

Here is an interesting briefing note from the Danish Refugee Council on “Monitoring and Evaluation in a Complex Organisation”.

Monitoring and evaluation can be relatively straightforward processes within simple projects, and there are well established procedures that can be applied. However, as this Evaluation and Learning Brief highlights, M&E systems are much more difficult to design and implement at the level of complex organisations. The key here is to strive for balance between an M&E system with too much rigidity, which suits head offices but allows little room for flexibility at field level, and one with too much flexibility, which may lead to a loss of coherence throughout the organisation.

Read the full note (pdf)>>

May 12, 2016 at 2:28 pm 1 comment

Adapting M&E at the field level

The NGO Saferworld has published a very interesting Learning Paper (pdf) on their approach to monitoring and evaluation (M&E) at the field level. What is interesting in the paper is that they explain some of the challenges they faced with reporting and logframes and the approaches they consequently adopted, adapting tools such as outcome harvesting and outcome mapping. Also, for those interested in advocacy evaluation, many of the examples featured come from evaluating advocacy activities.

January 22, 2016 at 1:36 pm 1 comment

What sort of evaluator are you?

From the folks at ImpactReady, a fun quiz to determine what sort of evaluator you are:
Positivist, Constructivist or Transformative?

Take the quiz now!

p.s. I came out as a Constructivist Evaluator…

June 22, 2015 at 11:57 am 1 comment

New paper: Beneficiary feedback in evaluation

DFID have released a new paper on the practice of beneficiary feedback in evaluation (pdf).

The paper highlights five key messages (listed below), with a main point being that beneficiaries are often seen only as providers of data and aren’t given a broader role in the evaluation process – a point I can confirm from having been involved in many evaluations.

Rather ironically, the DFID study on beneficiary feedback includes no feedback from beneficiaries on the study…

Key Message 1: Lack of definitional clarity has led to a situation where the term beneficiary feedback is subject to vastly differing interpretations and levels of ambition within evaluation.

Key Message 2: There is a shared, normative value that it is important to hear from those who are affected by an intervention about their experiences. However, in practice this has been translated into beneficiary as data provider, rather than beneficiary as having a role to play in design, data validation and analysis and dissemination and communication.

Key Message 3: It is possible to adopt a meaningful, appropriate and robust approach to beneficiary feedback at key stages of the evaluation process, if not in all of them.

Key Message 4: It is recommended that a minimum standard is put in place. This minimum standard would require that evaluation commissioners and evaluators give due consideration to applying a beneficiary feedback approach at each of the four key stages of the evaluation process.

Key Message 5: A beneficiary feedback approach to evaluation does not in any way negate the need to give due consideration to the best combination of methods for collecting reliable data from beneficiaries and sourcing evidence from other sources.

View the full paper here (pdf)>>

May 13, 2015 at 6:35 am 1 comment
