New e-learning course: Cutting-Edge M&E
A new e-learning course, Cutting-Edge M&E: A Guide for Practitioners, is available from TRAASS International. The course is taught by Colin Jacobs, a senior trainer with more than 25 years’ experience in international development. Colin’s recent roles include President of the UK Evaluation Society and Head of Governance and Civil Society at the British Council.
This online course lays the groundwork for Monitoring and Evaluation (M&E) to make vital contributions to incentivising change and measuring performance. The course considers challenges in current M&E practice, introduces a toolbox of evaluation techniques and shows where these are best applied. Ways of promoting early participation and the engagement of key stakeholders are explored, and a step-by-step action plan for improving M&E practice is provided. Further information>>
Full disclosure: I also present an e-learning course for TRAASS International, Effective and Creative Evaluation Report Writing.
Event – The Future of Technology for Evaluation
A very interesting event is scheduled for February 20–21, 2017, in London: The Future of Technology for Monitoring, Evaluation, Research and Learning (MERL TECH). Learn more about the event>>
New report: Evaluation Capacity and Practice in the US Nonprofit Sector
A very interesting report is just out from the Innovation Network: Evaluation Capacity and Practice in the US Nonprofit Sector (pdf).
Here are some findings on resources for evaluation:
- 99% of organisations have someone responsible for evaluation
- 84% of organisations spend less than 5% of their budget on evaluation
- 16% spend nothing on evaluation (!)
There are also interesting findings on evaluation use and on the barriers and supporting factors for evaluation – view the report here (pdf)>>
New resource: Evaluation of Humanitarian Action Guide
ALNAP has recently released their Evaluation of Humanitarian Action Guide.
The guide was six years in the making and contains detailed advice and tips on evaluating humanitarian action. Even if your focus is not on evaluating humanitarian activities, Chapter 17 on Communicating and Reporting Findings and Results is well worth a read.
8 golden rules for communication evaluation
The UK Government Communication Service has produced a framework for evaluating communications (pdf).
The framework provides an overview of an integrated approach to evaluating communication activities and sets out eight golden rules for communication evaluation:
1. Set SMART objectives well before the start of your activity
2. Think carefully about who your target audience is when selecting relevant metrics from each of the five disciplines*
3. Ensure you adopt an integrated channel approach when evaluating your communications activity
4. Collect baselines and benchmarks where possible
5. Include a mix of qualitative and quantitative evidence
6. Regularly review performance
7. Act on any insight to drive continuous improvement and inform future planning
8. Make the link between your activity and its impact on your organisational goals or KPIs
*Media, digital, marketing, stakeholder engagement, internal communications
Are there any more to add? I would add the need to integrate evaluation into the daily work of communication professionals – so that it is thought about before activities start and while they are under way…
View the complete guide here (pdf)>>
Useful tool: checklist for quality of evidence
I came across this checklist tool (pdf) from BOND, the UK NGO network, on the quality of evidence in evaluation. I find the checklist a useful way of…well…checking…an evaluation report to assess its quality of evidence. It is based on five principles: voice and inclusion, appropriateness, triangulation, contribution and transparency. As an evaluator, I will try using it myself to “check” the evaluation reports I author…
View the checklist here (pdf)>>
Monitoring and Evaluation in a Complex Organisation
Here is an interesting briefing note from the Danish Refugee Council on “Monitoring and Evaluation in a Complex Organisation”:
Monitoring and evaluation can be relatively straightforward processes within simple projects, and there are well established procedures that can be applied. However, as this Evaluation and Learning Brief highlights, M&E systems are much more difficult to design and implement at the level of complex organisations. The key here is to strive for balance between an M&E system with too much rigidity, which suits head offices but allows little room for flexibility at field level, and one with too much flexibility, which may lead to a loss of coherence throughout the organisation.
Adapting M&E at the field level
The NGO Saferworld has published a very interesting Learning Paper (pdf) on their approach to monitoring and evaluation (M&E) at the field level. What is interesting in their paper is that they explain some of the challenges they faced with reporting and logframes, and the approaches they consequently adopted – adapting such tools as outcome harvesting and outcome mapping. For those interested in advocacy evaluation, many of the examples featured come from evaluations of advocacy activities.
What sort of evaluator are you?
From the folks at ImpactReady, a fun quiz to determine what sort of evaluator you are:
Positivist, Constructivist or Transformative?
P.S. I came out as a Constructivist Evaluator…
New paper: Beneficiary feedback in evaluation
DFID have released a new paper on the practice of beneficiary feedback in evaluation (pdf).
The paper highlights five key messages (listed below). A main point is that beneficiaries are often seen only as providers of data and are not given a broader role in the evaluation process – a point I can confirm from having been involved in many evaluations.
Rather ironically, the DFID study on beneficiary feedback includes no feedback from beneficiaries on the study…
Key Message 1: Lack of definitional clarity has led to a situation where the term beneficiary feedback is subject to vastly differing interpretations and levels of ambition within evaluation.
Key Message 2: There is a shared, normative value that it is important to hear from those who are affected by an intervention about their experiences. However, in practice this has been translated into beneficiary as data provider, rather than beneficiary as having a role to play in design, data validation and analysis and dissemination and communication.
Key Message 3: It is possible to adopt a meaningful, appropriate and robust approach to beneficiary feedback at key stages of the evaluation process, if not in all of them.
Key Message 4: It is recommended that a minimum standard is put in place. This minimum standard would require that evaluation commissioners and evaluators give due consideration to applying a beneficiary feedback approach at each of the four key stages of the evaluation process.
Key Message 5: A beneficiary feedback approach to evaluation does not in any way negate the need to give due consideration to the best combination of methods for collecting reliable data from beneficiaries and sourcing evidence from other sources.