Posts filed under ‘Evaluation methodology’

Monitoring and Evaluation in a Complex Organisation

Here is an interesting briefing note from the Danish Refugee Council on “Monitoring and Evaluation in a Complex Organisation”:

Monitoring and evaluation can be relatively straightforward processes within simple projects, and there are well established procedures that can be applied. However, as this Evaluation and Learning Brief highlights, M&E systems are much more difficult to design and implement at the level of complex organisations. The key here is to strive for balance between an M&E system with too much rigidity, which suits head offices but allows little room for flexibility at field level, and one with too much flexibility, which may lead to a loss of coherence throughout the organisation.

Read the full note (pdf)>>


May 12, 2016 at 2:28 pm 1 comment

Adapting M&E at the field level

The NGO Saferworld has published a very interesting Learning Paper (pdf) on their approach to monitoring and evaluation (M&E) at the field level. What is interesting in the paper is that they explain some of the challenges they faced with reporting and logframes, and the approaches they consequently adopted – adapting tools such as outcome harvesting and outcome mapping. Also, for those interested in advocacy evaluation, many of the examples featured come from evaluating advocacy activities.

January 22, 2016 at 1:36 pm 1 comment

What sort of evaluator are you?

From the folks at ImpactReady, a fun quiz to determine what sort of evaluator you are:
Positivist, Constructivist or Transformative?

Take the quiz now!

p.s. I came out as a Constructivist Evaluator…

June 22, 2015 at 11:57 am 1 comment

New paper: Beneficiary feedback in evaluation

DFID have released a new paper on the practice of beneficiary feedback in evaluation (pdf).

The paper highlights five key messages (listed below), a main point being that beneficiaries are often seen only as providers of data and aren’t given a broader role in the evaluation process – a point I can confirm from having been involved in many evaluations.

Rather ironically, the DFID study on beneficiary feedback includes no feedback from beneficiaries on the study…

Key Message 1: Lack of definitional clarity has led to a situation where the term beneficiary feedback is subject to vastly differing interpretations and levels of ambition within evaluation.

Key Message 2: There is a shared, normative value that it is important to hear from those who are affected by an intervention about their experiences. However, in practice this has been translated into beneficiary as data provider, rather than beneficiary as having a role to play in design, data validation and analysis, and dissemination and communication.

Key Message 3: It is possible to adopt a meaningful, appropriate and robust approach to beneficiary feedback at key stages of the evaluation process, if not in all of them.

Key Message 4: It is recommended that a minimum standard is put in place. This minimum standard would require that evaluation commissioners and evaluators give due consideration to applying a beneficiary feedback approach at each of the four key stages of the evaluation process.

Key Message 5: A beneficiary feedback approach to evaluation does not in any way negate the need to give due consideration to the best combination of methods for collecting reliable data from beneficiaries and sourcing evidence from other sources.

View the full paper here (pdf)>>

May 13, 2015 at 6:35 am 1 comment

Geneva event on systematic reviews and evaluation – 7 May 2015

For those in the Geneva region, here is an interesting event presented by the Geneva Evaluation Network:

Dr. Davies will give a presentation on the role of systematic reviews (SRs) in strengthening the empirical evidence base for policies or programmes. Dr. Davies’ presentation will be followed by two SR case studies from the ILO and WSSCC.

Dr. Davies is the Deputy Director of the Systematic Reviews section at the International Initiative for Impact Evaluation, 3ie. He heads 3ie’s European office based at the London International Development Centre and is responsible for representing 3ie in Europe, the Middle East and Africa.

The event will take place on May 7, 2015, from 13.30 to 16.00 in Room AB 13.1 of WIPO’s main building. The presentation will be made in English. Coffee and tea will be served.

To attend, please email Su Perera (Su.perera@wsscc.org) from WSSCC and Claude Hilfiker (claude.hilfiker@wipo.int) from WIPO by 1 May.


April 27, 2015 at 8:20 am Leave a comment

Two New Advocacy Evaluation Tools

Here are two new advocacy evaluation tools from the Center for Evaluation Innovation:

The Advocacy Strategy Framework (pdf): presents a simple one-page tool for thinking about the theories of change that underlie policy advocacy strategies. Check out the “interim outcomes and indicators” on the last page – a very good range of advocacy outcomes/indicators.

Four Tools for Assessing Grantee Contribution to Advocacy Efforts (pdf): offers funders practical guidance on how to assess a grantee’s contribution to advocacy outcomes. The four tools include:
1. A question bank
2. Structured grantee reporting
3. An external partner interview guide
4. Contribution analysis


April 1, 2015 at 4:49 pm 1 comment

Advocacy evaluation – new methods

The latest edition of Evaluation Connections (pdf), the newsletter of the European Evaluation Society, has an interesting article, “Advocacy evaluation: lessons from Brazil (and the internet)”, by William N. Faulkner.

The article describes some of the new methods the evaluation team used, such as Sankey diagrams and mind mapping for qualitative analysis.

I’ve reproduced in this post the Sankey diagram from the article, which shows the flow from “outputs” to “outcomes” for the advocacy work – quite a good visualisation of the information.
View the article in this pdf (go to page 7)>>
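If you’d like to experiment with a Sankey diagram on your own advocacy data, here is a minimal sketch using Python and the plotly library. To be clear, this is not the tool used by the article’s evaluation team, and the output/outcome labels and link weights below are invented purely for illustration:

```python
# Minimal Sankey diagram sketch: advocacy outputs flowing into outcomes.
# Labels and weights are hypothetical, for illustration only.
import plotly.graph_objects as go

# Node labels: outputs first (indices 0-1), then outcomes (indices 2-3).
labels = [
    "Policy briefs",         # 0 (output)
    "Stakeholder meetings",  # 1 (output)
    "Media coverage",        # 2 (outcome)
    "Policy commitments",    # 3 (outcome)
]

fig = go.Figure(go.Sankey(
    node=dict(label=labels, pad=15, thickness=20),
    link=dict(
        source=[0, 0, 1, 1],  # each flow starts at one of the output nodes...
        target=[2, 3, 2, 3],  # ...and ends at one of the outcome nodes
        value=[4, 2, 1, 5],   # invented weights, e.g. counts of observed instances
    ),
))
fig.update_layout(title_text="Advocacy outputs to outcomes (illustrative)")
fig.write_html("advocacy_sankey.html")  # open the file in a browser to view
```

Running the script writes an interactive HTML file; the width of each band is proportional to its link weight, which is what makes the output-to-outcome flows easy to read at a glance.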


March 18, 2015 at 4:40 pm 1 comment

Addressing causation in humanitarian evaluation

A new discussion paper has been released by ALNAP entitled “Addressing causation in humanitarian evaluation: A discussion on designs, approaches and examples”.

The paper discusses possible evaluation designs and approaches that can help provide credible answers to questions of causation in humanitarian settings, drawing on examples from Oxfam, WFP, UNHCR and NRC.

The paper cites an evaluation I worked on with NRC in the realm of advocacy evaluation.

View the paper here>> 

February 10, 2015 at 3:00 pm 1 comment

The checklist as an evaluation tool: examples from other fields

Rick Davies of the Monitoring and Evaluation NEWS blog has published an interesting post exploring how surgeons and pilots use checklists – and lists other interesting resources on this issue.

See also my earlier posts here and here on using checklists.

January 15, 2015 at 2:48 pm Leave a comment

Three guides for focus groups

Recently I was running a series of focus groups and wanted to update myself on the “ways” and “hows” – I found the following three guides useful:

Designing and Conducting Focus Group Interviews (Richard A. Krueger, University of Minnesota) (pdf) >> 

Guidelines for Conducting a Focus Group (Eliot & Associates) (pdf) >>

Toolkit for Conducting Focus Groups (Omni) (pdf) >>


September 25, 2014 at 7:23 pm 3 comments


