Posts filed under ‘Development evaluation’

New paper: Implementing Development Evaluations under Severe Resource Constraints

A very interesting paper from the Centre for Development Impact on implementing evaluations with limited resources. Here is a summary:

Most agency evaluations are very short on both resources and time, with no proper opportunity to assess impact in a valid manner. The methodology for these evaluations is based on interviews, a review of the available programme literature and possibly a quick visit to one (often unrepresentative and usually successful) programme site. This means that the results of such evaluations depend heavily on the experience and judgement of the evaluator, the opinions received, and the level of support from the commissioner. This CDI Practice Paper by Richard Longhurst reviews how to make the best of such a situation, drawing on lessons learned from the techniques of better-resourced evaluations and other techniques that have been used. A simple framework can relate the type of evaluation to the resources available and enable better planning and use of evaluations across an organisation.

View the paper here (pdf)>>


October 15, 2013 at 2:28 pm

Advocacy evaluation: influencing climate change policy

Often I don’t get to share the findings of the evaluations I undertake, but in the case of this advocacy evaluation (an area I’ve written about before), the findings are public and can be shared.

I was part of a team that evaluated phase 1 of an advocacy/research project – the Africa Climate Change Resilience Alliance (ACCRA). ACCRA aims to increase governments’ and development actors’ use of evidence in designing and implementing interventions that increase communities’ capacity to adapt to climate hazards, variability and change. Advocacy plays a large role in this project’s efforts to influence governments and development actors. You can read more in the Executive Summary (pdf) of the evaluation findings.

The evaluation also produced five case studies highlighting successful advocacy strategies:

  • Capacity building and district planning
  • Secondment to a government ministry
  • Reaching out to government and civil society in Uganda
  • Disaster risk profiling in Ethiopia
  • Exchanging views and know-how between ACCRA countries

The case studies can be viewed on the ACCRA Eldis community blog (n.b. you have to join the Eldis community to view the case studies; it’s free of charge).

To disseminate the evaluation findings widely, we also produced a multimedia clip, featured below.

May 1, 2012 at 9:12 pm

New US policy on evaluation

USAID, the US government body responsible for foreign aid programs, has issued a new policy on evaluation. According to USAID itself, the new policy “seeks to redress the decline in the quantity and quality of USAID’s recent evaluation practice”. The agency highlights six key points of this policy:

1. Defining impact evaluation and performance evaluation, requiring at least one performance evaluation for each major program and for any untested and innovative interventions, and encouraging impact evaluation for each major development objective in a country program, especially for new or untested approaches and interventions;

2. Calling for evaluation to be integrated into programs when they are designed;

3. Requiring that sufficient resources be dedicated to evaluation, estimated at approximately three percent of total program dollars;

4. Requiring that evaluations use methods, whether qualitative or quantitative, that generate the highest quality evidence linked to the evaluation questions and that can reasonably be expected to be reproducible, yielding similar findings if applied by a different team of qualified evaluators;

5. Building local capacity by including local evaluators on evaluation teams and supporting partner government and civil society capacity to undertake evaluations; and

6. Insisting on transparency of findings, with the presumption of full and active disclosure barring principled and rare exceptions.

View the new policy here (pdf)>>

January 28, 2011 at 4:19 pm

Evaluating Strategic Communication for Avian Influenza/Pandemic Influenza

As the world focuses on the current flu pandemic, we have seen many efforts to communicate prevention and treatment approaches.

But how can the results of such communication efforts be measured? Here is an interesting set of guidelines from UNICEF on this issue:

Research, Monitoring and Evaluating Strategic Communication for Behaviour and Social Change with Special Reference to the Prevention and Control of Avian Influenza/Pandemic Influenza (pdf)>>

Although it’s a technical document, it provides interesting insight into sampling and interviewing techniques for evaluating communication campaigns.

December 2, 2009 at 8:13 am

New guide to evaluating communications for non-profits and foundations

The Communications Network has just published a new guide, “Are We There Yet? A Communications Evaluation Guide”. The guide has been written for communicators working in non-profit organisations and foundations, and it contains interesting case studies and useful advice. Download the guide here (pdf)>>

January 10, 2009 at 12:23 pm

Communications evaluation – 2009 trends


Last week I gave a presentation on evaluation for communicators (pdf) at the International Federation of Red Cross and Red Crescent Societies. A communicator asked me what trends I had seen in communications evaluation, particularly those relevant to the non-profit sector. This got me thinking, and here are some trends I saw in 2008 that I believe point to directions for 2009:

Measuring web & social media: as websites and social media grow in importance for communication programmes, so too does the need to measure their impact. Web analytics has grown in importance, and the ability to measure social media will follow.

Media monitoring not the be-all and end-all: after many years of organisations focusing only on media monitoring as the means of measuring communications, there is finally some realisation that media monitoring is an interesting gauge of visibility but not more. Organisations are now more and more interested in qualitative analysis of the data collected (such as how influential the media outlets are, the tone of the coverage and its prominence).

Use of non-intrusive or natural data: organisations are also now considering “non-intrusive” or “natural” data – information that already exists – e.g. blog and video posts, customer comments, attendance records, conference papers, etc. As I’ve written about before, this data is underrated by evaluators as everyone rushes to survey and interview people.

Belated arrival of results-based management: despite existing for over 50 years, results-based management (or management by objectives) is only now arriving in many organisations. What does this mean for communicators? It means that, at a minimum, they have to set measurable objectives for their activities – which is starting to happen. They have no more excuses (pdf) for not evaluating!

Glenn

December 23, 2008 at 6:08 pm

Research in communication projects

I came across this useful table from the Devcom blog, which explains how research can be used at different stages of communication projects. Many of the elements will be familiar to readers, but what caught my eye was the first method, audience analysis, which communicators often ignore in their rush to create materials and campaigns. The blog also has an example of an audience analysis (pdf) for readers. And method 3, pretesting of prototype materials, is another step often skipped over.

Read the full post here>>

Method and purpose:

1. Audience analysis – to characterize the audience (demographics, communication environment) in order to develop the content of materials and set campaign targets
2. Baseline survey – to assess knowledge, beliefs and behavior, documenting the current scenario
3. Pretesting of prototype materials – to determine the appeal and understandability of materials (radio drama, campaign materials)
4. Management monitoring survey – to track implementation plans and make adjustments as needed
5. Content analysis – to analyze the content of audience feedback
6. Post-test survey – to determine whether the project has achieved its objectives

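To see how methods 2 and 6 fit together in practice, here is a minimal sketch in Python of comparing a baseline survey against a post-test survey to judge whether a campaign target was met. All figures and the target value are hypothetical, purely for illustration.

```python
# Minimal sketch: comparing a baseline survey (method 2) with a
# post-test survey (method 6). All figures below are hypothetical.

baseline = {"aware_of_message": 120, "respondents": 400}
post_test = {"aware_of_message": 260, "respondents": 400}

def awareness_rate(survey):
    """Proportion of respondents aware of the campaign message."""
    return survey["aware_of_message"] / survey["respondents"]

before = awareness_rate(baseline)
after = awareness_rate(post_test)
target = 0.60  # hypothetical target, set during audience analysis

print(f"Baseline awareness:  {before:.0%}")
print(f"Post-test awareness: {after:.0%}")
print(f"Change: {after - before:+.0%}")
print("Target met" if after >= target else "Target not met")
```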
October 28, 2008 at 7:36 pm

Evaluation and communications for development

If, like me, you are interested in how communications can support development programmes – and consequently how such communications can be evaluated – then you might want to check out the evaluation page of the Communication Initiative Network website, where you can find hundreds of evaluation studies and reports on communication for development, ranging from reproductive health to tobacco control to conflict. View the range of subjects here>>

October 20, 2008 at 7:02 pm

From broad goals to specific indicators

No doubt you have heard of the Millennium Development Goals (MDGs): eight broad goals on poverty, ill-health and other issues that all countries have agreed to try to reach by 2015.

From a monitoring and evaluation point of view, what is interesting is that these goals are broad, sweeping statements, such as:

Goal 1: Eradicate Extreme Hunger and Poverty

Goal 3: Promote Gender Equality and Empower Women

One could ask: how can these broad goals possibly be monitored and evaluated?

As detailed on this MDGs monitoring website, the approach has been to set specific indicators for each goal, for example:

Goal 3: Promote Gender Equality and Empower Women

Target: Eliminate gender disparity in primary and secondary education, preferably by 2005, and in all levels of education no later than 2015

Indicators:
3.1 Ratios of girls to boys in primary, secondary and tertiary education
3.2 Share of women in wage employment in the non-agricultural sector
3.3 Proportion of seats held by women in national parliament

So from broad goals, the MDGs move to between two and seven specific indicators per goal to monitor. That’s an interesting approach; too often we see organisations set broad goals and then make no attempt to detail any indicators.
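To make the goal-to-indicator structure concrete, here is a minimal sketch in Python of how one goal and its indicators might be represented and one indicator computed. The goal and indicator names come from the MDG example above; the enrolment figures are entirely hypothetical.

```python
# Minimal sketch of the broad-goal-to-specific-indicator structure.
# Indicator names are from the MDG example above; the enrolment
# figures below are hypothetical.

goal_3 = {
    "goal": "Promote Gender Equality and Empower Women",
    "indicators": {
        "3.1": "Ratios of girls to boys in primary, secondary and tertiary education",
        "3.2": "Share of women in wage employment in the non-agricultural sector",
        "3.3": "Proportion of seats held by women in national parliament",
    },
}

def enrolment_ratio(girls_enrolled, boys_enrolled):
    """Indicator 3.1: girls-to-boys enrolment ratio (1.0 = parity)."""
    return girls_enrolled / boys_enrolled

# Hypothetical primary-school enrolment figures for one country:
print(goal_3["indicators"]["3.1"])
print(f"Girls-to-boys ratio: {enrolment_ratio(4_200_000, 4_800_000):.2f}")
```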

The MDGs monitoring website plays an active role in monitoring these indicators, combining quantitative data (statistics) with qualitative data (case studies) – also an interesting approach to showing how such indicators can be tracked.

Glenn

July 30, 2008 at 6:16 am

Social network analysis and evaluation

Measuring networks can have many applications: how influence works, how change happens within a community, how people meet, etc. I’m interested in measuring networks as an indicator of how contacts are established amongst people, particularly at events and conferences, as I’ve written about previously.

In this area, there is a new resource page on social network analysis and evaluation from M&E news. The page contains many useful resources and examples of network analysis and evaluation for non-profit organisations, education, events, and research and development – including one from me.

(The image above is from a network analysis of a conference; further information is available here>>)
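For readers who want to try this kind of analysis themselves, here is a minimal sketch using the Python networkx library. The attendees and the list of who met whom are hypothetical, standing in for the contact data you might collect at an event.

```python
# Minimal sketch of social network analysis for a conference, using
# the networkx library. Attendees and contacts are hypothetical.

import networkx as nx

# Each pair records that two attendees made contact at the event.
contacts = [
    ("Alice", "Bob"), ("Alice", "Carol"), ("Alice", "Dave"),
    ("Bob", "Carol"), ("Carol", "Eve"), ("Dave", "Eve"),
]

G = nx.Graph()
G.add_edges_from(contacts)

# Degree centrality: the share of other attendees each person met.
# The most central people are the network's "connectors".
for person, score in sorted(
    nx.degree_centrality(G).items(), key=lambda item: -item[1]
):
    print(f"{person}: {score:.2f}")
```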

Glenn

June 24, 2008 at 2:27 pm
