Posts filed under ‘Development evaluation’

Evaluation and communications for development

If, like me, you are interested in how communications can support development programmes, and consequently how such communications can be evaluated, then you might want to check out the evaluation page of the Communication Initiative Network website, where you can find hundreds of evaluation studies and reports on communication for development, ranging from reproductive health to tobacco control to conflict. View the range of subjects here>>

October 20, 2008 at 7:02 pm

From broad goals to specific indicators

No doubt you have heard of the Millennium Development Goals (MDGs), eight broad goals on poverty, ill-health and related issues that all countries have agreed to try to reach by 2015.

From a monitoring and evaluation point of view, what is interesting is that these goals are broad, sweeping statements, such as:

Goal 1: Eradicate Extreme Hunger and Poverty

Goal 3: Promote Gender Equality and Empower Women

One could ask: how can these broad goals possibly be monitored and evaluated?

As detailed on this MDGs monitoring website, what has been done is to set specific indicators for each goal, for example:

Goal 3: Promote Gender Equality and Empower Women

Description: Eliminate gender disparity in primary and secondary education, preferably by 2005, and in all levels of education no later than 2015

3.1 Ratios of girls to boys in primary, secondary and tertiary education
3.2 Share of women in wage employment in the non-agricultural sector
3.3 Proportion of seats held by women in national parliament

So from broad goals, the MDGs move down to two to seven specific indicators per goal that are actually monitored. That’s an interesting approach, as we often see organisations set broad goals and then make no attempt to detail any indicators.
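To make the jump from broad goal to trackable number concrete, here is a minimal Python sketch of how a quantitative indicator like 3.1 (ratio of girls to boys in education) might be computed. All enrolment figures below are invented for illustration; real MDG reporting of course uses official national statistics.

```python
def gender_parity_ratio(girls_enrolled, boys_enrolled):
    """Indicator 3.1 style ratio of girls to boys; 1.0 means parity."""
    if boys_enrolled <= 0:
        raise ValueError("boys_enrolled must be positive")
    return girls_enrolled / boys_enrolled

# Hypothetical enrolment counts by level of education
enrolment = {
    "primary":   {"girls": 480_000, "boys": 500_000},
    "secondary": {"girls": 210_000, "boys": 300_000},
}

for level, counts in enrolment.items():
    ratio = gender_parity_ratio(counts["girls"], counts["boys"])
    print(f"{level}: {ratio:.2f}")
```

A single number per level like this is what makes year-on-year monitoring possible: the target (eliminate disparity) translates into "the ratio should approach 1.0".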

The MDGs monitoring website plays an active role in monitoring these indicators, combining quantitative data (statistics) with qualitative data (case studies), an interesting demonstration of how such indicators can be tracked.


July 30, 2008 at 6:16 am

Social network analysis and evaluation

Measuring networks can have many applications: how influence works, how change happens within a community, how people meet, etc. I’m interested in measuring networks as an indicator of how contacts are established amongst people, particularly at events and conferences, as I’ve written about previously.

In this area, there is a new resource page available on social network analysis and evaluation from M&E news. The page contains many useful resources and examples of network analysis and evaluation for non-profit organisations, education, events, and research and development, including one from me.
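As a small illustration of the kind of measurement involved, here is a minimal sketch in plain Python of degree centrality for a conference contact network: who met whom, and who is most connected. The names and meeting records are invented for the example.

```python
from collections import defaultdict

# Hypothetical "who met whom" records gathered at a conference
meetings = [
    ("Ana", "Ben"), ("Ana", "Carla"), ("Ben", "Carla"),
    ("Carla", "David"), ("David", "Ana"),
]

# Build an undirected adjacency list: each meeting links both people
contacts = defaultdict(set)
for a, b in meetings:
    contacts[a].add(b)
    contacts[b].add(a)

# Degree centrality: the share of other participants each person has met
n = len(contacts)
centrality = {person: len(met) / (n - 1) for person, met in contacts.items()}

for person, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.2f}")
```

A score of 1.0 means the person met everyone else; as an evaluation indicator, the distribution of these scores before and after a networking event is one way to show whether contacts were actually established.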

(Above image is from a network analysis of a conference, further information is available here>> )


June 24, 2008 at 2:27 pm

Cultural issues in evaluation

Having spent the last week in the Congo, mostly in Kisangani (pictured above), for an evaluation project, I’ve been thinking about cultural issues and evaluation, in particular how evaluators are perceived in different societies, as I’ve written about before.

Interestingly, when I was recently in Central Asia, it was explained to me that evaluation in the Soviet tradition was seen as an inspection-like function that would search for small mistakes for which people could then be punished (demotion or worse).

In Africa, the perception is quite different. People see you as coming to listen, investigate and relay what you have found. Those working with NGOs are now familiar with evaluation.

Of course, cultural issues and how you are perceived can affect your evaluation. I don’t believe there are any quick learning points, except to understand as much as you can about the culture you are working in, and to test your evaluation methodology and questions by discussing them with local people before gathering any data.

This article (pdf) makes some interesting points on evaluating across cultures, for example explaining the local relevance and usefulness of the evaluation, and being careful with questionnaire types (such as the Likert scale) that may be misunderstood in some contexts.


June 7, 2008 at 3:25 pm

Perceptions of evaluation

I’ve just spent a week in Armenia and Georgia (pictured above) for an evaluation project where I interviewed people from a cross-section of society. These are both fascinating countries, if you ever get the chance to visit… During my work there, I was wondering: what do people think about evaluators? For this type of on-site evaluation, we show up, ask some questions and leave, and they may never see us again.

From this experience and others I’ve tried to interpret how people see evaluators – and I believe people see us in multiple ways including:

The auditor: you are here to check and control how things are running. Your findings will mean drastic changes for the organisation. Many people see us in this light.

The fixer: you are here to listen to the problems and come up with solutions. You will be instrumental in changing the organisation.

The messenger: you are simply channelling what you hear back to your commissioning organisation. But this is an effective way to pass a message or an opinion to the organisation via a third party.

The researcher: you are interested in knowing what works and what doesn’t. You are looking at what causes what. This is for the greater science and not for anyone in particular.

The tourist: you are simply visiting on a “meet and greet” tour. People don’t really understand why you are visiting and talking to them.

The teacher: you are here to tell people how to do things better. You listen and then tell them how they can improve.

We may have a clear idea of what we are trying to do as evaluators (e.g. to assess results of programmes and see how they can be improved), but we also have to be aware that people will see us in many different ways and from varied perspectives – which just makes the work more interesting….


April 21, 2008 at 8:46 pm

Measurement and NGOs – contradicting voices

For those working in the NGO field, measurement and evaluation raises different demands that are often in contradiction:

– Donors, who provide funding for programmes, increasingly ask NGOs to focus on evaluating the impact of their programmes: the long-term results;

– At the same time, many donors require annual feedback from NGOs on the progress of their programmes, which often focuses on outputs: how much was spent and on what;

– NGOs often want to focus on measuring outcomes, what has been achieved as a result of programmes, since these say more about what has actually changed than outputs do, yet can be measured in a shorter time frame than impact (as I’ve written about before);

– If NGOs want to report on outputs, outcomes and impact alike, administrative overheads for programmes increase, something donors are never happy about.

These issues, the potential contradictions and possible solutions are discussed further in the article “Measure what you treasure” (pdf) from the InterAction Monday Developments journal.


March 3, 2008 at 4:31 pm

Checklists and evaluation

Often in evaluation, we are asked to evaluate projects and programmes from several different perspectives: that of the end user, the implementer, or an external specialist or “expert”. I always favour the idea that evaluation represents the *target audience’s* point of view; as is often the case in evaluating training or communications programmes, we are trying to explain the effects of a given programme or project on target audiences. However, a complementary point of view from an “expert” can often be useful. A simple example: imagine you are making an assessment of a company website; a useful comparison would be to set the feedback from site visitors against that of an “expert” who examines the website and gives his or her opinion.

However, the opinions of “experts” are often mixed in with feedback from audiences and come across as unstructured opinions and impressions. One way of avoiding this is for “experts” to use checklists: a structured way to assess the overall merit, worth or importance of something.

Now many would consider checklists as being a simple tool not worthy of discussion. But actually a checklist is often a representation of a huge body of knowledge or experience: e.g. how do you determine and describe the key criteria for a successful website?

Most checklists used in evaluation are criteria-of-merit checklists, where a series of criteria are established, rated on a standard scale (e.g. very poor to excellent), and weighted equally or not (i.e. one criterion may be judged more crucial than another). Here are several examples of where checklists could be useful in evaluation:

  • Evaluating an event: you determine “success criteria” for the event and have several experts use a checklist and then compare results.
  • Project implementation: a team of evaluators are interviewing staff/partners on how a project is being implemented. The evaluators use a checklist to assess the progress themselves.
  • Evaluating services/products: commonly used, where a checklist is used by a selection panel to determine the most appropriate product/services for their needs.
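To make the criteria-of-merit idea concrete, here is a minimal Python sketch of a weighted checklist score for the website-assessment example above. The criteria, weights and 1–5 ratings (very poor to excellent) are all invented for illustration, not taken from any published checklist.

```python
def checklist_score(ratings, weights):
    """Weighted average of 1-5 ratings, returned on the same 1-5 scale."""
    total_weight = sum(weights.values())
    return sum(ratings[c] * w for c, w in weights.items()) / total_weight

# Hypothetical criteria for assessing a company website, with weights
weights = {
    "relevance of content":  3,  # judged more crucial than the others
    "clarity of navigation": 2,
    "visual design":         1,
}

# One expert's ratings on a 1 (very poor) to 5 (excellent) scale
expert_ratings = {
    "relevance of content":  4,
    "clarity of navigation": 3,
    "visual design":         5,
}

print(f"overall score: {checklist_score(expert_ratings, weights):.2f}")
```

Giving “relevance of content” a weight of 3 makes it three times as influential as “visual design” in the overall score, which mirrors the point that one criterion may matter more than another; scores from several experts using the same checklist can then be compared directly.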

This post by Rick Davies actually got me thinking about this subject and discusses the use of checklists in assessing the functioning of health centres.


November 6, 2007 at 10:04 am
