
Cultural issues in evaluation

Having spent the last week in the Congo, mostly in Kisangani (pictured above), for an evaluation project, I've been thinking about cultural issues and evaluation – in particular, how evaluators are perceived in different societies, as I've written about before.

Interestingly, when I was recently in Central Asia, it was explained to me that evaluation in the Soviet tradition was seen as an inspection-like function: a search for small mistakes for which people could then be punished (demotion or worse).

In Africa, the perception is quite different. People see you as coming to listen, investigate and relay what you have found. Those working with NGOs are now familiar with evaluation.

Of course, cultural issues and how you are perceived can affect your evaluation. I don't believe there are any quick learning points, except to understand as much as you can about the culture you are working in – and to test your evaluation methodology and questions by discussing them with local people prior to any gathering of data.

This article (pdf) has some interesting points on evaluating across cultures: for example, explaining the local relevance and usefulness of the evaluation, and being careful with questionnaire formats (such as the Likert scale) which may be misunderstood in some contexts.


June 7, 2008 at 3:25 pm

Perceptions of evaluation

I’ve just spent a week in Armenia and Georgia (pictured above) for an evaluation project where I interviewed people from a cross section of society. These are both fascinating countries, if you ever get the chance to visit… During my work there, I was wondering: what do people think about evaluators? For this type of on-site evaluation, we show up, ask some questions and leave – and they may never see us again.

From this experience and others I’ve tried to interpret how people see evaluators – and I believe people see us in multiple ways including:

The auditor: you are here to check and control how things are running. Your findings will mean drastic changes for the organisation. Many people see us in this light.

The fixer: you are here to listen to the problems and come up with solutions. You will be instrumental in changing the organisation.

The messenger: you are simply channelling what you hear back to your commissioning organisation. But this is an effective way to pass a message or an opinion to the organisation via a third party.

The researcher: you are interested in knowing what works and what doesn’t. You are looking at what causes what. This is for the greater science and not for anyone in particular.

The tourist: you are simply visiting on a “meet and greet” tour. People don’t really understand why you are visiting and talking to them.

The teacher: you are here to tell people how to do things better. You listen and then tell them how they can improve.

We may have a clear idea of what we are trying to do as evaluators (e.g. to assess results of programmes and see how they can be improved), but we also have to be aware that people will see us in many different ways and from varied perspectives – which just makes the work more interesting….


April 21, 2008 at 8:46 pm

Measurement and NGOs – contradicting voices

For those working in the NGO field, measurement and evaluation raise different, often contradictory, demands:

– Donors, who provide funding for programmes, increasingly ask NGOs to focus on evaluating the impact of their programmes – the long-term results;

– At the same time, many donors require annual feedback from NGOs on the progress of their programmes, which often focuses on outputs – how much was spent and on what;

– NGOs often prefer to measure outcomes – what has been achieved as a result of programmes – as these say more about what has actually changed than outputs do, yet can be measured in a shorter time frame than impact (as I’ve written about before);

– Reporting on outputs, outcomes and impact all at once means an increase in administrative overheads for programmes – something donors are never happy about.

These issues, the potential contradictions and possible solutions are discussed further in the article “Measure what you treasure” (pdf) from the InterAction Monday Developments journal.


March 3, 2008 at 4:31 pm

Checklists and evaluation

Often in evaluation, we are asked to evaluate projects and programmes from several different perspectives: that of the end user, the implementer, or an external specialist or “expert”. I always favour the idea that evaluation represents the *target audience's* point of view – as is often the case in evaluating training or communications programmes, we are trying to explain the effects of a given programme or project on target audiences. However, a complementary point of view from an “expert” can often be useful. A simple example: imagine you are assessing a company website – a useful comparison would be setting the feedback from site visitors against that of an “expert” who examines the website and gives his/her opinion.

However, opinions of “experts” are often mixed in with feedback from audiences and come across as unstructured opinions and impressions. A way of avoiding this is for “experts” to use checklists – a structured way to assess the overall merit, worth or importance of something.

Now many would consider checklists a simple tool not worthy of discussion. But a checklist is often the distillation of a huge body of knowledge or experience: e.g. how do you determine and describe the key criteria for a successful website?

Most checklists used in evaluation are criteria-of-merit checklists, where a series of criteria are established, rated on a standard scale (e.g. very poor to excellent) and weighted equally or not (i.e. one criterion may count more than another). Here are several examples where checklists could be useful in evaluation:

  • Evaluating an event: you determine “success criteria” for the event and have several experts use a checklist and then compare results.
  • Project implementation: a team of evaluators are interviewing staff/partners on how a project is being implemented. The evaluators use a checklist to assess the progress themselves.
  • Evaluating services/products: commonly used, where a checklist is used by a selection panel to determine the most appropriate product/services for their needs.
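To make the weighting idea concrete, here is a minimal sketch of how a criteria-of-merit checklist score could be combined. The criteria, weights and ratings are purely illustrative (hypothetical website-assessment criteria, a 1–5 scale from “very poor” to “excellent”), not taken from any particular evaluation framework.

```python
# A rating scale from "very poor" to "excellent", mapped to 1-5.
RATING_SCALE = {"very poor": 1, "poor": 2, "fair": 3, "good": 4, "excellent": 5}

# Hypothetical criteria for assessing a website; weights need not be equal.
CRITERIA_WEIGHTS = {
    "clarity of content": 2.0,
    "ease of navigation": 1.5,
    "visual design": 1.0,
}

def weighted_score(ratings):
    """Combine one expert's checklist ratings into a weighted average (1-5)."""
    total_weight = sum(CRITERIA_WEIGHTS.values())
    weighted_sum = sum(
        CRITERIA_WEIGHTS[criterion] * RATING_SCALE[rating]
        for criterion, rating in ratings.items()
    )
    return weighted_sum / total_weight

# One expert's completed checklist.
expert_a = {
    "clarity of content": "good",
    "ease of navigation": "excellent",
    "visual design": "fair",
}
print(round(weighted_score(expert_a), 2))  # 4.11
```

With several experts, each fills in the same checklist and their weighted scores can then be compared – structured judgements rather than loose impressions.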

This post by Rick Davies actually got me thinking about this subject and discusses the use of checklists in assessing the functioning of health centres.


November 6, 2007 at 10:04 am

Sharpening the focus on measurement

It is often difficult to get organisations away from simply measuring “outputs” – what is produced – to measuring “outcomes” – what are the effects of outputs.

Funnily enough, many organisations want to jump from the very superficial measurement of outputs (e.g. how many news articles did we generate) straight to the very in-depth measurement of impact (e.g. the long-term effect of our media visibility on audiences). Impact is feasible but difficult to measure, as I’ve written about before. However, instead of focusing on the two ends of the measurement scale, organisations would perhaps be wise to focus on “outcome” measurement.

I think this quote from a UN Development Programme Evaluation Manual (pdf) sums up why outcome is an appropriate level to measure for most organisations:

“Today, the focus of UNDP evaluations is on outcomes, because this level of results reveals more about how effective UNDP’s actions are in achieving real development changes. A focus on outcomes also promises a shorter timeframe and more credible linkages between UNDP action and an eventual effect than does a focus on the level of overall improvement in people’s lives, which represent much longer-term and diffuse impacts.”

The notion of the shorter timeframe and more credible linkages is certainly appealing for many organisations considering their focus of evaluation.


October 16, 2007 at 1:53 pm

Assumptions, Evaluation and Development

At the international conference of the European Evaluation Society, I attended an interesting workshop on the Assumptions Based Comprehensive Development Evaluation Framework (ABCDEF), presented by Professor Osvaldo Feinstein, evaluation consultant. The main thrust of his workshop was to challenge us to consider fully the assumptions that are made in development projects – and, consequently, their impact on evaluation. He has created a guide to what we should consider when exploring assumptions, namely: Incentives, Capacities, Adoption, Risk, Uncertainty and Sustainability. Cleverly, this makes the acronym Icarus – who, as we all know, flew too close to the sun, melting the wax that held his wings together. And that is the underlying message of Professor Feinstein’s framework, as he put it:

“An unexamined assumption can be very dangerous!”

More information on ABCDEF can be found in the Sage Handbook of Evaluation.


October 9, 2006 at 8:56 pm

New Methodologies in Evaluation

The UK-based Overseas Development Institute has published a very comprehensive guide, “Tools for Knowledge and Learning: A guide for development and humanitarian organisations” (pdf), which contains descriptions of 30 knowledge and learning tools and techniques.

It contains guidelines for several relatively new methodologies useful for evaluation, notably Social Network Analysis, Most Significant Change and Outcome Mapping. I believe these new methodologies could be useful in many fields, not only in the development/humanitarian sector.


September 19, 2006 at 8:04 pm

Evaluation – going beyond your own focus

If your focus is on evaluating PR, training or another business competency, it is sometimes helpful to learn more about evaluation by looking beyond your particular field. Look at international aid. I just read a review of a new book, The White Man’s Burden – Why the West’s Efforts to Aid the Rest Have Done So Much Ill and So Little Good, by William Easterly. Based on the review, Easterly raises two interesting points about evaluation:

Planners vs. Searchers: he says most aid projects have either one of these approaches. He writes “A Planner thinks he already knows the answer. A Searcher admits he doesn’t know the answers in advance”. He explains that Searchers treat problem-solving as an incremental discovery process, relying on competition and feedback to figure out what works.

Measuring Success: Easterly argues that aid projects rarely get enough feedback, whether from competition or complaint. Instead of introducing outcome evaluation where results are relatively easy to measure (e.g. public health, school attendance), advocates of aid measure success by looking at how much money rich countries spend. He says this is like reviewing movies based on their budgets.

You can read the full review of Easterly’s book on the IHT website.

These are some excellent points that we can apply to evaluation across the board. Evaluators can probably classify many projects they have evaluated as being of either a Planner or a Searcher nature. The Searcher approach integrates feedback and evaluation throughout the whole process. Take PR activities: we may not know at the outset what communication works best with our target audience, but by integrating feedback and evaluation we can soon find out.

His analogy about reviewing movies also strikes a chord. How many projects are evaluated on expenditure alone?

Aside from the evaluation aspects, his thoughts on international aid are interesting. I spent my formative years as an aid worker in Africa, Eastern Europe and Asia and a lot of what he says confirms my own experiences and conclusions.


March 20, 2006 at 10:27 pm
