Posts filed under ‘General’

From broad goals to specific indicators

No doubt you have heard of the Millennium Development Goals (MDGs): eight broad goals on poverty, ill-health, etc., that all countries have agreed to try to reach by 2015.

From a monitoring and evaluation point of view, what is interesting is that these goals are broad, sweeping statements, such as:

Goal 1: Eradicate Extreme Hunger and Poverty

Goal 3: Promote Gender Equality and Empower Women

One could ask: how can such broad goals possibly be monitored and evaluated?

As detailed on this MDGs monitoring website, what has been done is to set specific indicators for each goal, for example:

Goal 3: Promote Gender Equality and Empower Women

Description: Eliminate gender disparity in primary and secondary education, preferably by 2005, and in all levels of education no later than 2015

Indicators:
3.1 Ratios of girls to boys in primary, secondary and tertiary education
3.2 Share of women in wage employment in the non-agricultural sector
3.3 Proportion of seats held by women in national parliament

So from broad goals, the MDGs move to two to seven specific indicators per goal that are monitored. That’s an interesting approach, as we often see organisations set broad goals and then make no attempt to detail any indicators at all.
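As a minimal sketch (using made-up enrolment figures, not official MDG data), indicator 3.1 above is simply a ratio computed from enrolment counts at each level of education:

```python
# Hypothetical enrolment counts (illustrative only, not official MDG data)
enrolment = {
    "primary":   {"girls": 480_000, "boys": 500_000},
    "secondary": {"girls": 210_000, "boys": 280_000},
    "tertiary":  {"girls":  90_000, "boys": 150_000},
}

# Indicator 3.1: ratio of girls to boys at each level of education
def gender_parity(counts):
    return {level: round(c["girls"] / c["boys"], 2) for level, c in counts.items()}

print(gender_parity(enrolment))
# e.g. {'primary': 0.96, 'secondary': 0.75, 'tertiary': 0.6}
```

A ratio of 1.0 would indicate parity; the point is that a broad goal ("promote gender equality") becomes something you can actually track year on year.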

The MDGs monitoring website plays an active role in monitoring these indicators, combining quantitative data (statistics) and qualitative data (case studies) – also an interesting demonstration of how such indicators can be tracked.

Glenn

July 30, 2008 at 6:16 am Leave a comment

Social network analysis and evaluation

Measuring networks can have many applications: how influence works, how change happens within a community, how people meet, etc. I’m interested in measuring networks as an indicator of how contacts are established amongst people, particularly at events and conferences, as I’ve written about previously.

In this area, there is a new resource page available on social network analysis and evaluation from M&E news. The page contains many useful resources and examples of network analysis and evaluation for non-profit organisations, education, events and research and development – including one from myself.
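As an illustrative sketch (the names and contacts are invented, and a real analysis would use a library such as networkx), degree centrality is one simple network measure: the share of other participants each person met at a conference:

```python
from collections import defaultdict

# Hypothetical "who met whom" contacts gathered at a conference
contacts = [
    ("Ana", "Ben"), ("Ana", "Carla"), ("Ana", "David"),
    ("Ben", "Carla"), ("Emma", "David"),
]

# Build an undirected adjacency list
network = defaultdict(set)
for a, b in contacts:
    network[a].add(b)
    network[b].add(a)

# Degree centrality: fraction of the other participants each person met
n = len(network)
centrality = {person: len(links) / (n - 1) for person, links in network.items()}

for person, score in sorted(centrality.items(), key=lambda x: -x[1]):
    print(f"{person}: {score:.2f}")
```

Even this crude measure lets an evaluator say something concrete about whether an event actually created new contacts, rather than relying on impressions.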

(The above image is from a network analysis of a conference; further information is available here>>)

Glenn

June 24, 2008 at 2:27 pm Leave a comment

Cultural issues in evaluation

Having spent the last week in the Congo – mostly in Kisangani (pictured above) – for an evaluation project, I’ve been thinking about cultural issues and evaluation – in particular, how evaluators are perceived in different societies, as I’ve written about before.

Interestingly, when I was recently in Central Asia, it was explained to me that evaluation in the Soviet tradition was seen as an inspection-like function that searched for small mistakes for which people could then be punished (demotion or worse…).

In Africa, the perception is quite different. People see you as coming to listen, investigate and relay what you have found. Those working with NGOs are now familiar with evaluation.

Of course, cultural issues and how you are perceived can affect your evaluation. I don’t believe there are any quick learning points, except to understand as much as you can about the culture you are working in – and to test your evaluation methodology and questions by discussing them with local people prior to any gathering of data.

This article (pdf) has some interesting points on evaluating across cultures – for example, explaining the local relevance and usefulness of the evaluation, and being careful with question formats (such as the Likert scale) that may be misunderstood in some contexts.

Glenn

June 7, 2008 at 3:25 pm 3 comments

Network mapping tool


As regular readers will know, I am interested in network mapping and have undertaken some projects where I have used it to assess networks that have emerged as a result of conferences.

Here is quite an interesting tool, Net-Map, an interview-based mapping tool. The creators of this tool state that it is a “tool that helps people understand, visualize, discuss, and improve situations in which many different actors influence outcomes”.

Read further about the tool and view many of the illustrative images here>>

Glenn

May 20, 2008 at 12:57 pm Leave a comment

Tonsils, run-over dogs and comparisons

In evaluation, we often make judgements based on “feelings” or “gut reaction” without any proper inquiry or comparison with other data. That is why this story about Ludwig Wittgenstein, the Austrian philosopher, appealed to me. Apparently he telephoned a friend in hospital, Fania Pascal, who told the following story:

“I had my tonsils out and was in the Evelyn Nursing Home feeling sorry for myself. Wittgenstein called. I croaked: ‘I feel just like a dog that has been run over.’ He was disgusted: ‘You don’t know what a dog that has been run over feels like.’”

The point being that Fania Pascal (in the hospital) is making a comparison that she cannot possibly support – how could she know what it feels like to be a dog that has been run over?

In the same way, you often hear people saying “our results are terrible” – or “we are doing too much of XY”. But my first reaction is “How do you judge that – what are you comparing it to?” – often no real inquiry or comparative data are used (which reminds me of another quote from Groucho Marx).

For those interested, the above quote comes from the book “On Bullshit” by Harry G. Frankfurt – well worth a read.

Glenn

April 28, 2008 at 8:05 pm 1 comment

Perceptions of evaluation

I’ve just spent a week in Armenia and Georgia (pictured above) for an evaluation project where I interviewed people from a cross-section of society. These are both fascinating countries, if you ever get the chance to visit… During my work there, I was wondering: what do people think about evaluators? For this type of on-site evaluation, we show up, ask some questions – and leave – and they may never see us again.

From this experience and others I’ve tried to interpret how people see evaluators – and I believe people see us in multiple ways including:

The auditor: you are here to check and control how things are running. Your findings will mean drastic changes for the organisation. Many people see us in this light.

The fixer: you are here to listen to the problems and come up with solutions. You will be instrumental in changing the organisation.

The messenger: you are simply channelling what you hear back to your commissioning organisation. But this is an effective way to pass a message or an opinion to the organisation via a third party.

The researcher: you are interested in knowing what works and what doesn’t. You are looking at what causes what. This is for the greater science and not for anyone in particular.

The tourist: you are simply visiting on a “meet and greet” tour. People don’t really understand why you are visiting and talking to them.

The teacher: you are here to tell people how to do things better. You listen and then tell them how they can improve.

We may have a clear idea of what we are trying to do as evaluators (e.g. to assess results of programmes and see how they can be improved), but we also have to be aware that people will see us in many different ways and from varied perspectives – which just makes the work more interesting….

Glenn

April 21, 2008 at 8:46 pm 1 comment

Fact sheets & “fun” sheets on evaluation

I’ve put together a series of fact sheets on evaluation and related subjects – mostly inspired by posts I’ve made on this blog. Plus I’ve created two “fun” sheets – on favourite quotes – and excuses for not evaluating:

Fact sheets on evaluation:
Evaluating communication campaigns (pdf)>>
Evaluating networks (pdf)>>
Ten tips for better web surveys (pdf)>>

“Fun” sheets:
Top ten excuses for not evaluating (pdf)>>
Top ten quotes on evaluation (pdf)>>

Glenn

P.S. Those with a sharp eye will notice that these fact/fun sheets are from my new company Owl RE, which offers research and evaluation services in the communications, training/events and development fields.

March 24, 2008 at 9:07 pm Leave a comment

Found versus manufactured data

In evaluation projects, we often feel the strong need to talk to people – to assess a situation or judge a phenomenon by surveying or interviewing people. However, this is “manufacturing” data – we are framing questions and then putting them to people – and perhaps in doing so influencing how they respond.

Alternatively, there is a lot to say for “found” or “natural” data – information that already exists – e.g. media reports, blog posts, conference papers, etc. We often forget about this type of data in our rush to speak to people.

Take this example. I recently saw a paper presenting “current challenges in the PR/communications field”. After surveying PR/comm. professionals, a list of five current challenges was presented by the authors. This is “manufactured” data. An approach using “found” data would be to examine recent PR/comm. conference papers and see what challenges are spoken about – or study the websites of PR/comm. agencies and see what they are presenting as the main challenges.

Another example. Imagine you would like to study the experiences of US troops in Iraq. Of course you could survey and interview military personnel. However, a rich body of data certainly exists online in blog posts, videos and photos from military personnel describing their experiences.

Of course, there are limitations to using “found” data (such as it may present only the views of a select part of a population or phenomenon) – but an evaluation project combining both “manufactured” and “found” data would certainly make its findings more solid.

Examples of “found” data:

  • blog posts
  • discussion forums
  • websites
  • website statistics
  • photo/video archives (online or offline)
  • media reporting
  • conference papers
  • policy documents
  • records (attendance, participation, complaints, sales, calls, etc.)
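As a minimal sketch (the abstract snippets and keyword list are invented, not from any real study), the conference-paper approach above amounts to counting how often candidate challenges appear in “found” texts:

```python
from collections import Counter

# Hypothetical snippets from conference abstracts ("found" data)
abstracts = [
    "Measuring social media impact remains a key challenge for PR.",
    "Budget pressure and measurement dominate current agency concerns.",
    "Social media and reputation management top the agenda this year.",
]

# Candidate challenge keywords (illustrative only)
keywords = ["social media", "measurement", "budget", "reputation"]

# Count keyword occurrences across the found documents
counts = Counter()
for text in abstracts:
    lower = text.lower()
    for kw in keywords:
        counts[kw] += lower.count(kw)

print(counts.most_common())
```

No one was asked a framed question here; the challenges emerge from what people were already writing about – which is exactly the appeal of “found” data.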

If you are interested in reading further on this subject, the book “A Very Short, Fairly Interesting and Reasonably Cheap Book about Qualitative Research” by David Silverman provides more examples and information on this concept.

Glenn

March 20, 2008 at 8:31 am Leave a comment

More favourite quotes on evaluation and measurement

To add to my previous favourite quotes on evaluation and measurement, I have collected the following quotes – enjoy!:

“Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted”
Albert Einstein

“The most serious mistakes are not being made as a result of wrong answers. The truly dangerous thing is asking the wrong question”
Peter Drucker

“One of the great mistakes is to judge policies and programs by their intentions rather than their results”
Milton Friedman

“The pure and simple truth is rarely pure and never simple”
Oscar Wilde

“First get your facts; then you can distort them at your leisure”
Mark Twain

“I know that half of my advertising dollars are wasted … I just don’t know which half”
John Wanamaker

Glenn

December 23, 2007 at 8:58 pm 1 comment

The ultimate user test?

The new Terminal Five at Heathrow Airport, London – which had quite a controversial birth – is going to undertake an unusual exercise in “testing” its facilities before the public launch. As I’ve written about before, evaluation prior to the launch of a project or activity is often overlooked – and this is an extreme example of the principle in action.

Terminal Five is seeking 15,000 volunteers to act as test users of the new facilities. Volunteers will act as “real” passengers and go through every step from checking in to stepping onto the aircraft. I find it fascinating that they will “test” their facilities in such a large-scale manner. “Field testing” of new products and services is always recommended – but this goes quite far. Of course, the question arises: what happens if the user testing brings up major issues with the terminal? Well, Terminal Five lists the following as their aim:

“What are we trying to achieve?: Proof that Terminal 5 is safe, secure and works like clockwork. We’ll also ensure the team who will be running the terminal get the chance to test and develop their service. We also need to identify anything we need to fix prior to opening”

So let’s see what such a trial will bring – how big will the “fixes” be, or will it all work like clockwork? Regardless, I am sure they will receive interesting feedback and discover, as I have in usability testing, that people view and use products or facilities in ways totally unanticipated. Do you want to volunteer? Read more here>>

Glenn

December 4, 2007 at 7:09 am Leave a comment
