
Sharpening the focus on measurement

It is often difficult to move organisations away from simply measuring “outputs” – what is produced – towards measuring “outcomes” – the effects of those outputs.

Funnily enough, many organisations want to jump from the very superficial measurement of outputs (e.g. how many news articles did we generate) to the very in-depth measurement of impact (e.g. the long-term effect of our media visibility on audiences). Measuring impact is feasible but difficult, as I’ve written about before. Rather than focusing on the two ends of the measurement scale, organisations would perhaps be wise to focus on “outcome” measurement.

I think this quote from a UN Development Programme Evaluation Manual (pdf) sums up why outcomes are an appropriate level of measurement for most organisations:

“Today, the focus of UNDP evaluations is on outcomes, because this level of results reveals more about how effective UNDP’s actions are in achieving real development changes. A focus on outcomes also promises a shorter timeframe and more credible linkages between UNDP action and an eventual effect than does a focus on the level of overall improvement in people’s lives, which represent much longer-term and diffuse impacts.”

The notion of a shorter timeframe and more credible linkages is certainly appealing for many organisations considering where to focus their evaluation efforts.

Glenn

October 16, 2007 at 1:53 pm

Output or outcome?

I did appreciate the following quote from Alberto Gonzales, the US Attorney General, who, when defending the work of his department, said:

“Good, if you look at the output”

Regardless of what you think of Mr Gonzales and his department’s performance, I find the use of the word “output” interesting – it has sneaked in from management-by-objectives speak. But output is usually a poor measure of performance, as it represents only the products or services produced. It is just like:

  • A press officer judges her performance by the number of press releases she writes.
  • A training officer judges his performance by the number of people who attend his training sessions.

Far more important are outcomes – the effects and changes that result from the outputs:

  • A press officer should judge her performance by how her press activities change the knowledge and attitudes of her audiences.
  • A training officer should judge his performance by how the people he trains use what they have learnt.

Like Mr Gonzales, most people prefer to look at outputs to judge performance, as outputs are much easier to control and monitor than outcomes – something I’ve written about previously. But increasingly, activities are assessed on what they achieve (outcomes) rather than what they produce (outputs).

Glenn

August 7, 2007 at 8:23 pm

Online polling – legitimate or not?

Here is something from the mainstream media: the International Herald Tribune has published an interesting article on the legitimacy (or not) of online polling – basically, the use of online survey tools to conduct polling with broad audiences.

The article pits the two main players in the field against each other: YouGov and TNS.

The representative from TNS says about internet polling:

“Internet polling is like the Far West, with no rules, no sheriff and no reference points.”

Of course, he has a point, although the counter-argument is that *offline* polling – usually done by calling people on their fixed phone lines – is fast becoming obsolete as countries create “no call” lists and people increasingly use their mobile phones.

From my perspective, I would see the debate from a slightly different angle:

  • We shouldn’t forget the approach of triangulation – that is, the combination of different research methods to bring us closer to the *truth*. Internet polling, if combined with other research methods (interviews, focus groups, observations), becomes more useful.
  • The whole article focuses only on surveying *unknown* audiences – that is, members of the public as a whole. However, most evaluation that I see undertaken is done with *known* audiences, e.g. staff, partners, customers, members, etc. For *known* audiences, the use of internet polling is efficient and reliable (see the sketch below), assuming, of course, that your audience does have access to the Internet. If you are trying to gauge the opinions of audiences that are obviously not using the Internet, then another approach would be appropriate.
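
If you want to put a number on “reliable” for a known audience, a quick calculation helps. Here is a minimal Python sketch – the staff and response figures are invented for illustration – that computes the response rate and an approximate 95% margin of error for a survey proportion, applying a finite population correction since the size of a known audience is, well, known.

    import math

    def margin_of_error(responses: int, population: int,
                        proportion: float = 0.5, z: float = 1.96) -> float:
        """Approximate 95% margin of error for a proportion, with a
        finite population correction (FPC) for a known audience."""
        se = math.sqrt(proportion * (1 - proportion) / responses)
        fpc = math.sqrt((population - responses) / (population - 1))
        return z * se * fpc

    # Hypothetical figures for an internet poll of a known staff audience.
    staff = 400        # known audience size
    responses = 220    # completed questionnaires

    print(f"Response rate: {responses / staff:.0%}")                        # 55%
    print(f"Margin of error: +/- {margin_of_error(responses, staff):.1%}")  # ~4.4%

No margin-of-error arithmetic, of course, compensates for a sample that systematically excludes people without Internet access.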

Glenn

May 31, 2007 at 9:51 am

Ambitious but Achievable

I was recently at a presentation where the speaker commented on the SMART approach to writing objectives and said that the “A” stood for “Ambitious”. The person next to me nudged me and whispered, “I think he means ‘Achievable’”. And that got me thinking that the A should stand for “Ambitious but Achievable”.

For it to be possible to evaluate a programme or project, it is necessary to set SMART objectives that allow the identification of indicators that facilitate evaluation. What do SMART objectives look like? I’ve created several examples (pdf) if you are interested in my interpretation of SMART.
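
As a purely hypothetical illustration (my own invention, not taken from the pdf above), a SMART objective can be treated as structured data, which makes the link between the objective and its evaluation indicator explicit:

    from dataclasses import dataclass

    @dataclass
    class SmartObjective:
        """Hypothetical structure linking a SMART objective to the
        indicator that will be used to evaluate it."""
        specific: str    # what will change, and for whom
        measurable: str  # the indicator that facilitates evaluation
        achievable: str  # why the target is realistic (yet ambitious)
        relevant: str    # the link to the overall programme aim
        time_bound: str  # the deadline

    example = SmartObjective(
        specific="Raise staff awareness of the new intranet",
        measurable="% of staff who have used the intranet (annual staff survey)",
        achievable="Baseline 40%; target 60%, given the planned launch campaign",
        relevant="Supports the programme aim of improving internal communications",
        time_bound="By 31 December 2007",
    )

Writing the “M” down as an explicit indicator is what makes the objective evaluable.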

But there is a potential conflict within “Ambitious but Achievable”: we need to set realistic objectives that can be evaluated, while at the same time making our objectives ambitious enough to contribute significantly to the overall aims of a project or programme.

Glenn

May 15, 2007 at 8:43 pm

Accountability and Outcomes – the Penny Drops…

Over the past months I’ve had many discussions about evaluation with project managers who clearly feel uncomfortable linking their objectives to outcomes – and now, after reading a recent article in the journal Evaluation, the penny has dropped (i.e. I can connect the dots and understand a little more…).

In “Challenges and Lessons in Implementing Results-Based Management”, John Mayne discusses the issue of accountability and outcomes:

“People are generally comfortable with being accountable for things they can control. Thus, managers can see themselves as being accountable for the outputs produced by the activities they control. When the focus turns to outcomes, they are considerably less comfortable, since the outcomes to be achieved are affected by many factors not under the control of the manager.”

And that’s it – a communications manager prefers to be accountable for the number of press releases s/he publishes rather than for the changes in an audience’s knowledge or attitudes; a training manager prefers to be accountable for the number of courses s/he organises rather than for the impact on an organisation’s efficiency; and so on.

So is there a solution? John Mayne proposes a more sophisticated approach to accountability, namely looking at the extent to which a programme has influenced and contributed to the outcomes observed. And that leads to the next question – how much influence is good enough?

Food for thought… You can read an earlier version of the article here (pdf).

Glenn

May 1, 2007 at 8:30 pm

Macro or Micro Approach to Evaluation?

When I’m asked to take on an evaluation project, I usually categorise it in my own mind as “micro” or “macro”. Let me explain. I see evaluation projects falling into these two categories:

Macro: evaluation of an overall project or programme (e.g. training programme, communications project)

Micro: evaluation of an element that is part of a larger programme (e.g. evaluation of online communications as part of a larger communications project, or evaluation of an event that is part of a larger campaign).

I find that a lot of the evaluation projects I get involved with are at a “micro” level. And I’ve been wondering why this is so.

I believe that evaluation is more often approached at a “micro” level because it is easier and less daunting for an organisation to cope with. Many people do not have the resources, time or political authority to launch a “macro” evaluation of projects and programmes.

Much of the literature on evaluation also recommends implementing evaluation in small steps, and there is certainly some merit in this: start at the “micro” level and build up to the “macro” level. That way, people can see the benefits of evaluation and will support larger evaluation efforts when needed.

Glenn

March 29, 2007 at 8:47 pm

Using employee opinion surveys to guide HR policies

In a previous post Glenn wrote that evaluation is frustrating when no changes result.

Quite so. I have seen many an employee survey gather dust or be quietly forgotten.

Why should employees bother? Often the survey avoids touchy issues – the “elephant in the room”. There is often a huge delay in making the survey results public. And nothing changes anyway.

Well, here’s a little case history to cast a ray of hope.

A Benchpoint client in the City of London’s financial sector (often characterised by macho management cultures) wanted to run an anonymous poll of employee attitudes on just about everything relating to the job and the relationship with the employer.

The company had very enlightened HR policies by traditional City standards: a “listening culture”, personal development goals, work/life balance, continuous personal feedback, etc. But were the policies working? What needed to change?

More than “just a survey”, the Benchpoint poll enabled important decisions to be taken quickly with the benefit of employee input. The company announced operational changes just five days after the survey closed, immediately after presenting the results to all employees.

Employee response and feedback were positive; just one employee was unable to take part. For the first time, employees had an opportunity to voice their true opinions anonymously and confidentially. And they were impressed by the speed of their management’s reaction.

For management, this was a relatively low-cost exercise, and one particularly economical with senior people’s time. It was also an excellent way of “walking the talk” and demonstrating that HR policies regarding inclusivity, development and teamwork were real.

The data collected provided an insight into the behaviour drivers of groups of employees, and a dynamic tool for future goal setting and benchmarking.

Conclusions

  • Don’t do a survey unless you are prepared to communicate the results and make changes.
  • Don’t make organisational or operational changes unless you have surveyed and understood the real issues.
  • Do it quickly, share the results quickly and make your announcements quickly.
  • Don’t assume that everything is OK in your organisation. Measure intelligently to find out what the real issues are.

Richard

February 8, 2007 at 1:26 pm

Client Perspective of Evaluation

The Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP) has recently published the results (pdf) of a survey of evaluation managers from humanitarian organisations.

ALNAP received responses from 46 evaluation managers who commission and manage evaluation projects for humanitarian organisations. This provides an interesting insight into the “client” perspective of evaluation. Some highlights:

What helps promote an evaluation?
Ownership or “buy-in” of an evaluation was the most frequently cited promoting factor. The quality of the evaluation report and recommendations was also important.

What inhibits the use of an evaluation?
The most frequently mentioned factor was the imposition of an evaluation by HQ and/or a donor. An arrogant, fault-finding or ivory-tower approach by evaluators, and insufficient time for the evaluation (leading to superficial results), were also important factors.

What other factors induce changes in an organisation?
A very interesting question – what factors, apart from evaluation, do respondents believe drive change? They mentioned two important influences: the media and donors – and, to a lesser extent, the influence of peers (exchanges and discussions between agencies).

Why do organisations evaluate?
  • Formal accountability (obligation to donors, trustees, etc.)
  • Improving the programme
  • Learning for the organisation
  • Legitimising (to add credence to, or challenge, an existing agenda)

How can the use of evaluation be increased?
Most respondents mentioned changing the attitudes of senior managers and the culture of learning within their organisations. Respondents spoke of a culture of defensiveness rather than one of learning and reflection.

Some very interesting results. They also confirm what I have seen in the humanitarian field: communications professionals are slowly coming around to recognising that evaluation is necessary and important – but this is being prompted by pressure from donors and from the monitoring and evaluation units sprouting up in their organisations.

Glenn

September 6, 2006 at 2:17 pm

Measuring Online Behavior

A lot has already been written about how we can measure online behaviour by looking at indicators from website statistics, or “web metrics”. As part of PR measurement, web metrics can provide an interesting complement to other measures being taken. For example, in campaigning, online behaviours such as signing a petition, referring a page to a friend or uploading a message of support can be measures of behaviour change and supplement “offline” measures. In advertising, the use of web metrics is making advertising more “measurable” and changing the business model in general. This article in The Economist sums up the change well.
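
To make the campaigning example concrete, here is a minimal Python sketch of tallying such online actions as behaviour-change indicators. The log format and event names are invented for illustration – real web metrics tools expose this data differently – but the principle of counting behavioural events is the same.

    from collections import Counter
    import csv

    # Hypothetical online actions treated as behaviour-change indicators.
    CAMPAIGN_ACTIONS = {"sign_petition", "refer_friend", "upload_support_message"}

    def tally_actions(log_path: str) -> Counter:
        """Count campaign actions in a CSV event log with columns:
        timestamp, visitor_id, event."""
        counts = Counter()
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                if row["event"] in CAMPAIGN_ACTIONS:
                    counts[row["event"]] += 1
        return counts

    if __name__ == "__main__":
        for action, n in tally_actions("events.csv").most_common():
            print(f"{action}: {n}")

Counts like these are most useful when read alongside the “offline” measures mentioned above.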

This explanation from a Google representative, quoted in the article, caught my interest:

Old way of “offline” advertising:
“Advertisers are always trying to block the stream of information to the user in order to blast their message to him.”

New way of “online” advertising:
“On the internet, by contrast, advertisers have no choice but to go with the user; the information coming back from the users is more important than the messages going out.”

The interactive nature of the Internet makes this possible: it makes the medium more measurable, and a two-way symmetrical approach to communications feasible.

Glenn

August 3, 2006 at 8:00 pm 2 comments

Training and innovative evaluation

If you are based in Europe and interested in learning more about evaluation, the European Evaluation Society is organising its biennial conference in London from 4-6 October 2006. Even more interesting is the series of Professional Development Training Workshops being organised around the event. One workshop looks particularly interesting – “innovative evaluation approaches” – covering the following subjects:

  • Theory based approaches
  • Participative evaluation and ‘social inclusion’
  • Integrating qualitative and quantitative methods
  • Monitoring and Evaluation in international development settings
  • Synthesis reviews and meta-evaluations
  • Experimental methods and RCTs
  • Modelling and risk analysis
  • Self evaluations
  • Cost-benefit analysis

Glenn

May 4, 2006 at 7:40 pm
