Posts filed under ‘Training evaluation’

The value of checklists and evaluation: 7 reasons


Further to what I wrote last week about checklists and their use in evaluation, I have found an excellent article on the logic and methodology of checklists.

Dr Michael Scriven of the Evaluation Center at Western Michigan University describes the different types of checklists and how good checklists are put together. In particular, I like his list of the seven values of checklists, which I summarise as follows:

  1. Reduces the chance of forgetting to check something important
  2. Makes the evaluation easier for lay stakeholders to understand and assess
  3. Reduces the “halo effect” – it forces an evaluator to look at all criteria rather than being won over by one highly valued feature
  4. Reduces the influence of the “Rorschach effect” – the tendency to see what one wants to see in a mass of data – since evaluators have to look at all dimensions
  5. Avoids criteria being counted twice or given too much importance
  6. Summarises a huge amount of professional knowledge and experience
  7. Assists in evaluating what we cannot explain

As Dr Scriven points out, checklists are very useful tools for getting us to think through the “performance criteria” of all kinds of processes, projects or occurrences, e.g. what are the key criteria that make a good trainer – and which criteria are more important than others?

Read the full article here >>

Glenn

November 13, 2007 at 7:52 am 3 comments

Checklists and evaluation

Often in evaluation, we are asked to evaluate projects and programmes from several different perspectives: that of the end user, the implementer, or an external specialist or “expert”. I always favour the idea that evaluation represents the *target audience’s* point of view – as is often the case in evaluating training or communications programmes, we are trying to explain the effects of a given programme or project on target audiences. However, a complementary point of view from an “expert” can often be useful. A simple example: imagine you are making an assessment of a company website – a useful comparison would be between the feedback from site visitors and that of an “expert” who examines the website and gives his/her opinion.

However, the opinions of “experts” are often mixed in with feedback from audiences and come across as unstructured opinions and impressions. A way of avoiding this is for “experts” to use checklists – a structured way to assess the overall merit, worth or importance of something.

Now, many would consider checklists too simple a tool to be worthy of discussion. But a checklist is actually often the representation of a huge body of knowledge or experience: e.g. how do you determine and describe the key criteria for a successful website?

Most checklists used in evaluation are criteria-of-merit checklists – a series of criteria is established, each rated on a standard scale (e.g. very poor to excellent), with the criteria weighted equally or not (i.e. one criterion may be as crucial as, or more crucial than, the next). Here are several examples where checklists could be useful in evaluation (a small scoring sketch follows the list):

  • Evaluating an event: you determine “success criteria” for the event and have several experts use a checklist and then compare results.
  • Project implementation: a team of evaluators are interviewing staff/partners on how a project is being implemented. The evaluators use a checklist to assess the progress themselves.
  • Evaluating services/products: a common use, where a checklist is used by a selection panel to determine the most appropriate products/services for its needs.
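
The weighting logic is simple enough to sketch in a few lines of code. Below is a minimal Python sketch of how a criteria-of-merit checklist might be scored – the criteria, weights and ratings are hypothetical examples for a website assessment, not taken from any of the cases above:

```python
# Minimal sketch of scoring a criteria-of-merit checklist.
# Criteria, weights and ratings below are hypothetical examples.

# Each criterion carries a weight; weights need not be equal.
CRITERIA = {
    "navigation": 3,
    "content quality": 5,
    "visual design": 2,
    "accessibility": 4,
}

def weighted_score(ratings):
    """Combine one assessor's ratings (criterion -> 1..5, where
    1 = very poor and 5 = excellent) into a weighted average on
    the same 1-5 scale."""
    total_weight = sum(CRITERIA.values())
    return sum(CRITERIA[c] * ratings[c] for c in CRITERIA) / total_weight

# Two "experts" assess the same website; their scores can now be compared.
expert_a = {"navigation": 4, "content quality": 3, "visual design": 5, "accessibility": 2}
expert_b = {"navigation": 3, "content quality": 4, "visual design": 3, "accessibility": 4}

print(f"Expert A: {weighted_score(expert_a):.2f} / 5")  # 3.21 / 5
print(f"Expert B: {weighted_score(expert_b):.2f} / 5")  # 3.64 / 5
```

Running every assessor’s ratings through the same weighted function is what keeps the “expert” feedback structured and comparable, rather than a loose set of impressions.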

This post by Rick Davies, which discusses the use of checklists in assessing the functioning of health centres, is what actually got me thinking about this subject.

Glenn

November 6, 2007 at 10:04 am 2 comments

Sharpening the focus on measurement

It is often difficult to move organisations away from simply measuring “outputs” – what is produced – towards measuring “outcomes” – the effects of those outputs.

Funnily enough, many organisations want to go from the very superficial measurement of outputs (e.g. how many news articles did we generate?) to the very in-depth measurement of impact (e.g. the long-term effect of our media visibility on audiences). Impact is feasible but difficult to measure, as I’ve written about before. However, instead of focusing on the two ends of the measurement scale, organisations would perhaps be wise to focus on “outcome” measurement.

I think this quote from a UN Development Programme Evaluation Manual (pdf) sums up why outcome is an appropriate level to measure for most organisations:

“Today, the focus of UNDP evaluations is on outcomes, because this level of results reveals more about how effective UNDP’s actions are in achieving real development changes. A focus on outcomes also promises a shorter timeframe and more credible linkages between UNDP action and an eventual effect than does a focus on the level of overall improvement in people’s lives, which represent much longer-term and diffuse impacts.”

The notion of the shorter timeframe and more credible linkages is certainly appealing for many organisations considering their focus of evaluation.

Glenn

October 16, 2007 at 1:53 pm 2 comments

Impact – how feasible for evaluation?

As I mentioned in an earlier post, people often confuse “impact” with “results”. Is it possible to measure the “long-term impact” of projects? It is; however, for most projects it is unrealistic to do so, for two reasons: time and cost.

To evaluate impact, you would usually need to wait some 12 months after the major elements of a project have been implemented. Many organisations simply cannot wait that long. In terms of cost, an impact study requires a triangulation methodology that combines various quantitative and qualitative research methods, which can be expensive. However, if time and cost are not issues, an impact evaluation is possible, keeping in mind the following points:

Was the desired impact defined at the beginning of the project?

For example: greater organisational efficiency; a change in the way a target audience and/or an organisation behaves; or improvements in how services for a given audience are managed.

What have been the other elements influencing the impact you want to measure?

Your project cannot be viewed in isolation; there will have been other factors influencing the changes being observed. Identifying these factors will help you to assess the level of influence of your project relative to them.

Do you have a mandate to measure impact?

When assessing impact, you will be looking at long-term effects that probably go beyond your own responsibilities and into the realms of other projects and units – you are looking at the wider effects of your organisation’s activities, and this needs to be taken into consideration. For example, if you are looking at the longer-term effects of a training programme, you would want to examine how individuals and the organisation as a whole are more efficient as a result of the training. Do you have the political mandate to do so? You may discover effects that go well beyond your own responsibilities.

Evaluating impact is a daunting but not impossible task. For most projects, it would be more realistic to focus on measuring outputs and preferably outcomes – and to think of short-term outcomes, as I have written about previously.

Glenn

October 9, 2007 at 9:28 am 1 comment

Impact or results?

When speaking of achieving objectives for a project, I’ve heard a lot of people speak of the “intended impact” and I’ve read quite a few “impact reports”. I know it’s a question of language, but often people use the word “impact” when in fact they should use the word “results”. Impact in the evaluation field applies specifically to the long-term effects of a project. The DAC Glossary of Key Terms in Evaluation and Results Based Management (pdf) produced by the OECD contains the most widely accepted definitions in this field. Impact is defined as:

“Positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended”.

And “results” is defined as:

“The output, outcome or impact (intended or unintended, positive and/or negative) of a development intervention”.

Consequently, I believe that when we produce a report showing the media visibility generated by a project, this is a short-term output and should be called “results” rather than “impact”, which applies to longer-term effects.

Glenn

October 1, 2007 at 2:52 pm 1 comment

Changing behaviour – immediate responses


Adding to what I wrote last week about measuring the behaviour changes that result from communication campaigns – and why I recommend considering immediate responses (or “outtakes”) as an alternative to long-term changes – I can see parallels in areas other than campaigns.

As you may know, a favourite of mine is measuring the impact of conferences and meetings. Industry conferences are traditionally sold as great places to learn something and network, network – and network. But when attending such conferences, I’m always surprised at how organisers, if they measure anything, focus on measuring reactions to the conference, usually in terms of satisfaction. No attempt is made to measure immediate changes in behaviour (such as extending a network), or longer-term behaviour or impact in general.

But it is certainly possible. This diagram (pdf) illustrates what I did to measure immediate and mid-term changes in behaviour following a conference (LIFT). Despite the limitations of the research, as I explain here, I was able to track some responses following the conference that could be largely attributed to participating in it – such as meeting new people or using new social media in their work. One year after the conference, participants also provided us with the types of actions that they believed were influenced largely by their participation. Actions included:
– launching a new project
– launching a new service/product
– establishing a new partnership
– initiating a career change
– receiving invitations for speaking engagements

Some of these actions were anticipated by the conference organisers – but many were not. It shows that it can be done and is certainly worth thinking about in conference evaluation.

Glenn

September 6, 2007 at 1:57 pm Leave a comment

Workshop participation & short term impact

An interest of mine is looking at the short- and long-term impact of conferences and workshops. A lot of work has been done on evaluating the impact of training, which I have written about before. Basically, we can look at four levels of impact (Kirkpatrick’s classic four levels): 1. Reaction, 2. Learning, 3. Behaviour & 4. Results. A lot of conference/workshop evaluation focuses on the “reaction” aspect – what did participants like/prefer about an event?

But it is more interesting to look at the learning, behaviour and – if possible – results aspects. This usually takes time; however, if we are clear about what a workshop/conference is trying to achieve, we can often identify changes in learning/behaviour in the short term.
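
As a rough illustration, here is a minimal Python sketch of how the four levels could be turned into an evaluation plan for a workshop – the indicators and timeframes are hypothetical examples, not a prescribed set:

```python
# Hypothetical sketch: mapping the four levels of training impact
# to example indicators and timeframes for a workshop evaluation.

evaluation_plan = {
    "Reaction":  ("satisfaction ratings from an end-of-event survey",
                  "immediately after the workshop"),
    "Learning":  ("participants can apply the evaluation framework to a sample project",
                  "end of the workshop"),
    "Behaviour": ("participants use the framework in their own projects",
                  "days to months later, via follow-up interviews or blog posts"),
    "Results":   ("projects show improvements traceable to better M&E",
                  "six months or more after the workshop"),
}

for level, (indicator, timeframe) in evaluation_plan.items():
    print(f"{level}: {indicator} ({timeframe})")
```

The point of writing the plan down before the event is that the “learning” and “behaviour” indicators then tell you exactly what to look for in the days and weeks that follow.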

A practical example: when I ran the “Do-It-Yourself Monitoring and Evaluation” workshop with David Washburn at the LIFT07 conference, my main objective was to get people thinking about how they could integrate monitoring and evaluation into their own projects. Using a basic evaluation framework (pdf), groups worked to break projects down into the main steps needed for evaluation.

So was the “learning” aspect successful? I’d like to think so. Quite a few people commented to me on how it got them thinking about monitoring/evaluation and what they could do with their own projects. Also, several participants blogged about the workshop – an indication of what they took away from it, and one that also crosses into the “behaviour” area: they processed some thoughts and took the action (behaviour) of writing about it.

Even more tellingly, one participant told me how he used the information from the workshop that same week, which supports my idea about the possibility of identifying short-term impact, even in terms of behaviour:

“When we got back from the workshop, I took out the evaluation framework and sat down with my colleagues and planned out what we were going to monitor and evaluate in our major projects, setting down objectives and evaluation indicators. So we can use the framework as a guide in the coming six months.”

Glenn

February 23, 2007 at 10:03 pm 2 comments

Six factors to ensure that your evaluation results are used

As I wrote in a previous post, evaluation can be quite frustrating when all your effort and work doesn’t actually lead to any significant change in future practices. Why are evaluations not used? A new report, “The Utilisation of Evaluations” (pdf) from ALNAP, throws some light on this subject. Although it focuses on the humanitarian sector, the report makes points that apply to all types of evaluation. I found particularly interesting the six quality factors the author identifies as contributing to the findings of an evaluation being utilised, notably:

  1. Carefully designing the purpose and approach of the evaluation
  2. Managing quality participation of all stakeholders throughout the evaluation
  3. Allowing enough time to have all relevant staff and stakeholders involved
  4. Ensuring that the evidence is credible and the report is easy to read, with clear, precise recommendations stating who is responsible for what and when
  5. Putting follow-up plans in place at the outset
  6. Ensuring that the evaluator(s) are credible, balanced and constructive – wholesale negativity is never welcomed

Going through these six factors, I can see where I’ve faced obstacles in past evaluations, notably on points 2 and 5. I find managing stakeholder involvement is often difficult, and so is setting out follow-up plans – it often comes as an afterthought. Certainly some factors for all evaluators to consider…

Read the full report (pdf) here >>

Glenn

January 17, 2007 at 9:11 pm 3 comments

Presenting monitoring & evaluation results

The more I work in the M&E field, the more I see the importance of presenting results in a consumable way. If you are leading an evaluation project, there is nothing more frustrating than finishing your project and finding the comprehensive report you wrote gathering dust on a manager’s desk.

But that’s what I have learnt: the comprehensive report will perhaps be read by only one or two people on the commissioning team, while the PowerPoint summarising it will be widely distributed and viewed by many. We may think this is a “dumbing-down” of the work undertaken, but it is a reality of how our work is consumed. Here are some points on presenting results that I find useful:

  • Think carefully about the data and findings you want to present. We can often be overwhelmed by data (from survey results for example). If in doubt, put data you consider important but not essential in report annexes.
  • Make the evaluation report attractive and easy to read – facilitate this by summarising the main points and creating a brief presentation.
  • Organise an event such as a staff or team meeting to discuss the results – this could have more impact than the written document.
  • Through blogs and wikis, use the evaluation results to generate more discussion and interest in the given subject. A good example is the blog created to present the results of the 2006 Euroblog survey.

In a recent article (pdf), Jim Macnamara touches on this subject, describing how a “two-tier” approach to presenting results is useful: presenting only key data and information to top management, while all data is fully digested at the corporate communications level.

Glenn


January 8, 2007 at 9:27 pm 2 comments

The “before” aspect of evaluation

Evaluation is often thought of as a “concluding” activity – something done once a programme or project is finished. But evaluation also has a role “before” and “during” an activity. A recent experience highlighted for me the importance of evaluation in the “before” phase.

I have been involved in setting up a pan-European e-learning platform, and prior to its launch we decided to test the platform with a select group of users. In the learning or communications field that would be standard procedure – to pre-test material before it is used with its target audiences. But I am amazed at how many organisations don’t pre-test material – a “before” evaluation activity.

The feedback we received from the test users was incredibly informative – they identified issues that we had not even thought about: access, usability, and broader issues of motivation and incentives for using the platform. User tests of websites and online platforms do not have to be complicated and costly – Jakob Nielsen, the specialist in this field, explains well why usability testing is not necessarily expensive.

The “before” evaluation phase is much broader than simply pre-testing material. The establishment of baseline data (e.g. attitude levels on issues), the gathering of existing research on a subject, benchmarking against comparable projects, and ensuring that a project’s objectives are clear and measurable are some of the components of this phase.

Glenn

September 30, 2006 at 12:44 pm 1 comment
