Posts filed under ‘Training evaluation’

Hints on interviewing for evaluation projects

Evaluators often use interviews as a primary tool to collect information. Many guides and books exist on interviewing – but not so many for evaluation projects in particular. Here are some hints on interviewing based on my own experiences:

1. Be prepared: No matter how wide-ranging you would like an interview to be, you should as a minimum note down some subjects you would like to cover or particular questions to be answered. A little bit of structure will make the analysis easier.

2. Determine what is key for you to know: Before starting the interview, you might have a number of subjects to cover. It may be wise to determine what is key – what are the three or four things you would like to know from every person interviewed? You will often get side-tracked during an interview, and later on, going through your notes, you may discover that you forgot to ask about a key piece of information.

3. Explain the purpose: Before launching into questions, explain in broad terms the nature of the evaluation project and how the information from the discussion will be used.

4. Take notes as you discuss: Even if it is just the main points. Do not rely on your memory, as after you have done several interviews you may mix up some of the responses. Once the interview has concluded, try to expand on the main points raised while they are still fresh. Of course, recording and then transcribing interviews is recommended but not always possible.

5. Take notes about other matters: It’s important to note down not only what a person says but how they say it – look out for body language, signs of frustration, enthusiasm, etc. I would normally note down any points of this nature at the end of my interview notes. This context is also important if someone else reads your notes.

6. Don’t offer your own opinion or indicate a bias: Your main role is to gather information and you shouldn’t try to defend a project or enter into a debate with an interviewee. Remember, listening is key!

7. Have interviewees define terms: If someone says “I’m not happy with the situation”, you have understood that they are not happy but not much more. Have them define what they are not happy about. It’s the same if an interviewee says “we need more support”. Ask them to define what they mean by “support”.

8. Ask for clarification, details and examples: Such as “why is that so?”, “can you provide me with an example?”, “can you take me through the steps of that?” etc.

Hope these hints are of use.

Glenn

April 1, 2008 at 8:07 pm

The path from outputs to outcomes

Organisations often focus on evaluating the “outputs” of their activities (what they produce) and not on “outcomes” (what their activities actually achieve), as I’ve written about before. Many international organisations and NGOs have now adopted a “results-based management” approach, involving the setting of time-bound, measurable objectives that focus on outcomes rather than outputs – as outcomes are ultimately a better measure of whether an activity has actually changed anything.

Has this approach been successful? A new report from the UN’s development agency (UNDP) indicates that the focus is still on outputs rather than outcomes, as the link between the two is not clear. As they write:

“The attempt to shift monitoring focus from outputs to outcomes failed for several reasons…For projects to contribute to outcomes there needs to be a convincing chain of results or causal path. Despite familiarity with tools such as the logframe, no new methods were developed to help country staff plan and demonstrate these linkages and handle projects collectively towards a common monitorable outcome.”
(p.45)

Interestingly, they highlight the lack of clarity in linking outputs to outcomes – in showing a causal path between the two. For example, the difficulty in showing that something I planned for and implemented (e.g. a staff training program – an output) led to a desirable result (e.g. better performance of an organisation – an outcome).

One conclusion we can draw from this study is that we need more tools to help us establish the link between outputs and outcomes – that would certainly be a great advance.

Read the full UN report here >>

Glenn

February 25, 2008 at 2:13 pm

Seven tips for better email invitations for web surveys


Further to my earlier post on ten tips for better web surveys, the email that people receive inviting them to complete an online survey is an important factor in persuading them to respond – or not. Below are some recommended practices and a model email to help you with this task:

1. Explain briefly why you want their input: it’s important that people know why you are asking for their opinion or feedback on a given subject. State this clearly at the beginning of your email, e.g. “As a client of XYZ, we would appreciate your feedback on products that you have purchased from us”.

2. Tell people who you are: it’s important that people know who you are (so they can assess whether they want to contribute or not). Even if you are a marketing firm conducting the research on behalf of a client, this can be stated in the email as a boilerplate message (see example below). In addition, the name and contact details of a “real” person signing off on the email will help.

3. Tell people how long it will take: quite simply, “this survey will take you some 10 minutes to complete”. But don’t underestimate – people do get upset if you tell them it will take 10 minutes and 30 minutes later they are still going through your survey…

4. Make sure your survey link is clickable: survey software often generates very long links for individual surveys. You can get around this by masking the link, like this: “click to go to survey >>”. However, some email systems do not render masked links correctly, so you may be better off also copying the full link into the email, as in the example below (see the sketch after these tips). In addition, send your email invitation to yourself as a test – so you can click on your survey link just to make sure it works…

5. Reassure people about their privacy and confidentiality: people have to be reassured that their personal data and opinions will not be misused. A sentence covering these points should appear in the email text and be repeated on the first page of the web survey (also check local legal requirements on this issue).

6. Take care with the “From”, “To” and “Subject”: If possible, the email address in the “From” field should belong to a real person. If your survey comes from info@wizzbangsurveys.net, it may end up in many people’s spam folders. The “To” field should contain an individual email address only – we still receive email invitations where we can see hundreds of email addresses in the “To” field, which doesn’t instill confidence as to how personal data will be used. The “Subject” is important too – you need something short and straight to the point (see example below). Avoid terms that trigger spam filters, such as “win” or “prize”.

7. Keep it short: It is easy to fall into the trap of over-explaining your survey and hiding the link somewhere in the email text or right at the bottom. Try to keep your text brief – most people will decide in seconds whether they want to participate or not – and they need to be able to understand why they should, for whom, how long it will take and how (“Where is the survey link?!”).
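
As an aside for anyone generating such invitations programmatically, here is a minimal Python sketch of the approach in tip 4 – a plain-text part carrying the full survey link, plus an HTML alternative that masks it behind readable text. It reuses the addresses and survey link from the model email below; the commented-out SMTP host is hypothetical.

from email.message import EmailMessage

survey_url = ("http://optima.benchpoint.com/optima/SurveyPop.aspx"
              "?query=view&SurveyID=75&SS=0ZJk1RORb")

msg = EmailMessage()
msg["From"] = "j.jones@xyzcompany.net"   # a "real" person (tip 6)
msg["To"] = "glenn.oneil@gmail.com"      # one individual recipient only (tip 6)
msg["Subject"] = "XYZ Summit 2008 - Seeking your feedback"

# Plain-text part: carries the full, unmasked link for email
# clients that do not render masked (HTML) links correctly (tip 4).
msg.set_content(
    "Dear participant,\n\n"
    "We would appreciate your feedback on the Summit. This survey\n"
    "will take some 10 minutes to complete.\n\n"
    "To complete the survey, copy and paste this link into your browser:\n"
    f"{survey_url}\n\n"
    "Kind regards,\nJ. Jones\n"
)

# HTML alternative: masks the long link behind readable text, but
# repeats the full URL as a fallback (tip 4).
msg.add_alternative(
    "<p>Dear participant,</p>"
    "<p>We would appreciate your feedback on the Summit. This survey "
    "will take some 10 minutes to complete.</p>"
    f'<p><a href="{survey_url}">Click here to go to the survey &gt;&gt;</a></p>'
    "<p>If this link does not work, copy and paste the following "
    f"address into your browser:<br>{survey_url}</p>"
    "<p>Kind regards,<br>J. Jones</p>",
    subtype="html",
)

# Tip 4's final check: send the invitation to yourself first and
# click the link before mailing your full list, e.g.:
# import smtplib
# with smtplib.SMTP("smtp.example.net") as server:  # hypothetical host
#     server.send_message(msg)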

Model email invitation:   

From: j.jones@xyzcompany.net 
To: glenn.oneil@gmail.com
Subject: XYZ Summit 2008 – Seeking your feedback

Dear participant,

On behalf of XYZ, we thank you for your participation in the XYZ Summit.

We would very much appreciate your feedback on the Summit by completing a brief online survey. This survey will take some 10 minutes to complete. All replies are anonymous and will be treated confidentially.

To complete the survey, please click here >>

If this link does not work, please copy and paste the following link into your browser:
http://optima.benchpoint.com/optima/SurveyPop.aspx?query=view&SurveyID=75&SS=0ZJk1RORb

Thank you in advance; your feedback is very valuable to us.

Kind regards,
J. Jones
Corporate Communications
XYZ Company
email: j.jones@xyzcompany.net 
tel: ++ 1 123 456 789

****
Benchpoint has been commissioned by XYZ to undertake this survey. Please contact Glenn O’Neil of Benchpoint Ltd. if you have any questions: oneil@benchpoint.com

The following article from Quirks Marketing Research Review also contains some good tips on writing email invitations.

Glenn

February 12, 2008 at 9:54 am

The value of checklists and evaluation: 7 reasons


Further to what I wrote last week about checklists and their use in evaluation, I have found an excellent article on the logic and methodology of checklists.

Dr Michael Scriven of the Evaluation Centre of Western Michigan University describes the different types of checklists and how good checklists are put together. In particular, I like his list of the seven values of checklists, which I summarise as follows:

  1. Reduce the chance of forgetting to check something important
  2. Are easier for the lay stakeholder to understand and evaluate
  3. Reduce the “halo effect” – they force an evaluator to look at all criteria and not be overwhelmed by one highly valued feature
  4. Reduce the influence of the “Rorschach effect” – that is, the tendency to see what one wants to see in a mass of data – evaluators have to look at all dimensions
  5. Avoid criteria being counted twice or given too much importance
  6. Summarise a huge amount of professional knowledge and experience
  7. Assist in evaluating what we cannot explain

As Dr Scriven points out, checklists are very useful tools for getting us to think through the “performance criteria” of all kinds of processes, projects or occurrences, e.g. what are the key criteria that make a good trainer – and which criteria are more important than others?

Read the full article here >>

Glenn

November 13, 2007 at 7:52 am

Checklists and evaluation

Often in evaluation, we are asked to evaluate projects and programmes from several different perspectives: that of the end user, the implementer, or an external specialist or “expert”. I always favour the idea that evaluation represents the *target audiences’* point of view – as is often the case in evaluating training or communications programmes, we are trying to explain the effects of a given programme or project on target audiences. However, a complementary point of view from an “expert” can often be useful. A simple example: imagine you are making an assessment of a company website – a useful comparison would be the feedback from site visitors against that of an “expert” who examines the website and gives his/her opinion.

However, opinions of “experts” are often mixed in with feedback from audiences and come across as unstructured opinions and impressions. A way of avoiding this is for “experts” to use checklists – a structured way to assess the overall merit, worth or importance of something.

Now many would consider checklists a simple tool not worthy of discussion. But actually a checklist is often a representation of a huge body of knowledge or experience: e.g. how do you determine and describe the key criteria for a successful website?

Most checklists used in evaluation are criteria-of-merit checklists – a series of criteria are established, each rated on a standard scale (e.g. very poor to excellent), and weighted equally or not (e.g. one criterion may count the same as or more than the next). Here are several examples where checklists could be useful in evaluation (a small worked sketch of a weighted checklist follows these examples):

  • Evaluating an event: you determine “success criteria” for the event, have several experts use a checklist and then compare results.
  • Project implementation: a team of evaluators is interviewing staff/partners on how a project is being implemented. The evaluators use a checklist to assess the progress themselves.
  • Evaluating services/products: commonly used, where a checklist is used by a selection panel to determine the most appropriate products/services for their needs.
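
To make the weighting idea concrete, here is a minimal Python sketch of a criteria-of-merit checklist score. The criteria, weights and ratings are purely illustrative (a hypothetical website assessment), not taken from any published checklist:

# Standard scale applied to every criterion (a common five-point example).
SCALE = {1: "very poor", 2: "poor", 3: "adequate", 4: "good", 5: "excellent"}

# Each criterion carries a weight; equal weights reduce the score to a
# simple average, unequal weights make one criterion more crucial than
# the next.
criteria_weights = {
    "navigation": 2.0,        # weighted as more crucial
    "content accuracy": 2.0,
    "visual design": 1.0,
    "loading speed": 1.0,
}

def checklist_score(ratings, weights):
    """Weighted mean of an expert's ratings, on the same 1-5 scale."""
    total_weight = sum(weights.values())
    return sum(ratings[c] * w for c, w in weights.items()) / total_weight

# One "expert's" completed checklist for the website being assessed:
expert_ratings = {
    "navigation": 4,
    "content accuracy": 5,
    "visual design": 3,
    "loading speed": 2,
}

score = checklist_score(expert_ratings, criteria_weights)
print(f"Overall merit: {score:.2f} ({SCALE[round(score)]})")
# -> Overall merit: 3.83 (good)

With several experts completing the same checklist (as in the event example above), their scores can then be compared directly.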

This post by Rick Davies actually got me thinking about this subject and discusses the use of checklists in assessing the functioning of health centres.

Glenn

November 6, 2007 at 10:04 am

Sharpening the focus on measurement

It is often difficult to move organisations away from simply measuring “outputs” – what is produced – to measuring “outcomes” – the effects of those outputs.

Funnily enough, many organisations want to go from the very superficial measurement of outputs (e.g. how many news articles did we generate?) straight to the very in-depth measurement of impact (e.g. the long-term effect of our media visibility on audiences). Impact is feasible but difficult to measure, as I’ve written about before. However, instead of focusing on the two ends of the measurement scale, organisations would perhaps be wise to focus on “outcome” measurement.

I think this quote from a UN Development Programme Evaluation Manual (pdf) sums up why outcome is an appropriate level to measure for most organisations:

“Today, the focus of UNDP evaluations is on outcomes, because this level of results reveals more about how effective UNDP’s actions are in achieving real development changes. A focus on outcomes also promises a shorter timeframe and more credible linkages between UNDP action and an eventual effect than does a focus on the level of overall improvement in people’s lives, which represent much longer-term and diffuse impacts.”

The notion of a shorter timeframe and more credible linkages is certainly appealing for many organisations considering the focus of their evaluations.

Glenn

October 16, 2007 at 1:53 pm

Impact – how feasible for evaluation?

As I mentioned in an earlier post, people often confuse “impact” with “results”. Is it possible to measure the “long-term impact” of projects? It is; however, for most projects it is unrealistic to do so, for two reasons: time and cost.

To evaluate impact, you would usually need to wait some 12 months after the major elements of a project have been implemented. Many organisations simply cannot wait that long. In terms of cost, an impact study requires a triangulation methodology using various quantitative and qualitative research methods, which can be costly. However, if time and cost are not issues, an impact evaluation is possible, keeping in mind the following points:

Was the desired impact defined at the beginning of the project?

For example, greater organisational efficiency; a change in the way a target audience and/or an organisation behaves; or improvements in how services for a given audience are managed.

What other factors have been influencing the impact you want to measure?

Your project cannot be viewed in isolation; there must have been other factors influencing the changes being observed. Identifying these factors will help you to assess the level of influence of your project compared to other factors.

Do you have a mandate to measure impact?

When assessing impact, you will be looking at long-term effects that probably go beyond your own responsibilities and into the realms of other projects and units – an area of the wider effects of your organisation’s activities – and this needs to be taken into consideration. For example, if you are looking at the longer-term effects of a training program, you would want to look at how individuals and the organisation as a whole are more efficient as a result of the training. Do you have the political mandate to do so? You may discover effects that go well beyond your own responsibilities.

Evaluating impact is a daunting but not impossible task. For most projects, it would be more realistic to focus on measuring outputs and preferably outcomes – and to think of short-term outcomes, as I have written about previously.

Glenn

October 9, 2007 at 9:28 am
