Posts filed under ‘Evaluation tools (surveys, interviews..)’

Adapting M&E at the field level

The NGO Saferworld has published a very interesting Learning Paper (pdf) on their approach to monitoring and evaluation (M&E) at the field level. What is interesting about the paper is that they explain some of the challenges they faced with reporting and logframes, and the approaches they consequently adopted – adapting such tools as outcome harvesting and outcome mapping. For those interested in advocacy evaluation, many of the examples featured come from evaluating advocacy activities.

January 22, 2016 at 1:36 pm 1 comment

5 tips for increasing survey completion rates

From the SurveyMonkey blog, a useful article on increasing survey completion rates.

Based on an analysis of 25,000 surveys, some of their conclusions state the obvious (e.g. the longer the survey, the lower the response rate) but here are five tips from the article I found useful (a toy sketch applying them follows the list):

  1. Starting a survey with an open-ended question reduces completion rates
  2. Starting a survey with a simple, easy-to-answer closed question improves completion rates
  3. Placing open-ended questions towards the end of the survey is better than at the start
  4. Matrix or rating-style questions don't reduce completion rates – but too many of them do
  5. Each additional word in a question's text has a direct negative effect on completion rates
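
These tips lend themselves to a quick automated check. Below is a toy Python sketch (my own illustration, not from the SurveyMonkey article) that flags a draft survey against the five tips – the question format and the numeric thresholds are hypothetical:

```python
# Toy lint for a draft survey, applying the five tips above.
# The question representation (a list of dicts) is hypothetical.

def lint_survey(questions):
    """Return warnings for a survey given as
    [{"text": str, "type": "open" | "closed" | "matrix"}, ...]."""
    warnings = []
    if questions and questions[0]["type"] == "open":
        warnings.append("Tips 1/2: don't open with an open-ended question.")
    open_positions = [i for i, q in enumerate(questions) if q["type"] == "open"]
    if any(i < len(questions) // 2 for i in open_positions):
        warnings.append("Tip 3: move open-ended questions towards the end.")
    if sum(q["type"] == "matrix" for q in questions) > 3:  # threshold is arbitrary
        warnings.append("Tip 4: too many matrix/rating questions.")
    for i, q in enumerate(questions, 1):
        if len(q["text"].split()) > 20:  # threshold is arbitrary
            warnings.append(f"Tip 5: question {i} is wordy; shorter is better.")
    return warnings

print(lint_survey([
    {"text": "Any other comments?", "type": "open"},
    {"text": "How satisfied are you with the newsletter?", "type": "closed"},
]))  # flags the opening open-ended question (tips 1/2 and 3)
```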

View the full article here>>


August 18, 2015 at 10:03 am 1 comment

Practical Advice for Selecting Sample Sizes

The Donor Committee for Enterprise Development has released a new publication providing practical advice for selecting sample sizes (pdf). The publication is particularly useful for thinking through sampling issues for online surveys, and offers plenty of good advice and tips.
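
The publication walks through the reasoning in detail; as a quick, minimal illustration (standard survey statistics, not code from the publication), the commonly used formula for estimating a proportion, with a finite population correction, looks like this in Python:

```python
import math

def sample_size(population, margin=0.05, confidence_z=1.96, p=0.5):
    """Cochran's sample-size formula for estimating a proportion,
    with a finite population correction.

    population   -- size of the population being surveyed
    margin       -- desired margin of error (0.05 = +/-5%)
    confidence_z -- z-score for the confidence level (1.96 = 95%)
    p            -- expected proportion; 0.5 is the conservative choice
    """
    n0 = (confidence_z ** 2) * p * (1 - p) / margin ** 2   # infinite population
    n = n0 / (1 + (n0 - 1) / population)                   # finite correction
    return math.ceil(n)

# e.g. an online survey of a 2,000-person mailing list, 95% confidence, +/-5%:
print(sample_size(2000))  # 323 completed responses needed
```

Note that this gives the number of completed responses needed – with typical online response rates, the invitation list has to be considerably larger.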

June 12, 2015 at 7:03 am Leave a comment

Two New Advocacy Evaluation Tools

Here are two new advocacy evaluation tools from the Center for Evaluation Innovation:

The Advocacy Strategy Framework (pdf): presents a simple one-page tool for thinking about the theories of change that underlie policy advocacy strategies. Check out the “interim outcomes and indicators” on the last page – a very good range of advocacy outcomes/indicators.

Four Tools for Assessing Grantee Contribution to Advocacy Efforts (pdf): offers funders practical guidance on how to assess a grantee’s contribution to advocacy outcomes. The four tools include:
1. A question bank
2. Structured grantee reporting
3. An external partner interview guide
4. Contribution analysis


April 1, 2015 at 4:49 pm 1 comment

The checklist as an evaluation tool: examples from other fields

Rick Davies of the Monitoring and Evaluation NEWS blog has published an interesting post exploring how surgeons and pilots use checklists – and listing other interesting resources on the issue.

See also my earlier posts here and here on using checklists.

January 15, 2015 at 2:48 pm Leave a comment

Three guides for focus groups

Recently I was running a series of focus groups and wanted to update myself on the “ways” and “hows” – I found the following three guides useful:

Designing and Conducting Focus Group Interviews (Richard A. Krueger, University of Minnesota) (pdf) >> 

Guidelines for Conducting a Focus Group (Eliot & Associates) (pdf) >>

Toolkit for Conducting Focus Groups (Omni) (pdf) >>


September 25, 2014 at 7:23 pm 3 comments

Likert scale – left or right?

I’ve written previously about the Likert scale and using it in surveys. One point I discussed was whether the response options should be displayed positive to negative or negative to positive – for example, from negative (‘strongly disagree’) on the left through to positive (‘strongly agree’) on the right.

I’ve recently come across two articles (listed below) that have researched this issue – and they found that the placement of the response options does matter. In summary, options placed on the left-hand side are selected more often than those on the right. They also found that with vertical lists, items at the top are selected more often than those further down the list.

So what is the solution? One option: much survey software offers the possibility, when creating questions, of “flipping” the response options – that is, some respondents see them negative to positive and others see them positive to negative. It also makes sense to vary the response order in long surveys, particularly when using the same or similar scales, to counter respondents’ “survey fatigue”.
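
If your survey tool doesn’t offer flipping built in, it is easy enough to do yourself. Here is a minimal sketch (my own illustration, not tied to any particular survey software) that randomises the scale direction per respondent and recodes answers back to a consistent score:

```python
import random

SCALE = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]

def scale_for_respondent(rng=random):
    """Randomly flip the scale direction for each respondent, so that
    left-side bias averages out across the sample. Returns the labels
    in display order plus the flag needed to recode answers later."""
    flipped = rng.random() < 0.5
    labels = list(reversed(SCALE)) if flipped else list(SCALE)
    return labels, flipped

def recode(position, flipped):
    """Map a selected position (0-4, left to right) back to a score of
    1 (strongly disagree) .. 5 (strongly agree), whatever the direction."""
    return (len(SCALE) - position) if flipped else (position + 1)

labels, flipped = scale_for_respondent()
print(labels, recode(0, flipped))  # leftmost option, recoded consistently
```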

The relevant articles:
Response order effects in Likert-type Scales (pdf) >>
The biasing effect of scale-checking styles on response to a Likert Scale (pdf) >>

January 14, 2014 at 9:15 pm 8 comments

Surveys for communicators

Increasingly, communicators need the ability to evaluate their activities – being able to design and set up online surveys is a key skill for soliciting feedback and interacting with audiences. Here are the slides from a practical workshop that I conducted last Friday for the Geneva Communicators Network; they cover surveys for communicators from concept to analysis – hope they’re of use!

June 18, 2013 at 12:20 pm Leave a comment

New advocacy evaluation guide

Bond, the UK alliance of NGOs, has produced an interesting guide on advocacy evaluation:

Assessing effectiveness in influencing power holders (pdf)

The guide looks at the challenges of influencing power holders (usually attempted through activities grouped under the umbrella of “advocacy”) but concludes that evaluation is feasible:

it is possible to tell a convincing story of an organisation’s contribution to change through their influencing and campaigning work by breaking down the steps of the process that led to change, and looking at how an organisation has created change at each step.

The guide also sets out these steps and provides examples of advocacy evaluation tools from NGOs including Oxfam, CARE and Transparency International, amongst others.

View the guide (pdf)>>

September 22, 2012 at 1:48 pm Leave a comment

Likert scale & surveys – best practices – 2

I’ve written previously about the Likert scale and surveys – and received literally hundreds of enquiries about it. A reader has now pointed me towards this excellent article on survey questions and Likert scales, which adds some interesting points to the discussion.

In my previous post, I listed the following best practices for using the Likert scale in survey questions:

  • More than seven points on a scale is too many.
  • Numbered scales are difficult for people to interpret.
  • Labelled scales need to be as accurate as possible.

And here are some further points drawn from the article:

  • Be careful with the choice of words for labels:

“Occasionally” has been found to be very different than “seldom” but relatively close in meaning to “sometimes” (quote from article)

  • Include a “don’t know” option for questions where people may simply not have an opinion (a short analysis sketch follows below):

“Providing a “don’t know” choice significantly reduced the number of meaningless responses.”

  • People respond more often to items on the left-hand side of the scale:

“There is evidence of a bias towards the left side of the scale”

On that last point, I always write my scales left to right, bad to good… This means people may tend to select the “bad” ratings more readily. I haven’t found that to be the case (respondents often seem over-positive in their ratings, I feel), but I stand corrected…
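
On the “don’t know” point above, there is a practical consequence at the analysis stage too: those answers should be kept out of your averages rather than forced onto the scale. A minimal sketch with pandas, using made-up responses:

```python
import pandas as pd

# Illustrative responses to one Likert item; "DK" marks "don't know".
answers = pd.Series([5, 4, "DK", 3, 5, "DK", 2, 4])

# Treat "don't know" as missing rather than forcing it onto the 1-5 scale,
# which would silently drag the mean towards whatever code it was given.
scores = pd.to_numeric(answers, errors="coerce")

print(f"mean = {scores.mean():.2f} from {scores.notna().sum()} substantive answers")
print(f"'don't know' rate: {scores.isna().mean():.0%}")
```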

View the full article here>>

November 27, 2011 at 2:07 pm 3 comments
