Posts filed under ‘Evaluation methodology’
Using video for evaluation baseline
I’ve written before about using video for data collection and reporting evaluation results – but I’ve just come across this interesting example of using video for a baseline, that is, to record the situation before the project starts.
Miki Tsukamoto of the International Federation of Red Cross and Red Crescent Societies explains on the AEA365 blog how this approach was used for a project in Uganda. A summarised version of the resulting video can be found below. They will return in 2017 to make an “endline” video – so stand by!
The voices of affected populations in evaluation
The notion of listening to the voices of affected populations is nothing new in humanitarian evaluation. However, in the past there has been a lot of talk and little action. The Listening Project is one of the first structured, global initiatives to look at this issue – not only from the evaluation perspective but more broadly – and has recently produced a summary study, Time to Listen: Hearing People on the Receiving End of International Aid (pdf), based on discussions with almost 6,000 people in 20 countries. You can also read a news report about this issue on IRIN news.
As part of the stakeholder consultations I’ve been involved with for the Joint Standards Initiative, we’ve also been listening to affected populations – from Senegal to Pakistan to Mexico. The video below provides some short excerpts from these consultations of interviews with affected populations, as well as with humanitarian workers.
Blogging the evaluation process
Blogging and other social media are often used as part of communicating evaluation results – that is, once the evaluation is finished. However, blogging can also be useful for communicating the evaluation process – that is, while the evaluation is collecting data. I’ve recently been involved in a stakeholder consultation for the Joint Standards Initiative, where, as part of communicating the progress of the consultation, the other team members and I have been blogging “snapshots from the consultation” – from locations as diverse as Beirut, Juba and Richard Toll (Senegal).
We found this useful for providing stakeholders with an update on our work and offering some insights into our initial findings.
(The image above is taken from a discussion in Cairo by team member Inji El Abd)
Evaluation the lowest priority for US non-profits
The US-based Innovation Network has published a very interesting study on the State of Evaluation in US non-profit organisations.
The study, based on a survey of some 550 non-profits in the US, produced some interesting findings, including the headline above, which is admittedly the most pessimistic of the following:
- 90% of organizations report evaluating their work (up from 85% in 2010)
- 100% (!) of organizations reported using and communicating their evaluation findings
- Budgeting for evaluation is still low. More than 70% of organizations are spending less than 5% of organizational budgets on evaluation
- On average, evaluation and its close relation, research, continue to be the lowest priorities (compared to fundraising, financial management, communications, etc.)
I find it incredible that 100% report using and communicating their evaluations – if only this were “significant” usage, we would all be happy…
BetterEvaluation – great resources for M&E
Here is a website (well, new to me) that I recently discovered:
“An international collaboration to improve evaluation practice and theory by sharing information about options (methods or tools) and approaches.”
There are many resources on the website. For example, if you are interested in advocacy evaluation, there are useful materials on “process tracing”, a method well suited to this area.
Evaluation of communication activities of international and non-governmental organisations: A 15 year systematic review
As part of my PhD studies, I have undertaken a systematic review of how international and non-governmental organisations are evaluating their communication activities. I’m presenting a summary of this today at the European Evaluation Society Conference in Helsinki, Finland. Below are the slides – I hope you find them interesting.
A focus on the “M” in M & E: Monitoring
An often overlooked aspect of M & E is the “M” – monitoring. We tend to focus on the “E” – evaluation – as this can easily be outsourced by organisations and undertaken by consultants (like myself…). But monitoring requires the commitment of staff within organisations and competes with many other priorities for resources.
A new resource book, “Integrated Monitoring: A Practical Manual for Organisations That Want to Achieve Results” (pdf), provides a very good overview of the “how” and “what” of monitoring, as well as the challenges faced.
Thanks to the Monitoring and Evaluation NEWS blog for bringing this to my attention.
What I Wish I Had Known in My Beginner Evaluator Days
An excellent post from Priya Small on the AEA365 blog where she describes what she wishes she had known in her beginner evaluator days. Some of the points she highlights are:
– Listen more, speak less
– Observe more, take fewer notes
– Compete less and cooperate more – teamwork has great potential to produce optimal outcomes
From my experience as an evaluator, I would add the following:
– the process is as important as the report: I realised rather late in the day that the actual act of undertaking an evaluation can have a significant influence on the organisation concerned and its stakeholders – so be aware of this and do not focus only on the “end product”, usually the final written report.
– evaluation is scary for some: yes, simple as it seems, when the evaluation team arrives it can be misperceived as the creator or bearer of bad news – job, project or budget cuts – so it helps if evaluators can explain the purpose of their work clearly.
– evaluators need to be guided, but not too guided: organisations employing evaluation teams want an independent evaluation – then again, they often want to mould the evaluation process to their own views – interview these people, see these documents, etc. I learnt that evaluators sometimes have to be insistent about the design of the evaluation to get the best results. I heard a good suggestion recently, where an evaluation team insisted that a follow-up process be included in the evaluation planning.
There are certainly more lessons learnt about data collection, budgets, deadlines and reports – but I leave that for next time…!
Likert scale & surveys – best practices – 2
I’ve written previously about the Likert scale and surveys – and received literally hundreds of enquiries about it. A reader has now pointed me towards this excellent article on survey questions and Likert scales that adds some interesting points to the discussion.
From my previous post, I listed the following best practices on using the Likert Scale in survey questions:
- More than seven points on a scale is too many
- Numbered scales are difficult for people
- Labelled scales need to be as accurate as possible
And here are some further points to add drawn from this article:
- Be careful with the choice of words for labels:
“Occasionally” has been found to be very different than “seldom” but relatively close in meaning to “sometimes” (quote from article)
- Include a “don’t know” option for points where people may simply not have an opinion:
“Providing a “don’t know” choice significantly reduced the number of meaningless responses.”
- People will respond more often to items on the left-hand side of the scale:
“There is evidence of a bias towards the left side of the scale”
On that last point, I always write my scales left to right, from bad to good… This means that people may tend to select the “bad” ratings more readily. I haven’t found that to be the case (respondents often seem over-positive in their ratings, I feel), but I stand corrected…
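For readers who build surveys programmatically, here is a minimal sketch of how these best practices might be applied when defining a question. It is purely illustrative – the class name, wording of the labels and the “Don’t know” handling are my own assumptions, not taken from the article or tied to any particular survey tool.

# A hypothetical sketch (not from the article): a five-point labelled scale,
# ordered from negative to positive, with a separate "Don't know" choice.
from dataclasses import dataclass

@dataclass
class LikertQuestion:
    text: str
    # Five labelled points (seven is a sensible maximum), with labels worded
    # as precisely as possible rather than relying on numbers alone.
    labels: tuple = (
        "Very dissatisfied",   # listed first: respondents may favour the left side
        "Dissatisfied",
        "Neither satisfied nor dissatisfied",
        "Satisfied",
        "Very satisfied",
    )
    allow_dont_know: bool = True  # separate choice for people with no opinion

    def options(self) -> tuple:
        extra = ("Don't know",) if self.allow_dont_know else ()
        return self.labels + extra

question = LikertQuestion("How satisfied are you with the support you received?")
print(question.options())

Keeping “Don’t know” as a separate choice, rather than a point on the scale itself, avoids it being mistaken for a neutral midpoint when the responses are analysed.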
EU Manual: Evaluating Legislation and Non-Spending Interventions in the Area of Information Society and Media
A very interesting manual published by the European Union:
Despite the wordy title, the manual is really about how to evaluate the effects of legislation and initiatives taken by governments (in this case, the regional body, the EU).
The toolbox on page 72 is well worth a look.