A focus on the “M” in M & E: Monitoring

An often overlooked aspect of M & E is the “M” – monitoring. We tend to focus on the “E” – evaluation – as this can easily be outsourced by organisations and undertaken by consultants (like myself). Monitoring, however, requires the commitment of staff within organisations and has to compete with many other priorities for resources.

A new resource book, “Integrated Monitoring: A Practical Manual for Organisations That Want to Achieve Results” (pdf), provides a very good overview of the “how” and “what” of monitoring, as well as the challenges faced.

Thanks to the Monitoring and Evaluation NEWS blog for bringing this to my attention.

May 14, 2012 at 9:37 pm Leave a comment

advocacy evaluation: influencing climate change policy

I often don’t get to share the findings of the evaluations I undertake, but in the case of this advocacy evaluation, an area I’ve written about before, the findings are public and can be shared.

I was part of a team that evaluated phase 1 of an advocacy/research project – the Africa Climate Change Resilience Alliance (ACCRA).  ACCRA aims to increase governments’ and development actors’ use of evidence in designing and implementing interventions that increase communities’ capacity to adapt to climate hazards, variability and change.  Advocacy plays a large role in trying to influence governments and development actors in this project. You can read more in the Executive_Summary (pdf) of the evaluation findings.

The evaluation also produced five case studies highlighting successful advocacy strategies:

  • Capacity building and district planning
  • Secondment to a government ministry
  • Reaching out to government and civil society in Uganda
  • Disaster risk profiling in Ethiopia
  • Exchanging views and know-how between ACCRA countries

The case studies can be viewed on the ACCRA Eldis community blog (n.b. you have to join the Eldis community to view the case studies; it’s free of charge).

To disseminate the evaluation findings widely we also produced a multimedia clip, as featured below.

May 1, 2012 at 9:12 pm 2 comments

2012 European Summit on Communication Measurement announced

The International Association for Measurement and Evaluation has announced the programme for the 4th European Summit on Measurement, scheduled to be held in Dublin from 13-15 June 2012.

The Summit will include a day of workshops followed by two days of plenary sessions with guest speakers and panels.

Further information>>

April 20, 2012 at 9:21 am Leave a comment

What I Wish I Had Known in My Beginner Evaluator Days

An excellent post from Priya Small on the AEA365 blog describes what she wishes she’d known in her beginner evaluator days. Some of the points she highlights are:

– Listen more, speak less
– Observe more, take fewer notes
– Compete less and cooperate more: teamwork has great potential to produce optimal outcomes

From my experience as an evaluator, I would add the following:

– the process is as important as the report: I only realised later on that the actual act of undertaking an evaluation can have a significant influence on the organisation concerned and its stakeholders – so be aware of this and don’t focus only on the “end product”, usually the final written report.

– evaluation is scary for some: yes, simple as it seems, when the evaluation team arrives it can be misperceived as the bearer of bad news – job/project/budget cuts – so it helps if evaluators can explain the purpose of their work well.

– evaluators need to be guided, but not too guided: organisations employing evaluation teams want an independent evaluation, yet they often want to mould the evaluation process to their own views – interview these people, see these documents, etc. I learnt that evaluators sometimes have to be insistent in designing the evaluation to get the best results. I recently heard a good example where an evaluation team insisted that a follow-up process be included in the evaluation planning.

There are certainly more lessons learnt about data collection, budgets, deadlines and reports – but I’ll leave those for next time…!

April 1, 2012 at 8:24 pm 1 comment

Tips on blogging and evaluation

The AEA365 blog is featuring evaluators who blog – and this week it’s my turn. You can read my post here, where I give some tips on blogging and evaluation.

March 19, 2012 at 9:55 am Leave a comment

Guide to evaluating communication products

I’ve written before about the challenges of evaluating communication products, i.e. brochures, videos, magazines and websites. Little systematic follow-up is done on these products, which often form key parts of larger communication programmes. Here is a very interesting guide from the health sector in this area: “Guide to Monitoring and Evaluating Health Information Products and Services” (pdf). Although focused on health, the guide provides insights on evaluating communication products at different levels, from reach to use to impact on an organisation.

Thanks to Jeff Knezovich, writing on the On Think Tanks blog, for bringing this to my attention.

March 14, 2012 at 12:04 pm Leave a comment

When is a social media “visitor” not a visitor?

I’ve recently been looking into what constitutes a “visit” or “action” on social media platforms. This may seem straightforward, as for websites it is well established what constitutes a “visitor”. For social media, however, there is a lot of variation in what counts as a “visitor” or “action”. Andrew Ross Sorkin writes in the Dealbook blog about what Facebook considers a “visit”. He notes that Facebook says it has 483 million “daily active users”. Within this it counts visits to its web and mobile sites – which seems legitimate. But it also includes those who visit third-party websites and click a Facebook “Like” button, those who share a Twitter post on their Facebook page, and those who leave a comment on such a website that then gets fed into Facebook. Sorkin is rightly astounded that such “visits” are included (which of course greatly inflates visitor numbers).

For me, Facebook should count these so-called “active users” as “actions” – they are actions using Facebook features/tools rather than actual visits to the website.
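To make the distinction concrete, here is a minimal sketch of how such a tally could separate the two. The event names and sample data are invented for illustration – this is not Facebook’s actual methodology.

```python
# Minimal sketch: separating true site visits from feature-based "actions"
# when tallying daily active users. Event names and data are hypothetical.

TRUE_VISITS = {"web_visit", "mobile_site_visit"}        # visits to the site itself
FEATURE_ACTIONS = {"like_button_click", "share_to_facebook", "comment_via_plugin"}

def tally(events):
    """events: iterable of (user_id, event_type) pairs for one day."""
    visitors, action_only = set(), set()
    for user_id, event_type in events:
        if event_type in TRUE_VISITS:
            visitors.add(user_id)
        elif event_type in FEATURE_ACTIONS:
            action_only.add(user_id)
    # A user who both visited and acted counts once, as a visitor.
    return len(visitors), len(action_only - visitors)

if __name__ == "__main__":
    sample = [(1, "web_visit"), (2, "like_button_click"), (1, "share_to_facebook")]
    print(tally(sample))  # -> (1, 1): one real visitor, one action-only user
```

Reported this way, the headline figure would be visitors plus action-only users, with the two clearly labelled rather than rolled into a single “active users” number.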

Measuring activity on Twitter also throws up some interesting questions. There are many services that measure activity on Twitter, mostly based on the use of hashtags. You can find all sorts of interesting statistics, such as how many people used a hashtag, or how many people received a Tweet containing a hashtag and how many times. For example, you can see that a hashtag generated by a campaign was used by thousands of people and then reached millions. But what does that actually mean? In reality it means that millions of people have received a Tweet containing a hashtag that they may or may not have looked at – and the hashtag may or may not have been used in a way compatible with the original intention of the campaign that created it. So there is more work to be done on what the impact of message exposure through Twitter and other social media really is.
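As an illustration, here is a rough sketch of how those “usage” and “reach” figures are typically arrived at – the point being that potential reach measures exposure, not impact. The field names, hashtag and sample data are invented.

```python
# Rough sketch of the two figures hashtag-tracking services typically report:
# distinct accounts that used a hashtag, and "potential reach" (the combined
# follower counts of those accounts). All names and data are hypothetical.

def hashtag_stats(tweets, hashtag):
    """tweets: iterable of dicts like {"user": ..., "followers": ..., "text": ...}."""
    followers_by_user = {}
    for t in tweets:
        if hashtag.lower() in t["text"].lower():
            followers_by_user[t["user"]] = t["followers"]
    usage = len(followers_by_user)                      # accounts that used the hashtag
    potential_reach = sum(followers_by_user.values())   # followers who *may* have seen it
    return usage, potential_reach

tweets = [
    {"user": "a", "followers": 1200, "text": "Join the #cleanair campaign"},
    {"user": "b", "followers": 50,   "text": "#CleanAir matters to all of us"},
    {"user": "a", "followers": 1200, "text": "Still tweeting about #cleanair"},
]
print(hashtag_stats(tweets, "#cleanair"))  # -> (2, 1250): exposure, not impact
```

Nothing in those two numbers tells you whether anyone read the message, understood it, or used the hashtag as the campaign intended – which is exactly the gap in social media measurement discussed above.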

For those interested in this subject, here is an interesting post from Metrics Man on three fundamentals of social media measurement>>

February 22, 2012 at 8:37 am 1 comment

2 day training workshop: Realist evaluation for complex programs, Berne, May 2012

A two-day workshop on “realist evaluation” is planned for 3-4 May 2012 in Berne, Switzerland. Some more information:

Realist evaluation (Pawson and Tilley, 1997) is a form of theory-driven evaluation. It starts by assuming that nothing works for everyone; that how participants respond matters just as much as what programs do; and that context does indeed make a difference. It doesn’t ask “Does this work?”, but “In what contexts does this work, for whom, to what extent, and how?”. Realist evaluation can be used at any level from individual casework to international development, and with simple, complicated and complex programs.

This two-day workshop will introduce the key concepts of realist evaluation and their implications for evaluation design, methods and the role of the evaluator; introduce the key ideas from complexity theory that are useful in realist evaluation; and explain the structure of realist program theory and provide practice in developing realist program theories for simple, complicated and complex programs… The program will involve a combination of presentations, discussion and exercises.

The course is being organised by the University of Fribourg, Sociology, Social Policy and Social Work Department in collaboration with Marlène Läubli Loud of LAUCO Evaluation and Training and the Swiss Evaluation Society (SEVAL).

Further information and registration:
http://www.unifr.ch/travsoc/fr/newsdetail/?nid=6412

February 9, 2012 at 8:43 am Leave a comment

Advocacy evaluation – top resources

Today I spoke to the students of the Executive Certificate of Advocacy in International Affairs at the Graduate Institute of Geneva about advocacy evaluation. I promised the students I would list the top resources I’d recommend on advocacy evaluation; here they are:

Practical Guide to Advocacy Evaluation from Innovation Network (pdf)>>

Guide on measuring advocacy and policy (pdf) from the Annie E. Casey Foundation

“A guide to monitoring and evaluating policy influence” (pdf) from the UK-based Overseas Development Institute describes the different approaches to evaluating policy influence.

“Advocacy Impact Evaluation” (pdf) by Michael Q. Patton – an interesting case study on influencing the US Supreme Court.

“Lessons in Evaluating Communication Campaigns: Five Case Studies” from the Harvard Family Research Project looks at evaluating advocacy campaigns ranging from gun safety to emissions (ozone) reduction.

January 27, 2012 at 4:41 pm 6 comments

A pragmatic guide to monitoring and evaluating research communications using digital tools

Here is a very comprehensive post from the On Think Tanks blog that explains an approach for using digital tools to monitor and evaluate research communications for a think tank (ODI).

The approach taken relates online measurement tools to four levels of assessing the influence of communications on policy (an aim of research communications):

  • Management
  • Outputs
  • Uptake
  • Outcomes and impact

The last level, outcomes and impact, is of course the hardest to measure with digital tools. But I think that if you have access to your target audiences, this can be done through in-depth interviews or, more simply, through email surveys asking how they have used the research products – which can then provide an indication of the role the products have played in influencing policy.

View the full post here>>

January 11, 2012 at 7:07 pm 1 comment
