New publication: Learning about Measuring Advocacy and Policy Change: Are Baselines always Feasible and Desirable?

IDS have produced a new paper “Learning about Measuring Advocacy and Policy Change: Are Baselines always Feasible and Desirable?” (pdf).  Here is a summary from the author:

This paper captures some recent challenges that emerged from establishing a baseline for an empowerment and accountability fund. The logic of producing a baseline is widely accepted and largely uncontested – and the recent increased investment in baselines is largely to be welcomed. This paper is therefore not a challenge to convention, but rather a note of caution: where adaptive programming is necessary, and there are multiple pathways to success, the ‘baseline–endline’ survey tradition has its limitations. This is particularly so for interventions which seek to alter complex political-economic dynamics, such as those between citizens and those in power.

The paper raises some very valid points about the challenges of establishing baselines, particularly for advocacy/policy change projects – challenges I’ve also experienced: with advocacy we are rarely starting from “zero”, as organisations may have been working on a given issue for some time before a given project comes along.

View the full paper (pdf) >>


December 10, 2013 at 12:18 pm

Integrating Communication in Evaluation workshop, Berne 2014 – registrations open

Further to my earlier post, registrations are now open for the course on communications and evaluation that I’ll be co-presenting in 2014 in collaboration with the Swiss Evaluation Association (SEVAL):

Integrating Communication in Evaluation

Dates: March 13 and 14, 2014 (2 full days)
Venue: Federal Office of Personnel, Eigerstrasse 71, Room 205, 3007 Berne, Switzerland
Cost: CHF 700
Description: An often-overlooked step in evaluation is ensuring that findings are communicated, understood and acted upon. Equally important, however, is what, how and when we communicate with different stakeholders throughout the evaluation process. Communicating effectively implies using different means, messages and methods to reach different groups with very different needs and expectations. A mix of presentations, case studies and practical exercises will be used to introduce and discuss new approaches for communicating and engaging with stakeholders and presenting results to different audiences (for example social media, interactive presentations and data visualisation). Participants are encouraged to bring examples of evaluations they have commissioned or implemented to be used as case studies during the workshop. The workshop will be co-facilitated by Glenn O’Neil of Owl RE, Geneva and Marlène Läubli Loud of Lauco Evaluation & Training.
Register here >>

December 3, 2013 at 8:40 pm

Using video in evaluation

The Better Evaluation blog has published a series of posts focusing on the use of video in evaluation:

– Participatory Video as a tool to engage communities and stakeholders in evaluation by Soledad Muniz

– Learning about evaluation from the inside using video by Paul Barese

– Using video to communicate evaluation findings by Glenn O’Neil (this writer)


November 27, 2013 at 8:12 am

5 handbooks on communication evaluation

I’ve been reviewing what handbooks and guides are available on communication evaluation – so far I’ve located five; here they are (all links to PDFs):

Communication Network (2008). Are we there yet? A Communications Evaluation Guide.

DFID (2005). Monitoring and Evaluating Information and Communication for Development (ICD) Programmes – Guidelines. Department for International Development, London.

Westminster City Council (2011). Evaluating Your Communication Tools, What Works, What Doesn’t – The Westminster Model.

EETAP (2002). Guidelines for Developing and Evaluating Communication Tools/Efforts.

Slightly more specialised, but still interesting:

USAID (2007) Guide to Monitoring and Evaluating Health Information Products and Services

November 20, 2013 at 8:38 am

New PR Research & Measurement Standards available

The Coalition for Public Relations Research Standards has just released their interim metrics (pdf) on PR research and measurement.  They’ve split them by traditional media, social media, communications lifecycle and Return on Investment (ROI).  For each metric, there is also more information available. I’ve yet to go through all the information, but it seems like a comprehensive list – even if not all will agree on the different definitions, etc. Of interest, four major US corporations – GE, GM, McDonald’s USA and Southwest Airlines – have reportedly already adopted these metrics.

November 12, 2013 at 2:51 pm

Data visualization – tips for evaluation reports

For those who use graphs in evaluation reports and other documents, here is an excellent presentation from Ann K. Emery of the Innovation Network – well worth a look!

November 5, 2013 at 10:13 am

Two evaluation workshops in Switzerland – 2014

I’m happy to announce that I’ll be co-presenting a course on communications and evaluation in 2014 in collaboration with the Swiss Evaluation Association (SEVAL). Below is a short outline of this course and of a second course on complexity and evaluation by Patricia Rogers, together with the dates. More details about the courses will be available in January 2014.

1) Integrating Communication in Evaluation – March 13 and 14, 2014
An often-overlooked step in evaluation is ensuring that findings are communicated, understood and acted upon. Equally important, however, is what, how and when we communicate with different stakeholders throughout the evaluation process. Communicating effectively implies using different means, messages and methods to reach different groups with very different needs and expectations.
A mix of presentations, case studies and practical exercises will be used to introduce and discuss new approaches for communicating and engaging with stakeholders and presenting results to different audiences (for example social media, interactive presentations and data visualisation). Participants are encouraged to bring examples of evaluations they have commissioned or implemented to be used as case studies during the workshop.
The workshop will be co-facilitated by Glenn O’Neil of Owl RE, Geneva and Marlène Läubli Loud of Lauco Evaluation & Training.


2) Addressing complexity in evaluation – June 5 and 6, 2014
Increasingly, evaluations have to address programs, projects and policies with complex aspects. The activities and objectives of these interventions are fundamentally dynamic and emergent in response to needs and opportunities, and they often involve multiple organisations with emergent and unpredictable roles. These characteristics present challenges to traditional evaluation approaches.

The workshop will examine the particular challenges that complexity presents and explore practical strategies for evaluation, including developmental evaluation, use of non-linear logic models, and emergent evaluation design. The workshop will include case studies of successful and unsuccessful attempts to address complexity in evaluation.  It will also provide opportunities to analyse participants’ own examples in terms of identifying the particular challenges that complexity presents and how the different strategies might be applied.

The workshop will be facilitated by Professor Patricia Rogers, Royal Melbourne Institute of Technology, Australia.

Both courses will be held at the Federal Office of Personnel, Berne, Switzerland.  Please note that there are no scholarships or travel funds available for these courses.

October 29, 2013 at 6:09 pm

News from Benchpoint

Benchpoint, which brought together the two authors of this blog, has been very quiet lately. At the end of last year I decided to retire our unique software suite, which had performed magnificently for 12 years or so. The reason was simply that I was approaching retirement age and was loath to invest a couple of hundred thousand in a rewrite to bring the features and performance up to modern-day standards.

So, with many misgivings, I shut down the server and the website, and dissolved the company. We never made a bomb, but we had disproved what many doomsayers claimed in 2000 –

• That the internet would never take off
• That people would never do internet surveys
• That creating a software suite with real-time editable questionnaires and fully analysed results was impossible

Rubbish, of course, we did it! So, with head held high, it was time to go fishing, as they say.

But Benchpoint was never a software company, it was a survey company. In the time since we developed the survey engine, hundreds of others have tried the same thing. Some succeeded, but many failed. I’m proud that some of the results we produced are still being cited in today’s academic articles, like this one from Tom Watson and Fraser Likely (pdf).  Even now, very few software providers can produce results the way we could. But a few can, and that’s the key.

Fishing is not much fun when the autumn gales come crashing in. Someone asked me if I could do a survey for them. Why not? I could use one of the available packages and simply charge for my time accordingly. And gradually I found myself thinking: why not restart the business, working in semi-retirement, for chosen customers? The same old Benchpoint expertise. No software to maintain, no limited company to run, no VAT (hopefully, if we can keep it small).

So I have set up a simple website, still at www.benchpoint.com.

We offer three specialisations – employee surveys, membership surveys, and the unique organisation tool I developed, “Management Probe”, which analyses the hell out of a company’s DNA and tells you what makes it tick. Or not.

I am now open for business. With organisations I want to work with. And I will still go fishing, sailing and skiing, and keep my bees. But not when there’s a project on. That’s a promise.

Richard

October 24, 2013 at 9:12 am

New article – PR Measurement and Evaluation Practices Over the Course of 40 Years

Here is a brand new article (it’s a chapter from a book*) by Fraser Likely and Tom Watson entitled “Measuring the Edifice – PR Measurement and Evaluation Practices Over the Course of 40 Years”.

It provides an excellent overview of developments in the last 40 years and the challenges currently faced in PR measurement and evaluation. A summary from the authors:

“Public relations measurement and evaluation practices have been major subjects for practitioners and academician research from the late 1970s onwards. This chapter will commence with a brief survey of the historical evolution of the research into these practices. Then, we will discuss James E. Grunig’s enduring contribution to their theorization, particularly with financial and non-financial indicators of public relations value. Next, we will consider the current debate on financial indicators, focusing on Return on Investment and alternative methods of financial valuation. Finally, we will look to the future at the measurement and evaluation practices that will attract academic and practitioner research interest.”

View the article/chapter in full (pdf)>>

*Note: Fraser and Tom’s chapter, “Measuring the Edifice: Public Relations Measurement and Evaluation Practice Over the Course of 40 Years” (pp. 143-162), comes from a “festschrift” (a celebratory book) for Professors Jim and Lauri Grunig – two renowned PR gurus – which was edited by Professors Krishnamurthy Sriramesh and Ansgar Zerfass and Dr Jeong-Nam Kim. The book’s title is Public Relations and Communication Management: Current Trends and Emerging Topics. It is published by Routledge.

October 21, 2013 at 6:41 pm

New paper: Implementing Development Evaluations under Severe Resource Constraints

A very interesting paper from the Centre for Development Impact on implementing evaluations with limited resources; here is a summary:

Most agency evaluations are very short both on resources and in duration, with no proper opportunity to assess impact in a valid manner. The methodology for these evaluations is based on interviews, a review of available programme literature and possibly a quick visit to one (often unrepresentative and usually successful) programme site. This means that the results of the evaluations are heavily dependent on the experience and judgement of the evaluator, the opinions received, and level of support from the commissioner. This CDI Practice Paper by Richard Longhurst reviews how to make the best of such a situation, drawing on lessons learned from techniques of better resourced evaluations and other techniques that have been used. A simple framework can relate the type of evaluation to the resources available and enable better planning and use of evaluations across an organisation.

View the paper here (pdf)>>


October 15, 2013 at 2:28 pm


