Challenges of evaluating advocacy campaigns

For those interested in evaluating advocacy campaigns, here is an interesting resource:

The Challenge of Assessing Policy and Advocacy Activities: Strategies for a Prospective Evaluation Approach (pdf)

The publication, from the California Endowment, focuses on evaluating advocacy for policy change from the perspective of foundations and NGOs. The authors also highlight some challenges of advocacy evaluation, including:

  • Complexity
  • Role of External Forces
  • Time Frame
  • Shifting Strategies and Milestones
  • Attribution
  • Limitations on Lobbying
  • Grantee Engagement

What they say about “attribution” (that is, to what extent the changes seen in policy or other developments can be attributed to an individual campaign) matches my experience too:

“It is also important to note that most policy work involves multiple players often working in coalitions and, in fact, requires multiple players “hitting” numerous leverage points. In this complex system, it is difficult to sort out the distinct effect of any individual player or any single activity…Even when an evaluator has the luxury of interviewing all key stakeholders involved in a policy development, there’s rarely, if ever, clear agreement on why something turned out the way it did. That can be frustrating to funders who want to pinpoint their influence.”

Read more about these challenges in the publication (pdf)>>

April 7, 2010 at 6:28 pm

Communications and behaviour change

Here is a fascinating study produced by the UK government’s Central Office of Information that summarises what influences people’s behaviour and the implications for communicators seeking to change it.

Some of the key implications for communication programmes and campaigns that come out of the study include:

  • Communications should not be viewed in isolation
  • Developing a practical behavioural model can help make communications more effective at influencing behaviour
  • Paid-for media opportunities (which traditionally account for the biggest part of the government communications budget) are not always the most trusted sources
  • Understanding behaviour and its influences will enable us to harness the most efficient and effective communications channels
  • Understanding behaviour will support more robust and meaningful evaluation

Read the full report here (pdf)>>

March 25, 2010 at 9:24 pm

Evaluating advocacy and policy change

This 32-page issue of The Evaluation Exchange (pdf) from the Harvard Family Research Project describes new developments in evaluating advocacy and policy change – excellent points on the challenges and best practices for evaluating advocacy campaigns.


March 12, 2010 at 9:50 pm

Summer school on evaluation, Italy

The University of Bologna has announced its summer school: six intensive days (in English) on monitoring and evaluation, 7-12 June.

Learn more about the course >>

March 3, 2010 at 5:14 pm

Virtual Conference – Methodology in Programme Evaluation – April 2010

Following is an interesting initiative for a “virtual conference” on programme evaluation:

The Wits Programme Evaluation Group, University of the Witwatersrand, Johannesburg, is organising a virtual conference on methodology in programme evaluation during April 2010. The conference aims to attract papers, case studies, and workshop or teaching materials from people working in both the developed and the developing world. Draft conference contributions are to be uploaded between April 7th and April 9th 2010.

More information on the procedures, deadlines and costs of making a contribution can be found in the Call for Papers (http://wpeg.wits.ac.za/evaluationconference2009-CallForPapers.pdf), which can be downloaded from the conference website at http://wpeg.wits.ac.za/

The conference proceedings will be published. Our aim as a programme committee is to create a resource that will be useful not only to evaluation practitioners, but also to those teaching and training programme evaluation in both developing and developed countries. The costs of participation and of accessing the proceedings have been deliberately kept low, because the programme committee aims to attract contributions from people working in both developing and more developed country contexts.

February 22, 2010 at 1:05 pm

Standardisation of PR Evaluation Metrics?

The UK government’s Central Office of Information (yes, I know it sounds rather Orwellian…) has produced a set of metrics for measuring PR campaigns: “Standardisation of PR Evaluation Metrics” (pdf).

Frankly, they are disappointing. The title is deceiving: it should be called “Standardisation of media monitoring metrics”, as that is all the document covers – the superficial stuff: “reach”, “favourability of coverage”, etc. These are “outputs” of PR activities.

But what about metrics for measuring “outcomes”? These don’t get a mention. Well, there is an admission, but you have to dig deep. They do say:

“It is worth bearing in mind that these standardised core metrics for media evaluation are only one component of any campaign evaluation. It is crucial to agree specific key performance indicators (KPIs) at the outset of a campaign.”

So they admit it: these are metrics for media coverage only. And no guidance is given on those KPIs (which are typically at the “outcome” level). If you are interested in learning more about metrics for “outcomes”, I’d recommend you start with the excellent guide from the Institute for Public Relations: “Guidelines and Standards for Measuring the Effectiveness of PR Programs and Activities” (pdf).

February 17, 2010 at 8:39 pm

Measuring the influence of Twitter

As Twitter becomes more present in communications, is there any way to measure how influential it is? Well, there are plenty of tools for monitoring Twitter usage. But the MetricsMan blog has some wise words of caution about these tools, warning that they are not really measuring influence. As he puts it:

The problem here is no one is actually measuring true Influence –  the ability of one individual to change another’s opinions, attitudes or behavior.  You can’t surmise whether or not an opinion or attitude has been impacted, you have to conduct research.  Opinions and attitudes exist within individuals.  You cannot assess this by proxy, looking strictly at online metrics.  Online behavior can be measured without primary research, but offline behaviors have to be observed or reported.

Read the full post here>>

February 7, 2010 at 8:38 pm

AVE to WMC – A wolf in sheep’s clothing?

The Institute for Public Relations has published a new research paper explaining a new media measurement concept called “Weighted Media Cost”. But is this anything new, or simply the dreaded Ad Value Equivalent (AVE) in disguise – a wolf in sheep’s clothing? PR measurement guru KD Paine certainly thinks so.

I think that any measurement based on the media space generated by PR efforts is bound to be flawed and increasingly irrelevant. Why?

  • Generating media space is rapidly losing importance as a PR objective, particularly with the growth of other ways that people can obtain information. These measurements typically look at print media, a medium with a declining readership base
  • Measuring how much media space was generated takes the focus away from the more important things to measure: what did PR efforts actually change in terms of the knowledge, attitudes and behaviours of target audiences? That’s harder to do, but it’s worth the effort!

January 29, 2010 at 5:06 pm

Presenting evaluation results – interactive mapping

One of the challenges faced in evaluation is presenting evaluation findings in a way that facilitates their use, as I’ve written about before.

Now here’s an interesting idea: presenting evaluation results in an interactive map. This example covers the monitoring, evaluation, and communications for an agriculture development program in Afghanistan.

View the interactive map>>

Interactive map produced by Jasha Levenson of Cartametrix.

January 21, 2010 at 1:41 pm

20 minute webinars for evaluators

The American Evaluation Association is presenting a series of 20-minute webinar-based demonstrations by and for evaluators. Demonstrations are free for AEA members; non-members can purchase a demonstration pass ($80 for unlimited demonstrations for one year, or $30 for students, including AEA membership).

Here are the next scheduled webinars:

Thursday, February 4, 2:00-2:20 PM EST: Submitting an AEA Conference Proposal – Susan Kistler

Tuesday, February 9, 10:00-10:20 AM EST: Tools for Evaluation Teaching and Training: Approaches to Introducing the Concept of Evaluation – José María Díaz Puente and Michael Newman

Thursday, February 18, 2:00-2:20 PM EST: Using Jing for Teaching and Learning – Theresa Murphrey

More information and registration>>

January 13, 2010 at 9:52 am
