Posts filed under ‘Research & Studies’

Accountability and Outcomes – the Penny Drops…

In recent months I’ve had many discussions with project managers about evaluation in which they clearly feel uncomfortable linking their objectives to outcomes – and now, after reading a recent article in the journal Evaluation, the penny has dropped (i.e. I can connect the dots and understand a little more…).

In his article “Challenges and Lessons in Implementing Results-Based Management”, John Mayne discusses the issue of accountability and outcomes:

“People are generally comfortable with being accountable for things they can control. Thus, managers can see themselves as being accountable for the outputs produced by the activities they control. When the focus turns to outcomes, they are considerably less comfortable, since the outcomes to be achieved are affected by many factors not under the control of the manager.”

And that’s it – a communications manager prefers to be accountable for the number of press releases s/he publishes rather than for changes to an audience’s knowledge or attitudes; a training manager prefers to be accountable for the number of courses s/he organises rather than for the impact on an organisation’s efficiency; and so on.

So is there a solution? John Mayne speaks of a more sophisticated approach to accountability, notably to look at the extent to which a programme has influenced and contributed to the outcomes observed. And that leads us to the next question – how much influence is good enough?

Food for thought… You can read an earlier version of the article here (pdf).


May 1, 2007 at 8:30 pm

Evaluating Advocacy Campaigns

I’ve written previously about work that others and I have done on evaluating communication campaigns, particularly campaigns that aim for both changes in individual behaviour and in government/private sector policies. In the same direction, a post from the Mission Measurement blog caught my eye on evaluating advocacy campaigns. It makes the very relevant point that although evaluating the impact of advocacy campaigns is difficult – trying to isolate their precise influence on the changes being observed – what certainly can be measured is progress towards the desired change.

They go on to provide some further insight into this issue, by looking at various measurements undertaken, such as:

  • Number of contacts established
  • Intermediate changes to knowledge/attitudes
  • Measuring progress of change on a continuum
  • Bellwether ratings

Read the full post here >>

In the same vein, what I recommend to organisations is to start by setting clear objectives for what precisely is expected from advocacy/campaigning, and then to establish relatively simple “tracking mechanisms” to follow “progress” on an issue – at the policy level (e.g. the number of governments that publicly commit to a given issue) or at the individual level (e.g. the number of people who pledge to undertake a given action). Often this information is “known” within an organisation but is not centralised or analysed – making any conclusion about a campaign’s impact difficult.
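To make the idea concrete, here is a minimal sketch of such a “tracking mechanism” – just a tally of public commitments against a target. The class name, the target of 10 governments and the actor names are all hypothetical, chosen only to illustrate the approach:

```python
from dataclasses import dataclass, field

@dataclass
class CampaignTracker:
    """Minimal tracker for one advocacy-campaign progress indicator."""
    target: int                      # e.g. number of government commitments sought
    commitments: set = field(default_factory=set)

    def record(self, actor: str) -> None:
        """Log a public commitment by a government (or a pledge by an individual)."""
        self.commitments.add(actor)

    def progress(self) -> float:
        """Fraction of the target reached so far."""
        return len(self.commitments) / self.target

# Hypothetical usage: a campaign seeking commitments from 10 governments.
tracker = CampaignTracker(target=10)
tracker.record("Government A")
tracker.record("Government B")
print(f"{tracker.progress():.0%} of target reached")  # prints "20% of target reached"
```

The point is not the code but the discipline: the moment such a tally exists centrally, the “progress on a continuum” measurement mentioned above becomes a matter of reading it off rather than reconstructing it after the fact.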


January 29, 2007 at 10:18 pm

Linking Media Coverage to Business Outcomes

Can we show a link between media coverage and desired business outcomes? A new study (pdf) from the US-based Institute for Public Relations has some interesting case studies that in several instances illustrate corresponding trends between increased media coverage and a desired business outcome occurring.

For example, they describe a campaign on the importance of mammograms, with the desired business outcome being an increase in the number of relevant medical procedures undertaken. Comparing the number of articles published on the issue to the number of medical procedures, a correlation seems to exist. This can be seen in the study’s graph, which shows in blue the number of press articles on the issue and in red the number of medical procedures undertaken (over two years in the US).

The authors of the study readily admit that they are making a jump in assuming “cause and effect” but what they are looking for is a “preponderance of evidence” that supports a correlation between media coverage and business outcomes.
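The kind of evidence the authors are gathering can be illustrated with a quick correlation check. The monthly figures below are invented for illustration (they are not the study’s data), and, as the authors themselves caution, a high coefficient shows corresponding trends, not cause and effect:

```python
# Hypothetical monthly counts, for illustration only.
articles   = [12, 18, 25, 30, 42, 55]         # press clips per month
procedures = [300, 320, 360, 400, 480, 560]   # medical procedures per month

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(f"r = {pearson(articles, procedures):.2f}")
```

A coefficient near 1 on data like this is exactly the “preponderance of evidence” the study assembles – suggestive of a link, but silent on which variable drives the other.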

What I find interesting is the jump from an output measure (clips) to a business outcome – and, further, that they were able to find communication campaigns where a clear link was made between communication objectives and business objectives, as there is often a large gap between the two.

Read the full study, “Exploring the Link Between Volume of Media Coverage and Business Outcomes” (pdf), by Angela Jeffrey, APR, Dr. David Michaelson, and Dr. Don W. Stacks.


November 15, 2006 at 9:50 pm

Client Perspective of Evaluation

The Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP) has recently published the results (pdf) of a survey of evaluation managers from humanitarian organisations.

ALNAP received responses from 46 evaluation managers who commission and manage evaluation projects for humanitarian organisations. This provides an interesting insight into the “client” perspective of evaluation. Some highlights:

What helps promote an evaluation?
Ownership, or “buy-in”, of an evaluation was the most often stated promoting factor. The quality of the evaluation report and its recommendations was also important.

What inhibits the use of an evaluation?
The most frequently mentioned factor was the imposition of an evaluation by HQ and/or a donor. An arrogant, fault-finding or ivory-tower approach by evaluators, and insufficient time for the evaluation leading to superficial results, were also important factors.

What other factors induce changes in an organisation?
A very interesting question – what factors, apart from evaluation, do they believe drive change? Respondents mentioned two important influences: the media and donors, and, to a lesser extent, the influence of peers (exchanges/discussions between agencies).

Why do organisations evaluate?

  • Formal accountability (obligation to donors, trustees, etc.)
  • Improving the programme
  • Learning for the organisation
  • Legitimising (to add credence to, or challenge, an existing agenda)

How to increase the use of evaluation?
Most respondents mentioned changing the attitude of senior managers and the culture of learning within their organisations. Respondents spoke of a culture of defensiveness rather than one of learning and reflection.

Some very interesting results. They also confirm what I have seen in the humanitarian field: communications professionals are slowly coming around to recognising that evaluation is necessary and important – but this is being prompted by pressure from donors and by the monitoring and evaluation units that are sprouting up in their organisations.


September 6, 2006 at 2:17 pm

Special issue focusing on public relations measurement and evaluation

A new special issue of PRism focusing on public relations measurement and evaluation has just been published. PRism is a free-access, online public relations and communication research journal.

In his editorial (pdf), Tom Watson quotes Jon White as saying that the PR evaluation discussion is like:

“A car stuck in mud with its wheels spinning”

In other words, the debate goes round and round but gains no traction. Let’s hope that this interesting collection of articles will put some more tread on the tyres…

And I’m obliged to mention my own case study in the issue:

Blogs, mash-ups and wikis – new tools for evaluating event objectives: A case study on the LIFT06 conference in Geneva (pdf)


June 27, 2006 at 11:03 am

Evaluating Communication Campaigns

As the whole development and humanitarian sector focuses more on accountability and performance, there has been a push for more evaluation of the sector’s communication activities.

Most methodologies and tools can be adapted from those used in the private sector. However, communication campaigns of NGOs and international organisations often have dual outcomes they wish to achieve – individual behaviour change (e.g. persuading individuals to adopt a healthier lifestyle) and policy change (e.g. pushing governments to change policy on food labelling).

An excellent study, “Lessons in Evaluating Communication Campaigns: Five Case Studies”, from the Harvard Family Research Project looks at evaluating campaigns ranging from gun safety to (ozone) emissions reduction.

If you are interested in reading more about standards and practices of evaluation in the development and humanitarian sector, a good starting point is The Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP). This interagency forum focuses on evaluation and accountability issues in the sector.


January 18, 2006 at 2:26 pm
