Evaluating Advocacy Campaigns

I’ve written previously about work that others and I have done on evaluating communication campaigns, particularly campaigns that aim to change both individual behaviour and government/private sector policies. In the same direction, a post from the Mission Measurement blog caught my eye on evaluating advocacy campaigns. They make the very relevant point that although evaluating the impact of advocacy campaigns is difficult – it is hard to isolate their precise influence on the changes being observed – progress towards the desired change certainly can be measured.
They go on to provide some further insight into this issue, by looking at various measurements undertaken, such as:
- Number of contacts established
- Intermediate changes to knowledge/attitudes
- Measuring progress of change on a continuum
- Bellwether ratings
In the same vein, what I recommend to organisations is to start by setting clear objectives for what precisely is expected from advocacy/campaigning, and then to establish relatively simple “tracking mechanisms” to follow “progress” on an issue – at a policy level (e.g. the number of governments that publicly commit to a given issue) or at an individual level (e.g. the number of people who pledge to undertake a given action). Often this information is “known” within an organisation but is not centralised or analysed – making any conclusion on a campaign’s impact difficult.
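A tracking mechanism of this kind can be very simple indeed. Here is a minimal sketch in Python – the log entries, the target of 10 commitments and the `progress_report` helper are all hypothetical, just to show the idea of centralising what is already “known”:

```python
from datetime import date

# Hypothetical tracking log: each entry records a government's public
# commitment to the campaign issue and the date it was made.
commitments = [
    {"government": "Country A", "date": date(2006, 3, 1)},
    {"government": "Country B", "date": date(2006, 6, 15)},
    {"government": "Country C", "date": date(2006, 9, 30)},
]

# Campaign objective set at the outset: 10 public commitments.
target = 10

def progress_report(log, target):
    """Summarise progress towards the campaign objective."""
    count = len(log)
    return {
        "commitments": count,
        "target": target,
        "progress_pct": round(100 * count / target, 1),
    }

print(progress_report(commitments, target))
# {'commitments': 3, 'target': 10, 'progress_pct': 30.0}
```

The point is not the tooling – a shared spreadsheet would do – but that progress is logged centrally against an objective fixed at the start.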
Glenn
Measuring an organisation’s position
At a meeting of communicators working in the health field, I was asked about the methodology for measuring the “position” of an organisation. As organisations frequently speak of “re-positioning” themselves, it is perhaps wise to know what your current “position” is before you move anywhere…
My approach is relatively simplistic but drawn from the research of the great thinkers in this field: J. Grunig, C. Fombrun and C. van Riel. Basically, when we speak of an organisation’s “position”, I interpret this to mean the attributes we use to describe an organisation (e.g. modern, creative, traditional, research-focused, event-focused, etc.). This could also be interpreted as the “identity” of the organisation.
Thus to determine an organisation’s “position”, a good starting point would be to ask the organisation’s management which attributes they believe are important for the organisation. Then, the main external target publics can be surveyed on which attributes they associate with the organisation. Comparing the views of management with those of the main target publics can be interesting, as certain “gaps” will emerge between how management view the organisation…and how the organisation is viewed externally. To go one step further, “positioning” would be determined by looking at how similar or competing organisations are perceived on the same type of attributes.
Actually, what I describe is the basis for most “identity”, “reputation” or “positioning” studies. There is usually a measure of attributes/values/associations internally and a measure of the same externally, often with similar/competing organisations included to provide a comparison point.
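The internal/external comparison at the heart of these studies boils down to a simple gap calculation per attribute. A minimal sketch – the attributes and the 1–5 ratings below are invented purely for illustration:

```python
# Hypothetical average 1-5 ratings of the same attributes, as given by
# management (internal) and by surveyed external target publics.
internal = {"modern": 4.5, "creative": 4.0, "research-focused": 4.8}
external = {"modern": 3.2, "creative": 3.9, "research-focused": 3.0}

def identity_gaps(internal, external):
    """Gap per attribute between how management sees the organisation
    and how it is seen externally (positive = management rates higher)."""
    return {attr: round(internal[attr] - external[attr], 2)
            for attr in internal}

# List attributes from largest to smallest gap - the largest gaps are
# where self-image and external image diverge most.
gaps = identity_gaps(internal, external)
for attr, gap in sorted(gaps.items(), key=lambda item: -abs(item[1])):
    print(f"{attr}: gap {gap:+.2f}")
```

The same calculation repeated for similar or competing organisations, on the same attributes, gives the comparative picture that turns an “identity” study into a “positioning” study.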
Some resources on this issue:
The book “Principles of Corporate Communications” by C. van Riel has very good descriptions of the main methodologies in identity measurement (ISBN 0131509969)
The Gauge newsletter discussing reputation measurement (pdf) >>
“Corporate reputations can be measured and managed” by C Fombrun (pdf) >>
“Methodology for assessing corporate values” by J. van Rekom, C. van Riel & B. Wierenga (pdf) >>
And finally, an interesting opinion on the “demise of positioning”. Certainly some valid points there that I agree with – such as most “re-positioning” campaigns fail – notably because changing a logo, font or slogan doesn’t normally change the behaviour of an organisation – a main influence on “position/brand/identity”. But that’s a whole other issue I’d rather not get into…
Glenn
Six factors to ensure that your evaluation results are used
As I wrote in a previous post, evaluation can be quite frustrating when all your effort and work doesn’t actually lead to any significant change in future practices. Why are evaluations not used? A new report “The Utilisation of Evaluations” (pdf) from ALNAP throws some light on this subject. Although focusing on the humanitarian sector, the report has some points that apply to all types of evaluations. I found interesting the six quality factors the author identifies as contributing to an evaluation’s findings being utilised, notably:
- Carefully designing the purpose and approach of the evaluation
- Managing quality participation of all stakeholders throughout the evaluation
- Allowing enough time to have all relevant staff and stakeholders involved
- Ensuring that the evidence is credible and the report is easy to read, with clear, precise recommendations stating who is responsible for what and when
- Putting in place follow-up plans at the outset
- Ensuring that the evaluator(s) are credible, balanced and constructive – wholesale negativity is never welcomed
Going through these six factors I can see where I’ve faced obstacles in past evaluations, notably points 2 and 5. I find managing stakeholder involvement is often difficult, as is setting out follow-up plans – they often come as an afterthought. Certainly some factors to consider for all evaluators…
Read the full report (pdf) here >>
Glenn
Presenting monitoring & evaluation results

The more I work in the M&E field, the more I see the importance of presenting results in a consumable way. If you are leading an evaluation project, there is nothing more frustrating than finishing your project and finding the comprehensive report you wrote gathering dust on a manager’s desk.
But that’s what I have learnt: the comprehensive report will perhaps only be read by one or two people of the commissioning team – but the PowerPoint summarising the report will be widely distributed and viewed by many. We may think this is a “dumbing-down” of the work undertaken, but it is a reality of how our work is consumed. Here are some points on presenting results that I find useful:
- Think carefully about the data and findings you want to present. We can often be overwhelmed by data (from survey results for example). If in doubt, put data you consider important but not essential in report annexes.
- Make the evaluation report attractive and easy to read – facilitate this by summarising the main points and creating a brief presentation.
- Organise an event such as a staff or team meeting to discuss the results – this could have more impact than the written document.
- Through blogs and wikis, use the evaluation results to generate more discussion and interest in the given subject. A good example is the blog created to present the results of the 2006 Euroblog survey.
Jim Macnamara in a recent article (pdf) touches on this subject, describing how a “two-tier” approach to presenting results is useful – presenting only key data and information to top management, while fully digesting all the data at the corporate communications level.
Glenn
Cartoon from toothpaste for dinner >>
Measuring Results and Establishing Value

Jenny Schade has an interesting article “Measuring Results and Establishing Value” on her website. The article focuses on how to establish effective metrics and determine measures of success in communications programmes:
At the end of the day, how do you know you’ve been successful? What value are you providing to your organization or clients? In today’s climate of budget cuts and lay-offs, it’s particularly important that you establish clear measures of success before embarking upon any venture.
Read the full article here >>
Glenn
Does Creativity Equal Results?

Last week I gave a presentation on evaluation at the First ISO and IEC Marketing and Communication Forum which took place in Geneva. The forum gathered communications and marketing professionals from all over the world working in the field of standards development.
My presentation focused on some of my favourite topics of evaluation, notably:
- Why don’t marketing/communication professionals evaluate
- The need for clarity in setting marketing/communication objectives
- How low cost evaluation can be undertaken
- The risks of being over creative in communications
On the last point, I used the example of the Got Milk campaign which has been lauded as one of the most visible and creative ad campaigns of all time (notably by the advertising industry). However, did the highly creative ads actually help achieve campaign objectives? That is, to get people to drink more milk? Well, milk consumption continues to decline and the ads have been criticised for not addressing a key concern for teenagers – that they consider milk to be fattening.
And that’s the point I tried to make: creativity is all well and good – but it has to help communicators achieve their campaign goals – and be measurable.
My full presentation can be downloaded here:
Presentation: Effective Marketing & Communications through Evaluation (pdf – 1 MB)
Glenn
Acknowledgement: the example of the Got Milk campaign comes from the book “The Fall of Advertising and the Rise of PR” by Al and Laura Ries.
To Monitor or Evaluate – or Both?

In the latest issue of KD Paine’s Measurement Standard, you will find an article I wrote; here is an extract:
Many communication professionals approach PR measurement with an end game attitude — as something that’s done once a programme or project is concluded. But this is an error; PR measurement also needs to take place before and during a project.
Glenn
Measuring Relationships: A Future Auditing Measure?

Now this may not seem gripping… but I wanted to talk about future global financial reporting and public company auditing procedures – no, please do read on.
There has been a recent initiative, the “Global Public Policy Symposium”, bringing together all the big accounting/auditing firms and regulators to look at how company financial reporting should be done in the future.
They speak about the value of “intangible assets” that should figure in future reporting:
“The value of many companies resides in various “intangible” assets (such as employee creativity and loyalty, and relationships with suppliers and customers). However information to assess the value of these intangibles is not consistently reported”
If you are really keen, read the full report here (pdf) >>
And they go on to conclude that company reporting should in the future include measures on customer and employee satisfaction – and – relationships with key publics.
The idea of measuring relationships and their value is nothing new: there are a number of studies in this area – with the most well known being the Linda Childers Hon and Jim Grunig guidelines on measuring relationships (although a number of points have recently been disputed by Joy Chia (pdf)).
And let’s not forget that David Phillips was way ahead of the auditors when he wrote in 1992 that relationships should be included as intangible assets of an organisation.
Will the auditors adopt a standard measure for relationships? Well, at least they have plenty of work already to draw from.
Glenn
Linking Media Coverage to Business Outcomes

Can we show a link between media coverage and desired business outcomes? A new study (pdf) from the US-based Institute for Public Relations has some interesting case studies that in several instances illustrate corresponding trends between increased media coverage and a desired business outcome occurring.
For example, they speak of a campaign on the importance of mammograms with the desired business outcome being an increase in the number of relevant medical procedures undertaken. Looking at the number of articles published on the issue and comparing it to the number of medical procedures, a correlation seems to exist. This can be seen in the above graph, which shows in blue the number of press articles on the issue and in red the number of medical procedures undertaken (over 2 years in the US).
The authors of the study readily admit that they are making a jump in assuming “cause and effect” but what they are looking for is a “preponderance of evidence” that supports a correlation between media coverage and business outcomes.
What I find interesting is the jump from an output measure (clips) to a business outcome. Further, that they were able to find communication campaigns where a clear link was made between communication objectives and business objectives – as often there is a large gap between these two elements.
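The kind of trend comparison the study makes can be quantified with a correlation coefficient between the two series. A minimal sketch – the monthly figures below are invented for illustration, not taken from the study:

```python
import math

# Hypothetical monthly counts: press articles on the campaign issue
# and the number of medical procedures performed in the same month.
articles   = [12, 18, 25, 30, 41, 38, 52, 60]
procedures = [900, 950, 1010, 1080, 1150, 1140, 1260, 1330]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Values near +1 indicate the two series trend strongly together.
print(f"r = {pearson_r(articles, procedures):.3f}")
```

As the study’s authors stress, a high correlation on its own does not establish cause and effect – it is just one piece of the “preponderance of evidence” they are looking for.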
Read the full study “Exploring the Link Between Volume of Media Coverage and Business Outcomes”(pdf) By Angela Jeffrey, APR, Dr. David Michaelson, and Dr. Don W. Stacks
Glenn
LIFT 07 – evaluation, networks & social media
The LIFT blog beat me to it in announcing that I will be working with the LIFT team on evaluating the 2007 event, which takes place in February. LIFT is an international conference that takes place in Geneva and looks at the relationship between technology and society.
My experience evaluating the reactions to, and initial impact of, LIFT 06 is written up in this journal paper (pdf) or directly on the LIFT website.
In 2007, I hope to go further by exploring the impact of social media on the event setting and looking at networks that develop. It should be fun!
Glenn