Posts filed under ‘Research & Studies’

Global survey on communications evaluation

If you are a communications professional, please take a few minutes and participate in a global benchmarking survey designed to measure best practices in public relations measurement and management.

This survey builds on research undertaken five years ago. The results of the current survey will be presented at the First European Summit on Measurement, June 10-12 in Berlin, Germany.

Complete the survey here >>

The Intelligent Measurement blog will also publish a summary of the results once available!

May 6, 2009 at 6:30 am Leave a comment

Found versus manufactured data

In evaluation projects, we often feel a strong need to talk to people – to assess a situation or judge a phenomenon by surveying or interviewing people. However, this is “manufacturing” data – we are framing questions and then putting them to people – and perhaps in doing so are influencing how they respond.

Alternatively, there is a lot to say for “found” or “natural” data – information that already exists – e.g. media reports, blog posts, conference papers, etc. We often forget about this type of data in our rush to speak to people.

Take this example. I recently saw a paper presenting “current challenges in the PR/communications field”. After surveying PR/comm. professionals, the authors presented a list of five current challenges. This is “manufactured” data. An approach using “found” data would be to examine recent PR/comm. conference papers and see what challenges are spoken about – or to study the websites of PR/comm. agencies and see what they present as the main challenges.
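To make the “found data” approach a little more concrete, here is a minimal sketch of how one might count how often candidate challenge themes appear across a folder of downloaded conference papers. The keyword list and folder name are purely hypothetical illustrations, not taken from the paper mentioned above.

    from collections import Counter
    from pathlib import Path
    import re

    # Hypothetical challenge themes to look for in the "found" documents
    KEYWORDS = ["measurement", "social media", "credibility", "budget", "reputation"]

    def count_keywords(folder: str) -> Counter:
        """Count keyword mentions across a folder of plain-text conference papers."""
        counts = Counter()
        for path in Path(folder).glob("*.txt"):
            text = path.read_text(encoding="utf-8", errors="ignore").lower()
            for keyword in KEYWORDS:
                counts[keyword] += len(re.findall(re.escape(keyword), text))
        return counts

    if __name__ == "__main__":
        # "conference_papers" is an assumed folder of papers saved as .txt files
        for keyword, mentions in count_keywords("conference_papers").most_common():
            print(f"{keyword}: {mentions} mentions")

A simple frequency count like this is obviously cruder than a proper content analysis, but it shows how existing documents can be interrogated without putting a single question to anyone.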

Another example. Imagine you would like to study the experiences of US troops in Iraq. Of course you could survey and interview military personnel. However, a rich body of data certainly exists online in blog posts, videos and photos from military personnel describing their experiences.

Of course, there are limitations to using “found” data (for example, it may present only the views of a select part of a population) – but an evaluation project combining both “manufactured” and “found” data would certainly make its findings more solid.

Examples of “found” data:

  • blog posts
  • discussion forums
  • websites
  • website statistics
  • photo/video archives (online or offline)
  • media reporting
  • conference papers
  • policy documents
  • records (attendance, participation, complaints, sales, calls, etc.)

If you are interested in reading further on this subject, the book “A Very Short, Fairly Interesting and Reasonably Cheap Book about Qualitative Research” by David Silverman provides more examples and information on this concept.

Glenn

March 20, 2008 at 8:31 am Leave a comment

Conference evaluation and network mapping

[Image: LIFT07 network map of surveyed participants after the conference]

Often we attend conferences where one of the stated objectives is to “increase/build/create networking”, and I have always found it odd that there is never any attempt to measure whether networking really took place.

A possible solution is to map the networks created by participants at a conference – and compare these networks to those that existed before the conference.

This is exactly what I have done recently in a network mapping study that you can view here (pdf – 1 MB), and from which the above image is taken. For the LIFT conference of 2007, we mapped the networks of 28 participants (out of 450 total participants) before and after the conference. We found some quite surprising results:

  • These 28 participants had considerable networks prior to the conference – reaching some 30% of all participants.
  • These networks increased after the conference – the 28 people were then connected to some 50% of all participants.
  • Based on the sample of 28 participants, most participants doubled their networks at LIFT07 – e.g. if you went to the conference knowing five people, you would likely meet another five people at the conference – thus doubling your network to ten.

Although this is only a mapping of 28 participants, it provides some insight into conferences and how networks develop – it’s also quite interesting that 28 people can reach 50% (225 people) of the total conference participants in this case.
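For readers curious about the mechanics, here is a minimal sketch (my own illustration, not the method used in the report) of how reach and average network growth could be computed once you have each surveyed participant’s contact list before and after the conference. The participant name and contact sets below are invented; only the total of 450 participants comes from the LIFT07 example.

    # Minimal sketch: compute reach and average network growth from
    # before/after contact lists gathered from surveyed participants.
    TOTAL_PARTICIPANTS = 450  # total conference participants, as at LIFT07

    def reach(networks):
        """Share of all participants connected to at least one surveyed person."""
        contacts = set().union(*networks.values())
        return len(contacts) / TOTAL_PARTICIPANTS

    def average_growth(before, after):
        """Average ratio of network size after vs. before the conference."""
        ratios = [len(after[p]) / len(before[p]) for p in before if before[p]]
        return sum(ratios) / len(ratios)

    # Invented data for a single surveyed participant
    before = {"alice": {"p1", "p2", "p3", "p4", "p5"}}
    after = {"alice": {"p1", "p2", "p3", "p4", "p5", "p6", "p7", "p8", "p9", "p10"}}
    print(f"Reach after the conference: {reach(after):.0%}")
    print(f"Average network growth: {average_growth(before, after):.1f}x")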

View the full report here (pdf – 1 MB).

If you are after further information on network mapping, I recommend Rick Davies’ webpage on the subject. Although it focuses on development projects, it contains a lot of useful information on network mapping in general.

Glenn

January 14, 2008 at 8:20 pm 12 comments

Accountability and Outcomes – the Penny Drops…

In recent months I’ve had many discussions with project managers about evaluation in which they clearly feel uncomfortable linking their objectives to outcomes – and now, after reading a recent article in the Evaluation journal, the penny has dropped (i.e. I am able to join the dots and understand a little more…).

In his article “Challenges and Lessons in Implementing Results-Based Management”, John Mayne discusses the issue of accountability and outcomes:

“People are generally comfortable with being accountable for things they can control. Thus, managers can see themselves as being accountable for the outputs produced by the activities they control. When the focus turns to outcomes, they are considerably less comfortable, since the outcomes to be achieved are affected by many factors not under the control of the manager.”

And that’s it – a communications manager prefers to be accountable for the number of press releases s/he publishes and not the changes to an audience’s knowledge or attitudes, a training manager prefers to be accountable for the number of courses s/he organises and not the impact on an organisation’s efficiency, etc.

So is there a solution? John Mayne speaks of a more sophisticated approach to accountability, notably to look at the extent to which a programme has influenced and contributed to the outcomes observed. And that leads us to the next question – how much influence is good enough?

Food for thought…you can read an earlier version of the article here (pdf).

Glenn

May 1, 2007 at 8:30 pm Leave a comment

Evaluating Advocacy Campaigns

I’ve written previously about work that others and I have done on evaluating communication campaigns, particularly campaigns that aim for changes both in individual behaviour and in government/private sector policies. In this same direction, a post from the Mission Measurement blog on evaluating advocacy campaigns caught my eye. They make the very relevant point that although evaluating the impact of advocacy campaigns is difficult – trying to isolate their precise influence on the changes being observed – what certainly can be measured is progress towards the desired change.

They go on to provide some further insight into this issue, by looking at various measurements undertaken, such as:

  • Number of contacts established
  • Intermediate changes to knowledge/attitudes
  • Measuring progress of change on a continuum
  • Bellwether ratings

Read the full post here >>

In the same vein, what I recommend to organisations is to start by setting clear objectives in terms of what precisely is expected from advocacy/campaigning, and to establish relatively simple “tracking mechanisms” to follow “progress” on an issue – at a policy level (e.g. number of governments that publicly commit to a given issue) or at an individual level (e.g. number of people who pledge to undertake a given action). Often this information is “known” within an organisation but is not centralised or analysed – making any conclusion on a campaign’s impact difficult.
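As a rough illustration of what such a tracking mechanism might look like (the event records and field names below are hypothetical, not taken from any particular campaign), one could simply centralise each observed commitment or pledge and tally progress per month and per level:

    from collections import defaultdict
    from datetime import date

    # Hypothetical progress events: one record per observed commitment or pledge
    events = [
        {"date": date(2007, 1, 10), "level": "policy", "note": "Government A commits publicly"},
        {"date": date(2007, 1, 22), "level": "individual", "note": "Online pledge"},
        {"date": date(2007, 2, 3), "level": "individual", "note": "Online pledge"},
    ]

    def monthly_progress(events):
        """Tally progress events per month and per level (policy vs. individual)."""
        tally = defaultdict(int)
        for event in events:
            tally[(event["date"].strftime("%Y-%m"), event["level"])] += 1
        return dict(tally)

    for (month, level), count in sorted(monthly_progress(events).items()):
        print(f"{month} {level}: {count}")

Even a spreadsheet kept to this level of discipline would do the job – the point is simply that progress data gets recorded in one place as it happens, rather than being reconstructed at evaluation time.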

Glenn

January 29, 2007 at 10:18 pm 10 comments

Linking Media Coverage to Business Outcomes

Can we show a link between media coverage and desired business outcomes? A new study (pdf) from the US-based Institute for Public Relations presents some interesting case studies that in several instances illustrate corresponding trends between increased media coverage and a desired business outcome.

For example, they describe a campaign on the importance of mammograms, with the desired business outcome being an increase in the number of relevant medical procedures undertaken. Looking at the number of articles published on the issue and comparing it to the number of medical procedures, a correlation seems to exist. This can be seen in a graph in the study, which shows in blue the number of press articles on the issue and in red the number of medical procedures undertaken (over two years in the US).

The authors of the study readily admit that they are making a jump in assuming “cause and effect”, but what they are looking for is a “preponderance of evidence” that supports a correlation between media coverage and business outcomes.
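As a rough sketch of the kind of correlation check involved, the snippet below compares two monthly series (press clips and procedures) using Pearson’s r. The figures are invented for illustration and are not the study’s data; as the authors stress, a high correlation alone does not establish cause and effect.

    from statistics import correlation  # available in Python 3.10+

    # Invented monthly figures, purely for illustration (not the study's data)
    articles   = [12, 18, 25, 30, 28, 40, 45, 50]          # press clips per month
    procedures = [200, 230, 260, 300, 310, 380, 400, 430]  # procedures per month

    r = correlation(articles, procedures)  # Pearson's r between the two series
    print(f"Pearson correlation: {r:.2f}")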

What I find interesting is the jump from an output measure (clips) to a business outcome. Further, it is notable that they were able to find communication campaigns where a clear link was made between communication objectives and business objectives, as there is often a large gap between these two elements.

Read the full study “Exploring the Link Between Volume of Media Coverage and Business Outcomes” (pdf) by Angela Jeffrey, APR, Dr. David Michaelson, and Dr. Don W. Stacks.

Glenn

November 15, 2006 at 9:50 pm 4 comments

Client Perspective of Evaluation

The Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP) has recently published the results (pdf) of a survey of evaluation managers from humanitarian organisations.

ALNAP received responses from 46 evaluation managers who commission and manage evaluation projects for humanitarian organisations. This provides an interesting insight into the “client” perspective of evaluation. Some highlights:

What helps promote an evaluation?
Ownership or “buy-in” of an evaluation was the most often stated promoting factor. The quality of the evaluation report and recommendations was also important.

What inhibits the use of an evaluation?
The most frequently mentioned factor was the imposition of an evaluation by HQ and/or a donor. An arrogant, fault-finding or ivory-tower approach by evaluators, and insufficient time for the evaluation leading to superficial results, were also important factors.

What other factors induce changes in an organisation?
A very interesting question – apart from evaluation, what factors do they believe drive change in an organisation? Respondents mentioned two important influences: the media and donors, and to a lesser extent the influence of peers (exchanges/discussions between agencies).

Why do organisations evaluate?
  • Formal accountability (obligation to donors, trustees, etc.)
  • Improving the programme
  • Learning for the organisation
  • Legitimising (to add credence to, or challenge, an existing agenda)

How to increase the use of evaluation?
Most respondents mentioned changing the attitude of senior managers and the culture of learning within their organisations. Respondents spoke of a culture of defensiveness rather than one of learning and reflection.

Some very interesting results. They also confirm what I have seen in the humanitarian field: communications professionals are slowly coming to recognise that evaluation is necessary and important – but this is being prompted by pressure from donors and by the monitoring and evaluation units that are sprouting up in their organisations.

Glenn

September 6, 2006 at 2:17 pm Leave a comment
