Posts filed under ‘Evaluation tools (surveys, interviews..)’

Media monitoring to behaviour changes

Can we make the logical step to “outcome” with media monitoring – measuring changes in knowledge, behaviour or attitudes? With traditional media monitoring we cannot, and that is the missing link of most media monitoring: how can we tell whether media exposure led to a change in a given audience? Polling audiences and making an informed assumption that links their media use to the changes observed is possible – but cost and complexity deter many organisations.

But in the online environment there are some interesting developments in the ability to link media exposure with an actual behaviour change in an audience. Take a simple example: people who read an article online and then link to it in their blog have made a behaviour change. If we can show the path from media exposure to the triggering of thoughts, comments, actions and ideas, we are heading in the “outcome” direction. David Phillips of the Leaverwealth blog is working in this area and is developing software to summarise the content of RSS feeds under subject headings and show the path back to the original stories and posts. It uses a statistical/mathematical technique, Latent Semantic Analysis (LSA), which extracts and represents the similarity in meaning of words and passages. Now, that’s much more valuable than clip counting.
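
To make the idea concrete, here is a minimal sketch of the kind of analysis LSA enables – scoring how close in meaning follow-on blog posts are to an original story. This is my own illustration in Python with scikit-learn, not David Phillips’s software, and the documents are placeholders:

```python
# Minimal LSA sketch: score how similar blog posts are to an original
# news story, so media exposure can be traced through follow-on
# commentary. The documents below are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Original story: charity launches new climate change campaign",
    "Blog post A: I read about the new climate campaign and agree",
    "Blog post B: an unrelated post about weekend football results",
]

# Term-document matrix weighted by TF-IDF
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(documents)

# LSA is a truncated SVD of the TF-IDF matrix, projecting documents
# into a low-dimensional semantic space
lsa = TruncatedSVD(n_components=2, random_state=0)
X_lsa = lsa.fit_transform(X)

# Similarity of each post to the original story (document 0)
similarities = cosine_similarity(X_lsa[0:1], X_lsa[1:])
print(similarities)  # higher = closer in meaning to the story
```

Posts that score highly against the story are candidates for the exposure-to-behaviour path described above.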

Glenn

July 11, 2006 at 5:39 am 4 comments

Media monitoring – what is it worth? Part 2

Some further thoughts on media monitoring:

Michael Blower at the Media evaluation blog writes about audience reaction to the media as an “outtake” indicator, prompted by the new “BBC most popular stories” tool:

“For too long media content analysis has been driven by media output. This new tool makes it possible to do something which, up to now has been an expensive luxury – see an exact measure of media out-take.”

I find this a refreshing point of view from a media evaluator, moving the focus from “output” to “outtake”. If we take “outtake” to mean understanding of, reaction to and favourability towards a message, such website tools can provide feedback – albeit incomplete – at this level of measurement. I imagine there must be a web-based tool that can collate and prioritise the popularity of news stories (based on visitor traffic) across the main news sites; Google Trends, which I wrote about previously, does something similar with search volumes linked to news stories – a step in this direction. Of course, we have to factor in the limitations of web metrics, including the password-protected content of news sites.
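
As a sketch of what such a tool might look like, the snippet below collates “most popular” RSS feeds from several news sites and ranks stories by how often they appear. This is my own illustration using the feedparser library, and the feed URLs are placeholders, not real endpoints:

```python
# Sketch: collate "most popular" RSS feeds from several news sites
# and tally how often a story (by title) appears across them.
# The feed URLs below are placeholders for illustration only.
import feedparser
from collections import Counter

FEEDS = [
    "http://example.com/news/most_popular.rss",  # placeholder
    "http://example.org/news/top_stories.rss",   # placeholder
]

counts = Counter()
for url in FEEDS:
    feed = feedparser.parse(url)
    for entry in feed.entries:
        counts[entry.get("title", "")] += 1

# Stories appearing on several "most read" lists rank highest
for title, n in counts.most_common(10):
    print(n, title)
```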

Glenn

July 7, 2006 at 12:23 pm Leave a comment

PR Measurement – Generally Accepted Practices Study

An interesting study has been released by the USC Annenberg Strategic Public Relations Center based in California, USA. The “Generally Accepted Practices” study focuses on budgeting, staffing, evaluation and the use of agencies by communication professionals. Some 500 US-based communication professionals responded to the study survey.

One notable result of the study is the top three evaluation methodologies used by professionals:

1. Influence on corporate reputation
2. Content analysis of clips
3. Influence on employee attitudes

The first result is interesting because, although methodologies have been developed for measuring corporate reputation (notably by the Reputation Institute), there is no generally accepted metric for how communication activities “influence” reputation, as the authors of the study point out. What would be interesting to know is how exactly professionals assess the influence of their activities on corporate reputation: active monitoring, comparison studies (against other potential influences), research and analysis, or simple intuition?
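
By way of illustration, a comparison study could start as simply as correlating communication activity over time with an independently measured reputation score. The sketch below uses invented monthly figures – and correlation alone, of course, does not prove influence:

```python
# Hypothetical comparison study: correlate monthly communication
# activity (here, volume of favourable coverage) with an independently
# measured reputation score. All figures are invented for illustration;
# a correlation is a prompt for proper research, not proof of influence.
from scipy.stats import pearsonr

favourable_coverage = [12, 18, 9, 25, 30, 22]   # clips per month
reputation_score    = [61, 64, 60, 68, 71, 67]  # e.g. survey index

r, p_value = pearsonr(favourable_coverage, reputation_score)
print(f"r = {r:.2f}, p = {p_value:.3f}")
```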

The reliance on clips is not surprising, but worrying as the authors point out:

“Despite much discussion and debate, evaluation methods have not advanced beyond various forms of content analysis, which is another way of measuring outputs rather than outcomes. The authors suggest that while content analysis is the state of the media measurement art, it ignores all other public relations functions, thereby reinforcing the notion that PR is nothing more than publicity and media relations. This does a disservice to the increasingly sophisticated and complex nature of the profession. Clearly much more work remains to be done in the field of evaluation.”

Thanks to the metricsman blog for bringing this study to my attention; in this post he also offers an interesting analysis of the results.

Glenn

June 19, 2006 at 7:07 am Leave a comment

Combining Qualitative and Quantitative Methods for Evaluation – Part 2

Further to my earlier post on combining qualitative and quantitative methods for evaluation, I came across some interesting resources on this subject:

An article, “Methodological Triangulation, or How To Get Lost Without Being Found Out”, with an interesting review of common errors in triangulation.

The “User-Friendly Handbook for Mixed Method Evaluations” – good practical advice on mixing evaluation methods.

Glenn

June 6, 2006 at 6:24 am Leave a comment

Mapping Stakeholder Networks

I’ve written previously about measuring networks and how useful it can be for an organisation to assess the different links between its key stakeholders. I came across another approach to mapping stakeholders which has some very good elements: the Clarity concept identifies stakeholders and maps relationships based on the attitude, importance and influence of publics, and on the strength of the relationships between them. You can read further about this concept on the stakeholder relationship management blog.
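
A map of this kind is easy to prototype in code. The sketch below is my own rough approximation using the networkx library – the stakeholders and scores are invented, and this is not Clarity’s actual methodology:

```python
# Rough stakeholder-map prototype: nodes carry attitude, importance
# and influence scores; edge weights represent relationship strength.
# All names and scores are invented for illustration.
import networkx as nx

G = nx.Graph()
# attitude -5 (hostile) .. +5 (supportive); importance/influence 1..10
stakeholders = {
    "Regulator":   {"attitude": -1, "importance": 9, "influence": 8},
    "Employees":   {"attitude":  3, "importance": 8, "influence": 5},
    "Local media": {"attitude":  1, "importance": 5, "influence": 7},
}
for name, attrs in stakeholders.items():
    G.add_node(name, **attrs)

# Relationship strength: 1 (weak) .. 10 (strong)
G.add_edge("Organisation", "Regulator", strength=4)
G.add_edge("Organisation", "Employees", strength=8)
G.add_edge("Regulator", "Local media", strength=6)

# Flag important stakeholders with an unfavourable attitude
for name, attrs in stakeholders.items():
    if attrs["importance"] >= 8 and attrs["attitude"] < 0:
        print("Priority relationship:", name)
```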

Although this is essentially a proprietary solution (offered by Clarity CS), what I appreciate is that the people behind it share all their thoughts, theories and ideas on the subject, so you can learn a lot from their approach – at no cost.

Disclaimer: there are no links between the authors of this blog and Clarity CS (although one of the founders of the solution, Jon White, was a lecturer on my master’s programme).

Glenn

May 30, 2006 at 9:04 am 2 comments

Common Myths of Evaluation

Looking further into outcome evaluation in different fields, I came across a very good resource with a rather quaint title: The Basic Guide to Outcomes-Based Evaluation for Nonprofit Organizations with Very Limited Resources. It is interesting to note the similarities between my own top ten excuses for not evaluating and the common myths about evaluation that the author lists:

Myth: Evaluation is a complex science. I don’t have time to learn it! No! It’s a practical activity. If you can run an organization, you can surely implement an evaluation process!

Myth: It’s an event to get over with and then move on! No! Outcomes evaluation is an ongoing process. It takes months to develop, test and polish — however, many of the activities required to carry out outcomes evaluation are activities that you're either already doing or you should be doing. Read on …

Myth: Evaluation is a whole new set of activities – we don’t have the resources! No! Most of the activities in the outcomes evaluation process are normal management activities that need to be carried out anyway in order to evolve your organization to the next level.

Myth: There’s a “right” way to do outcomes evaluation. What if I don’t get it right? No! Each outcomes evaluation process is somewhat different, depending on the needs and nature of the nonprofit organization and its programs. Consequently, each nonprofit is the “expert” at its outcomes plan. Therefore, start simple, but start, and learn as you go along in your outcomes planning and implementation.

Myth: Funders will accept or reject my outcomes plan. No! Enlightened funders will (or at least should) work with you, for example, to polish your outcomes, indicators and outcomes targets. Especially if yours is a new nonprofit and/or a new program, you will very likely need some help – and time – to develop and polish your outcomes plan.

Myth: I always know what my clients need – I don’t need outcomes evaluation to tell me whether I’m really meeting those needs. No! You don’t always know what you don’t know about your clients’ needs. Outcomes evaluation sets up structures in your organization so that you and your organization stay focused on clients’ current needs. Also, you won’t always be around – outcomes help ensure that your organization remains focused on the most appropriate, current needs of clients even after you’ve left.

Glenn

May 23, 2006 at 9:57 am 4 comments

PR Measurement and Google Trends

The new “Google Trends” product is a nice complementary tool for monitoring the “noise” out there on an issue. It may prove useful for organisations that want to track general interest in a trend, or in related issues, to see if there is any correlation.

What is interesting is that we can get an idea of the impact of events and policy announcements on issues – how they spike interest in an issue and thus increase search volumes and news stories.

In the graph below, I have charted WWF (blue line) and climate change (red line). The letters (A, B, C, etc.) indicate major news stories on events and announcements. What is interesting is the peak around certain events (like D, a climate change conference), where searches on both WWF and climate change rise slightly. There are also unexplained peaks in the search and news volumes.

What can we conclude from this? Firstly, the search and news volumes for WWF and climate change often mirror each other in their peaks and troughs, which should be reassuring to the organisation, as climate change is one of its key campaigns. Secondly, it supports the notion that events and policy announcements influence a public’s awareness of and interest in an issue, and may help track and explain these spikes. I wrote about this before in what I termed the “Kylie effect“. Thirdly, it shows that there are still spikes in public interest that are not traceable to news or policy announcements (look at the spike in early 2004 for both WWF and climate change). Could this be partly explained by the fact that Google Trends does not include blogs in its analysis? Blogs could be a source of some peaks (i.e. a blogger writes about an issue and links to a story or the WWF site, sparking interest in the issue). Steve Rubel points out this weakness in the analytical power of the new tool. KD Paine has also written about the tool and PR measurement.
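
Google Trends lets you export the underlying series, so a basic check of how closely the two lines move together – and where the unexplained spikes sit – is straightforward. The sketch below assumes a hypothetical CSV export with one column per term:

```python
# Sketch: quantify how closely search interest in "WWF" and "climate
# change" move together, and flag unusual spikes. Assumes the series
# have been exported from Google Trends into a hypothetical CSV with
# columns: date, wwf, climate_change.
import pandas as pd

df = pd.read_csv("trends_export.csv", parse_dates=["date"])

# How closely do the two series mirror each other overall?
print("Correlation:", round(df["wwf"].corr(df["climate_change"]), 2))

# Weeks more than two standard deviations above the mean are
# candidates for the unexplained peaks discussed above
for col in ["wwf", "climate_change"]:
    threshold = df[col].mean() + 2 * df[col].std()
    spikes = df.loc[df[col] > threshold, ["date", col]]
    print(f"Unusual peaks in {col}:")
    print(spikes)
```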

Glenn

May 19, 2006 at 12:15 pm 2 comments

Online Surveys and Best Practices

As I set up and run online surveys using the Benchpoint system (commercial plug), I am always interested to see examples of best practice in online surveying. Sometimes we also come across examples of worst practice. Below I’ve copied in a screenshot of a survey I was asked to complete. It breaks a simple rule of surveying that dates back to phone and street interviewing: personal and demographic information should normally be requested at the end of a survey, as people have been found to be more comfortable answering such questions once they are familiar with the survey theme and, in face-to-face surveying, with the interviewer. The amount of personal information this survey asks for before you even “start survey” is excessive, and it is the reason I didn’t complete it. Hopefully general standards in online surveying will emerge to avoid this kind of problem.
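
If a survey is defined as structured data, the rule is easy to enforce mechanically. The sketch below is purely illustrative – the section names and fields are my own invention, not Benchpoint’s format:

```python
# Hypothetical survey definition: sections are presented in list
# order, so a simple check can enforce the rule that demographic
# questions come last. All names and fields are illustrative only.
SURVEY_SECTIONS = [
    {"name": "Experience with the service", "demographic": False},
    {"name": "Satisfaction ratings",        "demographic": False},
    {"name": "About you (age, role, ...)",  "demographic": True},
]

def demographics_last(sections):
    """Return True if no topic section follows a demographic one."""
    seen_demographic = False
    for section in sections:
        if section["demographic"]:
            seen_demographic = True
        elif seen_demographic:
            return False  # topic questions after demographics began
    return True

assert demographics_last(SURVEY_SECTIONS), \
    "Move demographic questions to the end of the survey"
```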

Glenn

April 21, 2006 at 12:01 pm Leave a comment

Combining Qualitative and Quantitative Methods for Evaluation

In evaluation, we often choose between qualitative (e.g. focus group) and quantitative (e.g. survey) methods. In fact, we should always try to use both approaches. This is what is referred to as triangulation: the combination of several research methods in the study of the same phenomenon. My experience has been that a combination of research methods provides more data to work with and, ultimately, a more accurate evaluation. In a recent project, I was able to use interviews combined with surveys to assess participant reaction to training, and I found that the information we could draw from the interviews was complementary – and of added value – to what we discovered through the surveys.

Even if you are only conducting online surveys, the inclusion of open questions (where respondents enter comments in a free text field) is not quite triangulation, but it will provide you with insight into the phenomenon being evaluated. In a recent online survey project, we were able to clarify important issues by sorting and classifying the comments made in open questions. This proved invaluable and gave the evaluation heightened status within the organisation.
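
As an illustration, a first pass at sorting open-question comments can be simple keyword coding, followed by a human review. The themes, keywords and comments below are invented:

```python
# Sketch: first-pass coding of open-question survey comments into
# themes by keyword matching, then a tally of theme frequency.
# Themes, keywords and comments are invented for illustration;
# real coding still needs a human review pass.
from collections import Counter

THEMES = {
    "trainer":   ["trainer", "presenter", "instructor"],
    "materials": ["handout", "slides", "materials"],
    "logistics": ["room", "schedule", "timing", "venue"],
}

comments = [
    "The trainer was excellent but the room was too small.",
    "Slides were hard to read.",
    "Great timing and pace.",
]

tally = Counter()
for comment in comments:
    text = comment.lower()
    for theme, keywords in THEMES.items():
        if any(k in text for k in keywords):
            tally[theme] += 1

print(tally.most_common())  # themes ranked by frequency
```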

Glenn

March 13, 2006 at 10:34 pm 21 comments

PR Measurement – New Dictionary

The USA’s Institute for Public Relations, which offers all of its research and publications free on the web, has published a new edition of the Dictionary of Public Relations Measurement and Research (pdf – 198kb), edited by Dr. Don W. Stacks of the University of Miami.

We all measure different things and we measure them all differently (and not always intelligently). Not a good idea if you want to benchmark or compare data.

Don Stacks and his colleagues have done a magnificent job in assembling, codifying and defining most of the common terms we come across when talking about measurement, from “Algorithm” to “Z-score”.

It’s a giant step forward in defining some standard methodologies which everyone can use.

The Institute’s website, http://www.instituteforpr.org also contains a large number of downloadable papers on every aspect of communications measurement and evaluation. Well worth a visit.

Richard

March 7, 2006 at 7:41 pm Leave a comment
