The 3 Things You Should Get Right If You Use Social Media Listening

Social media listening has many names; the most accurate term for this new marketing discipline is probably Active Web Listening. “Web” is more appropriate than “social” because when people share their views about brands, organisations and people, they do so not only on the well-known social media sites but also on blogs, forums, and commercial websites such as Amazon. Sometimes we also want to listen to what is in the news, i.e. editorial content. The word “active” emphasises that it is not enough to just listen; you have to act on what you hear, which presupposes that you understand what people are saying and what the issues are. Having said all that, the most popular term people type into Google when looking for such solutions is 'social media monitoring'.

Now that we have the nomenclature out of the way, let’s discuss how to do social media listening properly; there are really just 3 things we need to get right:

  1. Noise

  2. Sentiment Accuracy

  3. Drill-down capability
     

Let’s have a closer look at these 3 things one by one:

  1. Noise
    Any query used initially to monitor a subject or product category will almost certainly return posts that are not relevant to that subject; sometimes irrelevant posts make up 80%-90% of everything harvested from the web. For example, a query with the single search term Apple (the computer brand) will also return plenty of posts about apples, the fruit. The usual way to get rid of this noise is a Boolean query, something along the lines of: Apple AND (computer OR phone OR tablet) NOT (taste OR pie OR recipe), and so on. A minimal sketch of such a filter appears after this list.

  2. Sentiment Accuracy
    This is probably the most difficult problem to solve when it comes to making sense of social media. Most end clients (brands) of social media monitoring and analytics have developed ways to extract value from their existing dashboards without making use of sentiment analytics. In other words, they know how many posts talk about their brand and their competitors, but not how many of those posts are negative and how many are positive. They also have no idea what their Net Sentiment Score is when benchmarked against their competitors (NSS is a very useful metric and a DigitalMR trademark). We believe they choose to ignore sentiment simply because none of their suppliers can deliver sentiment accuracy above 60%.

    [Image: negative / neutral / positive sentiment scale]

    That changed on 31 December 2014, when DigitalMR completed the 2.5-year development of listening247®. Using a unique combination of machine learning algorithms and computational linguistic methods, the DigitalMR R&D team was able to achieve sentiment accuracy above 85% in multiple languages and product categories. A machine learning model usually delivers 70%-75% sentiment accuracy initially, and then, with continuous fine-tuning over about a month, climbs slowly but surely to 85% and even higher.

    The key to establishing sentiment accuracy is to have humans confirm that they agree with the sentiment the algorithms assign to the posts they process. We take random samples and ask the end user (client) or an independent third party to go through the posts and annotate sentiment manually. We then compare the output of listening247® with the human annotations and establish the degree of agreement; the toy sketch after this list shows both the NSS calculation and this agreement check. The caveat is that sentiment accuracy can never be 100%, since even humans disagree with each other 20%-30% of the time because of sarcasm and general ambiguity.

  3. Drill-down capability
    The drill-down capability depends on two things: a drill-down dashboard and an appropriate taxonomy describing the topics discussed around a subject or product category. It is fairly easy to drill down into posts about a single brand, then within that brand into a key term used in the discussions, and then within that term into only the negative posts. What is not easy is to look at the posts around a topic or discussion driver, drill down to see the sub-topics within that topic, and then drill down further to see what people are saying about one attribute (of the many) within the chosen sub-topic. After all that, we can still look at a specific brand, the sentiment, and the source of the posts at the attribute level; a total of 8 drill-down levels is possible with such an approach (a minimal sketch of this successive filtering also follows this list).
     
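To make the Boolean filtering in point 1 concrete, here is a minimal Python sketch of a noise filter for the Apple example. The include and exclude word lists are illustrative assumptions rather than a real production query; an actual monitoring platform would apply the Boolean logic at harvesting time instead of filtering posts afterwards like this.

```python
import re

# Illustrative posts only; in practice these would be harvested from the web.
posts = [
    "My new Apple tablet arrived today and the screen is great",
    "This apple pie recipe tastes amazing",
    "Thinking of switching from my old laptop to an Apple computer",
]

# Rough equivalent of: Apple AND (computer OR phone OR tablet) NOT (taste OR pie OR recipe)
INCLUDE_ANY = {"computer", "computers", "phone", "tablet", "laptop", "iphone", "ipad", "macbook"}
EXCLUDE_ANY = {"taste", "tastes", "pie", "recipe", "fruit", "orchard"}

def is_relevant(post: str) -> bool:
    """Keep a post only if it mentions Apple in a device context and none of the 'fruit' words."""
    words = set(re.findall(r"[a-z]+", post.lower()))
    return ("apple" in words
            and bool(words & INCLUDE_ANY)
            and not (words & EXCLUDE_ANY))

relevant = [p for p in posts if is_relevant(p)]
print(relevant)  # the two device posts survive; the pie post is filtered out as noise
```
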
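For point 2, the sketch below shows the two calculations on toy data: a net sentiment score and an accuracy check against human annotation. The exact NSS formula is DigitalMR's own; the positives-minus-negatives-over-total version used here is only the common convention, and the labels are invented for illustration.

```python
from collections import Counter

# Toy labels: what the classifier assigned to a random sample of posts...
model_labels = ["positive", "negative", "neutral", "positive", "negative", "positive"]
# ...and what human annotators said about the same posts.
human_labels = ["positive", "negative", "positive", "positive", "negative", "neutral"]

def net_sentiment_score(labels):
    """(positive - negative) / total posts, using a common convention for an NSS-style metric."""
    counts = Counter(labels)
    total = len(labels)
    return (counts["positive"] - counts["negative"]) / total if total else 0.0

def agreement(model, human):
    """Share of posts where the model's sentiment matches the human annotation."""
    return sum(m == h for m, h in zip(model, human)) / len(human)

print(f"NSS (model): {net_sentiment_score(model_labels):+.2f}")
print(f"Sentiment accuracy vs. human annotators: {agreement(model_labels, human_labels):.0%}")
```
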
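And for point 3, a minimal sketch of the successive filtering that a drill-down dashboard performs. The field names (brand, topic, sub-topic, attribute, sentiment, source) are an assumed taxonomy for illustration; a real one would be built for the specific product category, and the drilling would normally happen in the dashboard rather than in code.

```python
# Hypothetical posts after annotation; the field names are an assumed taxonomy, not DigitalMR's.
posts = [
    {"brand": "BrandA", "topic": "service", "subtopic": "support", "attribute": "response time",
     "sentiment": "negative", "source": "forum", "text": "Waited a week for a reply"},
    {"brand": "BrandA", "topic": "service", "subtopic": "support", "attribute": "response time",
     "sentiment": "positive", "source": "twitter", "text": "Support got back to me in an hour"},
    {"brand": "BrandB", "topic": "price", "subtopic": "value", "attribute": "cost",
     "sentiment": "negative", "source": "blog", "text": "Far too expensive for what you get"},
]

def drill_down(data, **filters):
    """Each keyword argument narrows the post set by one drill-down level."""
    return [p for p in data if all(p.get(k) == v for k, v in filters.items())]

# Topic -> sub-topic -> attribute -> brand -> sentiment -> source:
# six drill-down levels applied in one go.
for post in drill_down(posts, topic="service", subtopic="support", attribute="response time",
                       brand="BrandA", sentiment="negative", source="forum"):
    print(post["text"])
```
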

A delegate at the MRS Healthcare research conference last week in London said that if anyone could take thousands of posts in any language and analyse them for topics and sentiment, they would consider it a superpower equal to that of superheroes such as Superman and Spider-Man. It is quite telling that a colleague in the market research business did not even know this is possible, and that the only superpower needed to achieve it is machine learning capability.

Here is where the magic comes in (if you get the above 3 things right): social media listening takes unstructured text consisting of thousands of posts and gives it structure, which allows a quantitative analysis and interpretation that would otherwise be impossible, and at the same time lets you drill down to a few homogeneous posts that you can read for a qualitative take and further probing.
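As a toy illustration of that structuring step (assuming the posts have already been annotated with brand and sentiment): the counts are the quantitative view, and the handful of posts behind any single cell are what you read for the qualitative take.

```python
from collections import Counter

# Hypothetical posts that have already been annotated with brand and sentiment.
annotated = [
    {"brand": "BrandA", "sentiment": "positive", "text": "Love it"},
    {"brand": "BrandA", "sentiment": "negative", "text": "Stopped working after a week"},
    {"brand": "BrandB", "sentiment": "positive", "text": "Great value for money"},
    {"brand": "BrandA", "sentiment": "negative", "text": "Support never replied"},
]

# Quantitative view: counts per brand and sentiment.
counts = Counter((p["brand"], p["sentiment"]) for p in annotated)
for (brand, sentiment), n in sorted(counts.items()):
    print(f"{brand} / {sentiment}: {n}")

# Qualitative view: read the handful of posts behind a single cell of that table.
cell = [p["text"] for p in annotated
        if p["brand"] == "BrandA" and p["sentiment"] == "negative"]
print(cell)
```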

Can market research get any better? What do you think?
