Precision and Recall in Social Listening

For the last 4 years, we have been talking about the importance of sentiment accuracy in social listening. When people asked: “What is sentiment accuracy?” we responded along these lines:

• 80% sentiment accuracy means: if you are given 100 posts from the web about your brand that are annotated with positive, negative or neutral sentiment, you will agree with 80 of them and disagree with 20

OR

• 80% sentiment accuracy means: if you are given 100 positive posts from the web about your brand, only 80 will be positive; the rest will be negative, neutral or irrelevant.

We then went on to explain that 100% sentiment accuracy is not attainable because even humans do not agree among themselves. In 10%-30% of cases, there may be a lack of consensus on whether a post is positive, negative or neutral. If we can accept that ambiguity will always exist, due to sarcasm and other complex forms of expression, then how do we expect a machine learning algorithm to agree with all of the humans checking the data?

At this point we should also explain that in social listening, the most popular way to check sentiment accuracy is to extract a random sample of 1000 posts and have 2-3 humans manually annotate them with sentiment. We then compare the sentiment that the algorithm has assigned to each of the posts and determine the percent agreement between the human curators and the algorithm.
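To make this concrete, here is a minimal sketch (in Python) of that kind of agreement check; the posts, the labels and the majority-vote rule are invented for illustration and are not a prescribed industry standard.

```python
from collections import Counter

# Hypothetical sentiment labels from three human annotators and the algorithm
# for a tiny sample of five posts (a real check would use ~1000 posts).
human_1   = ["positive", "negative", "neutral", "positive", "negative"]
human_2   = ["positive", "negative", "neutral", "neutral",  "negative"]
human_3   = ["positive", "neutral",  "neutral", "positive", "negative"]
algorithm = ["positive", "negative", "positive", "positive", "negative"]

def majority_label(labels):
    """Return the label most annotators chose (ties resolve to the first seen)."""
    return Counter(labels).most_common(1)[0][0]

# Percent agreement between the algorithm and the human majority vote.
matches = sum(
    majority_label([h1, h2, h3]) == machine
    for h1, h2, h3, machine in zip(human_1, human_2, human_3, algorithm)
)
print(f"Agreement with human majority: {matches / len(algorithm):.0%}")  # 80%
```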

As clients of social media monitoring become more sophisticated, they start asking questions like: “When you say accuracy do you mean precision or recall?” If the vendor is one of the usual suspects that offer social media monitoring tools, then chances are that they will not understand the question. For them, we share here a simple Wikipedia definition: “In simple terms, high precision means that an algorithm returned substantially more relevant results than irrelevant, while high recall means that an algorithm returned most of the relevant results.” Another more detailed definition provided on Wikipedia is this:


“In a classification task, the precision for a class is the number of true positives (i.e. the number of items correctly labelled (by the algorithm) as belonging to the positive class) divided by the total number of elements labelled (by the algorithm) as belonging to the positive class (i.e. the sum of true positives and false positives - which are items incorrectly labelled as belonging to the class). Recall, in this context, is defined as the number of true positives divided by the total number of elements that actually belong to the positive class (i.e. the sum of true positives and false negatives - which are items that were not labelled as belonging to the positive class but should have been).”
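For readers who like to see the arithmetic, here is a minimal sketch of how precision and recall for the positive sentiment class could be computed straight from these definitions; the labels are invented purely for illustration and do not come from any real dataset or tool.

```python
# Illustrative human ("true") labels and algorithm predictions for six posts.
true_labels = ["positive", "positive", "negative", "neutral", "positive", "negative"]
predicted   = ["positive", "neutral",  "positive", "neutral", "positive", "negative"]

def precision_recall(true_labels, predicted, target="positive"):
    """Precision and recall for one class, following the definitions above."""
    true_pos  = sum(t == target and p == target for t, p in zip(true_labels, predicted))
    false_pos = sum(t != target and p == target for t, p in zip(true_labels, predicted))
    false_neg = sum(t == target and p != target for t, p in zip(true_labels, predicted))
    precision = true_pos / (true_pos + false_pos) if true_pos + false_pos else 0.0
    recall    = true_pos / (true_pos + false_neg) if true_pos + false_neg else 0.0
    return precision, recall

precision, recall = precision_recall(true_labels, predicted)
print(f"positive class: precision={precision:.2f}, recall={recall:.2f}")
# Three posts were predicted positive and two of them really are -> precision 0.67;
# three posts really are positive and two of them were found   -> recall 0.67.
```

The same calculation can be repeated for the negative and neutral classes, which is why a single headline “accuracy” figure hides quite a lot of detail.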


Although for simplicity’s sake we have been calling it “accuracy” for the past 4 years, what we were really describing was precision. Now that consumer insights managers have started getting involved in social listening, we vendors need to adapt the way we talk and explain the new terms. We should add here that precision and recall are not only relevant for measuring sentiment accuracy; they can also be used to measure semantic accuracy, i.e. how accurately a solution can report the topics and themes of online conversations.

I also doubt that these definitions are on the radar of ESOMAR, MRS, MRA or CASRO. If that is indeed the case, I suggest that the market research associations start defining how the accuracy of social listening data should be measured, for the sake of all the market research companies and clients looking for guidance. If they need help, we, the practitioners of social listening and analytics, are here to offer a helping hand in defining the market research methods of the future.

Image source: By Walber (Own work) [CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons

The Six 3-Letter Acronyms You Should Thoroughly Understand If You Are in Business


A couple of years ago we published an eBook with the title “The five most important social media acronyms”. When I was asked to be the keynote speaker at an event for entrepreneurs in London a few weeks ago, the notion of the 5 TLAs (Three-Letter Acronyms) resurfaced and ended up being central to the talk. What the entrepreneurs wanted to know was how to come up with and execute their own social media strategy; I was very pleased to realise that the eBook content was not only still valid, it had also been somewhat predictive at the time it was published. There was a sixth TLA that had to be added though… POC (Private Online Community)… so here are the (now) 6 TLAs that every business should understand and employ “seamlessly integrated” (I know it’s a tired cliché, but it is very true and necessary in this case):

SEO, SEM, CMS, CRM, SMM, POC

The best way to showcase the importance of these 6 TLAs, and the way they should be “seamlessly integrated”, is to weave them into a short story. Here goes:

“Fiona, the marketing director of Sunbucks – a chain of coffee shops – wants to look at web listening for one of her company’s brands. She googles “web listening” and DigitalMR comes up first on the first page of Google (1. SEO / 2. SEM). She clicks on the link that takes her to the social listening page on the DigitalMR website. Once there, Fiona watches the video clip and reads a few lines about the benefits and differentiators of listening247. She then clicks on the call-to-action button to request a demo. She is taken to a landing page that was created by Ellen – a DigitalMR marketer who is not a scripter/programmer but simply knows how to use the intuitive and simple CMS (3. CMS). Once on the landing page, she enters her details in order to request a demo for social media monitoring (5. SMM). Fiona is now registered as a lead in the CRM (4. CRM) and Ellen contacts her via email in order to arrange an online demo with one of the DigitalMR consultants. During the demo, Colin – the DigitalMR consultant – demonstrates how social listening and social analytics are done, and explains the importance of discussion themes, sub-themes, and sentiment accuracy. Fiona asks about the possibility of finding influencers and using them as brand ambassadors. This prompts Colin to explain the power of integrating social listening with an online community (6. POC) for co-creation and customer advocacy.”

A more generic way to explain the use and connection of the 6 TLAs is outlined in the list below:

  1. Be present at the Zero Moment of Truth (ZMOT) when prospective customers will search for products and services in your sector (SEO/SEM). Become part of the conversation.

  2. With a simple and intuitive CMS, maintain full control of your website’s content, updating it as often as possible in order to continuously optimise your SEO. A blog on the website serves this purpose quite well.

  3. After prospects find your digital content through online search (e.g. Google) they should provide their contact details in one of your inbound/content marketing landing pages in order to access your valuable content. The lead contact details are stored in your CRM so that you can nurture them toward a sale.

  4. In order to be part of the conversation on social media, you have to understand which segments of your clients/prospects are out there posting, responding or just reading posts. You also need to understand what the hot topics are so that you can produce content around those topics/themes. The only way to do this is by having access to a social media monitoring and analytics tool (SMM).

  5. Even though this is already impressive enough, it gets better: creative customers/prospects or influencers in your sector can be discovered through social listening and invited to join online communities (POC), so that they can help create digital content that will resonate with their peers. Not only that, they can then share with their friends and networks the digital content that resulted from the co-creation.

With the addition of online communities we complete a full circle, connecting back to being part of the conversation when prospective customers ask a question or look for valuable content using search engines. The content created in an online community has a much better chance of being sought after at the ZMOT, since it was created by the same kind of people it is targeting.

With an approach like the one described above, an organisation can reach millions of customers without having to use any of the traditional mass media. Amplified customer advocacy is definitely a lot cheaper than TV commercials! On top of that, the messaging is more believable simply because it is not an advertisement; it is shared by other customers of the product or service, who can be trusted more than brand advertising.

Does this sound too good to be true?

Is Social Media Analytics Possible Without Taxonomies?


If you are in the insights business, the answer to this question is a definitive NO. If you are in PR, or in another adjacent marketing discipline… then possibly your answer will be YES.

Our answer is still a NO. My advice to you Mr. PR Manager: call your colleague in consumer insights and ask for help. 


Let us first establish what 'social media analytics' and 'taxonomy' mean. According to Wikipedia:

  • Social media analytics is: “Measuring + Analysing + Interpreting interactions and associations between people, topics and ideas”. Some people use the term “social analytics” interchangeably with “social media analytics”. Wikipedia tells us otherwise: “The practice of Social Analytics is to report on tendencies of the times. It is a philosophical perspective that has been developed since the early 1980s”; in other words, it is something different. Of course, there are various definitions out there that make the two terms part of each other, such as the Gartner definition.

  • Taxonomy is: “the practice and science of classification. The word is also used as a count noun: a taxonomy, or taxonomic scheme, is a particular classification. Many taxonomies have a hierarchical structure, but this is not a requirement.”

Taxonomies in a social listening context are used to describe a product category, an industry vertical, or simply a subject, like the 2015 parliamentary elections in the UK. When used to analyse posts from social media, they act as “dictionaries” which include the words people use to discuss the subject/category online. Taxonomies can be flat or hierarchical; for market researchers there is more value in a hierarchical taxonomy because of the drill-down capability it affords.

A taxonomy could be created to represent a logical structure of how a market research analyst sees and understands the product category; however, this approach is not good enough when the taxonomy is to be used for social media analytics. It should instead be derived directly from the harvested social media posts that need to be analysed.

In social listening, when millions of posts about a product category in a specific language need to be analysed in order to extract insights, the following disciplines and skill sets are required:

  • Machine learning - to annotate sentiment with the highest possible accuracy
  • A taxonomy - to identify the themes, sub-themes and topics of conversation

Some of these processes are automated, some are semi-automated and some are manual. The good news is that when manual work is required it is mainly done during the set-up of a social media monitoring programme.

Most social media monitoring tools - including the really popular ones - do not actually make use of a taxonomy; all a user can do is enter search terms. One could speculate that this approach is equivalent to a flat taxonomy, but unfortunately it is not; a taxonomy implies that multiple relevant words and phrases roll up under a topic or theme. In a “search term only” approach, the analysis will only cover posts that contain the specific search term. So if a user wanted to look at social media sentiment within topics and sub-topics, that would not be possible without a hierarchical taxonomy.
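To illustrate the difference, here is a minimal sketch of what a small hierarchical taxonomy might look like in code and how posts roll up under its themes; the category (coffee shops), the themes and the keyword lists are invented for the example, and a real production taxonomy would be far larger.

```python
# A toy hierarchical taxonomy for a coffee-shop category:
# theme -> sub-theme -> the words and phrases people actually use online.
taxonomy = {
    "product": {
        "taste":   ["bitter", "smooth", "burnt", "delicious"],
        "variety": ["menu", "seasonal", "flavours", "new drink"],
    },
    "service": {
        "staff":     ["barista", "rude", "friendly", "helpful"],
        "wait time": ["queue", "slow", "waited", "fast service"],
    },
}

def tag_post(post, taxonomy):
    """Return every (theme, sub-theme) whose keywords appear in the post."""
    text = post.lower()
    return [
        (theme, sub_theme)
        for theme, sub_themes in taxonomy.items()
        for sub_theme, keywords in sub_themes.items()
        if any(keyword in text for keyword in keywords)
    ]

print(tag_post("The barista was friendly but I waited 20 minutes", taxonomy))
# [('service', 'staff'), ('service', 'wait time')]
```

With a flat list of search terms the post above would simply count as a mention; with the hierarchy, its sentiment can later be reported at the theme and sub-theme level.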

One scenario remains to be investigated in answering the question in the title of this blog post:

What if a user only wants to analyse sentiment in social media? Well, if the point of the research were one specific notion, keyword or high-level term, then perhaps that would be the one exception where a taxonomy is not necessary for social media analytics; however, if a whole product category were monitored without a taxonomy, it would be a lost opportunity, because the user would not know what subject the sentiment was about.

Taxonomies are a very broad topic to which we will probably need to dedicate a number of blog posts similar to this one. They are also very necessary for social media listening and analytics; for example, a huge opportunity lies out there for the company that will own detailed hierarchical taxonomies of all the major product categories sold in supermarkets.

Like we always say, ‘there will be a day when marketing directors will not be able to perform their jobs without a social media analytics dashboard on their computer, or tablet for that matter’. Tell us what you think about taxonomies: do you think they represent an opportunity, or rather an insurmountable challenge? After all, it takes refined computing processes and a considerable effort from a small army of experienced and smart people to create one!


Social Listening and Online Communities: 1+1=3?


Two out of the top 3 trends in market research repeatedly reported by Greenbook’s GRIT report are social listening and online communities. The third is mobile research which, being a method of collecting data for surveys, can be part of online communities anyway.

We have written about private online communities and social media listening separately many times before, but this blog post is dedicated to the power of integrating the two disciplines.

Back in February, the CEO of Kantar, Eric Salama, spoke at the Insight Innovation Exchange conference in Amsterdam about his view of the future of market research. One of the concepts that stuck with me was that in the future, market research will be divided into “learning applications” and “action applications”. My interpretation of these two types of apps is that the former are pure market research as we know it, while the latter are adjacent marketing activities that today are not governed by the ESOMAR or MRS codes of conduct. Examples of action applications are programmatic advertising, customer advocacy, and agile customer engagement.

Two of the following three ways to integrate social listening and online community platforms are action applications, and one is a learning application. Let’s see if you agree that 1+1 will equal more than 2 in these three cases:

  1.  Member recruitment for online communities
    For the first time in the history of marketing and market research, we can now find respondents for ad-hoc research, or members for communities, based on their perceptions, without having to use a screener questionnaire. We can use social listening to gather all the posts from the web that are aligned with an idea, agree with a concept or express love for a brand. Because the opinions expressed in social media posts are unsolicited, they are of better quality than those expressed in a screener questionnaire answered by people from a consumer panel. Panelists have an incentive to figure out how to answer “right” so that they will be invited to participate in the survey (so-called expert respondents).
     
  2. Listen-probe-listen-probe
    A virtuous circle can be created by integrating listening and communities. A brand or organisation can first “listen” to what people say on the web about the subjects of interest, and then engage with the members of their private online communities to ask questions (probe) about what they learnt from the harvesting and analysis of online posts. Through the probing they are bound to discover information that will improve the way they do their social media monitoring. And so on and so forth… Every time they complete a listen-probe-listen cycle, new valuable insights can be extracted that were never attainable before.
     
  3. Amplified customer advocacy
    Product category influencers can be identified through the content of their online posts and the size of their networks. They can then be invited to join an exclusive private online community for co-creation of digital content and customer advocacy amplification i.e. the sharing of the digital content with their friends and network.

Connecting the dots is a very powerful notion in market research. As shared on this blog several times, we firmly believe that a true business insight is more likely to be the result of synthesising data from multiple sources than of analysing a (small) dataset to death. The insights expert is a necessary part of this equation (1+1=3). There is also a new breed of human skill set that is becoming more and more integral to those market research agencies that “get it”: the data scientist, who is, among other things, a machine learning specialist not daunted by tera-, peta-, exa- or zettabytes. Thoughts?


The 3 Things You Should Get Right If You Use Social Media Listening


Social media listening has many names; the most accurate term to describe this new marketing discipline is probably Active Web Listening. “Web” is more appropriate than “social” because when people share their views about brands, organisations and people, they do so not only on the well-known social media sites but also on blogs, forums, and commercial websites (such as Amazon). Sometimes we also want to listen to what is in the news – editorial content. The word “active” emphasises that it is not enough to just “listen”; you have to do something about it, which assumes that you understand what people are saying and what the issues are. Having said all that, the most popular term used in Google searches by people looking for such solutions is ‘social media monitoring’.

Now that we have the nomenclature out of the way, let’s discuss how to do social media listening properly; we really need to pay attention to just 3 things:

  1. Noise

  2. Sentiment Accuracy

  3. Drill-down capability
     

Let’s have a closer look at these 3 things one by one:

  1. Noise
    Any given query that is initially used to monitor a subject or product category will almost certainly produce posts that are not relevant to that subject. Sometimes the irrelevant posts are 80%-90% of the total posts harvested from the web. For example, if we use a query with just one search term, e.g. Apple (the computer brand), we will get lots of posts about apples, the fruit. The usual way to get rid of noise is a Boolean query, something along the lines of: Apple AND (computers OR phone OR tablet) NOT taste… etc.

  2. Sentiment Accuracy
    This is probably the most difficult problem to solve when it comes to making sense of social media. Most end-clients (brands) of social media monitoring and analytics have developed ways to extract value from their existing social media monitoring dashboards without making use of sentiment analytics. In other words, they know how many posts are talking about their brand and its competitors, but they do not know how many of these posts are negative and how many are positive. They also have no idea what their Net Sentiment Score is when benchmarked against their competitors (NSS is a very useful metric and a DigitalMR trademark). We believe the reason they chose to ignore sentiment is simply that none of their suppliers could deliver sentiment accuracy above 60%.


    This ended on December 31st 2014, when DigitalMR completed the 2.5-year development of listening247. Through a unique combination of machine learning algorithms and computational linguistics methods, the DigitalMR R&D team was able to achieve sentiment accuracy of over 85% in multiple languages and product categories. A machine learning model usually delivers 70%-75% sentiment accuracy initially; with continuous fine-tuning (for about a month) it then climbs slowly but surely to 85% and even higher.

    The key to establishing sentiment accuracy is for a number of humans to agree with the sentiment the algorithms have assigned to the processed posts. We use random samples and ask the end user (client) or an independent third party to go through the posts and annotate sentiment manually. We then compare the results of listening247 with the human annotations and establish the degree of agreement. The caveat here is that sentiment accuracy can never be 100%, since even humans disagree 20%-30% of the time due to sarcasm and general ambiguity.

  3. Drill-down capability
    The drill-down capability depends on two things: a drill-down dashboard and an appropriate taxonomy that describes the topics discussed around a subject or product category. It is fairly easy to drill down into posts about a single brand, then within that brand to drill down into a key term used in the discussions, and then within that term to look at only the negative posts. What is not easy is to look at the posts around a topic or discussion driver, then drill down to see what the sub-topics around that main topic are, and then drill down further to see what people are saying about one attribute (of the many) within the chosen sub-topic. After all that, we can still look at a specific brand, the sentiment, and the source of the posts at the attribute level; a total of 8 drill-down levels is possible with an approach like this.
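Here is a minimal sketch of that kind of drill-down, assuming each post has already been annotated with brand, topic, sub-topic, attribute, sentiment and source; the field names and the two sample records are invented for illustration.

```python
# Posts annotated during processing; the records below are invented examples.
posts = [
    {"brand": "Sunbucks", "topic": "service", "sub_topic": "staff",
     "attribute": "friendliness", "sentiment": "negative", "source": "Twitter",
     "text": "Staff at Sunbucks were so rude this morning"},
    {"brand": "Sunbucks", "topic": "product", "sub_topic": "taste",
     "attribute": "bitterness", "sentiment": "positive", "source": "blog",
     "text": "Love how smooth the new Sunbucks roast is"},
]

def drill_down(posts, **filters):
    """Keep only the posts that match every requested level of the drill-down."""
    return [p for p in posts if all(p.get(k) == v for k, v in filters.items())]

# Topic -> sub-topic -> attribute -> sentiment, then read what is left.
for post in drill_down(posts, topic="service", sub_topic="staff",
                       attribute="friendliness", sentiment="negative"):
    print(post["source"], "-", post["text"])
```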

A delegate at the MRS Healthcare research conference last week in London said that if anyone could take thousands of posts in any language and analyse them for topics and sentiment, they would consider this a superpower equal to that of superheroes such as Superman and Spiderman. Well, it is quite telling that a colleague in the market research business did not even know that this is possible, and that the only superpower needed to achieve it is machine learning capability.

Here is where the magic comes in (if you get the above 3 things right): social media listening takes unstructured text consisting of thousands of posts and gives it structure, which allows us to see a quantitative analysis and interpretation that would otherwise be impossible; furthermore, it lets you get down to a few homogeneous posts that you can read for a qualitative take and further probing.

Can market research get any better? What do you think?