A blog post written at 300 km/h
As promised last week, here I am writing another blog post while travelling at 300 km/h on the Eurostar from London to Brussels on a Sunday afternoon. I am heading to my second consecutive appearance as a speaker at LT-Accelerate, a conference about language technologies rather than one of the usual market research conferences I attend.
Last year at LT-Accelerate I spoke about rich analytics for social listening and stressed the importance of semantic analysis and accuracy; this year I will be describing what differentiates the more than 1,000 social media monitoring tools currently on the market.
Looking at this from a market research and customer insight perspective, we categorised the social listening tools into three generations:
- GEN 1: sentiment accuracy below 60%, search-based topic analysis, limited attention to noise elimination, automated sentiment analysis in usually only one or two languages
- GEN 2: sentiment and semantic accuracy above 75% in any language, an inductive approach to reporting topics of conversation, significantly reduced noise (fewer than 5% irrelevant posts)
- GEN 3: everything a Gen 2 tool can do, plus emotion detection, automated image analysis for brands in terms of theme and possibly sentiment, user profiling, and guidance on integration with consumer tracking surveys and other data sources
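The three generations above amount to a simple rule-based classification. As a minimal sketch, the criteria could be encoded as follows; the field names, thresholds, and function name here are my own illustrative choices, not part of any actual vendor assessment tool:

```python
# Hypothetical sketch of the three-generation classification described above.
# Thresholds mirror the criteria in the post (75% accuracy, <5% noise, etc.).
from dataclasses import dataclass

@dataclass
class ToolProfile:
    sentiment_accuracy: float   # fraction correct, e.g. 0.80 for 80%
    semantic_accuracy: float
    noise_rate: float           # fraction of irrelevant posts in the output
    topics_are_inductive: bool  # topics discovered from the data, not keyword search
    detects_emotions: bool
    analyses_images: bool

def classify_generation(tool: ToolProfile) -> int:
    """Return 1, 2, or 3 according to the criteria listed in the post."""
    meets_gen2 = (tool.sentiment_accuracy > 0.75
                  and tool.semantic_accuracy > 0.75
                  and tool.noise_rate < 0.05
                  and tool.topics_are_inductive)
    if meets_gen2 and tool.detects_emotions and tool.analyses_images:
        return 3
    if meets_gen2:
        return 2
    return 1
```

For example, a tool with 80% sentiment and semantic accuracy, 3% noise, inductive topics, emotion detection, and image analysis would classify as Gen 3; drop the last two capabilities and it falls back to Gen 2.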
If you want to know which generation your current social media monitoring tool belongs to, all you need to do is ask your vendor what their sentiment and semantic accuracy is, and whether they can detect emotions and analyse images for insights.
The main reason I go to conferences such as this one is to demonstrate thought leadership in the field of market research and customer insights, in the hope that prospective clients, partners, and vendors will come forward and initiate conversations that could develop into mutually beneficial deals.
Last year only half of the conference delegates showed up, because of the terrorist attack that had happened in Paris. Brussels went on high terrorist alert the Sunday before the conference; the prudent thing to do was to stay at home and switch to a Skype presentation, as some speakers did. My take on the situation was that a city is at its safest when it is on high alert, so I decided not to change my plans. Indeed, when I arrived at the train station last year, the streets on the way to my hotel were deserted apart from armed soldiers. It was eerie, but funnily enough it felt quite safe.
So here I am again this year, on my way to Brussels Central station, and in the absence of a red alert due to terrorist threats I somehow feel less safe. I am making a mental note to remain vigilant and pay attention to what is going on around me; in other words, to look out for any suspicious behaviour.
Enough reminiscing; back to the essence of this post. I am sure there are other meaningful ways to categorise social listening tools, and I would be very interested to find out how other people classify them. One plausible way is by use case; another is by the target customer or department the tool was created for, such as:
- Customer Service
- New Product Development
- Customer Insights
In my opinion, around 98% of the tools currently on the market belong to Gen 1, around 1% belong to Gen 2, and only a handful belong to Gen 3. I would not be in the least surprised if the only social listening tool that meets all the Gen 3 criteria is listening247®. Clearly, only Gen 2 and Gen 3 tools are suitable for market research and customer insights; Gen 1 tools would be disqualified from the get-go, if nothing else due to the noise (irrelevant posts) that is analysed and included in what is reported to the user as relevant.
How do you classify social listening tools? Please feel free to share your approach with me on Twitter @DigitalMR_CEO.