Natural Language Processing (NLP): all you need to know about sentiment analysis #BoostYourBusiness
In Artificial Intelligence, Natural Language Processing (NLP) is a field designed to facilitate machines’ understanding of natural language. It is a crucial stage in identifying the range of human emotions expressed in a text. From the first lexical approaches to the most recent deep learning methods, sentiment analysis will soon hold no more mysteries for you.
- What is sentiment analysis?
- The polarity and granularity of opinions
- What are the applications of sentiment analysis?
- What are the classification methods used?
- The future of sentiment analysis
What is sentiment analysis?
Sentiment analysis, also known as opinion mining, is the study through machine learning of the emotions expressed by individuals concerning a product, a service, a brand or, more generally, a subject on which their attention is focused. Sentiment analysis is one of the applications of Natural Language Processing (NLP), which, according to François Yvon, consists of "all research and development aimed at modelling and reproducing, with the help of machines, the human ability to produce and understand linguistic statements for communication purposes".
Historically, sentiment analysis has mainly relied on a body of dematerialised texts extracted from the web, social media or internal company resources (mail, emails, chats, transcripts of telephone exchanges, etc.). However, with the development of video and voice technologies, the study of emotions now applies as much to semantics (the meaning of words) as to vocal intonation or facial expressions. We then speak more broadly of “affective computing”: machines’ ability to recognise, express, synthesise and model human emotions.
The polarity and granularity of opinions
The challenge with sentiment analysis is to successfully determine in a text the orientation of opinions expressed with regard to an “entity” (the object, product, event, brand or individual targeted by the opinion) or one of its “aspects” (a specific element or subdivision of the entity). The opinion can thus have a positive, negative or neutral polarity, and can be studied at different levels of granularity:
- document level: sentiment analysis is applied to the entire text and highlights the general opinion.
- sentence level: the analysis focuses on the polarity of the opinion expressed in each sentence of the text.
- entity level: some texts express opinions on several products, services, brands or topics. Sentiment analysis highlights the corresponding polarity for each target entity.
- aspect level: this involves analysing an individual’s perception of the different characteristics (or elements) of the same entity.
For example, take a film buff’s comment on Dave Franco‘s The Rental (2020): “In The Rental, there is a commendable focus on atmosphere, casting and characters. But ultimately, the film is a flop because it merely recycles old slasher movie formulas, with no originality of its own.”
At document level, the sentiment expressed about The Rental is somewhat negative. The syntax of the comment, articulated around a central “but”, emphasises the sense of “flop” stated in the second part. At sentence level, on the other hand, the opinion in the first sentence has a positive polarity, while the polarity of the second is negative. Finally, if we look at the polarity of the entity’s aspects, the opinion expressed on the atmosphere, the casting and the characters is positive, while the opinion on the screenplay is negative.
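These three readings of the same comment can be sketched as a simple data structure (a purely illustrative layout; the polarity labels are assigned by hand, exactly as in the discussion above):

```python
# Hypothetical representation of the review's sentiment at each
# level of granularity (labels assigned manually for illustration).
review_sentiment = {
    "document": "negative",  # overall opinion of The Rental
    "sentences": [
        {"text": "In The Rental, there is a commendable focus on "
                 "atmosphere, casting and characters.",
         "polarity": "positive"},
        {"text": "But ultimately, the film is a flop because it merely "
                 "recycles old slasher movie formulas, with no "
                 "originality of its own.",
         "polarity": "negative"},
    ],
    "aspects": {
        "atmosphere": "positive",
        "casting": "positive",
        "characters": "positive",
        "screenplay": "negative",
    },
}

# Aspect-level view: positive and negative aspects side by side.
positives = [a for a, p in review_sentiment["aspects"].items() if p == "positive"]
negatives = [a for a, p in review_sentiment["aspects"].items() if p == "negative"]
print(positives)  # ['atmosphere', 'casting', 'characters']
print(negatives)  # ['screenplay']
```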
What are the applications of sentiment analysis?
The detection and classification of opinions through machine learning has many fields of application. Sentiment analysis has now spread from computer science to management and social sciences like marketing, finance, political science, communications, health science and even history, because of its importance for companies and society as a whole. In many contexts, a rapid picture of what others think enables much better decisions to be made.
Here are some common applications of sentiment analysis by companies:
Machine learning can map out the product characteristics (“aspect analysis”) most popular with consumers, and alert the Marketing Department to any negative criticism of a particular aspect of a product or service.
In an information-driven world, the public’s opinion of a brand is one of a company’s most important assets. A positive reputation reassures investors and significantly increases commercial margins, but an image crisis can have serious consequences for a company’s economic and financial stability.
Sentiment analysis is a valuable tool for employees in contact with customers (Support, After-Sales Service, Customer Relations, etc.). Machine learning enables real-time analysis of the nature and intensity of emotions expressed by consumers in their interactions with the company, and can guide employees on the use of a particular feedback response model.
In some cases, individuals’ opinions have value as suggestions. The classification produced by machine learning can then facilitate the emergence of “good ideas”, the detection of weak signals or the anticipation of market trends that could give companies a competitive edge.
What are the classification methods used?
The colossal amount of incoming data makes analysis, categorisation and the generation of insights increasingly complex. The quality of the results obtained also closely reflects research advances in NLP. From the first lexical approaches to the most recent Artificial Intelligence, let’s take a look at the different learning methods for sentiment analysis.
The lexical approach deduces the polarity of the opinions expressed in a sentence or document by analysing the meaning of the words. This implies that examples of sentences and words have been qualified and classified beforehand in the form of dictionaries listing the terms, their polarity and the context in which it is valid. The creation of lexicons can be manual or automatic, via annotated bodies of texts and a statistical view of the words appearing most often in positive or negative documents.
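As a minimal sketch of this approach (the lexicon and its scores below are invented for illustration, not drawn from a real resource):

```python
# Toy polarity lexicon: each word maps to a hand-assigned score.
LEXICON = {
    "commendable": 1.0, "flop": -1.5, "originality": 0.5,
    "recycles": -0.5, "no": -0.5,
}

def lexical_polarity(text: str) -> float:
    """Sum the polarity of each known word; > 0 positive, < 0 negative."""
    tokens = text.lower().replace(",", " ").replace(".", " ").split()
    return sum(LEXICON.get(tok, 0.0) for tok in tokens)

score = lexical_polarity("The film is a flop, with no originality of its own.")
print(score)  # -1.5: 'flop' (-1.5) + 'no' (-0.5) + 'originality' (+0.5)
```

Even this toy version shows where the method breaks: a misspelt “flopp” or an emoticon would simply be ignored, which is exactly the limit discussed below.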
This method, in vogue in the early years of semantic analysis, has the disadvantage of requiring enormous human resources to create the reference dictionaries. By acting at word level, the lexical approach soon reaches its limits when the authors of the messages analysed take liberties with the language (contractions, emoticons, spelling mistakes, etc.) or write in a script very different from the Latin alphabet.
In a “supervised” approach, we ask a machine learning algorithm to predict the polarity of a document, a sentence or an aspect from manually annotated or automatically collected examples. In particular, good use can be made of the IMDb dataset associating film reviews with user ratings. The volume of annotated documents makes it a particularly valuable dataset for training the model on a large number of examples.
This type of model usually learns to associate word combinations with a polarity, using a vocabulary weighting method like TF-IDF (term frequency – inverse document frequency). TF-IDF weights each word by how often it appears in a document (term frequency), discounted by how common it is across the whole corpus (inverse document frequency): a word that is frequent in one review but rare elsewhere is a strong signal. By relating the weight of each word to the overall score given by the user, the model can then predict the polarity of a new document from its vocabulary.
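The weighting can be sketched in a few lines of plain Python (the toy corpus is invented for illustration, not the IMDb dataset mentioned above):

```python
import math

# Toy corpus of three tiny "reviews".
docs = [
    "great film great cast",
    "boring film weak cast",
    "great story",
]

def tf_idf(term: str, doc: str, corpus: list[str]) -> float:
    """TF-IDF weight of `term` in `doc`, relative to `corpus`."""
    words = doc.split()
    tf = words.count(term) / len(words)          # term frequency
    df = sum(1 for d in corpus if term in d.split())
    idf = math.log(len(corpus) / df)             # inverse document frequency
    return tf * idf

# 'film' appears in 2 of 3 documents -> low idf, weak signal.
# 'boring' appears in only 1 document -> high idf, strong signal.
print(round(tf_idf("film", docs[1], docs), 3))
print(round(tf_idf("boring", docs[1], docs), 3))
```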
But supervised learning reaches its limits with complex sentence structures, or when the polarity depends on a notion of “context”. A deep learning-based approach then becomes necessary.
Deep learning factors in the order of words and the context of their use in the sentences of a document to determine their meaning and polarity. This is known as unsupervised learning, because the data are not labelled (or annotated) as in supervised learning. However, the model does require a certain knowledge of the language. The algorithm can be trained beforehand by NLP models such as GPT, BERT, XLNet or Word2Vec in order to acquire overall knowledge of the use of the language.
It is then possible to fine-tune the model by focusing its learning on a specific task like sentiment analysis, with the additional study of an annotated IMDb-type dataset. The model is then highly efficient, as it uses the knowledge it has acquired through unsupervised learning to understand the links between words and sentences, and thus predict the polarity of the most complex text structures.
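As a concrete illustration (assuming the Hugging Face transformers library, which is not named in the text, and an internet connection to download a pretrained checkpoint on first use), querying a model already fine-tuned for sentiment analysis takes only a few lines:

```python
from transformers import pipeline

# Loads a checkpoint pretrained on general English text and then
# fine-tuned on an annotated sentiment dataset (the library's
# default model is downloaded on first use).
classifier = pipeline("sentiment-analysis")

result = classifier(
    "The film is a flop because it merely recycles old slasher formulas."
)[0]
print(result["label"], round(result["score"], 3))
```

The pretraining/fine-tuning split described above is exactly what this hides: the downloaded checkpoint already contains both the general language knowledge and the sentiment-specific adjustment.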
Determining the polarity of opinions is one thing; it is quite another to present analysis results in a synthetic, clear and relevant way with a view to learning from them. This is what data visualisation (or Data Viz) is all about. This visual representation of information can take various forms: pie charts, infographics, video animations, etc.
Data Viz requires prior work on harmonising data formats, and then the use of specific tools to model large data volumes graphically. It facilitates the comprehension of complex data, the presentation of results to an audience, and decision-making. However, this simplified form of data handling should not let us lose sight of an essential aspect: to obtain relevant results, the model used must be appropriate to the form and domain of the documents analysed. For example, a model trained on an IMDb dataset will be less accurate for predicting the polarity of a tweet (different form) on a smartphone (different domain).
The future of sentiment analysis
Sentiment analysis is still far from being an exact science, but recent advances in learning models are gradually reducing misinterpretations by machines. With the arrival of broader-based language knowledge models, a milestone has been reached. The future of NLP lies in deep learning, with a two-step approach: first, the unsupervised learning of natural language structures, then the fine-tuning of artificial intelligence pre-trained on annotated datasets to perfect its learning.