Have you ever asked, “Hey Siri, how’s the weather today?” or “OK Google, what are today’s lottery results?” If so, then you have already used Natural Language Understanding (NLU). But how does it understand that you want the weather in a specific location, or the lottery results for a specific day?
Indeed, NLU is a branch of artificial intelligence (AI) that uses computer software to understand input given as sentences, in text or speech form. In other words, NLU directly enables human-computer interaction: it lets computers understand commands without the formalized syntax of programming languages, and then communicate back to humans in their own language.
The NLU field is an important and challenging subset of natural language processing (NLP). NLU systems can interact with untrained individuals and understand their intent. In addition, they can still make sense of input despite common human errors such as mispronunciations, transposed letters, or missing words.
Natural Language Understanding is used in areas such as:
- In social media interactions
- During the course of product recommendation
- In creating profiles of content
- In financial estimates
Impact of Natural Language Understanding on Customer Experience:
Nowadays, NLU is becoming more widely used for customer communication. This gives customers the choice to use their natural language to navigate menus and collect information, which is faster, easier, and creates a better experience.
Here are some areas where NLU is being used in applications that interact with human language:
Interactive Voice Response (IVR) and Message Routing
Turn nested phone trees into a simple “What can I help you with?” voice prompt. Analyze the answers and determine the best way to route the call.
Automate data capture to improve lead qualification, support escalations, and find new business opportunities. For example, ask customers questions and capture their answers using Automatic Speech Recognition (ASR) to fill out forms and qualify leads.
Build fully-integrated bots, trained within the context of your business, with the intelligence to understand human language and help customers without human oversight. For example, allow customers to dial into a knowledge base and get the answers they need.
Future of Natural Language Understanding:
In pursuit of a chatbot that can communicate with human beings in a human-like way, and eventually pass the Turing test, business and academia are investing more in NLP and NLU techniques. The product they have in mind should be effortless, unsupervised, and able to interact with people directly in a convenient and effective way.
To achieve this, the research is carried out on three levels:
- Syntax — understanding the grammar of the text
- Semantics — understanding the literal meaning of the text
- Pragmatics — understanding what the text is trying to achieve
Applying NLU in a chatbot system:
To give you a better idea of how NLU works in practice, let’s apply it to a chatbot system.
The basic architecture of a chatbot system:
Chatbot systems communicate with humans by voice (such as Siri, Google Assistant, Alexa …) or in writing (such as Facebook Messenger bots, Hangouts Chat bots …). Whatever the form of communication, the system must understand the text so that it can give appropriate answers to users. The component responsible for this task in the chatbot system is called NLU (Natural Language Understanding), and it applies many natural language processing (NLP) techniques. There are three main problems:
- Determining/classifying user intent (intent classification/intent detection);
- Information extraction (entity extraction);
- Dialogue management
Determining user intent:
Normally, users come to a chatbot system expecting it to take action on a certain issue for them. For example, a chatbot that supports flight bookings may handle your booking request at the start of a conversation. In order to provide accurate support, the chatbot needs to determine the user’s intent, because that intent determines how the rest of the conversation between the person and the bot will unfold. If the intent is misidentified, the bot will give incorrect answers that do not fit the context; users may become frustrated and never return. So determining user intent is a very important problem in building a chatbot system.
For systems serving only a specific field or job, we can limit user intentions to a finite, predefined set related to the business operations, such as bots serving employee, weather, hospital, or bank data. With this limit, the problem of determining intent reduces to classification: the input is a user’s utterance, and the system must assign it the corresponding intent from the predefined set.
In order to build the intent classification model, we need a training data set that includes different expressions for each intent that we predict users will often use (training phase). For example, for the same purpose of asking about the weather in Danang today, users could use the following expressions:
- How is the weather today in Da Nang?
- Is Danang rainy today?
- What is Da Nang’s temperature today?
- May I ask, do I need to carry a raincoat today?
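As a sketch, such training data can be organized as a simple mapping from intent labels to example utterances, then flattened into parallel lists for a classifier. (The `book_flight` intent and its examples below are hypothetical additions for illustration.)

```python
# Hypothetical training data for an intent classifier: each intent maps to
# the paraphrases we predict users will use for it.
TRAINING_DATA = {
    "ask_weather": [
        "How is the weather today in Da Nang?",
        "Is Danang rainy today?",
        "What is Da Nang's temperature today?",
        "Do I need to carry a raincoat today?",
    ],
    "book_flight": [  # hypothetical second intent
        "I want to book a flight to Ho Chi Minh City",
        "Book me a ticket from Danang to Hanoi tomorrow",
    ],
}

# Flatten into parallel (utterance, label) lists, the usual input shape
# for supervised training.
utterances = [u for intent, examples in TRAINING_DATA.items() for u in examples]
labels = [intent for intent, examples in TRAINING_DATA.items() for _ in examples]
```

Collecting many varied paraphrases per intent is exactly the labor-intensive step described above.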
It can be said that generating training data for the intent classification problem is one of the most important jobs when developing a chatbot system, and it greatly influences the future quality of the chatbot. It also requires a lot of time and effort from developers.
The intent classification system has three basic components:
- Data preprocessing
- Feature extraction
- Model training
In the data preprocessing step, we perform cleaning operations such as eliminating redundant information and normalizing the data: correcting misspelt words, standardizing abbreviations, and so on. Preprocessing plays an important role in a chatbot system because chat language is full of abbreviations and misspellings.
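A minimal preprocessing sketch might lowercase the text, strip punctuation, and expand abbreviations or fix misspellings via a lookup table (the normalization entries below are hypothetical examples, not a real dictionary):

```python
import re

# Hypothetical normalization table: chat abbreviations and common
# misspellings mapped to their standard forms.
NORMALIZATION = {
    "plz": "please",
    "u": "you",
    "wether": "weather",
    "danang": "da nang",
}

def preprocess(text: str) -> str:
    """Lowercase, strip punctuation, and normalize abbreviations/misspellings."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", " ", text)  # replace punctuation with spaces
    tokens = [NORMALIZATION.get(tok, tok) for tok in text.split()]
    return " ".join(tokens)
```

A production system would use a much larger dictionary, plus spell-correction models, but the shape of the step is the same.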
After the collected data has been cleaned, we extract features from it. In machine learning (ML), this step is called feature extraction or feature engineering.
In the training step, the extracted features are fed to a machine learning algorithm, which learns a classification model. The model can be a set of classification rules (as with decision trees) or a weight vector over the extracted features (as in logistic regression, SVM, neural networks, etc.).
Once an intent classification model is available, we can use it to classify a new utterance. The utterance also goes through preprocessing and feature extraction; the model then computes a confidence score for each intent in the set and returns the intent with the highest confidence. The following sentences use different expressions for the same customer question to a telecom company about slow network speeds:
- Why is your network so slow these days?
- The network is lagging, I am so frustrated!
In the example above, the system needs to recognize that the words “slow” and “lagging” have the same meaning.
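The train-then-classify flow described above can be sketched with a deliberately tiny pure-Python model: bag-of-words counts as features, one summed "centroid" vector per intent as the learned model, and cosine similarity as the confidence score. (A real system would use a proper library model such as logistic regression or an SVM; this is only a sketch of the pipeline shape.)

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words feature extraction: word -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train(examples):
    """examples: {intent: [utterance, ...]} -> one summed vector per intent."""
    centroids = {}
    for intent, utterances in examples.items():
        total = Counter()
        for u in utterances:
            total.update(vectorize(u))
        centroids[intent] = total
    return centroids

def classify(text, centroids):
    """Score the new utterance against every intent; return the best one."""
    scores = {i: cosine(vectorize(text), c) for i, c in centroids.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]
```

Note this naive model would still miss the “slow”/“lagging” synonymy from the example above; handling that requires richer features (embeddings, synonym lists) than raw word counts.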
Besides determining the intent of the user’s utterance, we need to extract the necessary information from it. That information usually takes the form of entities of certain types. For example, when a customer books a flight, the system needs to know the departure location and destination, and the date and time of departure, so the NLU component of the chatbot system must be able to extract entity types such as locations and times.
The input of the information extraction module is an utterance. The module must determine the position of each entity in the sentence (its starting and ending position). The following example illustrates an utterance and the entities extracted from it.
Conversation: I want to book a flight to Ho Chi Minh City from Danang airport at 8 pm tonight.
Sentence with labelled entities: I want to book a flight to LOCATION[Ho Chi Minh City] from LOCATION[Danang] airport at TIME[8 pm tonight].
The information extraction problem is often modelled as sequence labelling: from data, we learn a model that assigns a label to each word in a sequence. The algorithm most often used for sequence labelling is Conditional Random Fields (CRF), with popular tools such as CRFsuite, CRF++, and Mallet.
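The sequence-labelling view of the booking example can be illustrated with the common BIO tagging scheme (B- begins an entity, I- continues it, O is outside any entity). A CRF model would learn to predict one such tag per token; here the tags are written by hand to show the representation:

```python
# The booking utterance, tokenized, with hand-written BIO tags
# (what a trained CRF would be expected to predict).
tokens = ["I", "want", "to", "book", "a", "flight", "to",
          "Ho", "Chi", "Minh", "City", "from", "Danang",
          "airport", "at", "8", "pm", "tonight"]
tags = ["O", "O", "O", "O", "O", "O", "O",
        "B-LOCATION", "I-LOCATION", "I-LOCATION", "I-LOCATION",
        "O", "B-LOCATION", "O", "O",
        "B-TIME", "I-TIME", "I-TIME"]

def extract_entities(tokens, tags):
    """Collect (entity_type, text) spans from a BIO-tagged token sequence."""
    entities, current, ctype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                entities.append((ctype, " ".join(current)))
            current, ctype = [tok], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(tok)
        else:
            if current:
                entities.append((ctype, " ".join(current)))
            current, ctype = [], None
    if current:
        entities.append((ctype, " ".join(current)))
    return entities
```

The start and end positions of each entity fall out of the tag sequence directly, which is why the problem is modelled this way.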
In long conversations between people and bots, the bot needs to remember contextual information, that is, to manage the dialogue state. Dialogue management is therefore important to ensure smooth communication between people and machines. Now imagine a situation:
A user wants to ask about the current state of the coronavirus outbreak (a general question). As of 00:00 on 6/4/2020, there were 1,253,072 people infected with COVID-19 worldwide, 68,154 deaths, and 257,202 recoveries, and the user asks the bot about these figures.
As you can see, the bot can remember your question type, so you can then ask about another place without repeating the full question.
→ Without context, you would have to create one intent with training sentences like “How many people have COVID-19 in the world?”, another intent like “How many people are infected with COVID-19 in Vietnam?”, and so on, so that only when the user asks a question containing both the location and the question type can the right intent be activated and the bot answer. Context was born to resolve this inconvenience.
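A toy sketch of what context buys you: the dialogue manager stores the last question type in a context dict, so a follow-up that only names a place can reuse it. (The keyword matching and slot names here are hypothetical simplifications; real systems use the intent classifier and entity extractor described above.)

```python
def handle(utterance, context):
    """Toy dialogue manager: fill missing slots from remembered context."""
    utterance = utterance.lower()
    # Remember the question type when the user states it in full.
    if "how many" in utterance and "covid" in utterance:
        context["question"] = "covid_case_count"
    # Update the location slot whenever a place is mentioned.
    if "vietnam" in utterance:
        context["location"] = "Vietnam"
    elif "world" in utterance:
        context["location"] = "the world"
    # Answer only when both slots are known, even if this turn supplied just one.
    if context.get("question") == "covid_case_count" and "location" in context:
        return f"Looking up COVID-19 case counts for {context['location']}"
    return "Sorry, could you rephrase?"
```

With this, a full question followed by a bare “And in Vietnam?” both resolve, because the question type persists in the context between turns.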
The article above has covered NLU’s basic concepts and what NLU can do. We hope this post has helped you understand NLU, and we encourage you to learn more about it.