NLP AT THE INTERFACE OF HUMANS AND MACHINES
Natural Language Processing (NLP) is the AI discipline that manages the linguistic interface between humans and machines. When an algorithm interprets information received from humans, we speak of Natural Language Understanding (NLU); conversely, when the machine produces information for the user, we call it Natural Language Generation (NLG). Overall, NLP is at the core of many AI applications, and it combines computer science skills with linguistic expertise in interesting ways.
As the key engine behind most AI solutions, NLP is a growing market: according to the analyst firm Tractica, global spending on NLP software, hardware, and services will reach $43.3 billion by 2025. The potential market comprises applications and use cases in fields as disparate as financial and legal services, education, medicine and healthcare, and customer service management. Even sentiment analysis in social media and fake news detection rely almost entirely on natural language processing.
As such, NLP tools are used everywhere: for chatbots, digital assistants, real-time competitive intelligence, contract analysis, or tax filing and processing. In the last few years, the technology has evolved substantially, and thanks to the latest research developments, NLP-based solutions now achieve performance almost comparable to human abilities.
ENGAGING WITH R&D MEMBERS: FROM THEORY TO PRACTICE
As summarised by KDnuggets, there are three main groups of approaches to solving NLP tasks:
- Rule-based approaches are the oldest in NLP. They have proven to be reliable and offer a transparent way to treat the data. They usually guarantee high performance in specific use cases, but the quality of their results decreases when they are generalized.
- Traditional Machine Learning (ML) approaches include probabilistic modeling, likelihood maximization, and linear classifiers. They can be described by the following sequence of activities:
- Preparing a training data set, which requires the difficult task of labeling
- Performing the feature engineering step, during which appropriate domain knowledge is used to extract features from the raw data
- Training a model on this data, tuning its parameters, and subsequently testing it on a held-out sample
- Finally, during the inference phase, applying the model to new, real-world data
- Neural Network (NN) approaches differ from traditional ML methods because they usually do not require the feature engineering phase; on the other hand, they require a huge amount of labeled training data to feed the neural network.
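The four steps of a traditional ML pipeline can be sketched with a toy probabilistic classifier; the data set, labels, and feature function below are purely illustrative assumptions, not any production system:

```python
from collections import Counter
import math

# Step 1: a (tiny, hypothetical) labeled training set
train = [
    ("meet me next monday", "temporal"),
    ("the deadline is in 15 days", "temporal"),
    ("the report looks great", "other"),
    ("please review the contract", "other"),
]

# Step 2: feature engineering - here, simple bag-of-words features
def features(text):
    return text.lower().split()

# Step 3: training - estimate per-class word likelihoods (Naive Bayes
# with Laplace smoothing, an example of likelihood maximization)
class_counts = Counter(label for _, label in train)
word_counts = {c: Counter() for c in class_counts}
for text, label in train:
    word_counts[label].update(features(text))
vocab = {w for c in word_counts for w in word_counts[c]}

def predict(text):
    # pick the class with the highest log-probability
    best, best_lp = None, float("-inf")
    for c in class_counts:
        lp = math.log(class_counts[c] / len(train))
        total = sum(word_counts[c].values())
        for w in features(text):
            lp += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Step 4: inference - apply the trained model to unseen text
print(predict("due in 3 days"))
```

Even this toy example shows why the pipeline is labor-intensive: every training sentence had to be labeled by hand, and the feature function encodes a design decision that a neural approach would learn on its own.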
Since 2018, NN methods have revolutionized the field of NLP, significantly improving performance on a variety of tasks, especially in natural language understanding. One of these models, BERT (Bidirectional Encoder Representations from Transformers), applies bidirectional training to language modeling: in other words, the model learns the context of a word from all of its surroundings, both to its left and to its right.
THE FUTURE OF NLP
However, as in other AI contexts, the more a tool relies on neural network approaches, the more it needs large labeled datasets for training, and the more it behaves like a black box whose computations are opaque to the user, as discussed in this presentation by Liad Magen and shown in Figure 1. Moreover, labeling is a demanding activity that in several cases requires strong linguistic competence, making it very difficult to obtain big, coherently labeled datasets for languages less commonly used than English. That is why some researchers have recently proposed solutions that combine more traditional methods with NN approaches, using Machine Learning or Deep Learning to create augmented rules and thereby obtain transparent, debuggable models.
Figure 1 – What will be the future of NLP? – Source: https://www2.slideshare.net/LiadMagen/state-of-the-art-in-natural-language-processing-march-2019
AUTOMATIC RECOGNITION OF EVENTS AND TASKS IN TEXTS
Following this idea, Konica Minolta Global R&D is focusing on the use of NLP to develop Cognitive Services that are capable of automatically recognising events within a textual document and consequently creating tasks for the user. In our Brno laboratory, Michal Starý, a student at the Faculty of Informatics of Masaryk University, has worked with our colleagues Jakub Valcik and Zuzana Neverilova to develop a novel multilingual approach to Temporal Expression Recognition (TER), which targets expressions such as ‘next Monday’ or ‘third quarter of the financial year’. Starting from a combination of existing methodologies, they developed a solution and demonstrated its good performance on less common languages.
Recognising temporal expressions is pivotal for understanding documents: these parts of a sentence are usually associated with assignments for which an action is required, and they allow us to filter information and infer the temporal flow of events. However, TER is not easy when these expressions lack a defined format or use different combinations of words. For instance, the expression “in 15 days” does not denote an absolute date; it must be anchored to the right reference time and context before we can associate it with a task. A tool that automatically scans a document, such as a contract, identifies the deadlines it contains, and sets them up in your calendar would be extremely helpful.
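The anchoring problem can be made concrete with a minimal rule-based sketch; the regular expression and the reference-date logic below are illustrative assumptions, not the method described in the paper:

```python
import re
from datetime import date, timedelta

# A toy rule for relative temporal expressions such as "in 15 days"
# or "in 2 weeks"; real systems need far richer grammars.
PATTERN = re.compile(r"in (\d+) (day|week)s?", re.IGNORECASE)

def resolve(text, reference):
    """Find relative expressions and anchor them to a reference date."""
    results = []
    for match in PATTERN.finditer(text):
        amount, unit = int(match.group(1)), match.group(2).lower()
        days = amount * (7 if unit == "week" else 1)
        results.append((match.group(0), reference + timedelta(days=days)))
    return results

# The same expression resolves to different dates depending on the anchor
print(resolve("The invoice is due in 15 days.", date(2020, 11, 1)))
```

Note that the output depends entirely on the reference date passed in: change the anchor and “in 15 days” names a different deadline, which is exactly why context matters for TER.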
In the paper presented at Raslan 2020, a workshop on NLP research activities in Slavonic languages, the authors demonstrate that combining rule-based systems with data-driven models such as BERT is a valid approach to improving overall TER performance. There are many possible contexts for applying such a methodology: everyday office documents, legal documents, financial reports, contracts, or even scientific articles.
NLP TO THE RESCUE DURING THE PANDEMIC
For instance, in response to the Covid-19 pandemic, research on the coronavirus has exploded, with more than 60,000 papers published in the first three months of 2020 alone. How can such a deluge of scientific material be managed? With a Natural Language Processing engine, it is possible to scan the whole Covid-19 dataset (CORD-19) to shortlist the top-ranked papers and identify passages that answer any specific question from a user. Moreover, it is even possible to create a summary with the most relevant passages and up-to-date information from the papers.
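The shortlisting step can be sketched with simple term-frequency vectors and cosine similarity; the three-abstract corpus below is a hypothetical stand-in for CORD-19, and real engines use far stronger retrieval models:

```python
import math
import re
from collections import Counter

# Hypothetical mini-corpus standing in for CORD-19 abstracts
papers = {
    "paper-1": "Coronavirus transmission occurs mainly through respiratory droplets.",
    "paper-2": "We study the economic impact of lockdown policies on small businesses.",
    "paper-3": "Masks reduce droplet transmission of the coronavirus in indoor settings.",
}

def tf_vector(text):
    # bag-of-words term-frequency vector
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # cosine similarity between two sparse term-frequency vectors
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank(question, corpus):
    # return paper ids ordered from most to least similar to the question
    q = tf_vector(question)
    scored = [(cosine(q, tf_vector(text)), pid) for pid, text in corpus.items()]
    return [pid for score, pid in sorted(scored, reverse=True)]

print(rank("how is coronavirus transmitted", papers))
```

Even this crude similarity score pushes the off-topic lockdown paper to the bottom of the list, which is the essence of shortlisting relevant literature from a large collection.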
The Covid-19 pandemic makes it evident how important it is to process a large amount of information on a specific topic in a short timeframe, detecting crucial information within a sea of irrelevant observations. This task is at the core of epidemic intelligence. However, it is just one use case where NLP is making a huge impact on information management.
The goal for Konica Minolta Global R&D is to keep developing innovative approaches to such problems, building Cognitive Services that support users in understanding their work and making better decisions about their activities. Beyond our research on documents from office, legal, and administrative contexts, our laboratories are developing solutions in partnership with our customers, such as enhancing Mobotix cameras to detect defects in boxes or identify suspicious behaviour in restricted areas.
However, the core message is still the same, and it aligns with what I think is the main objective of science communication in a corporate context, as well as with the issue of customer centricity in an AI project: focus on the needs of your audience. Listen to your customers before you start talking, and build your activities on the values that will make your work meaningful and impactful for them.
ON DEMAND AI AND INNOVATION WORKSHOP
Following a customer-centric approach, our European laboratories can organise an Innovation Day for you, a virtual workshop on AI and Digital Transformation with researchers and customers, to explore together:
- the objectives of potential use cases and the priorities for your business
- which data and IT stakeholders are relevant for your digital transformation process
- how to develop an intelligent data strategy.
Get in contact with us: plan your workshop and transform your digital workplace today.
Photo by Patrick Tomasso on Unsplash