Microsoft AI School - Processing Language like a Robot
Introduction
Natural Language Processing (NLP) is the branch of artificial intelligence (AI) that gives computer programs the ability to read, hear, and understand human language as it is written and spoken - referred to as natural language. Some common examples of NLP in practice today are digital assistants, voice-operated GPS systems, speech-to-text dictation software, and customer service chatbots.
Microsoft Azure makes it easy to build applications that support natural language processing by providing numerous text analytics, translation, and language understanding services. One such service is the Language Understanding service, which allows you to build language models for your applications. You can learn the ins and outs of the Language Understanding service by completing the Create a Language Understanding solution learning path in Microsoft AI School.
Prerequisites
Before proceeding with this learning path, you should fulfill the following prerequisites:
- You should be familiar with Microsoft Azure and be able to navigate the Azure portal.
- You should have some experience programming in C# or Python. If you do not have any prior programming experience, feel free to complete the Take your first steps with C# or Take your first steps with Python learning path first.
This learning path has three modules. Read ahead to get a quick peek at each module of this learning path.
Microsoft Azure features the Language Understanding service, which enables applications to extract semantic meaning from natural language. It allows developers to build applications based on language models. The best part is that these models can be trained efficiently, with a relatively small number of sample utterances, to recognize the meaning a user intends.
Thus, the first module aims to teach you how to:
- Provision Azure resources for the Language Understanding service.
- Define utterances and intents.
- Define entities.
- Use patterns to differentiate similar utterances.
- Use pre-built models.
- Train, test, publish, and review a Language Understanding application.
Here's a quick overview of some of the terms mentioned above:
- Utterances are the phrases a user enters (or speaks) while interacting with the application that uses your Language Understanding model.
- An intent refers to a task or action that the user wants to perform; in simple terms, it is the meaning of an utterance. When you develop a model, you define intents and associate each of them with one or more utterances.
- Entities are used to add specific context to an intent. For example, in the utterance "Book a flight to Paris", the word "Paris" could be labeled as a Location entity.
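To make these concepts concrete, here is a minimal sketch of the authoring flow using the LUIS authoring SDK for Python (the azure-cognitiveservices-language-luis package). The endpoint, key, app name, and the OrderPizza intent below are illustrative placeholders, and the exact call shapes may vary slightly between SDK versions:

```python
from azure.cognitiveservices.language.luis.authoring import LUISAuthoringClient
from azure.cognitiveservices.language.luis.authoring.models import ApplicationCreateObject
from msrest.authentication import CognitiveServicesCredentials

# Placeholder authoring endpoint and key - replace with your resource's values.
authoring_endpoint = "https://westus.api.cognitive.microsoft.com"
authoring_key = "<your-authoring-key>"
client = LUISAuthoringClient(authoring_endpoint, CognitiveServicesCredentials(authoring_key))

# Create an app with an initial version.
version = "0.1"
app_id = client.apps.add(ApplicationCreateObject(
    name="PizzaOrdering",  # illustrative app name
    initial_version_id=version,
    culture="en-us",
))

# Define an intent (the task the user wants to perform)...
client.model.add_intent(app_id, version, "OrderPizza")

# ...and label an example utterance (a phrase a user might type or say).
client.examples.add(app_id, version, {
    "text": "I want to order a large pepperoni pizza",
    "intentName": "OrderPizza",
})

# Train the version (training runs asynchronously; poll client.train.get_status
# until it finishes), then publish it to the production slot.
client.train.train_version(app_id, version)
client.apps.publish(app_id, version, is_staging=False)
```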
Once you have built a Language Understanding application, you need to publish it and consume it from client applications. Thus, after completing the first module, you can move on to the second module, which aims to teach you how to:
- Set publishing configuration options for your Language Understanding application. Here you will learn about publishing slots and the settings that enable specific behaviors in your app.
- Describe Language Understanding prediction results (see the sketch after this list for how a client consumes them).
- Deploy the Language Understanding service in a container, such as on a local Docker host, in an Azure Container Instance (ACI), or in an Azure Kubernetes Service (AKS) cluster.
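As a preview of what consuming a published model looks like, here is a minimal sketch using the LUIS runtime SDK for Python. The prediction endpoint, key, and app ID below are placeholders, and the V3 prediction API is assumed:

```python
from azure.cognitiveservices.language.luis.runtime import LUISRuntimeClient
from msrest.authentication import CognitiveServicesCredentials

# Placeholder prediction endpoint, key, and app ID - replace with your own.
runtime = LUISRuntimeClient(
    "https://westus.api.cognitive.microsoft.com",
    CognitiveServicesCredentials("<your-prediction-key>"),
)
app_id = "<your-luis-app-id>"

# Query the Production publishing slot with a user utterance.
request = {"query": "Order a small veggie pizza"}
response = runtime.prediction.get_slot_prediction(app_id, "Production", request)

# Inspect the prediction results: the top-scoring intent, all intents, and entities.
print("Top intent:", response.prediction.top_intent)
print("Intents:", list(response.prediction.intents))
print("Entities:", response.prediction.entities)
```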
Generally, the Language Understanding service is used in applications that work with text-based natural language; however, it can also be used in applications that support speech, such as digital assistants and home automation devices. You can accomplish this by integrating the Language Understanding and Speech services in Azure. The Speech SDK is normally used with the Speech service, but when integrated with the Language Understanding service, it lets you use a language model to predict intents from spoken input. To integrate the Speech SDK with a Language Understanding model, you have to enable the Speech priming publishing setting. Then you can use the Speech SDK to write code that calls the Language Understanding prediction resource.
You will cover all of this in the final module, where you will integrate the Language Understanding and Speech services and then perform intent recognition with their help.
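As a preview, here is a minimal sketch of speech-based intent recognition with the Speech SDK for Python (the azure-cognitiveservices-speech package); the key, region, and app ID are placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

# For LUIS intent recognition, the Speech config uses the LUIS prediction
# key and region (placeholders below) rather than a Speech resource key.
intent_config = speechsdk.SpeechConfig(
    subscription="<your-luis-prediction-key>", region="westus"
)
recognizer = speechsdk.intent.IntentRecognizer(speech_config=intent_config)

# Attach the published Language Understanding model and listen for all of its intents.
model = speechsdk.intent.LanguageUnderstandingModel(app_id="<your-luis-app-id>")
recognizer.add_all_intents(model)

# Recognize a single spoken utterance from the default microphone and
# report the predicted intent.
result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedIntent:
    print(f'Heard: "{result.text}" -> intent: {result.intent_id}')
elif result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print(f'Heard: "{result.text}" (no intent matched)')
```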
Conclusion
Thus, if you are looking to master the concepts and add Natural Language Processing solutions to your applications so that they can gain insights from written or spoken language, this learning path will serve as the perfect roadmap. You will learn how to use Azure's Language Understanding service to build language models with the help of real-world examples and scenarios.