Preface
This article series is a complete end-to-end tutorial that explains the concepts of face detection and face recognition using a modern AI-based Azure Cognitive Service: the Azure Face API.
Introduction
In the last article on learning the Azure Face API Cognitive Service, we learned how to set up an Azure account, create a Face API resource on the Azure portal, and test the service we created. In this article we'll explore the Face API SDK and write some code. Please follow the first part before moving on to this one.
Tutorial Series
The entire series on learning the Face API Cognitive Service is divided into four parts. The first part focuses on Azure Functions, serverless computing, and creating and testing the Face API on the Azure portal.
The second part explains the use of the Face API SDK. The third part focuses on face identification, where person groups are created and faces are identified by training models; i.e., via machine learning techniques. The fourth part is the most interesting: it walks through a face classification application that performs face detection, identification, grouping, and finding look-alike faces. Following is the four-part series.
- Face API Cognitive Service Day 1: Face API on Azure Portal
- Face API Cognitive Service Day 2: Exploring Face API SDK
- Face API Cognitive Service Day 3: Face Identification using Face API
- Face API Cognitive Service Day 4: Face Classification using Face API
Face API SDK
Microsoft provides a rich .NET SDK for the Face API, which you can use to perform all the endpoint operations from C# code. Let's go step by step to see how this is done.
- Open Visual Studio (I am using VS 2017 Professional) and create a console application; name it FaceApiSdk or anything of your choice.
- Right-click the project in Solution Explorer and add the NuGet package named Microsoft.ProjectOxford.Face.DotNetStandard.
- Once done, add the following code to the Program.cs file.
```csharp
using Microsoft.ProjectOxford.Face;
using System;
using System.Threading.Tasks;

namespace FaceApiSdk
{
    class Program
    {
        static async Task Main(string[] args)
        {
            IFaceServiceClient faceServiceClient = new FaceServiceClient("<put your key here>",
                "https://centralindia.api.cognitive.microsoft.com/face/v1.0");

            // Detect all the faces in the image at the given URL.
            var detectedFaces = await faceServiceClient.DetectAsync("https://www.codeproject.com/script/Membership/Uploads/7869570/Faces.png");

            // Print the face ID of every detected face.
            foreach (var detectedFace in detectedFaces)
            {
                Console.WriteLine($"{detectedFace.FaceId}");
            }
        }
    }
}
```
The async Main method requires C# 7.1 or later; if your project uses an earlier language version, go through the following article to enable it: https://www.c-sharpcorner.com/article/enabling-c-sharp-7-compilation-with-visual-studio-2017/
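If you would rather not change the language version, a common workaround is to block on an async helper from a synchronous Main. This is a minimal sketch, not part of the original article; MainAsync is a hypothetical helper name:

```csharp
// Workaround for C# versions earlier than 7.1, which do not allow "async Task Main".
static void Main(string[] args)
{
    MainAsync(args).GetAwaiter().GetResult();
}

// The body of the async Main shown above moves into this helper unchanged.
static async Task MainAsync(string[] args)
{
    // ... Face API calls go here ...
}
```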
In the above code, we first create an instance of FaceServiceClient (typed as IFaceServiceClient) and provide the key and the Face API URL as constructor parameters. Then we await the DetectAsync method of that instance, passing as a parameter the URL of the image in which we want faces to be detected. Finally, we write out the face IDs of all the faces returned.
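As an aside, if your image lives on disk rather than at a public URL, the SDK's DetectAsync also has an overload that accepts a Stream. A minimal sketch, assuming a local file path of your choosing (it also needs a using directive for System.IO):

```csharp
// Detect faces in a local image by passing a stream instead of a URL.
// "faces.png" is a placeholder path; point it at an image on your machine.
using (var imageStream = File.OpenRead("faces.png"))
{
    var detectedFaces = await faceServiceClient.DetectAsync(imageStream);
    foreach (var detectedFace in detectedFaces)
    {
        Console.WriteLine(detectedFace.FaceId);
    }
}
```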
Compile the code and run it by pressing F5. The output shows that the face IDs of all the detected faces are returned.
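Face IDs are GUIDs generated by the service, so the exact values differ on every run; illustratively (with placeholder values), the output takes roughly this shape:

```
c5c24a82-6845-4031-9d5d-978df9175426
65d083d4-9447-47d1-af30-b626144bf0fb
```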
Time to test face attributes as well. Add an array called faceAttributes specifying which attributes you want returned in the response, and pass it to the returnFaceAttributes parameter of the DetectAsync method, as shown below.
Code
```csharp
using Microsoft.ProjectOxford.Face;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace FaceApiSdk
{
    class Program
    {
        static async Task Main(string[] args)
        {
            IFaceServiceClient faceServiceClient = new FaceServiceClient("<provide your key here>",
                "https://centralindia.api.cognitive.microsoft.com/face/v1.0");

            // Attributes we want the service to compute for every detected face.
            var faceAttributes = new[] { FaceAttributeType.Emotion, FaceAttributeType.Age };

            var detectedFaces = await faceServiceClient.DetectAsync("https://www.codeproject.com/script/Membership/Uploads/7869570/Faces.png",
                returnFaceAttributes: faceAttributes);

            // Print the face ID plus the requested attributes for each face.
            foreach (var detectedFace in detectedFaces)
            {
                Console.WriteLine($"{detectedFace.FaceId}");
                Console.WriteLine($"Age = {detectedFace.FaceAttributes.Age}, Happiness = {detectedFace.FaceAttributes.Emotion.Happiness}");
            }
            Console.ReadLine();
        }
    }
}
```
Output
We see the following output, which also returns the face attributes we asked for. Happiness = 0.983 is the confidence score, on a 0 to 1 scale, that the API assigns to the happiness emotion; a value of 0.983 means there is a very high likelihood that the person is smiling or laughing in the picture.
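The Emotion object carries one such score per emotion. If you want the dominant emotion for a face rather than a single score, one approach is to compare the scores manually. This is a sketch under the assumption that the SDK's emotion contract exposes the eight properties named below; it reuses the System.Linq and System.Collections.Generic namespaces already imported above:

```csharp
// Pick the emotion with the highest confidence score for a detected face.
var emotion = detectedFace.FaceAttributes.Emotion;
var scores = new Dictionary<string, float>
{
    ["Anger"] = emotion.Anger,
    ["Contempt"] = emotion.Contempt,
    ["Disgust"] = emotion.Disgust,
    ["Fear"] = emotion.Fear,
    ["Happiness"] = emotion.Happiness,
    ["Neutral"] = emotion.Neutral,
    ["Sadness"] = emotion.Sadness,
    ["Surprise"] = emotion.Surprise
};
var dominant = scores.OrderByDescending(kvp => kvp.Value).First();
Console.WriteLine($"Dominant emotion: {dominant.Key} ({dominant.Value})");
```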
Conclusion
This was the continuation of the first part of learning the Azure Cognitive Service named Face API. In this article we explored how to leverage the capabilities of the Face API SDK and perform operations via code using .NET, C#, and Visual Studio.
References
- https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Vision.Face/
- https://azure.microsoft.com/en-in/services/cognitive-services/face/
- https://centralindia.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236
Code
- SDK Code