Introduction
Azure Cognitive Services provides cloud-based APIs for adding intelligent features to your apps. In this article, you'll learn how to use the Face API in a Xamarin.Forms app to detect human faces in an image and display the number of faces found, along with the gender and age of the first detected face.
Cognitive Services
Infuse your apps, websites, and bots with intelligent algorithms to see, hear, speak, and understand your users' needs through natural methods of communication. Transform your business with AI today.
Use AI to solve business problems
- Vision
- Speech
- Knowledge
- Search
- Language
Face API
- Face API is a cognitive service that provides developers with access to advanced algorithms for detecting, recognizing, and analyzing human faces in images.
- Face API can detect human faces in an image and return the rectangle coordinates of their locations.
- Optionally, face detection can extract a series of face-related attributes such as head pose, gender, age, smile, facial hair, and glasses.
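For example, a successful call to the detect endpoint returns a JSON array with one entry per detected face. The sketch below shows the general shape of such a response, trimmed to the fields this article uses; all values are illustrative:

```json
[
  {
    "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
    "faceRectangle": { "top": 131, "left": 177, "width": 162, "height": 162 },
    "faceAttributes": {
      "gender": "male",
      "age": 27.0,
      "glasses": "NoGlasses"
    }
  }
]
```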
Prerequisites
- Visual Studio 2017 (Windows or Mac)
- Face API Key
Setting up a Xamarin.Forms Project
Start by creating a new Xamarin.Forms project. You'll learn more by going through the steps yourself.
Choose the Xamarin.Forms App Project type under Cross-platform/App in the New Project dialog.
Name your app, select “Use .NET Standard” for shared code, and target both Android and iOS.
You probably want your project and solution to use the same name as your app. Put it in your preferred folder for projects and click Create.
You now have a basic Xamarin.Forms app. Click the play button to try it out.
Get Face API Key
In this step, you'll get a Face API key. Go to the following link.
https://azure.microsoft.com/en-in/services/cognitive-services/
Click "Try Cognitive Services for free".
Now, you can choose Face under Vision APIs. Afterward, click "Get API Key".
Read the terms, and select your country/region. Afterward, click "Next".
Now, log in using your preferred account.
Now, the API key is activated and ready to use.
The trial key is valid for only seven days. If you want a permanent key, refer to the following article.
Setting up the User Interface
Go to MainPage.xaml and write the following code.
MainPage.xaml
<?xml version="1.0" encoding="utf-8"?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:local="clr-namespace:XamarinCognitive"
             x:Class="XamarinCognitive.MainPage">
    <StackLayout HorizontalOptions="Center" VerticalOptions="Start">
        <Image Margin="0,50,0,0" x:Name="imgBanner" Source="banner.png" />
        <Image Margin="0,0,0,10" x:Name="imgEmail" HeightRequest="100" Source="cognitiveservice.png" />
        <Label Margin="0,0,0,10" Text="Face Detection" FontAttributes="Bold" FontSize="Large" TextColor="Gray" HorizontalTextAlignment="Center" />
        <Image Margin="0,0,0,10" x:Name="imgSelected" HeightRequest="150" Source="defaultimage.png" />
        <Button x:Name="btnPick" Text="Pick" Clicked="btnPick_Clicked" />
        <StackLayout HorizontalOptions="CenterAndExpand" Margin="10,0,0,10">
            <Label x:Name="lblTotalFace" />
            <Label x:Name="lblGender" />
            <Label x:Name="lblAge" />
        </StackLayout>
    </StackLayout>
</ContentPage>
Click the Play button to try it out.
NuGet Packages
Now, add the following NuGet Packages.
- Xam.Plugin.Media
- Newtonsoft.Json
Add Xam.Plugin.Media NuGet
In this step, add Xam.Plugin.Media to your project. You can install Xam.Plugin.Media via NuGet, or you can browse the source code on GitHub.
Go to Solution Explorer and select your solution. Right-click and select "Manage NuGet Packages for Solution". Search for "Xam.Plugin.Media" and add the package. Remember to install it for each project (the .NET Standard library, Android, and iOS).
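Alternatively, both packages can be installed from the Package Manager Console; the commands below use the package IDs as listed above (versions omitted):

```
Install-Package Xam.Plugin.Media
Install-Package Newtonsoft.Json
```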
Permissions
In this step, give the following required permissions to your app.
Permissions - for Android
- CAMERA
- READ_EXTERNAL_STORAGE
- WRITE_EXTERNAL_STORAGE
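On Android, these permissions are declared in the Android project's AndroidManifest.xml, inside the top-level manifest element; a minimal sketch:

```xml
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
```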
Permissions - for iOS
- NSCameraUsageDescription
- NSPhotoLibraryUsageDescription
- NSMicrophoneUsageDescription
- NSPhotoLibraryAddUsageDescription
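On iOS, add the corresponding usage-description keys to the iOS project's Info.plist, inside the top-level dict; the description strings below are illustrative placeholders you should adapt for your app:

```xml
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to take photos.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>This app accesses the photo library to pick an image.</string>
<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone when recording video.</string>
<key>NSPhotoLibraryAddUsageDescription</key>
<string>This app saves photos to the photo library.</string>
```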
Create a Model
In this step, create a model for deserializing the response.
ResponseModel.cs
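The contents of ResponseModel.cs are not shown above. A minimal sketch that matches the properties used in MainPage.xaml.cs (faceAttributes.gender and faceAttributes.age), assuming lower-case property names to match the Face API JSON, might look like this:

```csharp
namespace XamarinCognitive.Models
{
    // Matches one element of the JSON array returned by the detect endpoint.
    public class ResponseModel
    {
        public string faceId { get; set; }
        public FaceRectangle faceRectangle { get; set; }
        public FaceAttributes faceAttributes { get; set; }
    }

    // Location of the detected face within the image.
    public class FaceRectangle
    {
        public int top { get; set; }
        public int left { get; set; }
        public int width { get; set; }
        public int height { get; set; }
    }

    // Only the attributes displayed in the UI are modeled here.
    public class FaceAttributes
    {
        public string gender { get; set; }
        public double age { get; set; }
    }
}
```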
Face Detection
In this step, write the following code for face detection.
MainPage.xaml.cs
using System;
using System.Collections.Generic;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Newtonsoft.Json;
using Plugin.Media;
using Xamarin.Forms;
using XamarinCognitive.Models;

namespace XamarinCognitive
{
    public partial class MainPage : ContentPage
    {
        // Your Face API subscription key (masked here).
        public string subscriptionKey = "7c85c822a**********4886209ccbb3fb";

        // The region in the endpoint must match the region where your key was issued.
        public string uriBase = "https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect";

        public MainPage()
        {
            InitializeComponent();
        }

        async void btnPick_Clicked(object sender, EventArgs e)
        {
            await CrossMedia.Current.Initialize();
            try
            {
                // Let the user pick a photo from the device's library.
                var file = await CrossMedia.Current.PickPhotoAsync(new Plugin.Media.Abstractions.PickMediaOptions
                {
                    PhotoSize = Plugin.Media.Abstractions.PhotoSize.Medium
                });
                if (file == null) return;

                // Show the selected image in the UI.
                imgSelected.Source = ImageSource.FromStream(() => file.GetStream());

                await MakeAnalysisRequest(file.Path);
            }
            catch (Exception ex)
            {
                // Consider surfacing the error to the user instead of swallowing it.
                string error = ex.Message;
            }
        }

        public async Task MakeAnalysisRequest(string imageFilePath)
        {
            HttpClient client = new HttpClient();
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);

            // Request face IDs plus a set of face attributes.
            string requestParameters = "returnFaceId=true&returnFaceLandmarks=false" +
                "&returnFaceAttributes=age,gender,headPose,smile,facialHair,glasses," +
                "emotion,hair,makeup,occlusion,accessories,blur,exposure,noise";

            string uri = uriBase + "?" + requestParameters;
            byte[] byteData = GetImageAsByteArray(imageFilePath);

            using (ByteArrayContent content = new ByteArrayContent(byteData))
            {
                // The image is sent as a raw binary body.
                content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
                HttpResponseMessage response = await client.PostAsync(uri, content);

                string contentString = await response.Content.ReadAsStringAsync();

                // The detect endpoint returns a JSON array, one element per detected face.
                List<ResponseModel> faceDetails = JsonConvert.DeserializeObject<List<ResponseModel>>(contentString);
                if (faceDetails.Count != 0)
                {
                    lblTotalFace.Text = "Total Faces : " + faceDetails.Count;
                    lblGender.Text = "Gender : " + faceDetails[0].faceAttributes.gender;
                    lblAge.Text = "Age : " + faceDetails[0].faceAttributes.age;
                }
            }
        }

        public byte[] GetImageAsByteArray(string imageFilePath)
        {
            // Read the image file into a byte array for the request body.
            using (FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read))
            using (BinaryReader binaryReader = new BinaryReader(fileStream))
            {
                return binaryReader.ReadBytes((int)fileStream.Length);
            }
        }
    }
}
Click the Play button to try it out.
Happy Coding...!
I hope you have understood how to detect human faces using Cognitive Services in Xamarin.Forms.
Thanks for reading. Please share comments and feedback.