In this article, we are going to learn how to use the Google Vision API with ASP.NET MVC in a step-by-step way.
You can download the complete source code from the link below:
https://www.dropbox.com/s/wism6v175hm6tg2/WebCamAppVision.rar?dl=0
Icons made by Freepik from www.flaticon.com are licensed under CC 3.0 BY.
Why do we use the Google Cloud Vision API?
Cloud Vision API allows developers to easily integrate vision detection features within applications, including image labeling, face and landmark detection, optical character recognition (OCR), and tagging of explicit content.
Referenced from
https://cloud.google.com/vision/docs/
All the giant tech companies offer computer vision services, and they compete closely with each other. As you can see, I have already written an article on the Azure Face API.
http://www.c-sharpcorner.com/article/using-azure-face-api-with-asp-net-mvc/
Note
After downloading the source code, just replace the “Key” with your own API key to make it work.
Process Flow
- Creating an ASP.NET MVC project.
- Getting an API key for using the Cloud Vision API.
- Using the HTML canvas to capture a photo and detect labels.
- Using the HTML canvas to capture a photo and detect faces.
- Finally, displaying the Google Cloud Vision API response.
Creating an ASP.NET MVC project
After opening the IDE, next, we are going to create an ASP.NET MVC project. For doing that, just click File -> New -> Project.

After choosing Project, a new dialog will pop up with the name “New Project”. In that, we are going to choose Visual C# -> Web -> ASP.NET Web Application, and name the project “WebCamAppVision”.

After naming the project, click on the OK button. A new dialog will pop up with the name “New ASP.NET Project”; from that, we are going to choose the “MVC” template for creating an MVC application. We are not going to use any authentication in this application, so we choose “No Authentication”. Finally, click on the OK button to create the project.

After creating the project, next, we are going to get an API key from the Google Cloud portal.
Getting an API key for using Google Vision API
For getting an API key, you must register at the Google Cloud portal.

Google Cloud also offers a free trial for 1 year, with credits worth ₹19,060.50.

After logging in to the Google Cloud portal, open the link below to get started with the Vision API.
https://cloud.google.com/vision/docs/before-you-begin
After accessing it, click on the “ENABLE THE API” button; it will take you to a page where you can enable the API. There, you need to create a new project, or select an existing project, to register for the Google Cloud Vision API.

Since we are using a Google API for the first time, we are going to create a project first.
Creating a project
For creating a project, choose the “Create a project” drop-down and click on the Continue button.

After clicking on the Continue button, it will create a project and display a notification, as shown below.

Next, click on the “Create Project: My Project” notification.

After clicking it, you will be taken to the Dashboard page of that API.

Next, click on the “Go to APIs overview” link.

After clicking on the “APIs overview” link, you will be taken to the APIs & Services Dashboard.

Next, we are going to enable the “Google Cloud Vision API”. For doing that, we are going to click on the Library menu.

After clicking on the Library menu, you will be taken to the API Library page.

Next, in this API library, we are going to search for “Google Cloud Vision API”.

After searching, just click on the “Google Cloud Vision API” panel. It will open the “Google Cloud Vision API” service page, on which you will see an Enable button; just click on it.

After clicking on the Enable button, an alert will pop up asking you to enable billing; just click on “Enable billing”.

After clicking on “Enable billing”, it will ask you to set up a billing account.

After enabling the API, you will be taken to the API dashboard, where we are going to create credentials.
Creating Credentials
On this dashboard, you will see the Credentials menu; just click on it to create credentials for the API.

After clicking on Credentials, you will see the Create Credentials drop-down; after clicking on it, you will see various options, as shown below.

Click on “API key”, and it will pop up your API key.

After getting the key, we are going to add the “CamGoogle” controller.
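A quick aside (my suggestion, not part of the original walkthrough): rather than hard-coding the key in the controller as the snippets below do for brevity, it can be kept in Web.config and read with ConfigurationManager.AppSettings["GoogleVisionApiKey"]. The setting name “GoogleVisionApiKey” is an assumption for illustration:

```xml
<!-- In Web.config; the key name "GoogleVisionApiKey" is a hypothetical example. -->
<appSettings>
  <add key="GoogleVisionApiKey" value="YOUR_API_KEY_HERE" />
</appSettings>
```

This keeps the key out of source control if Web.config transforms or secrets files are used.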
Adding CamGoogle Controller
For adding a controller, just right-click on the Controllers folder, then choose Add -> New Item. A new dialog will pop up for adding a new item. In that, choose “MVC Controller Class”, name the controller “CamGoogle”, and click on the “Add” button to create the CamGoogle controller.

After adding the controller, we are going to add a Capture action method to it for handling the HTTP GET request.
Adding Capture Action Method
This action method just needs to return the Capture view (e.g., public ActionResult Capture() { return View(); }). After adding the Capture action method, we are going to add the Capture view.
Adding Capture View
After adding the Capture view, next, I have added a CamScripts folder which contains the scripts for displaying the HTML5 canvas object.
Complete Code Snippet of Capture View
@{
    Layout = null;
}

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Demo: Take a Selfie With JavaScript</title>
    <link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet">
    <link href="~/CamScripts/css/styles.css" rel="stylesheet" />
    <link href="~/Content/bootstrap.css" rel="stylesheet" />
</head>
<body>
    <h3>
        Demo: Take a Photo
    </h3>
    <div class="container">
        <div class="col-md-2"></div>
        <div class="col-md-6">
            <div class="app">
                <a href="#" id="start-camera" class="visible">Touch here to start the app.</a>
                <video id="camera-stream"></video>
                <img id="snap">
                <p id="error-message"></p>
                <div class="controls">
                    <a href="#" id="delete-photo" title="Delete Photo" class="disabled">
                        <i class="material-icons">delete</i>
                    </a>
                    <a href="#" id="take-photo" title="Take Photo">
                        <i class="material-icons">camera_alt</i>
                    </a>
                    <a href="#" id="download-photo" download="selfie.png" title="Save Photo" class="disabled">
                        <i class="material-icons">file_download</i>
                    </a>
                </div>
                <!-- Hidden canvas element. Used for taking a snapshot of the video. -->
                <canvas width="300" height="400"></canvas>
            </div>
        </div>
        <div class="col-md-2"></div>
    </div>
    <div class="container">
        <div class="row">
            <div class="col-md-12">
                <div id="ResponseTable"></div>
            </div>
        </div>
    </div>
    <!-- jQuery must be loaded before CanvasGoogleScript.js, which uses $.ajax -->
    <script src="https://code.jquery.com/jquery-3.2.1.min.js"></script>
    <script src="~/CamScripts/js/CanvasGoogleScript.js"></script>
    <style>
        .contant {
            border: 1px solid #ddd;
            border-radius: 4px;
            width: 500px;
            padding: 20px;
            margin: 0 auto;
            text-align: center;
        }
    </style>
</body>
</html>
After adding the view, next, we are going to save the entire application and test the Capture view by accessing it.

Snapshot of Capture View

After accessing the Capture view, next, we are going to add code so that clicking the Capture button captures a photo, clicking Delete removes the captured photo, and clicking Download saves your photo.

After adding the view, next, we are going to add the “Mainrequests” model to send the API request.
Adding Mainrequests Model
In this part, we are going to add the Mainrequests model, which we are going to use for sending the API request.
using System.Collections.Generic;

namespace WebCamAppVision.Models
{
    // Request model for the Vision API "images:annotate" endpoint.
    // The class and property names are lowercase on purpose, so that the
    // serialized JSON matches the field names the API expects.
    public class Mainrequests
    {
        public List<requests> requests { get; set; }
    }

    public class requests
    {
        public image image { get; set; }
        public List<features> features { get; set; }
    }

    public class image
    {
        public string content { get; set; }
    }

    public class features
    {
        public string type { get; set; }
    }
}
After adding the model, next, we are going to add the [HttpPost] Capture method.
Note
Label Detection detects broad sets of categories within an image, ranging from modes of transportation to animals.
Label Detection
Adding [HttpPost] Capture Method
This method takes a base64String as input, extracts the base64 payload from it, and sends that payload to the Vision API for analyzing the photo.
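As an aside, here is a small JavaScript sketch of that split (the sample data URL is illustrative, not a real photo):

```javascript
// A canvas data URL has the form "data:image/jpeg;base64,<payload>".
// Splitting on the comma separates the header from the base64 payload,
// which is exactly what the Capture action does on the server side.
function base64Payload(dataUrl) {
  return dataUrl.split(',')[1];
}

// "SGVsbG8=" is the base64 encoding of the text "Hello".
var payload = base64Payload("data:image/jpeg;base64,SGVsbG8=");
```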
Code Snippet of [HttpPost] Capture method.
[HttpPost]
public ActionResult Capture(string base64String)
{
    // The data URL from the canvas looks like "data:image/jpeg;base64,<payload>";
    // the part after the comma is the raw base64 image content.
    var imageParts = base64String.Split(',').ToList<string>();
    byte[] imageBytes = Convert.FromBase64String(imageParts[1]);

    using (var client = new WebClient())
    {
        Mainrequests Mainrequests = new Mainrequests()
        {
            requests = new List<requests>()
            {
                new requests()
                {
                    image = new image()
                    {
                        content = imageParts[1]
                    },
                    features = new List<features>()
                    {
                        new features()
                        {
                            type = "LABEL_DETECTION",
                        }
                    }
                }
            }
        };

        // Replace the key below with the API key generated from the Google Cloud portal.
        var uri = "https://vision.googleapis.com/v1/images:annotate?key=" + "AIzaSyCaB0QqD0APn7l1uMYZH8Kj#############";
        client.Headers.Add("Content-Type:application/json");
        client.Headers.Add("Accept:application/json");
        var response = client.UploadString(uri, JsonConvert.SerializeObject(Mainrequests));
        return Json(data: response);
    }
}
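For reference, here is a small JavaScript sketch of the JSON body that serializing the Mainrequests model produces (buildVisionRequest is a hypothetical helper used only for illustration; the base64 content is a placeholder):

```javascript
// Build the same request body shape that the Mainrequests model serializes to.
// The base64 content here is a placeholder, not a real image.
function buildVisionRequest(base64Content, featureType) {
  return {
    requests: [
      {
        image: { content: base64Content },
        features: [{ type: featureType }]
      }
    ]
  };
}

var body = JSON.stringify(buildVisionRequest("<base64 payload>", "LABEL_DETECTION"));
```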
For sending requests, we are going to use WebClient along with the Mainrequests model. To the Mainrequests model, we assign the base64 payload which we extracted from base64String. Next, we send the request to the API endpoint https://vision.googleapis.com/v1/images:annotate?key= , to which we need to append the key generated from the Google Cloud portal. In the final step, we add the headers, serialize the model, and send the request.

After adding the [HttpPost] Capture method, next, we are going to have a look at CanvasGoogleScript.js, which we are using to capture a photo. After capturing, we get a base64String, which we post to the Capture method.

The photo-capturing code is written in CanvasGoogleScript.js.
Complete Code Snippet of Ajax Post Request
function takeSnapshot() {
    // Draw the current video frame onto the hidden canvas and
    // read it back as a base64 data URL.
    var hidden_canvas = document.querySelector('canvas'),
        context = hidden_canvas.getContext('2d');

    var width = video.videoWidth,
        height = video.videoHeight;

    if (width && height) {
        hidden_canvas.width = width;
        hidden_canvas.height = height;

        context.drawImage(video, 0, 0, width, height);

        var datacaptured = hidden_canvas.toDataURL('image/jpeg');

        Uploadsubmit(datacaptured);

        return datacaptured;
    }
}

function Uploadsubmit(datacaptured) {
    if (datacaptured != "") {
        $.ajax({
            type: 'POST',
            url: "/CamGoogle/Capture",
            dataType: 'json',
            data: { base64String: datacaptured },
            success: function (data) {
                if (data == false) {
                    alert("Photo Captured is not Proper!");
                    $('#ResponseTable').empty();
                }
                else {
                    // A very short (9-character) response payload is treated
                    // as an empty result.
                    if (data.length == 9) {
                        $('#ResponseTable').empty();
                        alert("No labels were detected!");
                    } else {
                        $('#ResponseTable').empty();
                        var _faceAttributes = JSON.parse(data);
                        var _responsetable = "";
                        _responsetable += '<div class="panel panel-default"><div class="panel-heading">Google Vision API Response</div>';
                        _responsetable += "<div class='panel-body'>";
                        _responsetable += '<table class="table table-bordered"><thead><tr> <th>Description</th> <th>Score</th></tr></thead>';

                        // One table row per label annotation.
                        for (var i = 0; i < _faceAttributes.responses[0].labelAnnotations.length; i++) {
                            _responsetable += '<tr><td>' +
                                _faceAttributes.responses[0].labelAnnotations[i].description +
                                '</td><td>' +
                                _faceAttributes.responses[0].labelAnnotations[i].score +
                                '</td></tr>';
                        }

                        _responsetable += "</table></div></div>";
                        $('#ResponseTable').append(_responsetable);
                    }
                }
            }
        });
    }
}
Having seen the CanvasGoogleScript.js code snippet, next, we are going to add a reference to this script in the view.

Now that we have finished adding the scripts and views, next, we are going to save the application and run it to see a demo.
Capturing a photo and getting a response from the Google Cloud Vision API
- Wearing glasses.
In this part, we are going to take a photo and send it to the Google Cloud Vision API for label detection.
Note
Label Detection detects broad sets of categories within an image, ranging from modes of transportation to animals.
Now you can see that, in the above image which we have captured, I was wearing glasses, and the response which came back has the labels “eyewear” and “vision care”.
- Wearing headphones
Now you can see that, in the above image which we have captured, I was wearing headphones, and the response which came back has the labels “electronic device”, “audio equipment”, and “audio”.
- Mobile phone.
Now you can see that, for the above image which we have captured, the response which came back has the labels “black”, “mobile phone”, “electronics”, “electronic device”, “gadget”, etc.
- Glasses
Now you can see that the above image which we have captured was of glasses, and the response which came back has the labels “eyewear”, “glasses”, “vision care”, “goggles”, “sunglasses”, etc.

As we saw, it detects objects very well and sends an accurate response. This was all about “LABEL_DETECTION”.

Next, we are going to have a look at face detection; for doing that, we need to make a small change in the code.
Face Detection
In this part, we are going to use the Google Cloud Vision API to detect faces in an image. To prove to yourself that the faces were detected correctly, you'll then use that data to draw a box around each face.
Reference taken from- https://cloud.google.com/vision/docs/face-tutorial
For detecting faces, we are going to make a small change in the API request: we change the feature type from “LABEL_DETECTION” to “FACE_DETECTION”.
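As a side note, the images:annotate endpoint also accepts several features in a single request, so both detections could be combined instead of editing the type string each time (a sketch; the placeholder content stands in for a real base64 payload):

```javascript
// One request asking for both label and face detection at once.
// The image content is a placeholder for the real base64 payload.
var combinedRequest = {
  requests: [
    {
      image: { content: "<base64 payload>" },
      features: [
        { type: "LABEL_DETECTION" },
        { type: "FACE_DETECTION" }
      ]
    }
  ]
};
```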
Code Snippet of [HttpPost] Capture method.
[HttpPost]
public ActionResult Capture(string base64String)
{
    var imageParts = base64String.Split(',').ToList<string>();
    byte[] imageBytes = Convert.FromBase64String(imageParts[1]);

    using (var client = new WebClient())
    {
        Mainrequests Mainrequests = new Mainrequests()
        {
            requests = new List<requests>()
            {
                new requests()
                {
                    image = new image()
                    {
                        content = imageParts[1]
                    },
                    features = new List<features>()
                    {
                        new features()
                        {
                            // The only change from label detection: the feature type.
                            type = "FACE_DETECTION",
                        }
                    }
                }
            }
        };

        // Replace the key below with the API key generated from the Google Cloud portal.
        client.Headers.Add("Content-Type:application/json");
        client.Headers.Add("Accept:application/json");
        var response = client.UploadString("https://vision.googleapis.com/v1/images:annotate?key=" + "AIzaSyCaB0QqD0APn7l1uMYZH8############", JsonConvert.SerializeObject(Mainrequests));

        return Json(data: response);
    }
}
After changing the type in the POST request, next, we are going to change the script to get the face detection response and display it.
Complete Code Snippet of Ajax Post Request
function takeSnapshot() {
    // Draw the current video frame onto the hidden canvas and
    // read it back as a base64 data URL.
    var hidden_canvas = document.querySelector('canvas'),
        context = hidden_canvas.getContext('2d');

    var width = video.videoWidth,
        height = video.videoHeight;

    if (width && height) {
        hidden_canvas.width = width;
        hidden_canvas.height = height;

        context.drawImage(video, 0, 0, width, height);

        var datacaptured = hidden_canvas.toDataURL('image/jpeg');

        UploadFaceDetection(datacaptured);

        return datacaptured;
    }
}

function UploadFaceDetection(datacaptured) {
    if (datacaptured != "") {
        $.ajax({
            type: 'POST',
            url: "/CamGoogle/Capture",
            dataType: 'json',
            data: { base64String: datacaptured },
            success: function (data) {
                if (data == false) {
                    alert("Photo Captured is not Proper!");
                    $('#ResponseTable').empty();
                }
                else {
                    // A very short (9-character) response payload is treated
                    // as an empty result.
                    if (data.length == 9) {
                        $('#ResponseTable').empty();
                        alert("It's not a face!");
                    } else {
                        var count = 1;
                        var _faceAttributes = JSON.parse(data);
                        var _responsetable = "";
                        _responsetable += '<div class="panel panel-default"><div class="panel-heading">Google Face API Response</div>';
                        _responsetable += "<div class='panel-body'>";
                        _responsetable += '<table class="table table-bordered"><thead><tr> <th>Description</th> <th>Score</th></tr></thead>';

                        // All faces live in responses[0].faceAnnotations;
                        // i indexes the detected faces, j their landmarks.
                        var faces = _faceAttributes.responses[0].faceAnnotations;
                        for (var i = 0; i < faces.length; i++) {
                            _responsetable += '<tr><td>Face</td><td>' + count++ + '</td></tr>';
                            _responsetable += '<tr><td>Joy</td><td>' + faces[i].joyLikelihood + '</td></tr>';
                            _responsetable += '<tr><td>Anger</td><td>' + faces[i].angerLikelihood + '</td></tr>';
                            _responsetable += '<tr><td>Sorrow</td><td>' + faces[i].sorrowLikelihood + '</td></tr>';
                            _responsetable += '<tr><td>Surprise</td><td>' + faces[i].surpriseLikelihood + '</td></tr>';
                            _responsetable += '<tr><td>detectionConfidence</td><td>' + faces[i].detectionConfidence + '</td></tr>';
                            _responsetable += '<tr><td>landmarkingConfidence</td><td>' + faces[i].landmarkingConfidence + '</td></tr>';

                            for (var j = 0; j < faces[i].landmarks.length; j++) {
                                _responsetable += '<tr><td>type</td><td>' + faces[i].landmarks[j].type + '</td></tr>';
                                _responsetable += '<tr><td>X position</td><td>' + faces[i].landmarks[j].position.x + '</td></tr>';
                                _responsetable += '<tr><td>Y position</td><td>' + faces[i].landmarks[j].position.y + '</td></tr>';
                                _responsetable += '<tr><td>Z position</td><td>' + faces[i].landmarks[j].position.z + '</td></tr>';
                            }
                        }

                        _responsetable += "</table></div></div>";
                        $('#ResponseTable').append(_responsetable);
                    }
                }
            }
        });
    }
}
After making these changes, next, we are going to save the application and run it to see a demo of face detection.

Capturing a photo and getting a face detection response using the Google Cloud Vision API
We got a face detection response with emotion likelihoods, and we also got the positions of the facial landmarks.
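The emotion fields (joyLikelihood, angerLikelihood, and so on) come back as likelihood enum strings rather than numbers; here is a small sketch of turning them into something comparable (isLikely is a hypothetical helper, not part of the article's code):

```javascript
// The Vision API returns likelihoods as enum strings; mapping them to
// numbers makes it easy to threshold on them.
var LIKELIHOOD = {
  UNKNOWN: 0,
  VERY_UNLIKELY: 1,
  UNLIKELY: 2,
  POSSIBLE: 3,
  LIKELY: 4,
  VERY_LIKELY: 5
};

function isLikely(value) {
  return LIKELIHOOD[value] >= LIKELIHOOD.LIKELY;
}
```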
Debugging view of the face detection API response
The response which we got can be used for drawing a box around each detected face, as shown in the below image.
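A minimal sketch of that, assuming each face annotation carries a boundingPoly.vertices array as documented (coordinates equal to 0 may be omitted in the JSON, hence the defaults):

```javascript
// Compute an axis-aligned rectangle from the boundingPoly vertices of a
// detected face. Vertices with a 0 coordinate may omit that field.
function boundingRect(vertices) {
  var xs = vertices.map(function (v) { return v.x || 0; });
  var ys = vertices.map(function (v) { return v.y || 0; });
  var x = Math.min.apply(null, xs);
  var y = Math.min.apply(null, ys);
  return {
    x: x,
    y: y,
    width: Math.max.apply(null, xs) - x,
    height: Math.max.apply(null, ys) - y
  };
}

// Usage with a canvas 2D context ("face" is a hypothetical annotation object):
// var r = boundingRect(face.boundingPoly.vertices);
// context.strokeRect(r.x, r.y, r.width, r.height);
```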
Referenced from
https://cloud.google.com/vision/docs/face-tutorial
For more details, you can visit the link above.
Conclusion
In this article, we have learned how to use the Google Cloud Vision API with an ASP.NET MVC application in simple steps. We started by creating an MVC project and getting a Google API key. Further, we created the controller, action methods, and view; finally, we captured a photo using the HTML5 canvas and sent it to the Capture method using an Ajax POST. The base64 string is then sent to the Google Vision API for analysis, and the response is displayed.