In this article, we’ll illustrate the integration of Azure Computer Vision API into a .NET Core application. Our focus will be on creating an intelligent image classifier that can accurately identify and label objects within images, leveraging the capabilities of Microsoft’s Cognitive Services within the .NET Core framework.
Use Cases and Applications
- Food: Identify dishes in photographs, such as those you would see on a restaurant menu.
- Landmarks: Recognize well-known landmarks, both natural and man-made.
- Retail: Automate inventory management or self-checkout systems by swiftly identifying products on store shelves.
- Agriculture: Enhance crop management practices by monitoring plant health and swiftly identifying signs of disease or pest infestation.
- Automotive: Improve manufacturing standards by promptly detecting defects or irregularities in vehicle components during production.
- Manufacturing: Optimize production line efficiency by automating the sorting of items based on their visual attributes.
- Banking and Finance: Expedite financial transactions by automating the extraction and interpretation of information from checks.
Setting Up Azure Cognitive Services
The first step is to create an Azure Cognitive Services resource.
- Register for an Azure account if you haven’t already; you can sign up for a free account on the Azure website.
- Access the Azure portal and establish a new Azure Computer Vision resource.
- After the resource is deployed, make a note of Key 1 and the Endpoint from the ‘Keys and Endpoint’ tab of the resource.
Integrating Azure Computer Vision with .NET
Create an ASP.NET Core Web App project in Visual Studio, and then install these NuGet packages.
- Microsoft.Azure.CognitiveServices.Vision.ComputerVision
- Microsoft.Extensions.Configuration
- Microsoft.Extensions.Configuration.Json
- System.Drawing.Common
Add your key and endpoint to the appsettings.json file.
{
  "ComputerVision": {
    "Endpoint": "Your_Computer_Vision_Endpoint",
    "SubscriptionKey": "Your_Computer_Vision_Key"
  }
}
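The controller code below relies on a _computerVision client, which is not shown being constructed in this article. Below is a minimal sketch of how it might be created from these settings and registered for dependency injection, assuming the .NET 6+ minimal hosting model in Program.cs; names and structure may differ in your project.

// Program.cs - sketch only: read the settings above and register the client for DI.
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllersWithViews();

// Read the endpoint and key added to appsettings.json.
var endpoint = builder.Configuration["ComputerVision:Endpoint"];
var key = builder.Configuration["ComputerVision:SubscriptionKey"];

// Register a single ComputerVisionClient instance that controllers can receive via DI.
builder.Services.AddSingleton(_ =>
    new ComputerVisionClient(new ApiKeyServiceClientCredentials(key))
    {
        Endpoint = endpoint
    });

var app = builder.Build();
app.UseStaticFiles();
app.MapDefaultControllerRoute();
app.Run();

With this registration in place, any controller can request a ComputerVisionClient through its constructor.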
Sample Code Snippet of the Upload Method and the DrawRectanglesOnImage Helper
public async Task<IActionResult> Upload(IFormFile imageFile)
{
    if (imageFile != null && imageFile.Length > 0)
    {
        try
        {
            using (var ms = new MemoryStream())
            {
                // Copy the uploaded file into memory and rewind the stream.
                await imageFile.CopyToAsync(ms);
                ms.Seek(0, SeekOrigin.Begin);

                // Ask the Computer Vision service to detect objects in the image.
                var detectObjectsResults = await _computerVision.DetectObjectsInStreamAsync(ms);
                ViewBag.Results = detectObjectsResults.Objects;

                // Annotate the original image and pass it to the view as a Base64 string.
                var processedImage = DrawRectanglesOnImage(ms.ToArray(), detectObjectsResults.Objects);
                ViewBag.ProcessedImage = Convert.ToBase64String(processedImage);
            }
        }
        catch (Exception ex)
        {
            return RedirectToAction("Error", "Home", new { errorMessage = ex.Message });
        }
    }
    return View("Index");
}
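For reference, each entry in detectObjectsResults.Objects is a DetectedObject exposing the label (ObjectProperty), a Confidence score, and a bounding Rectangle with X, Y, W, and H values. A small, purely illustrative loop (not part of the sample) that could be added inside the using block to log them:

// Illustrative only: log each detection's label, confidence, and bounding box.
foreach (var detected in detectObjectsResults.Objects)
{
    Console.WriteLine(
        $"{detected.ObjectProperty} ({detected.Confidence:P1}) at " +
        $"[{detected.Rectangle.X}, {detected.Rectangle.Y}, {detected.Rectangle.W}, {detected.Rectangle.H}]");
}

These are the same fields the DrawRectanglesOnImage helper below uses to annotate the image.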
private byte[] DrawRectanglesOnImage(byte[] imageBytes, IList<DetectedObject> objects)
{
    // Load the image from the byte array and prepare the drawing tools.
    using (var ms = new MemoryStream(imageBytes))
    using (var originalImage = Image.FromStream(ms))
    using (var graphics = Graphics.FromImage(originalImage))
    using (var pen = new Pen(Color.Red, 3))
    using (var font = new Font("Arial", 16))
    using (var brush = new SolidBrush(Color.Red))
    {
        foreach (var objectInfo in objects)
        {
            // Draw a bounding box around the detected object.
            var rect = new Rectangle(objectInfo.Rectangle.X, objectInfo.Rectangle.Y,
                objectInfo.Rectangle.W, objectInfo.Rectangle.H);
            graphics.DrawRectangle(pen, rect);

            // Draw the object's name just above the bounding box.
            graphics.DrawString(objectInfo.ObjectProperty, font, brush,
                objectInfo.Rectangle.X, objectInfo.Rectangle.Y - 20);
        }

        // Save the annotated image and return it as a byte array.
        using (var memoryStream = new MemoryStream())
        {
            originalImage.Save(memoryStream, ImageFormat.Jpeg);
            return memoryStream.ToArray();
        }
    }
}
- The Upload method accepts an IFormFile parameter representing the uploaded image file.
- If the file is valid, it copies the file’s content into a memory stream and then invokes the Computer Vision client (_computerVision; one way it might be provided to the controller is sketched after this list) to detect objects within the image.
- Detected objects are then processed to draw rectangles around them on the image and label them with their names.
- The DrawRectanglesOnImage method takes the byte array representation of the image and the list of detected objects.
- It draws rectangles around detected objects on the image and labels them with their names.
- Finally, it converts the modified image back to a byte array and returns it.
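For completeness, here is a minimal sketch of how the _computerVision field could reach the controller through constructor injection, assuming the registration shown earlier; HomeController is an assumed name, not taken from the sample.

public class HomeController : Controller
{
    private readonly ComputerVisionClient _computerVision;

    // The ComputerVisionClient registered in Program.cs is injected here
    // and used by the Upload action shown above.
    public HomeController(ComputerVisionClient computerVision)
    {
        _computerVision = computerVision;
    }

    // Upload and DrawRectanglesOnImage go here.
}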
Testing
- Build and launch the application.
- Select an image by clicking the “Upload Image” button and choosing a file from your device.
- Click the “Upload” button to submit the image for processing.
- If the image processing is successful, an annotated version of the image will be displayed below the upload form, showing detected objects with rectangles and labels.
- Additionally, a list of detected objects and their confidence levels will be displayed below the annotated image.
Note: Make sure the uploaded image is in a format supported by the application. The application only processes images that are valid and contain detectable objects.
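If you want to enforce the format check in code, a minimal sketch might look like the following; this guard is an assumption for illustration, not part of the sample repository.

// Hypothetical guard: accept only common image content types before calling the service.
private static readonly string[] AllowedContentTypes =
    { "image/jpeg", "image/png", "image/bmp", "image/gif" };

private static bool IsSupportedImage(IFormFile file) =>
    file != null
    && file.Length > 0
    && AllowedContentTypes.Contains(file.ContentType, StringComparer.OrdinalIgnoreCase);

The Upload action could call this before invoking the Computer Vision client and return a friendly error otherwise.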
Figure: Objects detected and annotated with Azure Computer Vision
The source code is available in the following repository: https://github.com/alibenchaabene/Azure_ImageClassifier
References: https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/
Thank you for reading. Please let me know your questions, thoughts, or feedback in the comments section. I appreciate your feedback and encouragement.
Happy coding!