Detecting Harm Content in Text and Images using Azure AI Content Safety

Introduction

Social media platforms have become double-edged swords in recent years. They connect us worldwide, but they can also act as havens for dangerous content. Tech companies have turned to artificial intelligence to protect online communities from this escalating problem, and Azure AI Content Safety, a service designed to recognize and flag harmful content, is one such tool.

Development

We created a demo application to demonstrate the capabilities of Azure AI Content Safety. An Angular frontend serves as the user interface, while a .NET backend API handles the heavy lifting of content analysis.

For this demo, we will use this GitHub repository, where you can configure settings such as the API key, the connection string, etc.

Prerequisites

  • Azure subscription
  • .NET SDK
  • Angular CLI
  • SQL Server database

First, let’s create the Azure resources. Go to the Azure portal and create the Content Safety resource.

Once the resource is created, go to Keys and Endpoint under Resource Management and copy the key and endpoint.

These credentials must be pasted into the appsettings.Development.json file in the AzureAIContentSafety.API project.

{
   "AzureAIContentSafety":{
      "Endpoint":"https://<SERVICE_NAME>.cognitiveservices.azure.com/",
      "ApiKey":"",
      "TextSeverityThreshold":{
         "Blur":3,
         "Reject":5
      },
      "ImageSeverityThreshold":{
         "Blur":2,
         "Reject":4
      }
   }
}
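
The thresholds above drive the demo's moderation decisions: a severity at or above Blur gets blurred, and at or above Reject is rejected outright. A minimal sketch of that logic (decideAction and Thresholds are illustrative names, not the demo's actual code):

```typescript
// Illustrative threshold logic; Thresholds mirrors the shape of the
// TextSeverityThreshold / ImageSeverityThreshold config sections above.
type Thresholds = { Blur: number; Reject: number };
type ModerationAction = "allow" | "blur" | "reject";

function decideAction(severity: number, t: Thresholds): ModerationAction {
  if (severity >= t.Reject) return "reject"; // too severe: block the content
  if (severity >= t.Blur) return "blur";     // borderline: show it blurred
  return "allow";                            // harmless: show as-is
}
```

With the text thresholds above (Blur: 3, Reject: 5), a severity of 4 would be blurred and a severity of 6 rejected.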

Now, create the Storage Account. Follow this configuration in the Basics tab.

Once the resource is created, go to Configuration under Settings and enable the blob anonymous access.

Then, go to Containers under Data storage, create a container named images, and set the Anonymous access level to Blob.

Now, copy the connection string of your Storage account. Go to Access keys under Security + networking and click Show next to the key1 connection string.

Paste it into the same appsettings.Development.json file:

{
   "AzureStorage":{
      "BlobCacheControl":"max-age=21600",
      "BlobContainerName":"images",
      "ConnectionString":""
   }
}
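
Because anonymous blob access is enabled, uploaded images can be served straight from the container's public URL. A hedged sketch of how that URL could be derived from the connection string and container name above (parseAccountName and blobUrl are hypothetical helpers, not code from the demo):

```typescript
// Extract the storage account name from an Azure Storage connection string,
// which contains a semicolon-separated "AccountName=..." segment.
function parseAccountName(connectionString: string): string {
  const match = connectionString.match(/AccountName=([^;]+)/);
  if (!match) throw new Error("AccountName not found in connection string");
  return match[1];
}

// Build the anonymous-access URL for a blob in the given container.
function blobUrl(connectionString: string, container: string, blobName: string): string {
  const account = parseAccountName(connectionString);
  return `https://${account}.blob.core.windows.net/${container}/${blobName}`;
}
```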

Here's how the solution works:

  • User Interaction: A user uploads an image or enters text through the Angular frontend.
  • API Request: The Angular application sends the user-submitted content to the .NET API.
  • Content Analysis: The .NET API then uses the Azure AI Content Safety service to check for potential harm. This analysis covers four main categories:
    • Hate: Content that incites hatred or discrimination against groups
    • Sexual: Content that is exploitative, explicit, or sexually suggestive
    • Violence: Content that depicts or promotes violence
    • Self-Harm: Content that encourages or promotes self-harm
  • Flagging and Moderation: The AI flags content it considers harmful so that human moderators can examine it further. This ensures that appropriate measures, such as deletion or user suspension, can be taken.
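
The flagging step above can be sketched as follows, assuming a response shaped like the service's per-category results (the CategoryResult interface and flaggedCategories helper are illustrative, not the demo's actual code):

```typescript
// Each analyzed item comes back with a severity per harm category.
interface CategoryResult {
  category: string; // e.g. "Hate", "Sexual", "Violence", "SelfHarm"
  severity: number; // 0 (harmless) up to 7 (most severe)
}

// Return the categories whose severity meets or exceeds the threshold,
// i.e. the ones a human moderator should review.
function flaggedCategories(results: CategoryResult[], threshold: number): string[] {
  return results.filter(r => r.severity >= threshold).map(r => r.category);
}
```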

The severity levels for the harm categories are on a scale of 0 to 7, but for images, the classifier only returns severities 0, 2, 4, and 6.

For text, if the user specifies it, the service can also return severities on the trimmed scale of 0, 2, 4, and 6; each pair of adjacent levels is mapped to a single level:

  • [0, 1] -> 0
  • [2, 3] -> 2
  • [4, 5] -> 4
  • [6, 7] -> 6
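
The mapping above collapses each pair of adjacent levels to the lower even value, so it can be expressed as a one-liner (a sketch; the function name is illustrative):

```typescript
// Map a full-scale severity (0-7) to the trimmed scale (0, 2, 4, 6):
// each pair of adjacent levels collapses to the lower even value.
function toTrimmedScale(severity: number): number {
  return Math.floor(severity / 2) * 2;
}
```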

You can adjust the severity thresholds based on your needs in the appsettings.Development.json file.

To test the demo, you can submit text, an image, or both.

If the image content is categorized as harmful in any category, it is blurred by default, but you can toggle the blur with the button displayed at the top-right corner of each image.

Conclusion

Even though AI-powered solutions like Azure AI Content Safety are a big step forward, it's crucial to keep in mind that they are not a panacea. Human oversight remains necessary to ensure accurate and ethical content moderation. By combining human expertise with AI's capabilities, we can make the internet a safer and more welcoming place. As technology develops further, we can expect even more advanced tools to appear, helping us manage the challenges of the digital age.

Thanks for reading

Thank you very much for reading. I hope you found this article interesting and useful. If you have any questions or ideas to discuss, it will be a pleasure to collaborate and exchange knowledge.
