ASP.NET Core Web API for Abusive Comments Detection

Introduction

Detecting abusive comments in a web application involves using Natural Language Processing (NLP) techniques and machine learning models to analyze the comments and identify inappropriate or offensive content. Here's a complete implementation of abusive comments detection in an ASP.NET Core Web API using the Perspective API from Google as an example.

Note. The Perspective API is just one option, and there are other NLP libraries and models that you can use for this purpose. Also, since the Perspective API might have usage limitations or pricing policies, make sure to check the latest documentation and terms of use before implementing it in a production environment.

Step 1. Set up the ASP.NET Core Web API project

Create a new ASP.NET Core Web API project using Visual Studio or the .NET CLI.

Install the required NuGet packages

  • Microsoft.AspNetCore.Mvc.NewtonsoftJson (for JSON serialization with Newtonsoft.Json).
  • FluentValidation (optional but recommended for validating input).
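If you are using the .NET CLI, the project creation and package installation above can be done with commands like the following (the project name is just an illustrative example):

```shell
# Create a new Web API project (any name works; this one is an example)
dotnet new webapi -n AbusiveCommentsApi
cd AbusiveCommentsApi

# Install the NuGet packages listed above
dotnet add package Microsoft.AspNetCore.Mvc.NewtonsoftJson
dotnet add package FluentValidation
```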

Step 2. Register for Perspective API

Go to the Perspective API website (https://www.perspectiveapi.com/) and sign up for an API key.

Step 3. Implement the Comment model

Create a model to represent the comments that will be sent to the Perspective API for analysis.

Author: Sardar Mudassar Ali Khan
public class Comment
{
    public string Text { get; set; }
}
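Since FluentValidation was installed earlier, a validator for this model might look like the sketch below. The 3,000-character limit is an arbitrary example for illustration, not a requirement of the Perspective API.

```csharp
using FluentValidation;

// Illustrative validator for the Comment model; adjust the rules to your needs.
public class CommentValidator : AbstractValidator<Comment>
{
    public CommentValidator()
    {
        RuleFor(c => c.Text)
            .NotEmpty().WithMessage("The 'Text' field cannot be empty.")
            .MaximumLength(3000).WithMessage("The comment is too long.");
    }
}
```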

Step 4. Set up the Perspective API client

Create a service to interact with the Perspective API using your API key.

using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;

public interface IPerspectiveApiClient
{
    Task<double> AnalyzeToxicityAsync(string text);
}

public class PerspectiveApiClient : IPerspectiveApiClient
{
    private readonly HttpClient _httpClient;
    private readonly string _apiKey;

    // The API key is read from configuration ("PerspectiveApiKey" in appsettings.json),
    // so the typed-client registration can construct this class without extra arguments.
    public PerspectiveApiClient(HttpClient httpClient, IConfiguration configuration)
    {
        _httpClient = httpClient;
        _apiKey = configuration["PerspectiveApiKey"];
    }

    public async Task<double> AnalyzeToxicityAsync(string text)
    {
        var request = new
        {
            comment = new
            {
                text = text
            },
            languages = new[] { "en" },
            requestedAttributes = new
            {
                TOXICITY = new { }
            }
        };

        var json = JsonSerializer.Serialize(request);
        var content = new StringContent(json, System.Text.Encoding.UTF8, "application/json");

        var response = await _httpClient.PostAsync($"https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key={_apiKey}", content);

        if (!response.IsSuccessStatusCode)
        {
            throw new HttpRequestException($"Perspective API request failed with status code {response.StatusCode}.");
        }

        var responseContent = await response.Content.ReadAsStringAsync();

        // The Perspective API returns camelCase JSON, so deserialization must be
        // case-insensitive to map it onto the PascalCase classes below.
        var perspectiveApiResponse = JsonSerializer.Deserialize<PerspectiveApiResponse>(
            responseContent,
            new JsonSerializerOptions { PropertyNameCaseInsensitive = true });

        return perspectiveApiResponse?.AttributeScores?.TOXICITY?.SummaryScore?.Value ?? 0.0;
    }
}

public class PerspectiveApiResponse
{
    public AttributeScores AttributeScores { get; set; }
}

public class AttributeScores
{
    public ToxicityAttribute TOXICITY { get; set; }
}

public class ToxicityAttribute
{
    public ScoreSummary SummaryScore { get; set; }
}

public class ScoreSummary
{
    public double Value { get; set; }
}
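For reference, a heavily abbreviated Perspective API response has roughly the shape below (real responses also include per-span scores and detected languages). The camelCase property names are why System.Text.Json needs case-insensitive matching, or `[JsonPropertyName]` attributes, to map them onto the PascalCase classes above.

```json
{
  "attributeScores": {
    "TOXICITY": {
      "summaryScore": {
        "value": 0.92,
        "type": "PROBABILITY"
      }
    }
  },
  "languages": ["en"]
}
```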

Step 5. Add the Perspective API key to appsettings.json

Open the appsettings.json file and add your Perspective API key.

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "AllowedHosts": "*",
  "PerspectiveApiKey": "YOUR_PERSPECTIVE_API_KEY"
}
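Storing a real API key in appsettings.json risks committing it to source control. During development, the Secret Manager tool is a safer place for it; the key name must match whatever the code reads from configuration:

```shell
# Run from the project directory; the key is stored outside the repository
dotnet user-secrets init
dotnet user-secrets set "PerspectiveApiKey" "YOUR_PERSPECTIVE_API_KEY"
```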

Step 6. Register services in Startup.cs

Open the Startup.cs file and configure the services.

using System;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers()
            .AddNewtonsoftJson();

        // Typed HttpClient for calling the Perspective API
        services.AddHttpClient<IPerspectiveApiClient, PerspectiveApiClient>(client =>
        {
            client.BaseAddress = new Uri("https://commentanalyzer.googleapis.com");
        });
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        // ...
        app.UseRouting();
        app.UseAuthorization();
        app.UseEndpoints(endpoints =>
        {
            endpoints.MapControllers();
        });
    }
}
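On .NET 6 or later the project template uses the minimal hosting model instead of Startup.cs; the equivalent registration in Program.cs would look roughly like this:

```csharp
using System;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers()
    .AddNewtonsoftJson();

// Typed HttpClient for the Perspective API client
builder.Services.AddHttpClient<IPerspectiveApiClient, PerspectiveApiClient>(client =>
{
    client.BaseAddress = new Uri("https://commentanalyzer.googleapis.com");
});

var app = builder.Build();

app.MapControllers();
app.Run();
```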

Step 7. Create a controller to handle comments

Create a controller that will handle incoming comment requests and utilize the Perspective API for abuse detection.

using Microsoft.AspNetCore.Mvc;
using System.Threading.Tasks;

[ApiController]
[Route("api/[controller]")]
public class CommentController : ControllerBase
{
    private readonly IPerspectiveApiClient _perspectiveApiClient;

    public CommentController(IPerspectiveApiClient perspectiveApiClient)
    {
        _perspectiveApiClient = perspectiveApiClient;
    }

    [HttpPost("analyze")]
    public async Task<IActionResult> AnalyzeComment([FromBody] Comment comment)
    {
        if (comment == null || string.IsNullOrWhiteSpace(comment.Text))
        {
            return BadRequest("The 'Text' field cannot be empty.");
        }

        double toxicityScore = await _perspectiveApiClient.AnalyzeToxicityAsync(comment.Text);

        return Ok(new { ToxicityScore = toxicityScore });
    }
}

Step 8. Test the API

Run the application and use Postman or Swagger to send POST requests to the /api/comment/analyze endpoint with a JSON body containing the Text field. The API will respond with the toxicity score for the comment.
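For example, with the application running locally, a request from the command line could look like this (the port varies per project, so adjust it to match your launch settings):

```shell
# Send a comment to the analyze endpoint; the response body contains the toxicity score
curl -X POST "https://localhost:5001/api/comment/analyze" \
  -H "Content-Type: application/json" \
  -d '{"text": "some comment text"}'
```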

Remember that the Perspective API returns a toxicity score, and you can use this score to decide how to handle the comments based on your application's requirements.
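As a sketch of that decision step, comments could be bucketed by score. The 0.5 and 0.8 thresholds below are illustrative only, not recommendations from the Perspective API; tune them against your own data.

```csharp
public static class ToxicityPolicy
{
    // Illustrative thresholds for acting on a Perspective toxicity score.
    public static string Classify(double toxicityScore)
    {
        if (toxicityScore >= 0.8) return "block";   // very likely abusive
        if (toxicityScore >= 0.5) return "review";  // route to human moderation
        return "allow";                             // likely fine
    }
}
```

The controller, or a service layer above it, could call `ToxicityPolicy.Classify(toxicityScore)` and act on the result.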

Conclusion

Implementing abusive comments detection in an ASP.NET Core Web API involves utilizing Natural Language Processing (NLP) techniques and external APIs, such as the Perspective API from Google. This implementation aims to protect online platforms from harmful or offensive content. Here's a summary of the steps involved:

  1. Set up the ASP.NET Core Web API project: Create a new ASP.NET Core Web API project and install necessary NuGet packages, such as `Microsoft.AspNetCore.Mvc.NewtonsoftJson` and `FluentValidation` for validation.
  2. Register for Perspective API: Obtain an API key from the Perspective API website to use their service for analyzing the toxicity of comments.
  3. Implement the Comment model: Create a model to represent the comments that will be analyzed.
  4. Set up the Perspective API client: Create a service that interacts with the Perspective API using your API key. This service handles sending comments for analysis and receiving toxicity scores.
  5. Add the Perspective API key to appsettings.json: Store the API key in the configuration file.
  6. Register services in Startup.cs: Configure services in the Startup.cs file, including registering the Perspective API client.
  7. Create a controller to handle comments: Implement a controller that handles incoming comment requests, uses the Perspective API for abuse detection, and returns the toxicity score.
  8. Test the API: Run the application and use tools like Postman or curl to send POST requests to the /api/comment/analyze endpoint, providing a JSON body with the Text field. The API responds with the toxicity score for the comment.

It's essential to remember that this implementation is a basic starting point and needs further refinement for production use. Additional considerations include handling more complex scenarios, implementing rate limiting, addressing potential false positives/negatives, and ensuring scalability and security.

As you develop this feature, prioritize inclusivity and avoid promoting any form of discrimination or harmful behavior, ensuring that the system is fair and respects users' rights while maintaining a safe environment.