In this tutorial, we are going to discuss rate limiting in .NET 6.0 and look at the different ways of implementing it.
What is Rate Limiting?
Rate limiting is the process of restricting how often a resource can be accessed within a specific time window. The limit on the number of requests to each API endpoint can be applied per unique user, IP address, or client.
Purpose of Rate Limiting
Generate Revenue
Public APIs use rate limiting for commercial purposes. There are many public APIs on the market that generate revenue this way: consumers pay a subscription fee to leverage the APIs, and rate limits enforce the quota each subscription tier allows.
Prevent malicious Attack (DoS attack)
For instance, a hacker can use bots to make repeated calls to an API endpoint until the service becomes unavailable to everyone else. This is known as a Denial of Service (DoS) attack.
Regulate traffic to APIs based on infrastructure availability
This is more applicable to cloud providers that use a “pay as you go” IaaS (Infrastructure as a Service) model, where unthrottled traffic translates directly into infrastructure cost.
Note
ASP.NET Core does not support rate limiting out of the box in .NET 6.0. However, the framework provides HTTP middleware extensibility options for this purpose.
The entire source code is available in the Repository
Here, we are going to look at two approaches:
- Using a custom middleware
- Using the NuGet package AspNetCoreRateLimit
Let us look at the demo API application to understand the implementation.
Tools I have used for this demo
- VS 2022 Community Edition - Version 17.3.0 Preview 6.0
- .NET 6.0
- Swagger/Postman
Demo Web API
In our demo, we will have two action methods – one returns the student details by student Id and the other returns all students. As this article focuses on rate limiting, I’m keeping our project structure very simple and short.
For simplicity, I’m using an in-memory dictionary as the persistence store for students. For enterprise applications, we would obviously use a relational or non-relational data source.
The controller contains the two action methods below:
[HttpGet("{id}")]
public IActionResult GetStudentById(Guid id) {
    var student = _studentRepository.GetStudentById(id);
    return student is not null ? Ok(student) : NotFound();
}

[HttpGet("")]
public IActionResult GetAllStudents() {
    return Ok(_studentRepository.GetAllStudents());
}
Depending on the requirements, the API may apply throttling to all endpoints or only to specific ones. To achieve this, the best approach is to create a decorator (a custom attribute) and apply it selectively.
Custom Middleware Approach
Let us go ahead and create a Decorator.
namespace CustomMiddleware.Decorators {
    [AttributeUsage(AttributeTargets.Method)]
    public class LimitRequest : Attribute {
        public int TimeWindow { get; set; }
        public int MaxRequests { get; set; }
    }
}
This attribute applies only to methods. Its two properties indicate the maximum number of requests allowed within a specific time window (in seconds). This approach gives us the flexibility to configure the rate-limit values per endpoint.
Apply Rate Limit to the Endpoints
Let us apply the LimitRequest attribute to the endpoints in StudentController. The updated endpoints are below
[HttpGet("{id}")]
[LimitRequest(MaxRequests = 2, TimeWindow = 5)]
public IActionResult GetStudentById(Guid id) {
    var student = _studentRepository.GetStudentById(id);
    return student is not null ? Ok(student) : NotFound();
}

[HttpGet("")]
[LimitRequest(MaxRequests = 2, TimeWindow = 5)]
public IActionResult GetAllStudents() {
    return Ok(_studentRepository.GetAllStudents());
}
Here we have configured a maximum of two requests per five-second window. A third request within the same five-second window will not receive a successful response.
Now let us go ahead and create the custom middleware.
Custom Middleware to read LimitRequest attribute
Now we are going to create a middleware called RateLimitingMiddleware which contains the logic for rate limiting.
The source code is available in the repository.
The logic for rate limiting is as below:
public async Task InvokeAsync(HttpContext context) {
    var endpoint = context.GetEndpoint();

    // Read the LimitRequest attribute from the endpoint metadata
    var rateLimitDecorator = endpoint?.Metadata.GetMetadata<LimitRequest>();
    if (rateLimitDecorator is null) {
        await _next(context);
        return;
    }

    var key = GenerateClientKey(context);
    var clientStatistics = await GetClientStatisticsByKey(key);

    // Check whether the request violates the rate-limit policy
    if (clientStatistics != null
        && DateTime.Now < clientStatistics.LastSuccessfulResponseTime.AddSeconds(rateLimitDecorator.TimeWindow)
        && clientStatistics.NumberofRequestsCompletedSuccessfully >= rateLimitDecorator.MaxRequests) {
        context.Response.StatusCode = (int)HttpStatusCode.TooManyRequests;
        return;
    }

    await UpdateClientStatisticsAsync(key, rateLimitDecorator.MaxRequests);
    await _next(context);
}
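For the middleware to take effect, it also has to be registered in the request pipeline. A minimal sketch of the registration in Program.cs (assuming the middleware class is named RateLimitingMiddleware, as above):

```csharp
// Program.cs: plug the custom middleware into the pipeline.
// It should run before the endpoints are executed so that
// rate-limited requests are rejected early.
app.UseMiddleware<RateLimitingMiddleware>();
```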
Briefly, the logic is as follows:
- Check whether the endpoints contain the decorator “LimitRequest”
- If the decorator does not exist, pass the request to next middleware in the pipeline
- If the decorator exists, generate a unique key – which is a combination of endpoint path and IP address of the client
private static string GenerateClientKey(HttpContext context)
=> $"{context.Request.Path}_{context.Connection.RemoteIpAddress}";
Here we have chosen the client’s IP address as the client identifier.
Now, use the key to get the ClientStatistics instance from the cache.
private async Task<ClientStatistics> GetClientStatisticsByKey(string key)
    => await _cache.GetCachedValueAsync<ClientStatistics>(key);
public class ClientStatistics {
    public DateTime LastSuccessfulResponseTime { get; set; }
    public int NumberofRequestsCompletedSuccessfully { get; set; }
}
The ClientStatistics instance is a record of the number of times the specific client has received a successful response and the time of the last successful response. For simplicity, I have used an in-memory cache to store the client statistics. In Program.cs we have the below line of code:
builder.Services.AddDistributedMemoryCache();
For a load-balanced API, ideally we would use a distributed cache such as Redis or Memcached.
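For instance, swapping the in-memory cache for Redis is a small change in Program.cs (a sketch assuming the Microsoft.Extensions.Caching.StackExchangeRedis package and a locally running Redis instance; the connection string is illustrative):

```csharp
// Program.cs: replace AddDistributedMemoryCache with a Redis-backed IDistributedCache
builder.Services.AddStackExchangeRedisCache(options => {
    options.Configuration = "localhost:6379"; // illustrative connection string
    options.InstanceName = "RateLimit_";      // optional key prefix
});
```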
- Use the client statistics to check whether the current request has reached the maximum request limit for the endpoint within the time window. If it has, the client receives status code 429 (Too Many Requests).
- If the request does not violate the request limit, the client receives the list of students with status code 200.
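The _cache field used above is a small wrapper over IDistributedCache rather than the interface itself. A minimal sketch of such a wrapper, assuming JSON serialization (the extension-method names mirror the GetCachedValueAsync call shown earlier; the repository’s implementation may differ):

```csharp
using System.Text.Json;
using Microsoft.Extensions.Caching.Distributed;

public static class DistributedCacheExtensions {
    // Read a cached value and deserialize it, or return default when the key is absent.
    public static async Task<T?> GetCachedValueAsync<T>(this IDistributedCache cache, string key) {
        var json = await cache.GetStringAsync(key);
        return json is null ? default : JsonSerializer.Deserialize<T>(json);
    }

    // Serialize a value and store it under the given key.
    public static async Task SetCachedValueAsync<T>(this IDistributedCache cache, string key, T value) {
        await cache.SetStringAsync(key, JsonSerializer.Serialize(value));
    }
}
```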
Let us go ahead and execute the Web API and see how we are getting the response.
If we try a third time within 5 seconds, we will get the below response:
AspNetCoreRateLimit NuGet Package Approach
The below definition is copied from the NuGet package description:
“AspNetCoreRateLimit is an ASP.NET Core rate limiting solution designed to control the rate of requests that clients can make to a Web API or MVC app based on IP address or client ID. The AspNetCoreRateLimit package contains an IpRateLimitMiddleware and a ClientRateLimitMiddleware, with each middleware you can set multiple limits for different scenarios like allowing an IP or Client to make a maximum number of calls in a time interval like per second, 15 minutes, etc. You can define these limits to address all requests made to an API or you can scope the limits to each API URL or HTTP verb and path”
The source code is available in the repository.
We are going to use the IP address rate-limiting strategy in this example. Let us go ahead and install the NuGet package.
Once the installation completes, add the below configuration to Program.cs:
// AspNetCoreRateLimit
builder.Services.AddMemoryCache();
builder.Services.Configure<IpRateLimitOptions>(builder.Configuration.GetSection("IpRateLimiting"));
builder.Services.AddSingleton<IIpPolicyStore, MemoryCacheIpPolicyStore>();
builder.Services.AddSingleton<IRateLimitCounterStore, MemoryCacheRateLimitCounterStore>();
builder.Services.AddSingleton<IRateLimitConfiguration, RateLimitConfiguration>();
builder.Services.AddSingleton<IProcessingStrategy, AsyncKeyLockProcessingStrategy>();
builder.Services.AddInMemoryRateLimiting();
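Besides the service registrations, the package’s middleware must also be added to the request pipeline; without this line, the rules in appsettings.json are never evaluated:

```csharp
// Program.cs: enable IP rate limiting early in the pipeline,
// before the endpoints are mapped
app.UseIpRateLimiting();
```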
For simplicity, I have used the in-memory cache. For large-scale applications, this can easily be replaced with the AspNetCoreRateLimit.Redis package to implement distributed caching.
Now the below section needs to be included in appsettings.json:
"IpRateLimiting": {
  "EnableEndpointRateLimiting": true,
  "StackBlockedRequests": false,
  "RealIPHeader": "X-Real-IP",
  "ClientIdHeader": "X-ClientId",
  "HttpStatusCode": 429,
  "GeneralRules": [{
    "Endpoint": "GET:/students",
    "Period": "5s",
    "Limit": 2
  }]
}
Let us take a quick look at these settings.
EnableEndpointRateLimiting is set to true to ensure that throttling is applied to specific endpoints rather than to all endpoints globally.
GeneralRules – the rule targets the /students endpoint with the HTTP verb GET; only 2 requests are allowed in a time window of 5 seconds.
Note
Endpoint – the endpoint format supports pattern matching. For instance, *:/students/* applies the rate-limiting rule to all endpoints that contain the term “students” in their route, irrespective of the HTTP verb.
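For example, a hypothetical wildcard rule (the period and limit values here are illustrative) would look like this:

```json
"GeneralRules": [{
  "Endpoint": "*:/students/*",
  "Period": "15m",
  "Limit": 100
}]
```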
Let us run the web API and see the responses
There is no violation of rules.
Try again within the time window and see what happens. We would get the below response:
So far we have discussed possible approaches to implementing rate limiting in .NET 6.0. The AspNetCoreRateLimit package is very flexible, provides rich configuration options, and is easy to plug into a Web API without writing much code. However, there may be custom requirements that the AspNetCoreRateLimit package does not support. In that case, custom middleware is the better approach, as it gives developers complete control over the rate-limiting mechanism configured on the endpoints.
Thank you for reading my article. Please leave your comments in the comment box below.