When working with Azure Functions, it's common to have different triggers that handle different jobs. For example: a Blob Trigger that processes uploaded files and a Timer Trigger that runs scheduled cleanup.
A common mistake is to duplicate logic across these functions. Instead, we can share code through services and helpers. This keeps each function focused on its trigger while reusing common business logic.
1. Architecture
The Blob Trigger fires when a new file is uploaded.
The Timer Trigger fires every hour (or your chosen schedule).
Both functions call a shared service class that contains the actual logic for processing and cleanup.
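The shape of the app can be sketched as:

```
uploads container ──(Blob Trigger)──▶  BlobProcessorFunction ──┐
                                                               ├──▶ FileProcessorService
hourly schedule ───(Timer Trigger)──▶ ScheduledCleanupFunction ┘
```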
2. Create the Project
Create a new .NET 9 isolated worker Azure Functions project:
func init FileProcessorApp --worker-runtime dotnet-isolated --target-framework net9.0
cd FileProcessorApp
Add dependencies (the Timer extension package is needed for the TimerTrigger in the isolated worker model):
dotnet add package Azure.Storage.Blobs
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Timer
3. The Shared Service
We put our main logic in a service class.
File: Services/FileProcessorService.cs
using System;
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Microsoft.Extensions.Logging;
public class FileProcessorService
{
private readonly BlobServiceClient _blobServiceClient;
private readonly ILogger<FileProcessorService> _logger;
public FileProcessorService(BlobServiceClient blobServiceClient, ILogger<FileProcessorService> logger)
{
_blobServiceClient = blobServiceClient;
_logger = logger;
}
// Process uploaded files
public async Task ProcessFileAsync(Stream fileStream, string fileName)
{
_logger.LogInformation("Processing file: {FileName}", fileName);
// Example: Check if CSV
if (!fileName.EndsWith(".csv", StringComparison.OrdinalIgnoreCase))
{
_logger.LogWarning("Skipping non-CSV file: {FileName}", fileName);
return;
}
using var reader = new StreamReader(fileStream);
int lineNum = 0;
while (!reader.EndOfStream)
{
var line = await reader.ReadLineAsync();
lineNum++;
_logger.LogInformation("Line {LineNumber}: {Line}", lineNum, line);
}
// Move file to "processed" container
var processedContainer = _blobServiceClient.GetBlobContainerClient("processed");
await processedContainer.CreateIfNotExistsAsync(PublicAccessType.None);
var processedBlob = processedContainer.GetBlobClient(fileName);
fileStream.Position = 0; // rewind before re-uploading (copy to a MemoryStream first if the trigger stream is not seekable)
await processedBlob.UploadAsync(fileStream, overwrite: true);
_logger.LogInformation("File {FileName} moved to 'processed' container.", fileName);
}
// Cleanup processed files older than 7 days
public async Task CleanupOldFilesAsync()
{
var container = _blobServiceClient.GetBlobContainerClient("processed");
await container.CreateIfNotExistsAsync();
await foreach (var blobItem in container.GetBlobsAsync())
{
if (blobItem.Properties.CreatedOn < DateTimeOffset.UtcNow.AddDays(-7))
{
var blobClient = container.GetBlobClient(blobItem.Name);
await blobClient.DeleteIfExistsAsync();
_logger.LogInformation("Deleted old file: {BlobName}", blobItem.Name);
}
}
}
}
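As a quick sanity check, the service can also be exercised outside any trigger, e.g. from a small console harness. This is only a sketch: it assumes the Azurite storage emulator is running locally, that the Microsoft.Extensions.Logging.Console package is referenced, and that the in-memory CSV payload is a stand-in for a real upload.

```csharp
using System.IO;
using System.Text;
using Azure.Storage.Blobs;
using Microsoft.Extensions.Logging;

// Hypothetical harness: wire the service up by hand instead of via DI.
var blobServiceClient = new BlobServiceClient("UseDevelopmentStorage=true");
using var loggerFactory = LoggerFactory.Create(b => b.AddConsole());
var service = new FileProcessorService(
    blobServiceClient,
    loggerFactory.CreateLogger<FileProcessorService>());

// Feed it a small in-memory CSV, as the Blob Trigger would with a real upload.
using var csv = new MemoryStream(Encoding.UTF8.GetBytes("id,name\n1,alpha\n2,beta\n"));
await service.ProcessFileAsync(csv, "sample.csv");

// Run the same cleanup the Timer Trigger performs every hour.
await service.CleanupOldFilesAsync();
```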
4. The Blob Trigger Function
File: BlobProcessorFunction.cs
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;
public class BlobProcessorFunction
{
private readonly FileProcessorService _service;
public BlobProcessorFunction(FileProcessorService service)
{
_service = service;
}
[Function("BlobProcessorFunction")]
public async Task Run(
[BlobTrigger("uploads/{name}", Connection = "AzureWebJobsStorage")] Stream myBlob,
string name,
FunctionContext context)
{
var logger = context.GetLogger("BlobProcessorFunction");
logger.LogInformation("Blob trigger for file: {Name}", name);
await _service.ProcessFileAsync(myBlob, name);
}
}
Configuration
File: local.settings.json
{
"IsEncrypted": false,
"Values": {
"FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
"AzureWebJobsStorage": "UseDevelopmentStorage=true"
}
}
In Azure, set AzureWebJobsStorage to your storage account connection string. Register BlobServiceClient as a service in Program.cs.
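For a deployed app, the setting can be pushed with the Azure CLI (the app name, resource group, and connection string below are placeholders):

```
az functionapp config appsettings set \
  --name <function-app-name> \
  --resource-group <resource-group> \
  --settings "AzureWebJobsStorage=<storage-connection-string>"
```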
5. The Timer Trigger Function
File: ScheduledCleanupFunction.cs
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;
public class ScheduledCleanupFunction
{
private readonly FileProcessorService _service;
public ScheduledCleanupFunction(FileProcessorService service)
{
_service = service;
}
[Function("ScheduledCleanupFunction")]
public async Task Run([TimerTrigger("0 0 * * * *")] TimerInfo timer, FunctionContext context)
{
var logger = context.GetLogger("ScheduledCleanupFunction");
logger.LogInformation("Timer trigger fired at {UtcNow}", DateTime.UtcNow);
await _service.CleanupOldFilesAsync();
}
}
Here, "0 0 * * * *" means run every hour at minute 0. You can change the CRON expression as needed.
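Azure Functions timer triggers use six-field NCRONTAB expressions ({second} {minute} {hour} {day} {month} {day-of-week}). A few alternative schedules in the same format, for reference:

```
"0 */5 * * * *"   // every 5 minutes
"0 30 9 * * *"    // daily at 09:30 (UTC by default)
"0 0 0 * * 1"     // every Monday (day-of-week 1) at midnight
```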
6. Program.cs (Dependency Injection Setup)
File: Program.cs
using Azure.Storage.Blobs;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
var host = new HostBuilder()
.ConfigureFunctionsWorkerDefaults()
.ConfigureServices(services =>
{
services.AddSingleton(sp =>
{
var conn = Environment.GetEnvironmentVariable("AzureWebJobsStorage");
return new BlobServiceClient(conn);
});
services.AddSingleton<FileProcessorService>();
})
.Build();
host.Run();
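An alternative registration style (an assumption here: it additionally requires the Microsoft.Extensions.Azure package) is AddAzureClients, which layers in the Azure SDK's logging and retry configuration:

```csharp
using System;
using Microsoft.Extensions.Azure;              // dotnet add package Microsoft.Extensions.Azure
using Microsoft.Extensions.DependencyInjection;

// Inside ConfigureServices, instead of the manual AddSingleton registration:
services.AddAzureClients(clients =>
{
    clients.AddBlobServiceClient(Environment.GetEnvironmentVariable("AzureWebJobsStorage"));
});
```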
7. How It Works
Upload a file → Blob Trigger fires → FileProcessorService.ProcessFileAsync() runs.
Every hour → Timer Trigger fires → FileProcessorService.CleanupOldFilesAsync() runs.
Shared logic lives in FileProcessorService, avoiding duplication.
Conclusion
By keeping two functions (Blob Trigger + Timer Trigger) but moving business logic into a shared service class, we get the best of both worlds:
Each function is focused only on its trigger.
The logic is reusable and easy to maintain.
The project is ready for real-world workloads like file validation, ETL, and automated cleanup.
This design pattern (triggers + shared services) is a widely recommended way to structure production-ready Azure Functions apps.