
Simplifying CI/CD for .NET Docker Apps

Introduction

When you're working with .NET applications in containers, setting up a clean and reliable CI/CD pipeline can feel a bit more complicated than expected. You start out thinking it’ll be straightforward—just build the app, wrap it in a Docker image, push it somewhere, and deploy. But once you get into the weeds, things like Docker dependencies, environment mismatches, and build performance start to make the process feel much heavier than it should.


If you've ever asked yourself, "Why is this Docker setup slowing everything down?" or "Is there a simpler way to do this?" you’re not alone. I’ve had my fair share of trial and error getting these pipelines to behave, and over time, I’ve learned some practical ways to keep things clean, fast, and reliable.

In this article, we'll walk through some best practices to simplify your CI/CD pipeline for .NET containerized apps. Whether you're just starting with cloud-native development or you're looking to refine an existing process, the goal here is to make things easier, without cutting corners. We’ll look at real examples, lessons learned, and strategies that actually work in real-world projects.

Why Docker Dependencies Complicate Cloud-Native Builds

Before diving into solutions, let’s understand the problem. In a typical CI/CD setup for a .NET containerized app, you might:

  1. Restore NuGet packages
  2. Build the application
  3. Publish the app to a folder
  4. Build a Docker image (usually via a multi-stage Dockerfile)
  5. Push the image to a container registry
  6. Deploy to Kubernetes, Azure App Service, or another platform

Each step brings potential points of failure:

  • Docker environment setup. Your build agents need Docker installed and properly configured (e.g., daemon running, correct permissions).
  • Image size and build time. Large images with unnecessary layers slow down builds and prolong feedback loops.
  • Base image drift. When the base image (e.g., mcr.microsoft.com/dotnet/aspnet:7.0) gets updated, you may unknowingly inherit breaking changes; pinning by digest avoids the surprise at the cost of manual updates.
  • Credential management. Authenticating to private registries can involve fiddly service principals, tokens, or environment variables.
  • Platform differences. Windows vs. Linux containers, ARM vs. x64—mismatches here can break your pipeline without obvious errors.
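That last point is worth pinning down in the pipeline itself rather than hoping the runner and cluster architectures happen to match. As a sketch (the step name and image are illustrative), the build step can declare its target platform explicitly:

```yaml
# Illustrative step: pin the target platform so an ARM runner cannot
# silently produce an image your x64 cluster will refuse to run.
- name: Build image for linux/amd64
  uses: docker/build-push-action@v5
  with:
    context: .
    platforms: linux/amd64
    push: false
```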

Given these challenges, how can we simplify our CI/CD pipeline so that we spend time delivering value, not debugging build servers?

Best Practice 1. Decouple Builds with Multi-Stage Dockerfiles

One of the most powerful tools in your toolbox is the multi-stage Dockerfile. By splitting build and runtime concerns into distinct stages, you:

  • Isolate the build environment. Use the full .NET SDK image (sdk) for compiling and testing.
  • Keep runtime images slim. Use the lightweight ASP.NET or .NET runtime image (aspnet or runtime) for your final artifact.
  • Leverage layer caching. By carefully ordering stages (e.g., restoring NuGet packages first), Docker can reuse layers across builds.

Here’s a simplified example:

# Stage 1: build
FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src
COPY *.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o /app

# Stage 2: runtime
FROM mcr.microsoft.com/dotnet/aspnet:7.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApp.dll"]

This pattern keeps your final image lean and ensures the build dependencies (the full SDK) never end up in production containers. It also makes incremental builds faster, because as long as your *.csproj files haven’t changed, Docker reuses the NuGet restore layer.
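One detail that quietly defeats this caching is the `COPY . .` step: if the build context includes files that change on every run (local bin/ and obj/ output, .git history), the publish layer is rebuilt each time and the context upload gets slow. A minimal .dockerignore (a sketch; adjust to your repository layout) keeps the context small and the cache stable:

```
# .dockerignore — exclude files that change often but aren't needed in the image
bin/
obj/
.git/
.vs/
**/Dockerfile
docker-compose*.yml
```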

Best Practice 2. Use Hosted Build Services or Buildpacks

Installing and maintaining Docker on your build agents can be a headache. Instead, consider:

  • Cloud-hosted builders such as the docker/setup-buildx-action on GitHub Actions or the Docker task on Azure DevOps. Hosted runners already ship with Docker, and these tasks handle BuildKit setup and even caching for you.
  • Cloud Native Buildpacks (CNB) via Paketo Buildpacks or Google’s Cloud Buildpacks. Buildpacks detect your .NET app, restore, build, and package it into a container—without a Dockerfile. All you need to do is point to your project folder:
    steps:
      - uses: buildpacks/github-actions/setup-pack@v5.0.0
      - name: Build with pack
        run: |
          pack build myregistry.azurecr.io/myapp:latest \
            --builder paketobuildpacks/builder:base \
            --path .

Buildpacks abstract away the Docker layer, automatically pick up the correct .NET version, apply best practices (such as trimming symbols, handling environment variables), and produce optimized container images.

Best Practice 3. Leverage Container Registry Caching

Long build times often come from downloading base images and NuGet packages. You can speed up builds significantly by:

  1. Using a private, geo-distributed container registry (e.g., Azure Container Registry, GitHub Container Registry). Many registries offer built-in caching of upstream images.
  2. Enabling build cache in your pipeline. In GitHub Actions, for example, you can persist Docker layers between runs:
    - name: Cache Docker layers
      uses: actions/cache@v3
      with:
        path: /tmp/.buildx-cache
        key: ${{ runner.os }}-buildx-${{ github.sha }}
        restore-keys: |
          ${{ runner.os }}-buildx-
    - name: Build with BuildKit
      uses: docker/build-push-action@v5
      with:
        builder: ${{ steps.buildx.outputs.name }}
        cache-from: type=local,src=/tmp/.buildx-cache
        cache-to: type=local,dest=/tmp/.buildx-cache-new
        push: true
        tags: myregistry.azurecr.io/myapp:${{ github.sha }}
    # The local cache exporter writes to a fresh directory; swap it into
    # place so the next run restores an up-to-date cache that doesn't grow unbounded.
    - name: Rotate build cache
      run: |
        rm -rf /tmp/.buildx-cache
        mv /tmp/.buildx-cache-new /tmp/.buildx-cache

This technique saves and restores intermediate layers (including the NuGet restore layer and the cached base image pulls), slashing build times after the initial run.
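If juggling a local cache directory feels fragile, BuildKit can instead store the build cache in the registry next to the image, so every runner shares it with nothing to save or restore. A sketch (the buildcache tag and registry name are illustrative):

```yaml
# Alternative: registry-backed build cache, shared across runners with no
# local cache directory to persist between workflow runs.
- name: Build with registry cache
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: myregistry.azurecr.io/myapp:${{ github.sha }}
    cache-from: type=registry,ref=myregistry.azurecr.io/myapp:buildcache
    cache-to: type=registry,ref=myregistry.azurecr.io/myapp:buildcache,mode=max
```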

Best Practice 4. Abstract Docker with Pipeline Templates and CLI Tools

No two projects are identical, but most follow a similar flow: restore, build, test, publish, containerize, push, deploy. Define a pipeline template in your chosen CI/CD platform:

  • Azure DevOps: use YAML templates
  • GitHub Actions: use reusable workflows
  • Jenkins: share a common pipeline via a Jenkins Shared Library

For example, a reusable GitHub Action could look like:

name: dotnet-container-ci

on:
  push:
    branches: [ main ]

jobs:
  build-and-deploy:
    uses: my-org/.github/workflows/container-ci-template.yml@main
    with:
      project-path: ./src/MyApp
    secrets:
      registry: ${{ secrets.REGISTRY_URL }}
      credentials: ${{ secrets.REGISTRY_CREDENTIALS }}

Your template abstracts away Docker commands—team members simply specify the project path and destination registry. Under the covers, the template handles multi-stage Docker builds, caching, and error handling.
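On the other side of that call, the template declares its interface with a workflow_call trigger. A minimal sketch (input and secret names are illustrative and must match whatever your callers pass in):

```yaml
# container-ci-template.yml — skeleton of the reusable workflow
name: container-ci-template

on:
  workflow_call:
    inputs:
      project-path:
        required: true
        type: string
    secrets:
      registry:
        required: true
      credentials:
        required: true

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Real restore/build/containerize/push steps live here, driven by
      # ${{ inputs.project-path }} and the passed-in secrets.
      - run: echo "Building ${{ inputs.project-path }}"
```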

Additionally, consider CLI-level shortcuts: the .NET SDK can publish container images directly (dotnet publish /t:PublishContainer), and community extensions for Azure DevOps wrap common Docker scenarios into simple commands.
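The SDK route deserves a concrete sketch. Assuming a .NET 7 project that references the Microsoft.NET.Build.Containers package (the feature is built into newer SDKs, where the property is ContainerRepository rather than ContainerImageName), a pipeline step can publish straight to a registry with no Dockerfile at all:

```yaml
# Sketch: Dockerfile-free container publish via the .NET SDK.
# The project path and registry name are illustrative.
- name: Publish container via SDK
  run: >
    dotnet publish src/MyApp/MyApp.csproj -c Release
    /t:PublishContainer
    -p:ContainerImageName=myapp
    -p:ContainerRegistry=myregistry.azurecr.io
```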

Real-World Example. From Clunky to Streamlined

A few months ago, I worked on a .NET microservice with three separate containers: an API, a worker, and a static frontend. Our Jenkins pipelines were over 300 lines of shell scripts—every change risked breaking something.

By migrating to GitHub Actions and applying the practices above, we:

  1. Switched to multi-stage Dockerfiles, reducing final image sizes by 60% and build times by 25%.
  2. Enabled GitHub’s build cache, which cut cold-start builds from 12 minutes down to 4.
  3. Adopted a reusable workflow template, so spinning up a new microservice repo now takes 10 minutes instead of days.

The result? Fewer late-night pipeline emergency calls, faster feedback during pull requests, and a happier team. The best part is that developers could focus on writing code instead of wrestling with YAML.

Putting It All Together. Sample GitHub Actions Workflow

name: CI/CD Pipeline

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      # Set up .NET SDK
      - name: Setup .NET SDK
        uses: actions/setup-dotnet@v3
        with:
          dotnet-version: '7.0.x'

      # Restore & test
      - name: Restore dependencies
        # Restoring the test project transitively restores the referenced app project
        run: dotnet restore src/MyApp.Tests/MyApp.Tests.csproj

      - name: Run tests
        run: dotnet test src/MyApp.Tests/MyApp.Tests.csproj --no-restore --verbosity normal

      # Build & push container
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Cache Docker layers
        uses: actions/cache@v3
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-

      - name: Login to registry
        uses: docker/login-action@v2
        with:
          registry: myregistry.azurecr.io
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}

      - name: Build and push image
        uses: docker/build-push-action@v5
        with:
          context: .
          file: src/MyApp/Dockerfile
          push: ${{ github.event_name == 'push' }}  # build but don't publish PR images
          tags: myregistry.azurecr.io/myapp:${{ github.sha }}
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache-new

      - name: Rotate build cache
        run: |
          rm -rf /tmp/.buildx-cache
          mv /tmp/.buildx-cache-new /tmp/.buildx-cache

  deploy:
    needs: build
    if: github.event_name == 'push'  # pull requests build and test only
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to Kubernetes
        run: |
          # KUBECONFIG must point to a file, so materialize the secret first
          echo "$KUBE_CONFIG_DATA" > "$RUNNER_TEMP/kubeconfig"
          export KUBECONFIG="$RUNNER_TEMP/kubeconfig"
          kubectl set image deployment/myapp myapp=myregistry.azurecr.io/myapp:${{ github.sha }}
          kubectl rollout status deployment/myapp
        env:
          KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG }}

This workflow showcases a clear separation of concerns:

  • Build job: compiles, tests, builds the container, and pushes it.
  • Deploy job: updates the Kubernetes deployment.

All Docker-specific steps run through off-the-shelf actions, so there is nothing extra to install or maintain on the runner.

Conclusion

Simplifying your CI/CD pipeline for .NET containerized applications boils down to three core principles:

  1. Isolate and optimize your Docker builds with multi-stage Dockerfiles.
  2. Leverage managed build services or Buildpacks to reduce infrastructure maintenance.
  3. Abstract and reuse pipeline definitions so that developers focus on features, not YAML.

By embracing these best practices, you’ll shrink build times, lower your operational burden, and give your team confidence that every commit, pull request, and release follows a reliable, repeatable process. After all, in modern cloud-native development, fast and dependable pipelines are just as crucial as clean code, and they deserve the same level of care.