Modern applications are increasingly built using a microservices architecture—a design approach where an application is composed of small, independently deployable services. Each service owns its own functionality, communicates over lightweight protocols such as HTTP or gRPC, and can be scaled or updated without impacting the entire system. For .NET developers, ASP.NET Core 10 provides a powerful and lightweight foundation for building high-performance microservices that run efficiently in containers.
In this tutorial, you will learn how to containerize ASP.NET Core 10 microservices using Docker and deploy them to a Kubernetes cluster using both raw manifests and Helm charts. You will also explore essential cloud-native concepts such as service discovery, autoscaling, health checks, configuration, and continuous delivery pipelines.
By the end of this guide, you will be able to:
- Build and structure a simple microservices solution using ASP.NET Core 10
- Create optimized Docker images using multi-stage builds
- Run multiple services locally with Docker Compose
- Deploy microservices to Kubernetes using Deployments, Services, and Ingress
- Add readiness/liveness probes, resource limits, and Horizontal Pod Autoscaling
- Package deployments with Helm for repeatable environments
- Set up a basic GitHub Actions CI/CD pipeline to automate builds & deployments
- Implement logging, monitoring, and tracing for production-ready observability
This tutorial is designed to be hands-on and beginner-friendly, even if you have minimal Kubernetes experience. All examples are kept simple so you can focus on the deployment workflow rather than complex business logic.
Let’s begin by reviewing the tools and prerequisites you need to follow along.
Prerequisites
Before we build and deploy ASP.NET Core 10 microservices, make sure your environment includes the following tools. These will allow you to develop locally, create container images, and deploy to a Kubernetes cluster.
1. .NET SDK 10
You will need the latest .NET SDK 10 installed to build and run the microservices.
Download from:
https://dotnet.microsoft.com/download
Verify installation:
dotnet --version
2. Docker Desktop
Docker is required to build container images and run services locally. If you are on macOS or Windows, install Docker Desktop. On Linux, install Docker Engine.
Download from:
https://www.docker.com/products/docker-desktop
Check version:
docker --version
3. Kubernetes (local cluster)
You can use any Kubernetes distribution. For local development, recommended options include:
- minikube — simple & widely supported
- kind (Kubernetes IN Docker) — lightweight and great for CI
- k3d — fast K3s-based cluster inside Docker
Install at least one. Example for kind:
brew install kind
kind create cluster --name djamware-k8s
Or for minikube:
minikube start
4. kubectl (Kubernetes CLI)
kubectl is the official CLI for interacting with Kubernetes clusters.
Installation:
https://kubernetes.io/docs/tasks/tools/
Check installation:
kubectl version --client
5. Helm 3 (optional but recommended)
We will package services using Helm charts.
Install:
brew install helm
Verify:
helm version
6. A Container Registry
You need a registry to push Docker images for Kubernetes to pull.
Supported options:
- Docker Hub
- GHCR (GitHub Container Registry)
- GitLab Registry
- Azure Container Registry
- AWS ECR
- Google Artifact Registry
For Docker Hub:
docker login
For GHCR:
echo $CR_PAT | docker login ghcr.io -u USERNAME --password-stdin
7. Git & GitHub Account (for CI/CD section)
A GitHub repository will be used for automating builds, tests, image pushes, and Kubernetes deployments.
Install Git:
git --version
8. Basic Understanding of ASP.NET Core & Docker
This tutorial assumes you have a basic understanding of:
- Creating minimal APIs in ASP.NET Core
- Writing Dockerfiles
- Running containers locally
If not, don’t worry—each step is simple and explained clearly.
With the tools installed, we are ready to set up the project and create our microservice structure.
Project Structure
To keep the microservices easy to manage, each service will have its own source code, Dockerfile, and deployment artifacts. We’ll also separate shared libraries and infrastructure (Docker Compose, Kubernetes manifests, and Helm charts) into their own folders.
Below is the recommended folder structure for this tutorial:
aspnetcore10-microservices/
├─ services/
│ ├─ catalog/
│ │ ├─ src/
│ │ │ ├─ Catalog.csproj
│ │ │ └─ Program.cs
│ │ ├─ Dockerfile
│ │ └─ README.md
│ ├─ orders/
│ │ ├─ src/
│ │ └─ Dockerfile
│ └─ auth/
│ ├─ src/
│ └─ Dockerfile
│
├─ libs/
│ └─ shared/
│ ├─ Shared.csproj
│ └─ (common DTOs, interfaces, utilities)
│
├─ infra/
│ ├─ docker-compose.yml
│ ├─ k8s/
│ │ ├─ catalog-deployment.yaml
│ │ ├─ catalog-service.yaml
│ │ ├─ orders-deployment.yaml
│ │ ├─ orders-service.yaml
│ │ ├─ ingress.yaml
│ │ └─ hpa.yaml
│ └─ helm/
│ └─ catalog/
│ ├─ Chart.yaml
│ ├─ values.yaml
│ └─ templates/
│ ├─ deployment.yaml
│ ├─ service.yaml
│ ├─ ingress.yaml
│ └─ hpa.yaml
│
├─ .github/workflows/
│ └─ ci-cd.yml
│
└─ README.md
Why This Structure?
1. Clear Separation of Services
Each microservice lives inside its own folder (catalog, orders, auth).
This makes it easier to:
- Build each service independently
- Version and deploy services separately
- Scale each service according to traffic
2. Shared Library for Reusable Code
The libs/shared directory contains reusable classes such as:
- DTOs
- Utility classes
- Common response wrappers
- Interfaces for cross-service communication
This keeps code consistent while maintaining service isolation.
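For illustration, a shared DTO and response wrapper in libs/shared could look like the following sketch (ApiResponse and ProductDto are hypothetical names, not files created later in this tutorial):

namespace Shared;

// Common DTO shared by the Catalog and Orders services.
public record ProductDto(int Id, string Name, double Price);

// Generic wrapper so every service returns responses in the same shape.
public record ApiResponse<T>(bool Success, T? Data, string? Error = null)
{
    public static ApiResponse<T> Ok(T data) => new(true, data);
    public static ApiResponse<T> Fail(string error) => new(false, default, error);
}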
3. Infrastructure as Code
The infra directory contains everything related to deployment:
- docker-compose.yml → run all services locally
- k8s/ → raw Kubernetes manifests
- helm/ → Helm charts for production-ready deployments
This centralization makes it easy to switch between local Docker testing and cloud deployments.
4. GitHub Actions for CI/CD
The .github/workflows directory holds the CI/CD pipeline config:
- Automatically build and test each service
- Build Docker images
- Push to a container registry
- Deploy to Kubernetes
This aligns with standard DevOps practices.
Creating the Folder Structure
Run the following commands to scaffold the project:
mkdir aspnetcore10-microservices
cd aspnetcore10-microservices
mkdir -p services/catalog/src
mkdir -p services/orders/src
mkdir -p services/auth/src
mkdir -p libs/shared
mkdir -p infra/k8s
mkdir -p infra/helm/catalog/templates
mkdir -p .github/workflows
Next Step
With the structure ready, the next section will walk you through creating your first microservice: the Catalog Service built with ASP.NET Core 10 minimal APIs.
Build the Catalog Microservice
In this section, we will create our first microservice: the Catalog Service. This service will expose a minimal API that returns a list of products. The goal is to keep the logic simple so we can focus on containerization and deployment later.
We’ll build the service using:
- ASP.NET Core 10 Minimal APIs
- Built-in dependency injection
- OpenAPI/Swagger
- Health checks (later used by Kubernetes readiness/liveness probes)
Let’s begin.
1. Create the Catalog Project
Navigate to the catalog/src folder:
cd services/catalog/src
Create a new ASP.NET Core 10 project directly inside the src folder (the -o . flag keeps the files at the level the folder structure above expects, which is also what the Dockerfile and CI paths rely on later):
dotnet new webapi -n Catalog -o .
Remove the WeatherForecast boilerplate, if the template generated it:
rm -rf Controllers WeatherForecast.cs
Your directory now contains (plus the generated appsettings and Properties files):
services/catalog/src
├── Catalog.csproj
└── Program.cs
2. Add a Simple Product Model
Create a new file Models/Product.cs:
namespace Catalog.Models;
public record Product(int Id, string Name, double Price);
This simple model is enough to simulate a real catalog without adding database complexity.
3. Implement the Catalog API
Open Program.cs and replace its contents with the following:
using Catalog.Models;
var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
// Learn more about configuring OpenAPI at https://aka.ms/aspnet/openapi
builder.Services.AddOpenApi();
var app = builder.Build();
// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
app.MapOpenApi();
}
// Health endpoint (for Kubernetes)
app.MapGet("/health", () =>
{
return Results.Ok(new { status = "Healthy", time = DateTime.UtcNow });
});
// Get all products
app.MapGet("/products", () =>
{
var products = new List<Product>
{
new(1, "Laptop Pro", 1599.99),
new(2, "Mechanical Keyboard", 129.99),
new(3, "Noise Canceling Headphones", 299.50),
};
return Results.Ok(products);
});
app.Run();
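The /health endpoint above is hand-rolled, which works fine with the Kubernetes probes used later. If you prefer ASP.NET Core's built-in health check middleware instead, a minimal alternative looks like this sketch:

var builder = WebApplication.CreateBuilder(args);

// Register the built-in health check services (database or dependency checks can be added later).
builder.Services.AddHealthChecks();

var app = builder.Build();

// Kubernetes probes can point at this endpoint instead of the hand-rolled one.
app.MapHealthChecks("/health");

app.Run();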
4. Run the Catalog Service Locally
From inside the services/catalog/src folder, run:
dotnet run
Open in your browser (the port may differ; check the dotnet run output):
- OpenAPI document: http://localhost:5106/openapi/v1.json
- Products list: http://localhost:5106/products
- Health check: http://localhost:5106/health
If everything works, you’re ready to containerize the service.
5. Add a Catalog Dockerfile
Create services/catalog/Dockerfile:
# Build stage
FROM mcr.microsoft.com/dotnet/sdk:10.0 AS build
WORKDIR /src
COPY src/*.csproj .
RUN dotnet restore
COPY src/. .
RUN dotnet publish -c Release -o /app --no-restore
# Runtime stage
FROM mcr.microsoft.com/dotnet/aspnet:10.0
WORKDIR /app
ENV ASPNETCORE_URLS=http://+:80
COPY --from=build /app ./
EXPOSE 80
ENTRYPOINT ["dotnet", "Catalog.dll"]
6. Build and Test the Docker Image
Navigate to services/catalog:
cd services/catalog
Build the image:
docker build -t catalog-service:dev .
Run the container:
docker run -p 5001:80 catalog-service:dev
Test endpoints:
- http://localhost:5001/products
- http://localhost:5001/health
Your microservice is now successfully built and containerized.
7. Summary
You now have:
- A clean ASP.NET Core 10 minimal API
- A Dockerfile ready for deployment
- A health endpoint for Kubernetes probes
- OpenAPI documentation for easy testing
This pattern will be reused for the other microservices.
Dockerize and Run with Docker Compose
Now that the Catalog microservice is working and has a proper Dockerfile, the next step is to run it together with other services using Docker Compose. This makes it easy to simulate a multi-service environment locally before deploying to Kubernetes.
In this section, you will:
- Create a Docker Compose file
- Configure services and networking
- Build & run the Catalog service with a single command
- Test all endpoints through exposed ports
Later, we will add Orders and Auth services to the same Compose stack.
1. Create the Docker Compose File
Go to the infra directory:
cd infra
Create a new file named docker-compose.yml:
version: '3.9'

services:
  catalog:
    build:
      context: ../services/catalog
      dockerfile: Dockerfile
    image: catalog-service:dev
    container_name: catalog-service
    ports:
      - "5001:80"
    environment:
      ASPNETCORE_ENVIRONMENT: Development
What this does:
- context tells Docker where the source code & Dockerfile are located
- build builds the Catalog Docker image
- ports exposes port 5001 on your machine
- ASPNETCORE_ENVIRONMENT: Development enables the development-only OpenAPI endpoint
When we add more microservices later, they will also appear in this file.
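For example, once you scaffold the Orders service the same way, it becomes just another entry under services: (the paths and host port below are assumptions based on the folder structure above):

  orders:
    build:
      context: ../services/orders
      dockerfile: Dockerfile
    image: orders-service:dev
    container_name: orders-service
    ports:
      - "5002:80"
    environment:
      ASPNETCORE_ENVIRONMENT: Development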
2. Build and Start All Services
From inside the infra directory, run:
docker compose up --build
This will:
- Build the Catalog microservice image
- Start the container
- Attach logs to your terminal
You should see output similar to:
Now listening on: http://0.0.0.0:80
Application started. Press Ctrl+C to shut down.
3. Test the Catalog Service in Docker
Open your browser or use curl:
Products API
http://localhost:5001/products
You should see the product list returned from your minimal API.
Health Check
http://localhost:5001/health
Expected response:
{
  "status": "Healthy",
  "time": "2025-xx-xxTxx:xx:xxZ"
}
Everything working here confirms that your Dockerfile and Compose setup are correct.
4. Stop the Compose Stack
Press Ctrl+C, then run:
docker compose down
This stops and removes all containers in the stack.
5. Summary
At this point, you have:
- A Dockerized ASP.NET Core 10 microservice
- Docker Compose managing your local multi-service environment
- Automatic image building & container networking
- Confirmed that the Catalog service works inside Docker
This setup forms the foundation for your Kubernetes deployment in the next sections.
Kubernetes Deployment with Manifests
With the Catalog microservice running successfully in Docker, it's time to deploy it to a Kubernetes cluster. In this section, you will generate Kubernetes manifests for:
- Deployment (runs your Pods)
- Service (exposes the Pods inside the cluster)
- Ingress (optional external access)
- Namespace (optional isolation)
We'll use plain YAML files first to understand the fundamentals before moving to Helm in later sections.
1. Create the Kubernetes Directory
From the project root, create the folder if it does not already exist from the scaffolding step:
mkdir -p infra/k8s
Your path will be:
infra/k8s/
2. Create a Namespace (Optional but Recommended)
Create infra/k8s/namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: microservices
Apply:
kubectl apply -f infra/k8s/namespace.yaml
3. Create the Deployment
Create infra/k8s/catalog-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog
  namespace: microservices
  labels:
    app: catalog
spec:
  replicas: 2
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
        - name: catalog
          image: catalog-service:dev
          ports:
            - containerPort: 80
          readinessProbe:
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 3
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 15
            periodSeconds: 20
Key Points
- replicas: 2 → runs two Pods for resilience
- image: catalog-service:dev → this must exist in your local cluster
- readinessProbe → Kubernetes only routes traffic when ready
- livenessProbe → restarts the container if it becomes unhealthy
4. Create the Service
Create infra/k8s/catalog-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: catalog
  namespace: microservices
spec:
  selector:
    app: catalog
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
This allows other services in Kubernetes to access the Catalog service via:
http://catalog.microservices.svc.cluster.local
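You can verify that DNS name from inside the cluster with a temporary Pod (curlimages/curl is just one convenient image for this):

kubectl run curl-test --rm -it --restart=Never -n microservices \
  --image=curlimages/curl -- curl -s http://catalog.microservices.svc.cluster.local/products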
5. (Optional) Create an Ingress
If your cluster has an Ingress controller (nginx, traefik, etc.), create:
infra/k8s/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: micro-ingress
  namespace: microservices
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: micro.local
      http:
        paths:
          - path: /catalog
            pathType: Prefix
            backend:
              service:
                name: catalog
                port:
                  number: 80
To test locally, add an entry to /etc/hosts:
127.0.0.1 micro.local
Then open:
http://micro.local/catalog/products
Note that a plain Prefix rule forwards the request path unchanged (/catalog/products), so depending on your routes you may need a rewrite annotation (for ingress-nginx, nginx.ingress.kubernetes.io/rewrite-target) or to serve the API under the /catalog prefix.
6. Load the Docker Image into the Cluster (Kind/Minikube)
If using kind:
kind load docker-image catalog-service:dev --name djamware-k8s
If using minikube:
minikube image load catalog-service:dev
7. Apply All Resources
kubectl apply -f infra/k8s -n microservices
Verify:
Pods
kubectl get pods -n microservices
Expected:
catalog-xxxx Running
catalog-xxxx Running
Service
kubectl get svc -n microservices
Logs
kubectl logs -n microservices -l app=catalog
8. Test Inside the Cluster (Port Forward)
If you don’t use Ingress, test with port-forward:
kubectl port-forward svc/catalog 5001:80 -n microservices
Now visit:
- http://localhost:5001/products
- http://localhost:5001/health
- http://localhost:5001/openapi/v1.json (only available when ASPNETCORE_ENVIRONMENT is set to Development)
9. Summary
You now have:
- A fully deployed ASP.NET Core 10 microservice running in Kubernetes
- Deployment with probes and scaling support
- ClusterIP Service for inter-service communication
- Optional Ingress for external traffic
- A working port-forward test for development
This forms the foundation for deploying the entire microservices stack.
Next, we’ll add Horizontal Pod Autoscaling (HPA) and resource best practices.
Autoscaling with HPA (Horizontal Pod Autoscaler)
Kubernetes allows your microservices to scale up or down automatically based on resource usage, such as CPU or memory. This is a critical capability for microservices in production, ensuring your application can handle traffic spikes without unnecessary resource costs.
In this section, you will:
- Enable metrics support
- Apply resource requests and limits (required for HPA)
- Create an HPA manifest for the Catalog service
- Test autoscaling behavior
1. Ensure the Metrics Server Is Installed
Some local Kubernetes distributions (k3s/k3d, for example) ship the Metrics Server out of the box; minikube provides it as an addon you can enable with minikube addons enable metrics-server, while kind and Docker Desktop usually need it installed manually.
To verify:
kubectl get deployment metrics-server -n kube-system
If not installed, install it:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
Verify metrics are working:
kubectl top nodes
kubectl top pods -n microservices
2. Add Resource Requests and Limits
HPA requires CPU/memory requests in the Deployment.
Edit infra/k8s/catalog-deployment.yaml and update the container spec:
resources:
  requests:
    cpu: "100m"
    memory: "128Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"
Full container block now looks like:
containers:
  - name: catalog
    image: catalog-service:dev
    ports:
      - containerPort: 80
    readinessProbe:
      httpGet:
        path: /health
        port: 80
    livenessProbe:
      httpGet:
        path: /health
        port: 80
    resources:
      requests:
        cpu: "100m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
Apply changes:
kubectl apply -f infra/k8s/catalog-deployment.yaml
3. Create the Horizontal Pod Autoscaler
Create infra/k8s/catalog-hpa.yaml:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: catalog-hpa
  namespace: microservices
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: catalog
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
Explanation
- minReplicas: 2 → ensures the service always has redundancy
- maxReplicas: 10 → upper limit during high traffic
- averageUtilization: 50 → when average CPU usage across the Pods exceeds 50% of the requested CPU, the HPA scales out
Apply the HPA:
kubectl apply -f infra/k8s/catalog-hpa.yaml
4. Check HPA Status
Run:
kubectl get hpa -n microservices
Sample output:
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS
catalog-hpa Deployment/catalog 20%/50% 2 10 2
5. Test Autoscaling
You can simulate a load with a simple tool like hey or wrk.
Install hey (macOS):
brew install hey
Generate load through port-forward:
kubectl port-forward svc/catalog 5001:80 -n microservices
Run load:
hey -z 30s -c 50 http://localhost:5001/products
Now check the HPA again:
kubectl get hpa -n microservices
You should see something like:
TARGETS: 85%/50%
REPLICAS: 4
Pods scale up accordingly:
kubectl get pods -n microservices
6. Summary
You have now implemented:
- Metrics Server for autoscaling
- Resource requests and limits
- Horizontal Pod Autoscaler (HPA)
- Load testing to verify scaling
Your Catalog microservice can now automatically scale based on CPU usage—one of the most important production features in Kubernetes.
Advanced Kubernetes Features
As your microservices grow, you’ll rely on more than just basic Deployments and Services. Kubernetes provides several powerful features that help you manage configuration, sensitive data, rollouts, networking, and communication between services.
In this section, you will learn how to use:
- ConfigMaps for environment configuration
- Secrets for sensitive values
- Rolling updates & rollbacks
- NetworkPolicies for internal service security
- Service-to-service communication patterns
- Resource management & pod disruption control
These concepts prepare your microservices for real production workloads.
1. Add Application Configuration with ConfigMap
If your service needs non-sensitive configuration (API URLs, feature flags, environment values), Kubernetes recommends using ConfigMaps.
Create infra/k8s/catalog-configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: catalog-config
  namespace: microservices
data:
  ENVIRONMENT: "Production"
  FEATURE_ENABLE_LOGGING: "true"
Mount as environment variables by updating catalog-deployment.yaml:
env:
  - name: ENVIRONMENT
    valueFrom:
      configMapKeyRef:
        name: catalog-config
        key: ENVIRONMENT
  - name: FEATURE_ENABLE_LOGGING
    valueFrom:
      configMapKeyRef:
        name: catalog-config
        key: FEATURE_ENABLE_LOGGING
Apply:
kubectl apply -f infra/k8s/catalog-configmap.yaml
kubectl apply -f infra/k8s/catalog-deployment.yaml
2. Secure Sensitive Data with Secrets
Use Secrets for database passwords, API keys, OAuth tokens, etc.
1. Create a Secret
kubectl create secret generic catalog-secret \
--namespace microservices \
--from-literal=API_KEY=supersecret123
2. Load it into Deployment
In catalog-deployment.yaml:
env:
  - name: API_KEY
    valueFrom:
      secretKeyRef:
        name: catalog-secret
        key: API_KEY
Check:
kubectl describe secret catalog-secret -n microservices
Secrets are only base64-encoded, not encrypted, but they keep sensitive values out of your manifests and source code, and most clusters can additionally encrypt them at rest.
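To read a value back while debugging, you can decode it with kubectl and base64:

kubectl get secret catalog-secret -n microservices \
  -o jsonpath='{.data.API_KEY}' | base64 --decode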
3. Rolling Updates & Rollbacks
Kubernetes performs rolling updates by default. To simulate:
Edit your Deployment:
vim infra/k8s/catalog-deployment.yaml
Change something simple (image tag, environment variable…), then apply:
kubectl apply -f infra/k8s/catalog-deployment.yaml
Check rollout status:
kubectl rollout status deployment/catalog -n microservices
If something breaks, rollback:
kubectl rollout undo deployment/catalog -n microservices
Rolling updates ensure zero downtime deployments.
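Deployments use the RollingUpdate strategy by default; if you want explicit control over the rollout, you can tune it in the Deployment spec (the values below are illustrative):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra Pod above the desired replica count during a rollout
      maxUnavailable: 0  # never go below the desired replica count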
4. Control Pod Disruptions with PodDisruptionBudget (PDB)
Use a PDB to keep a minimum number of replicas available during voluntary disruptions such as node maintenance or cluster upgrades.
Create infra/k8s/catalog-pdb.yaml:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: catalog-pdb
  namespace: microservices
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: catalog
Apply:
kubectl apply -f infra/k8s/catalog-pdb.yaml
This ensures at least one Pod stays running during voluntary disruptions such as node drains and cluster upgrades.
5. Secure Traffic with NetworkPolicies
By default, any Pod can talk to any other Pod.
To restrict traffic (important in microservices), create a NetworkPolicy.
Example: allow only services in the namespace to communicate with the Catalog.
infra/k8s/catalog-networkpolicy.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: catalog-allow-namespace
  namespace: microservices
spec:
  podSelector:
    matchLabels:
      app: catalog
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}   # any Pod in the same namespace
Apply:
kubectl apply -f infra/k8s/catalog-networkpolicy.yaml
NetworkPolicies are crucial for Zero Trust networking. Keep in mind they are only enforced if your cluster's CNI plugin supports them (for example Calico or Cilium); default kind and minikube setups may simply ignore them.
6. Service-to-Service Communication
Inside a Kubernetes cluster, services can communicate via DNS:
http://catalog.microservices.svc.cluster.local
Example (Orders service calling Catalog):
var client = new HttpClient();
var response = await client.GetAsync("http://catalog.microservices.svc.cluster.local/products");
Best practices (see the typed-client sketch below):
- Use retry + circuit breaker policies (via Polly or YARP)
- Use shared DTOs (located in libs/shared)
- Use a typed HttpClient registered with DI
Later, you can add API Gateways like YARP, Ocelot, or Envoy.
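As a hedged sketch of those practices, here is how an Orders service might register a typed client for the Catalog service. ProductDto is the hypothetical shared record mentioned earlier, CatalogClient is a made-up class, and AddStandardResilienceHandler comes from the Microsoft.Extensions.Http.Resilience package (one convenient alternative to hand-rolled Polly policies):

using Shared; // hypothetical shared DTO library from libs/shared

var builder = WebApplication.CreateBuilder(args);

// Typed HttpClient for calling the Catalog service via its in-cluster DNS name.
builder.Services.AddHttpClient<CatalogClient>(client =>
{
    // In a real deployment, read this base address from configuration or a ConfigMap.
    client.BaseAddress = new Uri("http://catalog.microservices.svc.cluster.local");
})
// Adds retry, circuit-breaker, and timeout defaults (Microsoft.Extensions.Http.Resilience).
.AddStandardResilienceHandler();

var app = builder.Build();

// Example endpoint that calls the Catalog service through the typed client.
app.MapGet("/orders/products", async (CatalogClient catalog) =>
    Results.Ok(await catalog.GetProductsAsync()));

app.Run();

// Typed client wrapping all HTTP calls to the Catalog service.
public class CatalogClient(HttpClient http)
{
    public async Task<List<ProductDto>?> GetProductsAsync() =>
        await http.GetFromJsonAsync<List<ProductDto>>("/products");
}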
7. Resource Management & Quality of Service (QoS)
Earlier, you added resource requests and limits. Kubernetes uses them to assign each Pod a Quality of Service class, which affects scheduling and eviction priority:
- Requests equal to limits for every container → Guaranteed QoS
- No requests or limits at all → BestEffort
- Requests lower than limits (as configured here) → Burstable QoS
The Catalog service is therefore Burstable: it is guaranteed its requested CPU and memory, and it can burst up to its limits during high load.
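For comparison, a Guaranteed-class Pod sets requests equal to limits for every container, for example:

resources:
  requests:
    cpu: "250m"
    memory: "256Mi"
  limits:
    cpu: "250m"
    memory: "256Mi"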
8. Summary
You have now incorporated essential production features:
- Configuration with ConfigMaps
- Secret management
- Rolling updates and easy rollbacks
- Pod disruption protection
- Network security with NetworkPolicy
- Strong service-to-service communication patterns
- Resource & QoS management
These are the advanced Kubernetes features used in most real-world microservice deployments.
Packaging Microservices with Helm
Managing multiple YAML files for Deployments, Services, HPA, ConfigMaps, and Ingress quickly becomes repetitive and hard to maintain. Helm solves this problem by letting you package Kubernetes manifests into reusable, configurable charts.
In this section, you will:
- Create a Helm chart for the Catalog microservice
- Configure values.yaml for flexible deployments
- Template Deployments, Services, HPA, and Ingress
- Install and upgrade your chart
- Understand how Helm simplifies multi-environment deployments
1. Create a Helm Chart
Inside the infra/helm directory:
cd infra/helm
helm create catalog
This generates:
catalog/
├── Chart.yaml
├── values.yaml
├── charts/
└── templates/
    ├── deployment.yaml
    ├── service.yaml
    ├── ingress.yaml
    ├── hpa.yaml
    ├── _helpers.tpl
    └── tests/
We will now modify these templates for our microservice.
2. Update Chart Metadata
Open catalog/Chart.yaml and update it:
apiVersion: v2
name: catalog
description: A Helm chart for the Catalog microservice
type: application
version: 0.1.0
appVersion: "1.0.0"
3. Configure values.yaml
Open catalog/values.yaml and clean it up to:
replicaCount: 2

image:
  repository: catalog-service
  tag: dev
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80

resources:
  requests:
    cpu: "100m"
    memory: "128Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"

ingress:
  enabled: true
  className: nginx
  hosts:
    - host: micro.local
      paths:
        - path: /catalog
          pathType: Prefix

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPU: 50
These values can be overridden for staging, production, etc. One caveat: if you strip the generated values.yaml down this far, also delete the generated templates you are not using (for example serviceaccount.yaml and the tests/ folder), because they reference values that no longer exist.
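Values can also be overridden on the command line at install or upgrade time (the tag 1.2.0 is just an example):

helm upgrade --install catalog ./catalog -n microservices \
  --set replicaCount=3 \
  --set image.tag=1.2.0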
4. Template the Deployment
Edit templates/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "catalog.fullname" . }}
  labels:
    app: {{ include "catalog.name" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ include "catalog.name" . }}
  template:
    metadata:
      labels:
        app: {{ include "catalog.name" . }}
    spec:
      containers:
        - name: catalog
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: 80
          readinessProbe:
            httpGet:
              path: /health
              port: 80
          livenessProbe:
            httpGet:
              path: /health
              port: 80
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
5. Template the Service
Edit templates/service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: {{ include "catalog.fullname" . }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: 80
  selector:
    app: {{ include "catalog.name" . }}
6. Template the Ingress
Edit templates/ingress.yaml:
{{- if .Values.ingress.enabled -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "catalog.fullname" . }}
  annotations:
    kubernetes.io/ingress.class: {{ .Values.ingress.className }}
spec:
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            pathType: {{ .pathType }}
            backend:
              service:
                name: {{ include "catalog.fullname" $ }}
                port:
                  number: {{ $.Values.service.port }}
          {{- end }}
    {{- end }}
{{- end }}
7. Template the Horizontal Pod Autoscaler (HPA)
Edit templates/hpa.yaml:
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "catalog.fullname" . }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "catalog.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetCPU }}
{{- end }}
8. Install the Helm Chart
From inside infra/helm:
helm install catalog ./catalog -n microservices
Verify:
kubectl get pods -n microservices
kubectl get svc -n microservices
kubectl get ingress -n microservices
kubectl get hpa -n microservices
9. Upgrade the Release
If you change values:
helm upgrade catalog ./catalog -n microservices
Helm performs:
- Rolling updates
- Configuration diffs
- Version tracking
10. Uninstall the Chart
helm uninstall catalog -n microservices
11. Summary
You now have:
- A complete Helm chart for the Catalog microservice
- Configurable values for images, replicas, ingress, and autoscaling
- Templated Deployment, Service, and HPA
- Ability to install, upgrade, and roll back releases easily
Helm dramatically simplifies deploying microservices—especially when you have multiple environments (dev, staging, production).
CI/CD with GitHub Actions
To complete our deployment workflow, we’re going to automate everything using GitHub Actions. This ensures that every change to your microservices is:
- Built
- Tested
- Packaged into a Docker image
- Pushed to a container registry
- Deployed to your Kubernetes cluster via Helm
This is the final step that makes your ASP.NET Core 10 microservices production-ready.
We’ll focus on the Catalog service, but the same workflow can be reused for Orders, Auth, and any future microservices.
1. Prerequisites for CI/CD
Before creating the pipeline, make sure you have:
1. A GitHub Repository
Push your project to GitHub.
2. A Container Registry
You may use any registry:
- GHCR (GitHub Container Registry → recommended)
- Docker Hub
- Azure Container Registry
- AWS ECR
- Google Artifact Registry
3. Kubernetes Access in CI
Your cluster must be accessible from GitHub Actions. Common options:
- Upload your kubeconfig to GitHub Secrets
- Use a cloud provider's managed identity
- Use kubectl with token-based auth
For this tutorial, we'll use kubeconfig.
4. GitHub Secrets
Go to:
GitHub repo → Settings → Secrets → Actions
Create the following secrets:
| Secret Name | Description |
|---|---|
| CR_USERNAME | Registry username |
| CR_PASSWORD | Registry token/PAT |
| KUBECONFIG_FILE | Base64-encoded kubeconfig |
| CR_REGISTRY | Registry hostname (e.g., ghcr.io) |
Encode kubeconfig:
cat ~/.kube/config | base64
Paste into KUBECONFIG_FILE secret.
2. Create the CI/CD Workflow File
Create:
.github/workflows/ci-cd.yml
Add the following:
name: CI/CD Pipeline

on:
  push:
    branches:
      - main

env:
  IMAGE_NAME: catalog-service
  CHART_NAME: catalog

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    outputs:
      image: ${{ secrets.CR_REGISTRY }}/${{ github.repository_owner }}/${{ env.IMAGE_NAME }}:latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup .NET 10
        uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '10.0.x'

      - name: Restore dependencies
        run: dotnet restore services/catalog/src

      - name: Build
        run: dotnet build services/catalog/src -c Release --no-restore

      - name: Run tests (if any)
        run: dotnet test --no-build --verbosity normal || true

      - name: Build Docker image
        run: |
          docker build \
            -t ${{ secrets.CR_REGISTRY }}/${{ github.repository_owner }}/${{ env.IMAGE_NAME }}:latest \
            services/catalog

      - name: Log in to container registry
        uses: docker/login-action@v3
        with:
          registry: ${{ secrets.CR_REGISTRY }}
          username: ${{ secrets.CR_USERNAME }}
          password: ${{ secrets.CR_PASSWORD }}

      - name: Push Docker image
        run: |
          docker push ${{ secrets.CR_REGISTRY }}/${{ github.repository_owner }}/${{ env.IMAGE_NAME }}:latest

  deploy:
    needs: build-and-push
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up Kubernetes config
        shell: bash
        run: |
          echo "${{ secrets.KUBECONFIG_FILE }}" | base64 --decode > kubeconfig

      - name: Setup kubectl
        uses: azure/setup-kubectl@v4
        with:
          version: 'latest'

      - name: Install Helm
        uses: azure/setup-helm@v3
        with:
          version: 'latest'

      - name: Deploy with Helm
        env:
          IMAGE: ${{ needs.build-and-push.outputs.image }}
        run: |
          KUBECONFIG=./kubeconfig \
          helm upgrade --install ${{ env.CHART_NAME }} \
            infra/helm/catalog \
            --namespace microservices \
            --create-namespace \
            --set image.repository=${{ secrets.CR_REGISTRY }}/${{ github.repository_owner }}/${{ env.IMAGE_NAME }} \
            --set image.tag=latest
3. What the Pipeline Does
✔ CI Phase (build-and-push job)
- Restores packages
- Builds the Catalog microservice
- Runs tests
- Builds a Docker image
- Logs into your container registry
- Pushes the image to the registry
- Outputs the full image name
✔ CD Phase (deploy job)
- Loads the kubeconfig
- Installs kubectl
- Installs Helm
- Deploys/updates the Catalog service Helm chart
- Passes the image repository & tag as Helm values
Every Git push to main automatically triggers:
Build → Test → Dockerize → Push → Deploy
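One common refinement, not included in the workflow above, is to tag images with the Git commit SHA instead of latest so every deployment is traceable and rollbacks are exact. For example, the build step would become:

      - name: Build Docker image
        run: |
          docker build \
            -t ${{ secrets.CR_REGISTRY }}/${{ github.repository_owner }}/${{ env.IMAGE_NAME }}:${{ github.sha }} \
            services/catalog

The push and Helm deploy steps would then use the same tag, e.g. --set image.tag=${{ github.sha }}.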
4. Verify the Deployment
Check Pods:
kubectl get pods -n microservices
Check image:
kubectl describe deployment catalog -n microservices | grep Image
Check Helm releases:
helm list -n microservices
5. Override Values Per Environment
For staging:
helm upgrade catalog infra/helm/catalog -n microservices \
-f infra/helm/staging-values.yaml
For production:
helm upgrade catalog infra/helm/catalog -n microservices \
-f infra/helm/prod-values.yaml
This makes multi-environment pipelines easy.
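Note that staging-values.yaml and prod-values.yaml are not created earlier in this tutorial; a hypothetical staging override could contain only the values that differ from the defaults:

# infra/helm/staging-values.yaml (illustrative)
replicaCount: 1

image:
  tag: staging

ingress:
  enabled: true
  hosts:
    - host: staging.micro.local
      paths:
        - path: /catalog
          pathType: Prefix

autoscaling:
  enabled: false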
6. Summary
You now have a complete, automated CI/CD pipeline that:
- Builds your ASP.NET Core 10 microservice
- Tests it
- Builds and pushes Docker images
- Deploys with Helm into Kubernetes
- Supports multiple environments
- Delivers zero-downtime rolling updates
Your microservices are now fully automated from commit to deployment.
Conclusion
In this tutorial, you learned how to build, containerize, and deploy an ASP.NET Core 10 microservices application using Docker and Kubernetes. Starting from a simple Catalog microservice, you progressively added essential cloud-native components including Deployments, Services, Ingress, health probes, autoscaling with HPA, and advanced Kubernetes features like ConfigMaps, Secrets, NetworkPolicies, and PodDisruptionBudgets.
You also packaged your microservice using Helm, giving you repeatable and configurable deployments for multiple environments such as local development, staging, or production. With GitHub Actions, you automate the entire pipeline—from building and testing your services to publishing container images and deploying to your Kubernetes cluster.
By following these steps, you now have a solid foundation for building fully containerized microservices that are scalable, resilient, secure, and production-ready. You can extend this setup by:
- Adding more microservices (Orders, Auth, Gateway)
- Integrating distributed tracing with OpenTelemetry
- Using a service mesh like Istio or Linkerd for traffic management
- Implementing canary or blue/green deployments
- Deploying to a managed Kubernetes service such as AKS, EKS, or GKE
This architecture is flexible enough to grow with your application’s needs while maintaining simplicity, scalability, and maintainability.
You can find the full source code on our GitHub.
That's just the basics. If you want to dive deeper into ASP.NET Core, you can take one of the following courses:
- Asp.Net Core 10 (.NET 10) | True Ultimate Guide
- .NET Core MVC - The Complete Guide 2025 [E-commerce]
- Learn C# Full Stack Development with Angular and ASP.NET
- Complete ASP.NET Core and Entity Framework Development
- Full Stack Web Development with C# OOP, MS SQL & ASP.NET MVC
- Build a complete distributed app using .Net Aspire
- .NET Microservices with Azure DevOps & AKS | Basic to Master
- ASP.NET Core - SOLID and Clean Architecture
- ANGULAR 20 and ASP.NET Core Web API - Real World Application
- .NET/C# Interview Masterclass- Top 500 Questions (PDF)(2025)
Congratulations — your ASP.NET Core 10 microservices are now ready for real-world deployment! 🚀
