Deploying a Python application to production is very different from running it locally with python main.py or uvicorn --reload. In the real world, your app must be fast, reliable, secure, and able to handle multiple users at the same time. This is where Docker, Gunicorn, and PostgreSQL come in.
Docker allows you to package your Python app and all of its dependencies into a portable container that runs the same way on your laptop, a cloud server, or a production VPS. This eliminates the classic “it works on my machine” problem and makes deployments predictable and repeatable.
Gunicorn is a production-grade WSGI/ASGI server that sits in front of your Python app and manages multiple worker processes. Instead of a single Python process handling all requests, Gunicorn runs several workers in parallel, which allows your application to handle more traffic and remain responsive under load. When combined with FastAPI or Flask, Gunicorn provides a rock-solid production runtime.
PostgreSQL is used as the database layer in this tutorial because it is reliable, fast, and widely supported in production environments. Unlike SQLite, PostgreSQL is designed for concurrent access, data integrity, and scalability, making it ideal for real applications.
In this tutorial, you will build and deploy a production-ready Python API using:
- FastAPI for the web framework
- Gunicorn as the application server
- PostgreSQL as the database
- Docker for containerization
- Docker Compose to run everything together
By the end of this guide, you will have a complete stack running in containers, ready to be deployed to any Linux server or cloud provider. This setup closely mirrors how modern Python backends are run in real-world production environments, making it perfect for startups, SaaS apps, and backend services.
Architecture Overview
Before we start writing code and Docker files, it’s important to understand how all the pieces in our stack work together. This will help you debug issues, scale your app later, and deploy it with confidence.
Our application will use the following architecture:
Client (Browser / API Client)
│
▼
Docker Host (Server)
│
▼
Gunicorn (Web Server)
│
▼
FastAPI Application
│
▼
PostgreSQL Database
All of these components will run inside Docker containers and will be connected using Docker Compose.
How the Request Flow Works
When a user sends a request (for example, to fetch data from an API endpoint):
1. The request arrives at the Docker host through port 8000.
2. Docker forwards the request to the web container.
3. Gunicorn receives the request and assigns it to one of its worker processes.
4. The worker forwards the request to the FastAPI app.
5. If data is needed, FastAPI queries PostgreSQL.
6. PostgreSQL returns the data to FastAPI.
7. FastAPI sends the response back through Gunicorn to the client.
This design ensures:
- Multiple requests can be handled at the same time
- The database is isolated and secure
- Each service can be scaled independently
Why Use Docker for This Setup
Docker allows us to package:
- Python
- FastAPI
- Gunicorn
- PostgreSQL
- All dependencies
into isolated, reproducible containers. That means:
- No system-level conflicts
- Easy upgrades
- One-command startup (docker compose up)
Your local development environment will behave the same way as production.
Why Gunicorn Instead of the Built-in Server
FastAPI includes Uvicorn, which is great for development, but in production:
- You need multiple worker processes
- You need automatic request handling and restarts
- You need better performance and stability
Gunicorn provides all of this, acting as a robust application server for FastAPI.
Why PostgreSQL in a Container
PostgreSQL will run in its own container with:
- Persistent data storage (Docker volumes)
- Automatic startup with Docker Compose
- Isolation from the application container
This gives you a real production database that behaves just like it would on a cloud server.
Creating the Python (FastAPI) Application
Now that we understand how the system will work, let’s build the actual Python application that will run inside Docker and talk to PostgreSQL. We will use FastAPI because it is fast, modern, and widely used for production APIs.
Our app will be a simple REST API with:
- A health check endpoint
- A PostgreSQL connection
- A small users table for demonstration
This gives us a realistic backend to deploy.
Project Structure
Create a new project folder:
mkdir dockerized-python-app
cd dockerized-python-app
Then create and activate a virtual environment inside it:
python3 -m venv venv
source venv/bin/activate   # macOS / Linux
# venv\Scripts\activate    # Windows
Inside it, create this structure:
dockerized-python-app/
│
├── app/
│ ├── main.py
│ ├── database.py
│ └── models.py
│
├── requirements.txt
└── .env
Installing Dependencies
Open requirements.txt and add:
fastapi
uvicorn
gunicorn
sqlalchemy
psycopg2-binary
python-dotenv
These libraries give us:
- FastAPI for the API
- Gunicorn + Uvicorn worker for production
- SQLAlchemy for the ORM
- psycopg2 for PostgreSQL
- python-dotenv for environment variables
Database Configuration
Create app/database.py:
import os
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, declarative_base
DATABASE_URL = os.getenv(
    "DATABASE_URL",
    "postgresql://postgres:postgres@localhost:5432/appdb"
)
engine = create_engine(DATABASE_URL)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()
This allows our app to connect to PostgreSQL using an environment variable that will later come from Docker.
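To verify the connection string before going further, a quick sanity check could look like this (an optional snippet, not one of the tutorial's files; it assumes the dependencies are installed and PostgreSQL is reachable at the configured URL):
# check_db.py (optional): run with `python check_db.py` from the project root
from sqlalchemy import text

from app.database import engine

# Open a connection and run a trivial query; this raises if PostgreSQL is unreachable
with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())  # expected output: 1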
Creating a Model
Create app/models.py:
from sqlalchemy import Column, Integer, String
from .database import Base
class User(Base):
    __tablename__ = "users"

    id = Column(Integer, primary_key=True, index=True)
    name = Column(String, nullable=False)
    email = Column(String, unique=True, index=True)
Main FastAPI App
Create app/main.py:
from fastapi import FastAPI, Depends
from sqlalchemy.orm import Session
from .database import Base, engine, SessionLocal
from .models import User
Base.metadata.create_all(bind=engine)
app = FastAPI()
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

@app.get("/health")
def health():
    return {"status": "ok"}

@app.post("/users")
def create_user(name: str, email: str, db: Session = Depends(get_db)):
    user = User(name=name, email=email)
    db.add(user)
    db.commit()
    db.refresh(user)
    return user

@app.get("/users")
def get_users(db: Session = Depends(get_db)):
    return db.query(User).all()
This API lets us:
- Check if the service is running
- Create users
- Read users from PostgreSQL
If you haven't already, install the dependencies into your virtual environment:
pip install -r requirements.txt
At this point, we have a real Python backend.
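The endpoints above return SQLAlchemy objects directly and let FastAPI serialize them. If you want explicit control over the response shape, a common refinement is a small Pydantic schema used as a response_model (an optional sketch, assuming the Pydantic v2 that current FastAPI releases install; the file name app/schemas.py is illustrative):
# app/schemas.py (optional)
from pydantic import BaseModel

class UserOut(BaseModel):
    id: int
    name: str
    email: str

    # Allow building the schema from ORM attributes (Pydantic v2)
    model_config = {"from_attributes": True}
You could then declare response_model=UserOut on the POST route and response_model=list[UserOut] on the GET route so only these three fields are ever returned.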
Connecting FastAPI to PostgreSQL
Now that we have a working FastAPI app, we need to make sure it can connect to PostgreSQL in a way that works both locally and inside Docker containers.
In containerized environments, we never hardcode database hosts, usernames, or passwords. Instead, we use environment variables, which Docker Compose will provide later.
Using Environment Variables
We already prepared our app for this in database.py:
DATABASE_URL = os.getenv(
    "DATABASE_URL",
    "postgresql://postgres:postgres@localhost:5432/appdb"
)
This means:
- When running locally, it connects to localhost
- When running inside Docker, it will use the value provided by Docker
Create a .env File
In the project root, create:
.env
Add:
DATABASE_URL=postgresql://postgres:postgres@db:5432/appdb
Notice the hostname db instead of localhost.
This will match the PostgreSQL service name in Docker Compose.
Testing PostgreSQL Locally (Optional)
If you have PostgreSQL installed, you can create the database:
psql postgres -U djamware   # replace djamware with your local PostgreSQL username
CREATE DATABASE appdb;
\q
Then run the app:
uvicorn app.main:app --reload
Test:
curl http://localhost:8000/health
If PostgreSQL is running, you can also test:
curl -X POST "http://localhost:8000/users?name=John&email=john@example.com"
Why This Setup Works in Docker
When we run Docker Compose later:
- PostgreSQL will run in a container called db
- FastAPI will run in a container called web
- Docker's internal network will allow web to reach db using the hostname db
That’s why we use:
postgresql://postgres:postgres@db:5432/appdb
instead of localhost.
Running FastAPI with Gunicorn
So far, we’ve been running our app with Uvicorn in development mode. That’s great for local testing, but in production, you need something more powerful and stable. This is where Gunicorn comes in.
Gunicorn is a process manager for Python web applications. It starts multiple worker processes and distributes incoming requests between them, allowing your API to handle more traffic and stay responsive even under load.
Why Not Use uvicorn --reload in Production?
The Uvicorn development server:
- Runs only one worker
- Is not optimized for heavy traffic
- Automatically reloads code (bad for production)
Gunicorn, by contrast:
- Runs multiple workers
- Manages crashes and restarts
- Is optimized for performance and stability
Gunicorn + Uvicorn Worker for FastAPI
FastAPI is an ASGI application, so we combine Gunicorn with the Uvicorn worker class.
The general command looks like this:
gunicorn app.main:app \
-k uvicorn.workers.UvicornWorker \
-w 4 \
-b 0.0.0.0:8000
Explanation:
- app.main:app → path to the FastAPI app
- -k uvicorn.workers.UvicornWorker → the ASGI worker class
- -w 4 → 4 worker processes
- -b 0.0.0.0:8000 → listen on all network interfaces
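If you prefer to keep these settings out of the command line, Gunicorn also reads a gunicorn.conf.py file from the working directory by default (a sketch; the worker count uses the common 2 x cores + 1 heuristic covered later in this guide):
# gunicorn.conf.py: Gunicorn picks this file up automatically when present
import multiprocessing

bind = "0.0.0.0:8000"
worker_class = "uvicorn.workers.UvicornWorker"
workers = multiprocessing.cpu_count() * 2 + 1  # common heuristic: 2 x cores + 1
With this file in place, running gunicorn app.main:app is enough to start the server.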
Testing Gunicorn Locally
Install Gunicorn if you haven’t already:
pip install gunicorn
Run:
gunicorn app.main:app -k uvicorn.workers.UvicornWorker -w 2 -b 0.0.0.0:8000
Open:
http://localhost:8000/health
If you see:
{"status":"ok"}
then Gunicorn is working correctly.
Why This Matters in Docker
Later, Docker will run this exact Gunicorn command inside the container. That means:
- Your API will already be production-ready
- The same setup will work locally and on a cloud server
Creating the Dockerfile
Now that our FastAPI application runs correctly with Gunicorn, it’s time to package everything into a Docker container. The Dockerfile defines how our Python app is built and how it runs inside a container.
This file will:
- Install Python dependencies
- Copy our application code
- Run Gunicorn as the main process
Create the Dockerfile
In the project root (dockerized-python-app/), create a file named:
Dockerfile
Add the following:
FROM python:3.12-slim
# Set working directory
WORKDIR /app
# Install system dependencies (for psycopg2)
RUN apt-get update && apt-get install -y gcc libpq-dev && rm -rf /var/lib/apt/lists/*
# Copy requirements and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . .
# Expose the app port
EXPOSE 8000
# Run Gunicorn
CMD ["gunicorn", "app.main:app", "-k", "uvicorn.workers.UvicornWorker", "-w", "4", "-b", "0.0.0.0:8000"]
What Each Part Does
- FROM python:3.12-slim: uses a lightweight, modern Python image.
- WORKDIR /app: sets /app as the working directory inside the container.
- RUN apt-get install ...: installs the PostgreSQL client libraries needed by psycopg2.
- COPY requirements.txt . + pip install: installs all Python dependencies.
- COPY . .: copies your FastAPI project into the container.
- EXPOSE 8000: documents the port used by the application.
- CMD [...]: runs Gunicorn when the container starts.
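Because COPY . . copies everything from the project root into the image, it is also worth adding a .dockerignore file next to the Dockerfile so local-only files stay out of the container (a minimal sketch; adjust to your project):
# .dockerignore
venv/
__pycache__/
*.pyc
.git/
.env
Excluding .env here is safe because Docker Compose will read it from the host via env_file rather than from inside the image.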
Why We Use Gunicorn in the Dockerfile
This makes the container:
- Production-ready
- Able to handle multiple requests
- Identical in development and production
No extra setup is needed later.
Docker Compose (FastAPI + PostgreSQL)
So far, we have a Docker image for our FastAPI application, but we still need a PostgreSQL database and a way to run both containers together. This is exactly what Docker Compose is for.
Docker Compose lets us define multiple services (containers), connect them on a private network, and start everything with a single command.
Create docker-compose.yml
In the project root, create a file named:
docker-compose.yml
Add the following:
version: "3.9"

services:
  web:
    build: .
    container_name: fastapi_app
    ports:
      - "8000:8000"
    env_file:
      - .env
    depends_on:
      - db

  db:
    image: postgres:16
    container_name: postgres_db
    environment:
      POSTGRES_DB: appdb
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"

volumes:
  pgdata:
What This Does
This file defines two services:
1. web (FastAPI + Gunicorn)
- Builds from the Dockerfile
- Exposes port 8000
- Loads environment variables from .env
- Starts after the PostgreSQL container via depends_on (see the note after this list)
2. db (PostgreSQL)
- Uses the official PostgreSQL 16 image
- Creates the appdb database
- Stores data in a Docker volume so it is not lost when containers stop
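Note that depends_on only controls start order; it does not wait until PostgreSQL is actually ready to accept connections. If the web container occasionally fails on first boot because it tries to create tables too early, one option is to add a healthcheck to the db service and a readiness condition to web (a sketch of the relevant fragments; the intervals are illustrative):
services:
  db:
    # ...existing settings...
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d appdb"]
      interval: 5s
      timeout: 5s
      retries: 5

  web:
    # ...existing settings...
    depends_on:
      db:
        condition: service_healthy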
How the Containers Communicate
Docker Compose creates a private network where:
- The FastAPI container can reach PostgreSQL at db:5432
- That's why our DATABASE_URL uses db as the hostname
Environment Variables
Your .env file:
DATABASE_URL=postgresql://postgres:postgres@db:5432/appdb
Docker Compose will inject this into the web container automatically.
Running the Stack Locally
Now that we have the FastAPI app, PostgreSQL, Dockerfile, and Docker Compose configuration, we can run the entire stack with a single command.
This will start:
- The FastAPI + Gunicorn container
- The PostgreSQL database
- The internal Docker network between them
Build and Run the Containers
From the project root, run:
docker compose up --build
Docker will:
- Build the FastAPI image
- Pull the PostgreSQL image
- Start both containers
- Connect them together
After a few seconds, you should see logs from both Gunicorn and PostgreSQL.
Test the API
Open a browser or use curl:
http://localhost:8000/health
You should see:
{"status":"ok"}
Create a User
Test writing to PostgreSQL:
curl -X POST "http://localhost:8000/users?name=Alice&email=alice@example.com"
You should get a response with the user’s ID.
Fetch Users
curl http://localhost:8000/users
You should see a list of users coming from PostgreSQL.
This confirms:
- FastAPI is running in Docker
- Gunicorn is handling requests
- PostgreSQL is storing data
- The containers are correctly connected
Stopping the Stack
Press CTRL + C to stop the containers, then run:
docker compose down
Your PostgreSQL data will remain safe in the pgdata volume.
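If you ever want to reset the database as well, the -v flag removes named volumes along with the containers, so use it deliberately:
docker compose down -v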
Deploying to a Production Server
One of the biggest advantages of Docker is that the exact same setup you ran locally can be deployed to a real server with almost no changes. Whether you use AWS EC2, DigitalOcean, Linode, or a VPS, the process is the same.
In this section, we’ll deploy the FastAPI + Gunicorn + PostgreSQL stack to an Ubuntu server.
Step 1 – Prepare the Server
SSH into your server:
ssh ubuntu@your_server_ip
Update packages and install Docker:
sudo apt update
sudo apt install -y docker.io docker-compose-plugin
sudo systemctl enable docker
sudo systemctl start docker
Add your user to the Docker group:
sudo usermod -aG docker $USER
newgrp docker
Step 2 – Upload Your Project
On your local machine:
scp -r dockerized-python-app ubuntu@your_server_ip:/home/ubuntu
On the server:
cd dockerized-python-app
Step 3 – Configure Environment Variables
Edit .env:
nano .env
Use a secure password:
DATABASE_URL=postgresql://postgres:strongpassword@db:5432/appdb
Update docker-compose.yml to match the password:
POSTGRES_PASSWORD: strongpassword
Step 4 – Start the App
docker compose up -d --build
Check status:
docker compose ps
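To follow the application logs and confirm Gunicorn started its workers:
docker compose logs -f web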
Step 5 – Open the Firewall
Allow traffic:
sudo ufw allow 8000
sudo ufw enable
Now open:
http://your_server_ip:8000/health
Your API is now running in production 🎉
(Optional) Use Nginx + HTTPS
For real-world apps, you would normally:
- Put Nginx in front as a reverse proxy
- Add SSL with Let's Encrypt
- Use ports 80/443
This stack is already compatible with that setup.
Best Practices and What to Do Next
You now have a fully working production-ready Python stack running with FastAPI, Gunicorn, PostgreSQL, and Docker. This setup is very close to what is used in real-world SaaS platforms and backend services. To make it even more robust and secure, here are some important best practices to follow.
Use Environment Variables for Secrets
Never hardcode credentials in your source code or Docker files. Always use:
- .env files for development
- Server environment variables for production
For example:
- Database passwords
- API keys
- JWT secrets
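On a real server, you can also let Docker Compose substitute values from the host environment (or an uncommitted .env file) instead of writing them into docker-compose.yml (a sketch of the relevant fragment):
  db:
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
Compose replaces ${POSTGRES_PASSWORD} at startup with the value defined on the host.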
Use Docker Volumes for PostgreSQL
Your database data should never live inside the container itself. We already solved this by using:
volumes:
  - pgdata:/var/lib/postgresql/data
This ensures:
- Data survives container restarts
- You can safely upgrade or redeploy
Tune Gunicorn Workers
A common rule:
workers = 2 × CPU cores + 1
For a 2-core server:
-w 5
This helps you get maximum performance without wasting memory.
Add Health Checks
You already created:
/health
You can use this to:
- Monitor uptime
- Configure load balancers
- Auto-restart containers
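For example, Docker Compose can probe the endpoint itself. Since curl is not installed in the python:3.12-slim base image, the check below reuses the Python interpreter already present in the container (a sketch; the intervals are illustrative):
  web:
    # ...existing settings...
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')"]
      interval: 30s
      timeout: 5s
      retries: 3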
Add Nginx and HTTPS
For production:
- Use Nginx as a reverse proxy
- Add SSL with Let's Encrypt
- Enable gzip and caching
This protects your users and improves performance.
Add CI/CD
You can automate:
- Docker image builds
- Tests
- Server deployment
Using GitHub Actions, GitLab CI, or similar tools.
What to Learn Next
To go further:
- Add authentication (JWT or OAuth)
- Add database migrations (Alembic)
- Add Redis for caching
- Add monitoring (Prometheus + Grafana)
You now have a complete, professional deployment pipeline for Python APIs.
This same architecture can scale from a small personal project to a high-traffic production system 🚀
Conclusion
In this tutorial, you built a complete, production-ready Python backend using FastAPI, Gunicorn, PostgreSQL, and Docker. Instead of running your application with a simple development server, you now have a setup that mirrors how modern Python APIs are deployed in real-world environments.
You started by creating a FastAPI application with a proper database layer using SQLAlchemy and PostgreSQL. You then introduced Gunicorn to run your application with multiple workers, making it capable of handling real traffic. After that, you packaged everything into a Docker image and used Docker Compose to orchestrate both the application and the database.
With just one command, you were able to:
- Run the entire stack locally
- Test real database operations
- Deploy the same setup to a cloud server
This approach gives you:
- Consistent environments from development to production
- Easy scaling and redeployment
- A clean separation between your app and its infrastructure
Docker removes friction, Gunicorn adds performance and stability, and PostgreSQL gives you a powerful, reliable database — together they form a solid foundation for any serious Python application.
From here, you can extend this stack with features like Nginx, HTTPS, CI/CD pipelines, background workers, or message queues. But even in its current form, what you have built is already suitable for running real applications in production.
You now have everything you need to confidently deploy Python APIs like a professional DevOps engineer 🚀
You can find the full source code on our GitHub.
That's just the basics. If you want to dive deeper into Python, Django, FastAPI, Flask, and related topics, you can take one of the following affordable courses:
- 100 Days of Code: The Complete Python Pro Bootcamp
- Python Mega Course: Build 20 Real-World Apps and AI Agents
- Python for Data Science and Machine Learning Bootcamp
- Python for Absolute Beginners
- Complete Python With DSA Bootcamp + LEETCODE Exercises
- Python Django - The Practical Guide
- Django Masterclass : Build 9 Real World Django Projects
- Full Stack Web Development with Django 5, TailwindCSS, HTMX
- Django - The Complete Course 2025 (Beginner + Advance + AI)
- Ultimate Guide to FastAPI and Backend Development
- Complete FastAPI masterclass from scratch
- Mastering REST APIs with FastAPI
- REST APIs with Flask and Python in 2025
- Python and Flask Bootcamp: Create Websites using Flask!
- The Ultimate Flask Course
Thanks!
