Deploying a Django Application on a Linux Server: PostgreSQL, Gunicorn, Supervisor, Redis, and Nginx

DevOps

You've built your Django app locally. It runs fine. Now you need to get it onto a production Linux server, properly, without duct tape. That means a dedicated database, a real WSGI server, a process manager that survives reboots, a caching layer, and a battle-tested reverse proxy in front of everything.

This guide walks through every step: creating a server user, setting up PostgreSQL, deploying via Git hooks, managing processes with Supervisor, wiring up Redis, configuring Nginx with security headers, and terminating SSL with Certbot.


Set Your Variables

This article uses shell-style placeholders throughout: $SERVER_IP, $SSH_PORT, $SERVER_USER, $SERVER_PASSWORD, $DB_NAME, $DB_USER, $DB_PASSWORD, $PROJECT_NAME, and $DOMAIN_NAME. Substitute your own values for them before running any command.


1. Prerequisites

Before starting, make sure you have:

  • A fresh Ubuntu or Debian server with root access
  • SSH access (via password or key)
  • A domain pointed at your server's IP (for SSL setup)
  • Your Django project in a local Git repository

All commands assume Ubuntu/Debian. Adapt the package manager calls for other distros.


2. Create a Dedicated Server User

Running your application as the default ubuntu user, which has full sudo rights, is a significant security risk. Create a dedicated user that owns your app and nothing more.

# SSH into your server as ubuntu
ssh ubuntu@$SERVER_IP -p $SSH_PORT

# Create the user with a home directory
sudo useradd -m $SERVER_USER

# Set the password (note: this puts the password in your shell history;
# run `sudo passwd $SERVER_USER` instead if that matters)
echo "$SERVER_USER:$SERVER_PASSWORD" | sudo chpasswd

# Grant sudo privileges
sudo usermod -aG sudo $SERVER_USER

# Set bash as the default shell
sudo chsh --shell /bin/bash $SERVER_USER

# Switch to the new user
su - $SERVER_USER

SSH AllowUsers (if applicable)

If your /etc/ssh/sshd_config uses the AllowUsers directive to restrict which accounts can log in via SSH, you must add your new user explicitly — otherwise SSH will reject them entirely, even with valid credentials.

sudo nano /etc/ssh/sshd_config

Find or add:

AllowUsers $SERVER_USER

Restart SSH after saving:

sudo systemctl restart ssh

3. Set Up PostgreSQL

Install and enable

sudo apt-get update
sudo apt-get install postgresql postgresql-contrib -y
sudo systemctl enable postgresql
sudo systemctl start postgresql

Create the database and role

PostgreSQL uses a postgres system user to manage the database engine. Switch to it, then create your database and role:

sudo su - postgres
createdb $DB_NAME

echo "CREATE ROLE $DB_USER WITH PASSWORD '$DB_PASSWORD';" | psql
echo "ALTER ROLE $DB_USER WITH LOGIN;" | psql
echo "GRANT ALL PRIVILEGES ON DATABASE \"$DB_NAME\" TO $DB_USER;" | psql

exit

Note: The ALTER ROLE ... WITH LOGIN step is separate because CREATE ROLE doesn't grant login capability by default. Without it, Django's database connection will be refused. On PostgreSQL 15 and later, GRANT ALL on the database is also no longer enough for migrations to create tables; either make the role the database owner (ALTER DATABASE "$DB_NAME" OWNER TO $DB_USER;) or grant it CREATE on the public schema.

Update your Django settings.py to match:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': '$DB_NAME',
        'USER': '$DB_USER',
        'PASSWORD': '$DB_PASSWORD',
        'HOST': '127.0.0.1',
        'PORT': '5432',
    }
}
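
Hardcoding credentials in settings.py works, but they end up in Git. A minimal sketch that reads them from environment variables instead; the DB_* variable names here are illustrative, not a standard:

```python
import os

# Read connection details from the environment, with local-dev fallbacks.
# The variable names (DB_NAME, DB_USER, ...) are an assumption; pick any
# naming convention you like, as long as it matches your environment setup.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('DB_NAME', 'myproject'),
        'USER': os.environ.get('DB_USER', 'myproject'),
        'PASSWORD': os.environ.get('DB_PASSWORD', ''),
        'HOST': os.environ.get('DB_HOST', '127.0.0.1'),
        'PORT': os.environ.get('DB_PORT', '5432'),
    }
}
```

You can export these variables in the Supervisor program config (via its environment= option) or load them from a .env file so nothing sensitive is committed.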

4. Git-Based Deployment

This approach sets up a bare Git repository on the server. When you push from your local machine, the server automatically checks out the code and runs a deploy script. No manual file transfers, no FTP.

Create the directory structure

SSH in as $SERVER_USER and create the required directories:

cd ~
mkdir repo.git app conf logs media static

| Directory | Purpose |
| --- | --- |
| repo.git | Bare Git repo that receives your pushes |
| app | Working copy of your project (checked out by the Git hook) |
| conf | Nginx and Supervisor config files |
| logs | Application and server logs |
| media | User-uploaded files |
| static | Collected static files |

Initialize the bare repository

cd ~/repo.git
git init --bare
git --bare update-server-info
git config core.bare false
git config receive.denycurrentbranch ignore
git config core.worktree /home/$SERVER_USER/app/

The core.worktree setting is what makes this work: even though repo.git is a bare repository, it instructs Git to check files out into the app/ directory instead. This is the bridge between the bare repo (which receives pushes) and the running code directory.

Create the post-receive hook

cat > hooks/post-receive <<EOF
#!/bin/sh
git checkout -f
cd ../app/
./deploy.sh
EOF

chmod +x hooks/post-receive

The post-receive hook fires automatically after every successful push. It:

  1. Checks out the latest code into core.worktree (app/)
  2. Changes into the app directory
  3. Runs deploy.sh — a script in your project root that handles migrations, static file collection, or anything else the deployment needs

A minimal deploy.sh looks like:

#!/bin/bash
set -e

source /home/$SERVER_USER/env/bin/activate

pip install -r requirements/prod.txt
python manage.py migrate --noinput
python manage.py collectstatic --noinput

sudo supervisorctl restart $PROJECT_NAME

Make it executable in your local repo:

chmod +x deploy.sh
git add deploy.sh
git commit -m "Add deploy script"

Connect your local repo and push

Back on your local machine:

# Add the server as a remote (ssh:// syntax, so the custom port is honored)
git remote add server ssh://$SERVER_USER@$SERVER_IP:$SSH_PORT/home/$SERVER_USER/repo.git

# Copy your SSH key to the server (avoids password prompts on push)
ssh-copy-id -p $SSH_PORT $SERVER_USER@$SERVER_IP

# Push all branches
git push server --all

From this point, every git push server main deploys your application automatically.

Extensibility: The deploy.sh script is intentionally simple. You can extend it to run cache warmup commands, send deployment notifications to Slack, push static files to a CDN, or trigger smoke tests — anything that should happen on every deploy.


5. Python Environment and Project Setup

Install pip and virtualenv

sudo apt install python3-pip -y
pip3 install --user virtualenv

On newer Ubuntu releases a system-wide pip3 install is blocked ("externally managed environment", PEP 668); --user keeps the install in your home directory. Alternatively, skip virtualenv and use the built-in module: python3 -m venv env.

Create a virtual environment and install dependencies

cd ~
virtualenv env -p python3
source env/bin/activate

cd ~/app
pip install -r requirements/prod.txt
pip install gunicorn

Run migrations and collect static files

python manage.py migrate
python manage.py collectstatic --noinput

Sanity check

Before handing things off to Gunicorn and Supervisor, confirm the app itself is healthy:

python manage.py runserver 0.0.0.0:8000

Hit http://$SERVER_IP:8000 in your browser (you may need to add $SERVER_IP to ALLOWED_HOSTS and open port 8000 in your firewall first). If it loads, the app and database are wired up correctly. Kill the server with Ctrl+C when done; runserver is not for production use.


6. Supervisor for Process Management

Django itself has no concept of "stay running." If the process dies — due to an error, a reboot, or anything else — nothing restarts it. Supervisor is the daemon that owns that responsibility.

Install and enable

sudo apt install supervisor -y
sudo systemctl enable supervisor
sudo systemctl start supervisor

Create the Supervisor program config

nano ~/conf/supervisor.conf

Add:

[program:$PROJECT_NAME]
command=/home/$SERVER_USER/env/bin/gunicorn $PROJECT_NAME.wsgi:application --workers 3 --bind 127.0.0.1:8000
user=$SERVER_USER
directory=/home/$SERVER_USER/app/
stdout_logfile=/home/$SERVER_USER/logs/django.log
stderr_logfile=/home/$SERVER_USER/logs/django_err.log
autostart=true
autorestart=true

A few things worth noting:

  • --workers 3: A common starting point is (2 × CPU cores) + 1. Adjust based on your server's resources.
  • --bind 127.0.0.1:8000: Gunicorn listens on localhost only. Nginx is the only entry point from the outside.
  • autostart and autorestart: Supervisor will start the process on boot and restart it if it dies unexpectedly.

If you prefer a gunicorn.conf.py file for more granular control, you can replace the inline command with:

command=/home/$SERVER_USER/env/bin/gunicorn $PROJECT_NAME.wsgi:application -c /home/$SERVER_USER/app/gunicorn.conf.py
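
A starting gunicorn.conf.py might look like this; a sketch, mirroring the localhost binding and the (2 × cores) + 1 worker rule from above, with every value tunable per server:

```python
# gunicorn.conf.py -- a minimal sketch; values mirror the inline command above.
import multiprocessing

bind = "127.0.0.1:8000"                        # Nginx proxies to this address
workers = multiprocessing.cpu_count() * 2 + 1  # common (2 * cores) + 1 starting formula
timeout = 30                                   # kill and restart workers stuck > 30s
accesslog = "-"                                # "-" = stdout, captured by Supervisor
errorlog = "-"                                 # "-" = stderr, captured by Supervisor
```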

Activate the config

sudo ln -s /home/$SERVER_USER/conf/supervisor.conf /etc/supervisor/conf.d/$PROJECT_NAME.conf
sudo supervisorctl reload

Allow the deploy user to restart without a password

Your deploy.sh script calls supervisorctl restart on every push. To avoid a password prompt during automated deployments, add a sudoers rule:

sudo visudo -f /etc/sudoers.d/supervisor_$PROJECT_NAME

Add:

$SERVER_USER ALL = (root) NOPASSWD:/usr/bin/supervisorctl restart $PROJECT_NAME

7. Redis

Redis handles caching, session storage, and acts as the message broker for background task queues like Celery or Django Q. Install it and leave it running on its default port (6379).

sudo apt install redis-server -y
sudo systemctl enable redis-server
sudo systemctl start redis-server

Verify it's alive:

redis-cli ping
# Expected output: PONG

Update your Django settings to point at Redis where needed (e.g., for caching or Celery). The built-in RedisCache backend shown below requires Django 4.0 or newer; on older versions, use the django-redis package instead:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.redis.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379/1',
    }
}
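
The same Redis instance can also back sessions and Celery. A sketch; the split across Redis database numbers (/0 for the broker, /1 for the cache above) is a convention to keep a cache flush from clobbering the task queue, not a requirement:

```python
# Cached sessions: fast reads from Redis, with the relational database
# as a write-through fallback so sessions survive a cache flush.
SESSION_ENGINE = 'django.contrib.sessions.backends.cached_db'

# Celery broker and result backend on a different Redis database
# number than the cache configured above.
CELERY_BROKER_URL = 'redis://127.0.0.1:6379/0'
CELERY_RESULT_BACKEND = 'redis://127.0.0.1:6379/0'
```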

8. Nginx Configuration

Nginx sits in front of Gunicorn as a reverse proxy. It handles SSL termination, static and media file serving, security headers, and connection management — all of which Gunicorn was never designed to handle efficiently.

Install Nginx

sudo apt install nginx -y
sudo systemctl enable nginx

# Remove the default config to avoid conflicts
sudo rm /etc/nginx/sites-enabled/default

Write the Nginx config

nano ~/conf/nginx.conf

Add:

upstream $PROJECT_NAME {
    server 127.0.0.1:8000;
}

# Redirect www to non-www
server {
    listen 80;
    server_name www.$DOMAIN_NAME;
    return 301 https://$DOMAIN_NAME$request_uri;
}

server {
    listen 80;
    server_name $DOMAIN_NAME;

    error_log /home/$SERVER_USER/logs/nginx.error.log;

    location /robots.txt {
        alias /home/$SERVER_USER/static/robots.txt;
    }

    location /favicon.ico {
        alias /home/$SERVER_USER/static/img/favicon.ico;
    }

    location ~ ^/(media|static)/ {
        root /home/$SERVER_USER/;
        expires 30d;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
        proxy_pass http://$PROJECT_NAME;
        client_max_body_size 50m;

        # Security headers
        add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
        add_header X-Frame-Options SAMEORIGIN;
    }

    # Block hidden files (.env, .git, etc.)
    location ~ /\. {
        access_log off;
        log_not_found off;
        deny all;
    }
}

A few things to understand here:

  • upstream $PROJECT_NAME: Defines the Gunicorn backend. The name matches the Supervisor program and is used in proxy_pass.
  • www redirect: Forces all www.$DOMAIN_NAME traffic straight to https://$DOMAIN_NAME. Certbot adds the equivalent HTTP-to-HTTPS redirect for $DOMAIN_NAME itself once the certificate is issued.
  • expires 30d: Static and media files are cached by the browser for 30 days, reducing server load significantly.
  • client_max_body_size 50m: Allows file uploads up to 50MB. Adjust for your use case.
  • Security headers: The add_header directives handle HSTS, MIME-type sniffing prevention, and clickjacking prevention. X-XSS-Protection is a legacy header that modern browsers ignore, but it is harmless to keep.
  • Dotfile block: Prevents .env, .git, and similar files from being accidentally served.
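
Since every config in this article uses $NAME placeholders, you can render them mechanically with Python's string.Template. A sketch; safe_substitute fills only the keys you pass, so Nginx's own runtime variables ($http_host, $request_uri, etc.) survive untouched, whereas substitute() would raise KeyError on them:

```python
from string import Template

# A fragment of the Nginx config containing both an article placeholder
# ($DOMAIN_NAME) and Nginx runtime variables ($http_host, $request_uri).
template = Template("""\
server {
    listen 80;
    server_name $DOMAIN_NAME;
    location / {
        proxy_set_header Host $http_host;
        return 301 https://$DOMAIN_NAME$request_uri;
    }
}
""")

# Only DOMAIN_NAME is substituted; the Nginx variables are left for
# Nginx to evaluate at request time.
rendered = template.safe_substitute(DOMAIN_NAME="example.com")
print(rendered)
```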

Activate the config

sudo ln -s /home/$SERVER_USER/conf/nginx.conf /etc/nginx/conf.d/$PROJECT_NAME.conf

9. SSL with Certbot

Never serve a production application over plain HTTP. Certbot automates certificate issuance from Let's Encrypt and handles Nginx config modification automatically.

sudo apt install certbot python3-certbot-nginx -y
sudo certbot --nginx -d $DOMAIN_NAME -d www.$DOMAIN_NAME

Certbot will issue the certificate, update your Nginx config to listen on port 443, and set up HTTP-to-HTTPS redirects automatically.

Auto-renewal: Let's Encrypt certificates expire every 90 days. Certbot installs a systemd timer that runs certbot renew automatically. Verify it's active with:

sudo systemctl status certbot.timer

10. Final Validation and Restart

Test that your Nginx configuration is syntactically valid before applying it:

sudo nginx -t

Expected output:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

If there are errors, fix them before restarting. Then restart both services:

sudo systemctl restart nginx
sudo supervisorctl restart $PROJECT_NAME

Open https://$DOMAIN_NAME in your browser. You should see your Django application running over HTTPS.


Final State Overview

After completing all steps, here is what is running on your server:

| Component | Role | Status |
| --- | --- | --- |
| PostgreSQL | Primary database | Running as system service |
| Gunicorn | WSGI application server | Managed by Supervisor |
| Supervisor | Process manager | Starts on boot, restarts on crash |
| Redis | Cache and task broker | Running as system service |
| Nginx | Reverse proxy, SSL, static files | Running as system service |
| Certbot | SSL certificate management | Auto-renewal via systemd timer |

Every git push server main now triggers the post-receive hook, runs deploy.sh, applies migrations, collects static files, and restarts Gunicorn — all without manual intervention.


Common Pitfalls

1. 502 Bad Gateway from Nginx

This almost always means Gunicorn isn't running or isn't listening on port 8000. Check:

sudo supervisorctl status $PROJECT_NAME
tail -50 /home/$SERVER_USER/logs/django_err.log

2. Static files returning 404

Run python manage.py collectstatic and confirm that STATIC_ROOT in your settings points to /home/$SERVER_USER/static/. Nginx serves from there — not from inside your project directory.
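
The matching settings for this layout would be along these lines; a sketch, with "appuser" standing in for your actual $SERVER_USER:

```python
# settings.py -- static/media paths matching the Nginx layout in this article.
# "appuser" is a stand-in for your actual server user's home directory.
STATIC_URL = '/static/'
STATIC_ROOT = '/home/appuser/static'   # collectstatic writes here; Nginx serves it

MEDIA_URL = '/media/'
MEDIA_ROOT = '/home/appuser/media'     # user uploads land here
```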

3. Permission errors in logs

The Supervisor process runs as $SERVER_USER. Make sure the logs/, static/, and media/ directories are owned by that user:

sudo chown -R $SERVER_USER:$SERVER_USER /home/$SERVER_USER/logs /home/$SERVER_USER/static /home/$SERVER_USER/media

4. deploy.sh not executable

Git tracks the executable bit, but if it was never committed (or was stripped along the way), the post-receive hook will fail silently. Fix it once on the server:

chmod +x /home/$SERVER_USER/app/deploy.sh

5. Database connection refused

If Django reports connection refused on port 5432, PostgreSQL may not be running or the role hasn't been granted login:

sudo systemctl status postgresql
sudo su - postgres -c "psql -c '\du'"

That's a production Django deployment from scratch. One server user, one database, one process manager, one cache layer, one reverse proxy, and a Git hook that ties the whole deployment pipeline together. It is not the most glamorous setup, but it is solid, auditable, and surprisingly easy to debug when something breaks at 2 AM.

Keep your deploy.sh lean, watch your Supervisor logs, and stay on top of OS updates.