Django on AWS with NGINX and Gunicorn
I spent this week moving one of my projects to AWS, and since I didn't find great documentation on how to do that, I wanted to share what I put together for those who come after.
My deployment has an Ubuntu 16.04 EC2 instance running NGINX at the front, an Ubuntu 16.04 EC2 instance with my Django app and Gunicorn behind it, and behind that an AWS RDS instance of PostgreSQL. I'm hosting my static files from S3 fronted by CloudFront (see django-storages).
# +-------------------------+
# |          NGINX          |
# +-------------------------+
# +-------------------------+
# |        Gunicorn         |
# |         Django          |
# +-------------------------+
# +-----------+ +-----------+
# | Postgres  | |    S3     |
# +-----------+ +-----------+
This drawing is a little off: for static files, the traffic actually passes from the browser directly to CloudFront/S3, but you get the point.
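The static files side is mostly Django settings. Here's a minimal sketch of a django-storages configuration, assuming an S3 bucket with a CloudFront distribution in front of it; the bucket name, CloudFront domain, and values are placeholders, not my actual setup:

# settings.py (sketch -- placeholder values throughout)
INSTALLED_APPS += ['storages']                        # pip install django-storages boto3

AWS_STORAGE_BUCKET_NAME = 'your-bucket'
AWS_S3_CUSTOM_DOMAIN = 'dxxxxxxxxxx.cloudfront.net'   # your CloudFront domain

# Serve collectstatic output from S3 via CloudFront
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
STATIC_URL = 'https://%s/' % AWS_S3_CUSTOM_DOMAIN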
On my NGINX instance, I'm using Certbot to maintain an SSL certificate from Let's Encrypt.
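For reference, the certificate itself takes only a couple of commands to obtain and test; a sketch assuming Certbot's nginx plugin is installed and DNS already points at this instance:

sudo certbot --nginx -d example.com -d www.example.com
sudo certbot renew --dry-run    # confirm automatic renewal works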
My configuration redirects HTTP to HTTPS and redirects www.example.com to example.com. You will notice the spot where django_instance_ip should be placed; this is the internal IP of the Django EC2 instance.
Here are the nginx and gunicorn config files:
/etc/nginx/nginx.conf
worker_processes 1;
pid /var/run/nginx.pid;
error_log /var/log/nginx/error.log warn;

events {
    worker_connections 1024;  # increase if you have lots of clients
    accept_mutex off;         # set to 'on' if nginx worker_processes > 1
}

http {
    include mime.types;
    # fallback in case we can't determine a type
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log combined;
    sendfile on;

    upstream app_server {
        server <django_instance_ip>:8000 fail_timeout=0;
    }

    #
    # Redirect all www to non-www
    #
    server {
        server_name www.example.com;
        ssl_certificate /etc/letsencrypt/live/example.com/cert.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        listen *:80;
        listen *:443 ssl;
        return 301 https://example.com$request_uri;
    }

    #
    # Redirect all non-encrypted to encrypted
    #
    server {
        server_name example.com;
        listen *:80;
        return 301 https://example.com$request_uri;
    }

    #
    # If no Host matches, close the connection to prevent host spoofing
    #
    server {
        listen 80 default_server;
        # listen 443 default_server;
        return 444;
    }

    #
    # example.com settings
    #
    server {
        server_name example.com;
        ssl_certificate /etc/letsencrypt/live/example.com/cert.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        listen *:443 ssl;
        client_max_body_size 4G;
        keepalive_timeout 5;

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            # we don't want nginx trying to do something clever with
            # redirects; we set the Host: header above already
            proxy_redirect off;
            proxy_pass http://app_server;
        }
    }
}
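After any change to this file, validate the configuration and reload NGINX rather than restarting it:

sudo nginx -t                   # check the configuration for errors
sudo systemctl reload nginx     # apply it without dropping connections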
In my Gunicorn configuration on the Django EC2 instance you will again notice django_instance_ip, the instance's internal IP address. Also notice the Environment lines, where you can set environment variables as needed; you might instead read them from a file (see the sketch after the unit file below).
/etc/systemd/system/gunicorn.service
[Unit]
Description=gunicorn daemon
After=network.target

[Service]
User=ubuntu
Group=www-data
WorkingDirectory=/home/ubuntu/Devel/your_project
Environment=SOME_DJANGO_SETTING=A_VALUE
Environment=ANOTHER_DJANGO_SETTING=ANOTHER_VALUE
ExecStart=/home/ubuntu/.virtualenvs/your_project/bin/gunicorn --timeout 300 \
    --access-logfile /var/log/gunicorn/access.log \
    --error-logfile /var/log/gunicorn/error.log \
    --log-level warning \
    --capture-output \
    --workers 3 \
    --bind <django_instance_ip>:8000 \
    --forwarded-allow-ips="*" \
    your_project.wsgi:application

[Install]
WantedBy=multi-user.target
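If you'd rather not bake settings into the unit file, systemd's EnvironmentFile directive reads KEY=VALUE pairs from a separate file. A sketch, using /etc/gunicorn/your_project.env as a hypothetical path:

# in the [Service] section, in place of the Environment= lines:
EnvironmentFile=/etc/gunicorn/your_project.env

# /etc/gunicorn/your_project.env (keep it root-owned, mode 600)
SOME_DJANGO_SETTING=A_VALUE
ANOTHER_DJANGO_SETTING=ANOTHER_VALUE

Either way, reload systemd and start the service:

sudo systemctl daemon-reload
sudo systemctl enable --now gunicorn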