@haasr
Last active October 6, 2023 12:45
Set Up Secure Django Website: Ubuntu + Nginx, Postgres DB, Certbot, Fail2Ban, UFW
*********************************************************************************
Using Ubuntu 22.04 deployed on DigitalOcean. Some steps (most of the SSH
key configuration) apply mainly to DigitalOcean. I also used Ubuntu 18.04,
but Python had to be updated to 3.9.7 or many dependencies had to be
downgraded or dropped.
In this setup process, my static files are stored in AWS. If your static
storage is configured another way, set the STATIC_ROOT in settings.py.
You may also want to consider using dj-static
(https://pypi.org/project/dj-static/) to simplify the process of
switching from development to production without changing staticfile
config.
To better understand the way my Django project is configured, see:
https://github.com/haasr/collaborative-blog/blob/main/blog/settings.py
Reference for configuring Django with S3 bucket storage:
https://www.youtube.com/watch?v=inQyZ7zFMHM
** NOTE: If server is not deployed on DigitalOcean, you may have to
disregard the SSH instructions. I am not familiar with how other hosting
platforms let you set up auth methods for remote access.
Create ssh keypair on local machine
-----------------------------------
Instead of using a password, opt to use SSH key
(see https://docs.digitalocean.com/products/droplets/how-to/add-ssh-keys/
if needed). *SAVE your tokens in more than one place.
ssh-keygen -t rsa -b 4096
cat ~/.ssh/id_rsa.pub
-- Paste the output into the DigitalOcean text area, name it, and save.
Login with root
---------------
ssh root@ip_address
Create regular sudo user + add groups for journalctl
-----------------------------------------------------
adduser ryan
usermod -aG sudo,adm,systemd-journal ryan
-- Check groups: groups ryan
Configure SSH Key for Regular User
-----------------------------------
Below I am switching my shell instance to my standard ryan user
account. Then I am copying the authorized_keys file from /root/.ssh so
that I can ssh in as the ryan user without using a separate key.
sudo -u ryan bash
cd /home/ryan/
mkdir .ssh
cd .ssh
sudo cp /root/.ssh/authorized_keys .
The authorized_keys file could not be read as-is (it was protected for
the root user). Rather than work out the exact permissions SSH needs, I
did this:
sudo mv authorized_keys ak
touch authorized_keys
sudo cat ak > authorized_keys
Log out and test with the ryan user. Assuming it works, you can now
delete the authorized_keys file in /root/.ssh (and the temporary ak file
in /home/ryan/.ssh).
**NOTE: It is most convenient to define SSH stuff in config file on
local machine (~/.ssh/config). The private key referenced is from the
keypair generated earlier. The public key was already provided to
DigitalOcean and now stored in the regular user's .ssh folder.
Example local .ssh/config file:
===============================
Host 198.51.100.255
Hostname 198.51.100.255
User ryan
IdentityFile ~/.ssh/private_key_name
Host <site hostname to configure (for future use)>
Hostname 198.51.100.255
User ryan
IdentityFile ~/.ssh/private_key_name
Set Timezone
------------
sudo dpkg-reconfigure tzdata
Update and Install Needed Packages
----------------------------------
sudo apt update && sudo apt upgrade -y
sudo apt install certbot python3-certbot-nginx curl fail2ban gcc git libssl-dev libpq-dev net-tools nginx postgresql postgresql-contrib ufw
-- Also... check python version with `python3 --version`
-- And then install the corresponding python3-dev and venv packages:
sudo apt install python3.x-venv python3.x-dev
-- Install net-tools so I can use the dang ifconfig command
-- Reboot if necessary
Configure Git
-------------
git config --global user.name "Full Name"
git config --global user.email "gitemail@email.com"
Create github SSH key on remote server
======================================
This step applies if your Github account is configured to use SSH keys
instead of passwords or tokens.
ssh-keygen -t rsa -b 4096
-- Create .ssh/config file:
Host github.com
Hostname github.com
User git
IdentityFile ~/.ssh/private_key_name
-- Now copy the public key to your clipboard and on Github,
go to 'Settings' > 'SSH and GPG Keys' > 'New SSH key' and paste in
the value.
-- Then test, answering 'yes' if prompted to trust the host:
ssh -T git@github.com
Clone Django project repo
-------------------------
cd ~
git clone <repo>
Set up virtualenv
-----------------
cd <repo>
python3 -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt
** If 'x86_64-linux-gnu-gcc' is not found when running install, run
`sudo apt install --only-upgrade gcc`. For me, this happened on Ubuntu
21.10 when trying to pip install backports.zoneinfo==0.2.1.
Set up Postgres database
------------------------
Start postgres:
sudo systemctl start postgresql.service
Switch to postgres user:
sudo -i -u postgres
Enter PostgreSQL prompt (\q to quit). Enter the following queries to
create a database with a root user:
psql <- Enters prompt
CREATE DATABASE <your_db>;
CREATE USER <your_db_user> WITH PASSWORD '<your_password>';
ALTER ROLE <your_db_user> SET client_encoding TO 'utf8';
ALTER ROLE <your_db_user> SET default_transaction_isolation TO 'read committed';
ALTER ROLE <your_db_user> SET timezone TO '<your_TZ>';
GRANT ALL PRIVILEGES ON DATABASE <your_db> TO <your_db_user>;
ALTER DATABASE <your_db> OWNER TO <your_db_user>;
In `settings.py`, you will need to specify the database host, name, user,
port, and password. I prefer to set this information in environment
variables in a .env file as such (note that the host is localhost):
BLOG_DB_HOST="localhost"
BLOG_DB_NAME="djangoblogdb"
BLOG_DB_USER="djangoblog"
BLOG_DB_PORT=""
BLOG_DB_PASS="FDjf_i0sd12*;z"
Then my DATABASES dictionary in `settings.py` looks like this:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': getenv('BLOG_DB_NAME'),
'USER': getenv('BLOG_DB_USER'),
'PASSWORD': getenv('BLOG_DB_PASS'),
'HOST': getenv('BLOG_DB_HOST'),
'PORT': getenv('BLOG_DB_PORT')
}
}
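One caveat with getenv(): a missing variable silently becomes None, which only surfaces later as a confusing Postgres error at connect time. Below is a minimal fail-fast sketch; the env() helper and the stand-in values are my own illustration, not part of the project:

```python
from os import environ, getenv

# Illustrative stand-in values -- in production these come from the .env file.
environ.setdefault('BLOG_DB_NAME', 'djangoblogdb')
environ.setdefault('BLOG_DB_USER', 'djangoblog')
environ.setdefault('BLOG_DB_PASS', 'example-password')

# Hypothetical helper: raise a clear error immediately instead of letting a
# missing variable silently become None in the Django settings.
def env(name, default=None):
    value = getenv(name, default)
    if value is None:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': env('BLOG_DB_NAME'),
        'USER': env('BLOG_DB_USER'),
        'PASSWORD': env('BLOG_DB_PASS'),
        'HOST': env('BLOG_DB_HOST', 'localhost'),
        'PORT': env('BLOG_DB_PORT', ''),  # empty string = default Postgres port
    }
}
```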
Back up your database!
=====================
You can use the `pg_dump` command to export your database. E.g.,
sudo -i -u postgres
pg_dump -U <your_db_user> -W -F p -d <your_db> > <your_db>-<timestamp>.sql
Then if you ever need to restore your database, feed the dump to `psql`
(a plain-format dump like the one above is a SQL script; `pg_restore`
only works on custom- or tar-format dumps). E.g.,
sudo -i -u postgres
psql -U <your_db_user> -W -d <your_db> -f <your_db>-<timestamp>.sql
I prefer to pass the ON_ERROR_STOP option so the import stops if
postgres encounters an error:
psql -U <your_db_user> -W -d <your_db> --set ON_ERROR_STOP=on -f <your_db>-<timestamp>.sql
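To avoid typing the timestamp by hand, the dump filename and pg_dump invocation can be scripted. A sketch in Python (the database and role names are placeholders; run the printed command as the postgres user):

```python
from datetime import datetime
import shlex

db_name = "djangoblogdb"   # placeholder -- your database name
db_user = "djangoblog"     # placeholder -- your database user

# Timestamped output file, e.g. djangoblogdb-20231006-124500.sql
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
outfile = f"{db_name}-{stamp}.sql"

# Plain-format dump (-F p), prompting for the password (-W), written to outfile.
cmd = ["pg_dump", "-U", db_user, "-W", "-F", "p", "-d", db_name, "-f", outfile]
print(shlex.join(cmd))
```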
____________________________________________________
** NOTE ON ENVIRONMENT VARIABLES
Export environment vars -- it is convenient to use the `python-dotenv`
package: import it in settings.py, call `load_dotenv()` at the beginning
of settings.py, and store config vars in a .env file so that the
gunicorn service file is able to export the variables.
That is the way my project works, where .env is stored in the top level
of the repository, e.g.,
drwxrwxr-x 15 ryan ryan 4096 Sep 20 17:56 .
drwxr-x--- 6 ryan ryan 4096 Sep 19 15:07 ..
-rw-r--r-- 1 ryan ryan 695 Sep 20 17:56 .env <=== .env file here
drwxrwxr-x 8 ryan ryan 4096 Sep 20 17:34 .git
-rw-rw-r-- 1 ryan ryan 99 Sep 19 05:15 .gitignore
-rw-rw-r-- 1 ryan ryan 1066 Sep 19 04:40 LICENSE
-rw-rw-r-- 1 ryan ryan 48635 Sep 19 04:40 README.rst
drwxrwxr-x 9 ryan ryan 4096 Sep 19 04:40 S3
drwxrwxr-x 4 ryan ryan 4096 Sep 19 15:03 admin_pages
drwxrwxr-x 3 ryan ryan 4096 Sep 19 05:49 blog <=== settings.py file in here
-rw-rw-r-- 1 ryan ryan 75 Sep 19 06:20 certbot-command.txt
drwxrwxr-x 3 ryan ryan 4096 Sep 19 05:20 custom_decorators
drwxrwxr-x 5 ryan ryan 4096 Sep 19 04:40 custom_template_tags
-rw-rw-r-- 1 ryan ryan 4123 Sep 19 04:40 initial_setup.py
drwxrwxr-x 3 ryan ryan 4096 Sep 19 05:17 mail_subscription
-rw-rw-r-- 1 ryan ryan 660 Sep 19 04:40 manage.py
-rw-rw-r-- 1 ryan ryan 412 Sep 19 04:52 postgres_commands.txt
drwxrwxr-x 2 ryan ryan 4096 Sep 19 04:40 readme_images
-rw-rw-r-- 1 ryan ryan 1361 Sep 19 05:21 requirements.txt
-rwxrw---- 1 ryan ryan 348 Sep 19 04:40 runserver.sh
drwxrwxr-x 3 ryan ryan 4096 Sep 19 05:17 site_pages
drwxrwxr-x 3 ryan ryan 4096 Sep 19 05:17 site_pages_forms
drwxrwxr-x 6 ryan ryan 4096 Sep 19 04:40 templates
drwxrwxr-x 3 ryan ryan 4096 Sep 19 05:17 users
drwxrwxr-x 6 ryan ryan 4096 Sep 19 04:43 venv
Loading the .env in settings.py
================================
"""
Django settings for blog project.
Generated by 'django-admin startproject' using Django 3.1.7.
For more information on this file, see
https://docs.djangoproject.com/en/3.1/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/3.1/ref/settings/
"""
from dotenv import load_dotenv
from pathlib import Path
from os import getenv
load_dotenv()
...
____________________________________________________
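For intuition, load_dotenv() essentially reads KEY=VALUE lines from .env and exports any that are not already set. A rough stdlib-only approximation (not the real python-dotenv implementation, which handles many more edge cases):

```python
import os

def load_env_file(path=".env"):
    """Rough approximation of python-dotenv's load_dotenv()."""
    try:
        lines = open(path).read().splitlines()
    except FileNotFoundError:
        return
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        # Existing environment variables win, matching load_dotenv's default.
        os.environ.setdefault(key.strip(), value.strip().strip('"').strip("'"))
```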
Run Time!
---------
Make sure 0.0.0.0, the IP address, and the future domain name are in
ALLOWED_HOSTS.
Migrate the database:
python3 manage.py migrate
Run the setup script (this is a me thing -- it populates the DB and such)
UFW Rules
==========
sudo ufw enable
sudo ufw allow "OpenSSH"
sudo ufw allow 8000
python3 manage.py runserver 0.0.0.0:8000
-- Check by looking at http://<ip_address>:8000
gunicorn --bind 0.0.0.0:8000 <project_main_app>.wsgi
Configure Gunicorn Socket Service
---------------------------------
sudo nano /etc/systemd/system/gunicorn.service
Example (notice the EnvironmentFile):
[Unit]
Description=Gunicorn daemon
After=network.target
[Service]
User=ryan
Group=www-data
WorkingDirectory=/home/<user>/<project_name>
EnvironmentFile=/home/<user>/<project_name>/.env
ExecStart=/home/<user>/<project_name>/venv/bin/gunicorn \
--access-logfile - \
--workers 3 \
--bind unix:/run/gunicorn.sock \
<project_main_app>.wsgi:application
[Install]
WantedBy=multi-user.target
sudo systemctl enable gunicorn
sudo systemctl start gunicorn
systemctl status gunicorn
-- If it failed, run sudo journalctl -u gunicorn
Configure Nginx
---------------
sudo nano /etc/nginx/sites-available/<project-name>
-- Example:
server {
listen 80;
server_name 198.51.100.255;
location = /favicon.ico { access_log off; log_not_found off; }
location / {
include proxy_params;
proxy_pass http://unix:/run/gunicorn.sock;
}
}
Or if using static files storage:
server {
listen 80;
server_name 198.51.100.255;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/<user>/<example-django-site>;
}
location / {
include proxy_params;
proxy_pass http://unix:/run/gunicorn.sock;
}
}
sudo ln -s /etc/nginx/sites-available/<project_name> /etc/nginx/sites-enabled
sudo nginx -t
sudo systemctl restart nginx
sudo ufw delete allow 8000
sudo ufw allow 'Nginx Full'
____________________________________________________
** NOTE: If you get an Nginx 502 error, check the log
(sudo tail -F /var/log/nginx/error.log). If you see a
(13: Permission denied) error, make sure the directories along the
socket path allow global read and execute and that the socket file
allows read/write:
sudo chmod 0755 /run
Check the permissions with `namei -l`:
namei -l /run
f: /run
drwxr-xr-x root root /
drwxr-xr-x root root run
Restart the nginx service and see if that resolves it.
____________________________________________________
Change the Upload Size
========================
Nginx's default upload size limit is 1MB. This is fine sometimes. If your
website allows the upload of large media files, this will need to be
increased.
sudo nano /etc/nginx/nginx.conf
-- Edit the client_max_body_size to suit your use case:
http{
...
client_max_body_size 50M;
...
}
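Note that Django enforces its own request-size limits independently of Nginx; if you raise client_max_body_size, you may also need to raise these in settings.py. The values below are examples matching the 50M above, not settings from the project:

```python
# settings.py -- Django-side upload limits (both default to 2.5 MB).
# Requests larger than DATA_UPLOAD_MAX_MEMORY_SIZE are rejected by Django
# even if Nginx lets them through.
DATA_UPLOAD_MAX_MEMORY_SIZE = 50 * 1024 * 1024   # 50 MB, matching Nginx above

# Uploaded files larger than this are streamed to a temporary file on disk
# instead of being held in memory.
FILE_UPLOAD_MAX_MEMORY_SIZE = 10 * 1024 * 1024   # 10 MB
```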
Setup Domain
------------
Domain Registrar
================
Log into domain registrar site and go to DNS settings.
Set custom nameservers
(obviously use different ones if not deployed on DigitalOcean):
ns1.digitalocean.com
ns2.digitalocean.com
ns3.digitalocean.com
Server Host -- Add A records
============================
In DigitalOcean project, go to Create > Domains/DNS > Enter domain name
Create records:
A: Host: @, Direct to: <site IP address>
A: Host: www, Direct to <site IP address>
Edit Nginx Config
=================
sudo nano /etc/nginx/sites-available/<project-name>
-- Change server name from IP address to domain name.
-- Example: server_name example.com www.example.com;
sudo systemctl restart nginx
systemctl status gunicorn
systemctl status nginx
* NOTE: This might not work unless you run `sudo ufw allow 'Nginx HTTP'`
Secure with SSL
---------------
sudo nginx -t
sudo systemctl reload nginx
-- If necessary, make sure the full Nginx profile (HTTP + HTTPS) is allowed
and remove the plain-HTTP rule:
sudo ufw allow 'Nginx Full'
sudo ufw delete allow 'Nginx HTTP'
sudo certbot --nginx -d example.com -d www.example.com
Run through the setup.
Now visit the https version of your site in a web browser to make sure it
works.
* If there was a 404 error and the server failed the acme challenge, you
will want to check that your DNS nameservers have been configured for your
server provider and that you have set the necessary A records pointing
your domain variants to your server IP address. I would wait 20-30 minutes
after setting the A records, based on my experience.
Verify Certbot Auto-Renewal
===========================
sudo systemctl status certbot.timer
sudo certbot renew --dry-run
Secure SSH
----------
Change the Default Port
=======================
Change the default port! 22 is the default; set a different port because
22 is the first one an attacker would try.
Common ports:
20 tcp ftp-data
21 tcp ftp server
22 tcp ssh server
23 tcp telnet server
25 tcp email server
53 tcp/udp Domain name server
69 udp tftp server
80 tcp HTTP server
110 tcp/udp POP3 server
123 tcp/udp NTP server
443 tcp HTTPS server
sudo nano /etc/ssh/sshd_config
-- Change the line "Port 22" to your port.
* NOTE: To avoid using the "-p" argument with the ssh command on your local
machine, update the config file from earlier on the local machine.
Example local .ssh/config file:
===============================
Host 198.51.100.255
Hostname 198.51.100.255
Port 2539
User ryan
IdentityFile ~/.ssh/private_key_name
Host example.com
Hostname 198.51.100.255
Port 2539
User ryan
IdentityFile ~/.ssh/private_key_name
Update Firewall (UFW) Rules
===========================
sudo ufw allow <new_port>/tcp comment 'SSH port'
sudo ufw delete allow "OpenSSH"
sudo ufw status
Configure Fail2Ban
==================
cd /etc/fail2ban
sudo cp jail.conf jail.local
sudo nano jail.local
Set rules for SSH as part of the [DEFAULT] config.
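For example, a dedicated [sshd] jail in jail.local (instead of, or in addition to, the [DEFAULT] settings) might look like this. The numbers are illustrative, and the port must match the custom SSH port chosen earlier:

```ini
[sshd]
enabled  = true
# Must match the Port line in /etc/ssh/sshd_config
port     = 2539
maxretry = 5
findtime = 10m
bantime  = 1h
```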
sudo systemctl restart fail2ban
systemctl status fail2ban
Increase the SSH Timeout
------------------------
Servers apparently have a default timeout of about 15 minutes. This is
not good if you are running a major system upgrade. I have had an
upgrade interrupted, and it caused breakage that took my webserver down
until I cleaned up the mess and reinstalled everything.
sudo nano /etc/ssh/sshd_config
Find the (perhaps commented) lines `ClientAliveInterval` and `ClientAliveCountMax`.
The alive interval is the number of seconds that elapse before the host
sends a keep-alive request to your client. The alive count max specifies the
number of keep-alive requests the server will send that receive no response.
After the max is reached with no response, the connection terminates. I set mine as such:
ClientAliveInterval 600
ClientAliveCountMax 4
This means that every 600 seconds (10 minutes) a keep-alive message is sent to my SSH
client. If I am unresponsive for as long as 40 minutes (the time for all 4 keep-alive
messages to be sent), my connection is terminated. Finally, restart the ssh service to
apply your changes:
sudo systemctl restart ssh