Ubuntu 18.04 server setup

Preword

This manual covers the basic things I do when setting up an Ubuntu 18.04 server.
In general, it's not a mature production-ready setup, but it could become one if you know how to polish it.
I use it for my personal needs, e.g. file sharing, cloud file storage, serving static pages, running Python/Node apps, etc.

⚠️ I'm not a system administrator, so don't try it at home!

ToC

4. Firewall 🔥

👉 For the next steps I assume that you have a fresh Ubuntu 18.04 installed on your server with root access.

Must-have software:

sudo apt-get update
sudo apt-get upgrade
sudo apt dist-upgrade
sudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libffi-dev liblzma-dev libpq-dev python-psycopg2 python-openssl git nano libjpeg8-dev zlib1g-dev

You can read more about these commands with man apt-get.

Software that might be handy

sudo apt-get install mc
sudo apt-get install tmux

I highly recommend setting different themes for root and non-root users.
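For example (just one possible sketch), you could give root a red prompt in /root/.bashrc so a root session is immediately recognizable:

# in /root/.bashrc: a red prompt makes root sessions easy to spot
PS1='\[\033[01;31m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\# '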

.bashrc

Enhance the prompt to show the current git branch. Append this snippet to your ~/.bashrc:

# show git branch
parse_git_branch() {
 git branch 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/(\1)/'
}
if [ "$color_prompt" = yes ]; then
 PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[01;31m\] $(parse_git_branch)\[\033[00m\]\$ '
else
 PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w$(parse_git_branch)\$ '
fi

Then source it to apply the changes: source ~/.bashrc. IMO there is no sense in adding this to the root user's .bashrc.

It's not good practice to use the root user all the time, so in this section we'll create a normal user and grant it superuser permissions so it can execute commands with sudo.

Log in as root and execute:

adduser oleg

Then enter a new password and optionally fill in the other fields.
Of course you can use another name, no need to use mine 😉

📗 👇
# list all users
compgen -u

# list all groups
compgen -g

Grant sudo permissions to a newly created user:

usermod -aG sudo oleg

That's it. Now you can log in as oleg and do something like sudo _your_cmnd_.

Generate new SSH key:

ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

⚠️ Run this command on the machine you want to connect from. Most probably it's your laptop or PC.

Now copy the generated public key to your Ubuntu server (with ssh-copy-id, if you can 😉) or do it manually:

# copy your key
cat ~/.ssh/id_rsa.pub

Create a directory on server if necessary:

mkdir -p ~/.ssh

Create or modify the authorized_keys file in the ~/.ssh directory. You can add the contents of your id_rsa.pub file to the end of the authorized_keys file:

~/.ssh/authorized_keys
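For example, a rough sketch of the manual route (the key string below is a placeholder; paste the actual contents of your id_rsa.pub), plus the usual permissions:

# on the server, as the target user
echo "ssh-rsa AAAA...your_public_key... your_email@example.com" >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys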

If you’re using the root account to set up keys for a user account, it’s also important that the ~/.ssh directory belongs to the user and not to root:

chown -R oleg:oleg ~/.ssh

4. Firewall 🔥

The default firewall configuration tool for Ubuntu is ufw. Developed to ease iptables firewall configuration, ufw provides a user friendly way to create an IPv4 or IPv6 host-based firewall. By default UFW is disabled and sometimes it's even not preinstalled.

If ufw is not installed:

sudo apt install ufw

Different applications can register their profiles with UFW during installation. These profiles allow ufw to manage applications by a user-friendly name.

List all profiles:

sudo ufw app list
Available applications:
  ...
  OpenSSH
  ...

OpenSSH is a connectivity tool for remote login with the SSH protocol and it has a profile registered with ufw.

You can get a ufw firewall status:

sudo ufw status

It should be inactive by default.
⚠️ Don't activate it before you allow SSH connections!

Now it's time to allow OpenSSH, so we can SSH to our remote server:

sudo ufw allow OpenSSH

Activate/Deactivate:

sudo ufw enable
sudo ufw disable

Once it's active you can see a list of rules:

$ sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)

⚠️ If your Ubuntu server is a VPS running inside a container (e.g. OpenVZ), you might need to activate iptables in your container control panel. Some system features are controlled by the virtualization system, so iptables must be activated for your container before ufw can use it.

SFTP is available by default with no additional configuration on all servers that have SSH access enabled. It's secure and easy to use, but comes with a disadvantage: in a standard configuration, the SSH server grants file transfer access and terminal shell access to all users with an account on the system.

In some cases, you might want only certain users to be allowed file transfers and no SSH access. In this tutorial, we'll set up the SSH daemon to limit SFTP access to one directory, with no SSH access allowed, on a per-user basis.

Create a new user who will be granted only file transfer access to the server:

sudo adduser olegftp

Creating a Directory for File Transfers

In order to restrict SFTP access to one directory, we first have to make sure the directory complies with the SSH server's permissions requirements, which are very particular.

Specifically, the directory itself and all directories above it in the filesystem tree must be owned by root and not writable by anyone else. Consequently, it's not possible to simply give restricted access to a user's home directory because home directories are owned by the user, not root.

There are a number of ways to work around this ownership issue. In this tutorial, we'll create and use /var/sftp/uploads as the target upload directory. /var/sftp will be owned by root and will not be writable by other users; the subdirectory /var/sftp/uploads will be owned by olegftp, so that user will be able to upload files to it.

Create dirs:

sudo mkdir -p /var/sftp/uploads

Set the owner of /var/sftp to root:

sudo chown root:root /var/sftp

Give root write permissions, and other users only read and execute rights:

sudo chmod 755 /var/sftp

Change the ownership on the uploads directory to olegftp.

sudo chown olegftp:olegftp /var/sftp/uploads

OK, let's configure SSH.

Disallow terminal access for olegftp but allow file transfer access:

sudo nano /etc/ssh/sshd_config

and append the next snippet:

# apply the following settings only to the user olegftp
Match User olegftp
    # force the SSH server to run the SFTP server upon login, disallowing shell access
    ForceCommand internal-sftp
    PasswordAuthentication yes
    ChrootDirectory /var/sftp
    # disable tunneling, agent/TCP forwarding and X11 forwarding
    PermitTunnel no
    AllowAgentForwarding no
    AllowTcpForwarding no
    X11Forwarding no

Save it.

Apply new configuration:

sudo systemctl restart sshd

Now you can take your favorite SFTP client and connect to your server 📂
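To sanity-check the restriction (the host below is a placeholder), try both from your local machine; SFTP should work, while a regular SSH session for olegftp should not give you a shell:

# from your local machine
sftp olegftp@your_server_ip
ssh olegftp@your_server_ip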

Configure Nginx, SSL and DNS to serve your lovely static web site.

Nginx

No need to comment this part

sudo apt install nginx

Adjust our ufw firewall.
List the app profiles:

sudo ufw app list
Available applications:
  Nginx Full
  Nginx HTTP
  Nginx HTTPS
  OpenSSH

ℹ️ Nginx Full: opens both port 80 and 443

Let's open both:

sudo ufw allow 'Nginx Full'

See changes:

sudo ufw status

Check nginx status:

systemctl status nginx

If it works, you can access the page by IP and see the "Welcome to nginx!" message.

ℹ️ Some nginx management commands:
sudo systemctl enable nginx - make nginx start on server boot
sudo systemctl disable nginx - opposite of ⬆️
sudo systemctl stop nginx - stop the server
sudo systemctl start nginx - start the server
sudo systemctl restart nginx - restart the server
sudo systemctl reload nginx - reload configs
sudo nginx -t - test configs

First of all, let's delete the default "hello world" nginx site: just delete its symlink from /etc/nginx/sites-enabled.
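Assuming the stock symlink name (default) on Ubuntu, that would be:

sudo rm /etc/nginx/sites-enabled/default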

OK, let's imagine that you own damndomainname.com and you want your index.html to be served when you type it in the browser. What you need to do is go to your DNS management, face the horrible UI/UX, and set up an A record for your domain pointing to your server IP. That's it.

Your nginx config in the very simple form might look like this:

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name damndomainname.com;

    root /var/www/damndomainname/;
    index index.html index.htm;

    access_log /var/log/nginx/damndomainname.access.log;
    error_log /var/log/nginx/damndomainname.error.log;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }
}

The config is straightforward, no need to explain it. Save it under /etc/nginx/sites-available/damndomainname_com and symlink it to /etc/nginx/sites-enabled/damndomainname_com.
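A sketch of the symlink step, using the paths above:

sudo ln -s /etc/nginx/sites-available/damndomainname_com /etc/nginx/sites-enabled/damndomainname_com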

I know we have not created the root /var/www/damndomainname/ directory yet. Let's do it:

sudo mkdir /var/www/damndomainname/

Set the ownership of this folder:

sudo chown -R $USER:$USER /var/www/damndomainname/

Upload your index.html to /var/www/damndomainname/.
BTW, you can do it via SFTP.
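If you just want something to test with before uploading a real page, a throwaway placeholder (hypothetical content) will do:

echo '<h1>Hello from damndomainname.com</h1>' > /var/www/damndomainname/index.html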

Validate and reload configs:

sudo nginx -t
sudo systemctl reload nginx

Now you can access it.

HTTPS

Install TLS/SSL certificates to enable HTTPS. Let's Encrypt is a Certificate Authority (CA) that provides an easy way to obtain and install free TLS/SSL certificates, thereby enabling encrypted HTTPS on web servers. It simplifies the process by providing a software client, Certbot, that attempts to automate most (if not all) of the required steps. Currently, the entire process of obtaining and installing a certificate is fully automated on both Apache and Nginx.

In this tutorial, you will use Certbot to obtain a free SSL certificate for Nginx on Ubuntu 18.04 and set up your certificate to renew automatically.

Installing Certbot

First of all, we need to install the add-apt-repository command-line utility for adding PPAs (Personal Package Archives):

sudo apt-get install -y software-properties-common

Add repository:

sudo add-apt-repository ppa:certbot/certbot

Install Certbot Nginx package:

sudo apt install python-certbot-nginx

⚠️ You might need to run sudo apt-get update before executing this command.

Configure nginx

Certbot needs to be able to find the correct server block in your Nginx configuration for it to be able to automatically configure SSL. Specifically, it does this by looking for a server_name directive that matches the domain you request a certificate for. In our case we have already configured it in previous steps.

Allow HTTPS in UFW

We actually did this before, but check that you have Nginx Full allowed:

sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
...
Nginx Full                 ALLOW       Anywhere
Nginx Full (v6)            ALLOW       Anywhere (v6)
...

Otherwise:

sudo ufw allow 'Nginx Full'

Obtaining an SSL Certificate

📗 👇

# read it
certbot --help

The Nginx plugin will take care of reconfiguring Nginx and reloading the config whenever necessary:

sudo certbot --nginx -d damndomainname.com

certbot will communicate with the Let's Encrypt server, then run a challenge to verify that you control the domain you're requesting a certificate for.

That's it. Now your nginx config is updated.

Let's Encrypt's certificates are only valid for ninety days. Certbot takes care of this for us by adding a renewal script to /etc/cron.d. This script runs twice a day and will automatically renew any certificate that's within thirty days of expiration.

To test the renewal process, you can do a dry run with certbot:

sudo certbot renew --dry-run

Time zone

To check your current timezone, run:

date
Sat Aug 10 12:59:01 UTC 2019

UTC is the default time zone. To avoid timezone confusion and the complexities of adjusting clocks for daylight saving time in accordance with regional custom, it is often recommended that all servers use UTC.

If your timezone is not UTC it's better to change it to UTC:

sudo timedatectl set-timezone UTC
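To verify the change, or to pick another zone if you really want one:

# verify current settings
timedatectl
# list available timezones
timedatectl list-timezones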

Run a Python Django/Celery application

PostgreSQL

Install

To install PostgreSQL:

sudo apt-get install postgresql postgresql-contrib

In my case version 10.10 is installed.

Login

📗 👇

By default, Postgres uses a concept called "roles" to handle authentication and authorization. These are, in some ways, similar to regular Unix-style accounts, but Postgres does not distinguish between users and groups and instead prefers the more flexible term "role".
Upon installation, Postgres is set up to use ident authentication, meaning that it associates Postgres roles with a matching Unix/Linux system account. If a role exists within Postgres, a Unix/Linux username with the same name is able to sign in as that role.
The installation procedure created a user account called postgres that is associated with the default Postgres role. In order to use Postgres, you can log into that account.

Run psql using the postgres user:

sudo -u postgres psql

ℹ️ Type \q to exit from the interactive Postgres session.

Create a new database

Start postgres session:

sudo -u postgres psql

I think it's clear and there is no need to explain what is going on here 👇

postgres=# CREATE DATABASE api;
CREATE DATABASE
postgres=# CREATE USER api_user WITH password 'yourpswd';
CREATE ROLE
postgres=# GRANT ALL ON DATABASE api TO api_user;
GRANT
postgres=# \q

Allow remote connections to PostgreSQL

Change listen_addresses from default localhost to *

sudo nano /etc/postgresql/10/main/postgresql.conf
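The relevant line (commented out by default) should end up looking like this:

# in /etc/postgresql/10/main/postgresql.conf
listen_addresses = '*'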

Allow access to all databases for all users.

sudo nano /etc/postgresql/10/main/pg_hba.conf

Change config to:

# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only
local   all             all                                     peer
# IPv4 local connections:
host    all             all             all                     md5
# IPv6 local connections:
host    all             all             all                     md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                                     peer
host    replication     all             127.0.0.1/32            md5
host    replication     all             ::1/128                 md5

You need to change the address for the IPv4 and IPv6 lines to all.

Restart PostgreSQL server:

sudo systemctl restart postgresql

Adjust ufw to allow port 5432:

sudo ufw allow 5432/tcp
Rule added
Rule added (v6)

sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
Nginx Full                 ALLOW       Anywhere
5432/tcp                   ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
Nginx Full (v6)            ALLOW       Anywhere (v6)
5432/tcp (v6)              ALLOW       Anywhere (v6)

That's it, now you can connect to PostgreSQL from the "outside".
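For example, from your local machine (the IP is a placeholder, and the psql client must be installed there):

psql -h your_server_ip -p 5432 -U api_user -d api
# it will prompt for the password set earlier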

RabbitMQ

Why do we need it? Celery uses RabbitMQ as its message broker.

Install

Install RabbitMQ server:

sudo apt-get install rabbitmq-server

Enable management plugin with web gui:

sudo rabbitmq-plugins enable rabbitmq_management

TODO: describe how to serve the web console using nginx.
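Until that's described, a quick-and-dirty alternative is to open the management UI port in ufw directly (15672 is the plugin's default port; a sketch, not a production recommendation):

sudo ufw allow 15672/tcp
# then browse to http://your_server_ip:15672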

Create a vhost and a user, and tag the user with administrator permissions:

sudo rabbitmqctl add_vhost api
sudo rabbitmqctl add_user api api
# the three ".*" patterns grant configure/write/read permissions on the vhost
sudo rabbitmqctl set_permissions -p api api ".*" ".*" ".*"
sudo rabbitmqctl set_user_tags api administrator
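With the vhost and user created above, the Celery broker URL would look roughly like this (shown as an environment variable; adapt it to however your app reads its settings):

export CELERY_BROKER_URL="amqp://api:api@localhost:5672/api"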

Redis

https://github.com/martinrusev/django-redis-sessions http://redis.io/

Install

sudo apt-get install redis-server
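A quick check that it's up:

redis-cli ping
# expected reply: PONG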

Python

Before going almost anywhere with python I do this:

sudo apt-get install python3-dev python3-pip python3-virtualenv

Clone your project

I assume you can find something to clone. In this example we run a django-rest-framework app, and I will clone it from GitHub.

cd /var/www
git clone https://github.com/username_is_here/some-django-app.git

⚠️ Don't even think about cloning your repo into /var/www for production instances. Put it into the /home/ or /opt directories.

📗 👇
# if your GitHub repo is private
git clone https://username_is_here:psswd@github.com/username_is_here/some-django-app.git

pyenv

good place to start with pyenv

The default Python on 18.04 is Python 3.6.8, and I will run my app on 3.7.

install pyenv:

curl -L https://raw.githubusercontent.com/pyenv/pyenv-installer/master/bin/pyenv-installer | bash

Append the next snippet to your ~/.bashrc profile:

export PATH="/home/oleg/.pyenv/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"

It will load pyenv automatically.

Reload:

source ~/.bashrc
📗 👇
pyenv --help
pyenv commands

Let's install python 3.7:

pyenv install 3.7.0

Check python versions:

pyenv versions
* system (set by /home/oleg/.pyenv/version)
  3.7.0

Let's call our project api. Create a virtualenv using pyenv:

pyenv virtualenv 3.7.0 api

List your venvs:

pyenv virtualenvs

Activate it:

pyenv local api
📗 👇

We just used the pyenv local command, but instead of specifying a Python version, we specified an environment. This creates a .python-version file in your current working directory, and because you ran eval "$(pyenv virtualenv-init -)" in your environment, the environment will be activated automatically.

If you run pyenv which python inside the project folder (where .python-version lives) and outside of it, you will see that different Pythons are used.
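For instance, inside the project folder (paths will differ on your machine):

cat .python-version
# api
pyenv which python
# /home/oleg/.pyenv/versions/api/bin/python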

Install Python requirements

python-psycopg2 must be installed. I believe your Python project has a requirements.txt file:

pip install -r requirements.txt

Here I'm not going to explain how to proceed with Django, what manage.py commands to run and so on; Django has gorgeous documentation. In general you might need to run migrate, createsuperuser and collectstatic, but it depends on your app. The idea is to show how to run a WSGI app, so you can run Flask, Falcon or any other WSGI app the same way.

For this step I assume that your WSGI app is ready to go.

Nginx

Using Nginx to serve static files and proxy other requests to another server is a common approach. Let's say our API will be available under api.damndomainname.com, so all requests to api.damndomainname.com/static/* or api.damndomainname.com/media/* will be handled by Nginx, while other requests, e.g. api.damndomainname.com/admin/, will be proxied to the WSGI server.

There are multiple WSGI servers you can use. I personally prefer uwsgi and gunicorn. I also used nginx + circus + chaussette, but after some time I noticed that circus is not as reliable as supervisor, though the idea behind it is interesting.
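To make the idea concrete, here is a minimal sketch of such a server block. It assumes static files are collected into /var/www/api/static/, media lives in /var/www/api/media/, and a WSGI server (e.g. gunicorn) listens on 127.0.0.1:8000; all of these paths and the port are assumptions, adjust them to your app:

server {
    listen 80;
    server_name api.damndomainname.com;

    access_log /var/log/nginx/api.damndomainname.access.log;
    error_log /var/log/nginx/api.damndomainname.error.log;

    # static and media files are served by nginx directly
    location /static/ {
        alias /var/www/api/static/;
    }
    location /media/ {
        alias /var/www/api/media/;
    }

    # everything else goes to the WSGI server
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:8000;
    }
}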

HTTPS: https://raturi.in/blog/how-to-implement-https-django-nginx-ubuntu-letsencrypt-certbot/

sudo apt-get install supervisor

TODO: tell about the web interface and nginx config (https://codesamplez.com/management/supervisord-web-interface-and-plugin).

sudo service supervisor restart
sudo supervisorctl reread
sudo supervisorctl update
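As a rough illustration of how it ties together, a supervisor program definition for the app could look something like this (every path, module name and port here is an assumption, and gunicorn is expected to be in your requirements; a minimal sketch, not a drop-in config):

; hypothetical /etc/supervisor/conf.d/api.conf
[program:api]
; run gunicorn from the pyenv virtualenv created earlier
command=/home/oleg/.pyenv/versions/api/bin/gunicorn yourproject.wsgi:application --bind 127.0.0.1:8000
directory=/var/www/some-django-app
user=oleg
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/api.out.log
stderr_logfile=/var/log/supervisor/api.err.log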
