Ubuntu 16.04 Server Setup Reference Guide


User Setup

If this has not already been done, you'll need to set up users and allow SSH access to the server for each user.

Log onto the server as root, create a new user and give them sudo privileges using these commands.

adduser ${username}
usermod -aG sudo ${username}
gpasswd -a ${username} www-data

Replace ${username} with the desired username. These commands create the account, grant it sudo privileges, and add it to the www-data group used by the web server.

Test it out and make sure you can log into it.

su - ${username}

SSH Keys

Adding an SSH key is optional, but is very helpful when connecting to the server remotely. It adds a layer of security and also allows approved users to log in without requiring the use of a password.

Make sure that SSH is installed on the server, and also the client machine. If you need to install SSH on the server, update your repositories and install the openssh-server package.

sudo apt-get update
sudo apt-get install -y openssh-server

Note: If you are behind a proxy, you will need to set up the environment variables and the SSL certificates before reaching out to the network.

I will not go over how to install SSH on the client machines; there are lots of different preferred ways to do so. Suffice it to say that I connect from macOS, where SSH comes installed by default.

On the server, you'll need to set up who can and cannot connect via SSH. Open up the /etc/ssh/sshd_config file for editing.

sudo nano /etc/ssh/sshd_config

You'll want to change the following lines to reflect this:

PermitRootLogin no
AllowUsers ${username}
PasswordAuthentication yes

Note: Additional usernames can be added, separated by spaces.
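For instance, to allow several accounts at once (alice, bob, and deploy are placeholder names here):

```
AllowUsers alice bob deploy
```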

Save and exit, then restart the service.

sudo systemctl restart sshd

These settings will allow the given user to log in while preventing root from doing so, which increases security. Password authentication will be turned off once we have our SSH keys in place.

On the client machine, you need to generate an SSH key.

Already have an SSH key? You can skip generating an SSH key and proceed with placing it on the server.

ssh-keygen -t rsa

You'll be prompted to fill in a few things, just follow along and everything will be generated. The end result will look something like this.

Generating public/private rsa key pair.
Enter file in which to save the key (/home/${client-machine-username}/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/${client-machine-username}/.ssh/id_rsa.
Your public key has been saved in /home/${client-machine-username}/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:KKgxptphfNUx5DWLSde1aqQTGb+LuqcTK1kwXjpoJKE ${username}@${machine-name}
The key's randomart image is:
+---[RSA 2048]----+
|        o =. ..  |
|  .    + = *.  . |
| . .    * + o .  |
|E ... oo.o + o   |
|o..o.oo=S o +    |
|o=  oo+ o  + .   |
|o +..  + o. .    |
|.o o  o o..      |
|. .    .+=       |
+----[SHA256]-----+

The defaults are perfectly fine; don't worry too much about paths and such.

Now that the key has been generated, we'll need to place it on the server so it will recognize the client machine when it tries to connect. This can be done by using ssh-copy-id from the client machine.

ssh-copy-id ${server-username}@${server-ip-address}

After a few lines and a password prompt, you'll have your SSH key installed on the server. Try it out!

ssh ${server-username}@${server-ip-address}

If you can connect, then everything is good to go. Let's go back into the /etc/ssh/sshd_config file and turn off PasswordAuthentication, which will only allow those who have set up SSH keys to have access.

sudo nano /etc/ssh/sshd_config
PasswordAuthentication no

Restart the service once more and that's it!

sudo systemctl restart sshd

Environment Variables

You'll need to set up the environment variables so that you can get through the proxy. Open the environment file and paste in the following lines of code.

sudo nano /etc/environment
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/user/.composer/vendor/bin"
http_proxy='http://your_proxy_server:proxy_port/'
HTTP_PROXY='http://your_proxy_server:proxy_port/'
https_proxy='http://your_proxy_server:proxy_port/'
HTTPS_PROXY='http://your_proxy_server:proxy_port/'
ftp_proxy='http://your_proxy_server:proxy_port/'
FTP_PROXY='http://your_proxy_server:proxy_port/'
socks_proxy='socks://your_proxy_server:proxy_port/'
SOCKS_PROXY='socks://your_proxy_server:proxy_port/'
HTTP_PROXY_REQUEST_FULLURI=false
HTTPS_PROXY_REQUEST_FULLURI=false

This will set all of the variables system wide for a good majority of programs to use.
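Changes to /etc/environment are read at login, so log out and back in before checking. A small sketch to confirm the variables are visible (the show_proxy_env helper and grep pattern are mine, not part of the file):

```shell
# List any proxy-related variables currently exported in this shell.
show_proxy_env() {
  env | grep -iE '^(https?|ftp|socks)_proxy=' || echo "no proxy variables set"
}

show_proxy_env
```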

Next, we'll configure sudo to keep these variables intact rather than resetting them. Run visudo and place the following just after the line that reads Defaults env_reset.

sudo visudo
Defaults        env_keep += "http_proxy https_proxy HTTP_PROXY HTTPS_PROXY ftp_proxy DISPLAY XAUTHORITY"

Certificate Installation

Note: Before doing this, please make sure all certificate chains are complete and up to date, as an incorrect chain will cause all traffic associated with that domain to be rejected. Be sure to check the entire chain; individual certificates do not always list the root or intermediate authorities.

You'll first need to create the folders that are needed for this process.

mkdir ~/certs
sudo mkdir -p /media/usb /usr/share/ca-certificates/custom-certs /usr/local/share/ca-certificates/custom-certs

Now it is time to copy the certificates to the server. For this, you'll need to plug a flash drive loaded with the certificate files into the server. Use fdisk to list the storage devices attached to the server.

sudo fdisk -l

A ton of information may be dumped out to the screen. What we're looking for here is the flash drive's partition.

Here's an example:

Disk /dev/sda: 546.8 GiB, 587128266752 bytes, 1146734896 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xd861dc64

Device     Boot      Start        End    Sectors   Size Id Type
/dev/sda1  *          2048 1079648255 1079646208 514.8G 83 Linux
/dev/sda2       1079650302 1146734591   67084290    32G  5 Extended
/dev/sda5       1079650304 1146734591   67084288    32G 82 Linux swap / Solaris


Disk /dev/sdc: 14.9 GiB, 16004415488 bytes, 31258624 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xc3072e18

Device     Boot Start      End  Sectors  Size Id Type
/dev/sdc1          32 31258623 31258592 14.9G  c W95 FAT32 (LBA) <-- This is the flash drive partition

In this case, the partition is /dev/sdc1.

Once you have found the partition, run this command to mount the device on the server.

sudo mount /dev/sdc1 /media/usb

We can now copy the files to the proper locations.

cp -ar /media/usb/certs ~
sudo chmod -R 777 ~/certs
sudo cp -a ~/certs/crt /usr/local/share/ca-certificates/custom-certs
sudo cp -a ~/certs/crt /usr/share/ca-certificates/custom-certs

Placing the certificates into these folders will allow the system to pick up the new entries when reconfiguring the ca-certificates package. Do so with the following command.

sudo dpkg-reconfigure ca-certificates

Proceed with the prompts and select all of the new certificates that you have added. Once this completes, you'll need to run these commands to link everything together properly.

sudo update-ca-certificates --fresh
sudo c_rehash

Everything should be good to go at this point. Try to curl both an HTTP and an HTTPS URL to make sure the requests go through and return a response.
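A small helper makes the check quick to repeat; check_url is my own name for it, and example.com stands in for whatever sites your proxy allows:

```shell
# Quick reachability check: print only the HTTP status code for a URL.
# ("000" means the request never completed: proxy, DNS, or TLS trouble.)
check_url() {
  curl -s -o /dev/null -w '%{http_code}' "$1" || true
}

# On the server, try one of each scheme:
#   check_url "http://example.com"    # expect 200 or a 3xx redirect
#   check_url "https://example.com"   # a certificate problem shows up as 000
```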

Web server setup

Following the completion of the certificate installation, we are now ready to install a few different tools and applications.

Beforehand though, you'll want to make sure the system is up to date.

sudo apt-get update
sudo apt-get upgrade

NGINX

NGINX is one of the options available for hosting web pages, among its other uses.

sudo apt-get install -y nginx

Once installed, we can make sure NGINX is running by checking systemctl.

systemctl status nginx

This will output the status of the service, which should look something like this:

● nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2017-06-23 03:53:59 UTC; 2min 16s ago
 Main PID: 3345 (nginx)
   CGroup: /system.slice/nginx.service
           ├─3345 nginx: master process /usr/sbin/nginx -g daemon on; master_process on
           └─3346 nginx: worker process

If everything went according to plan, you should be able to visit your server's IP address in a browser and see a "Welcome to nginx!" page.

Important NGINX paths

  • /var/www/html: The actual web content, which by default only consists of the default Nginx page you saw earlier, is served out of the /var/www/html directory. This can be changed by altering Nginx configuration files.
  • /etc/nginx: The nginx configuration directory. All of the Nginx configuration files reside here.
  • /etc/nginx/nginx.conf: The main Nginx configuration file. This can be modified to make changes to the Nginx global configuration.
  • /etc/nginx/sites-available/: The directory where per-site "server blocks" can be stored. Nginx will not use the configuration files found in this directory unless they are linked to the sites-enabled directory (see below). Typically, all server block configuration is done in this directory, and then enabled by linking to the other directory.
  • /etc/nginx/sites-enabled/: The directory where enabled per-site "server blocks" are stored. Typically, these are created by linking to configuration files found in the sites-available directory.
  • /etc/nginx/snippets: This directory contains configuration fragments that can be included elsewhere in the Nginx configuration. Potentially repeatable configuration segments are good candidates for refactoring into snippets.
  • /var/log/nginx/access.log: Every request to your web server is recorded in this log file unless Nginx is configured to do otherwise.
  • /var/log/nginx/error.log: Any Nginx errors will be recorded in this log.

Note: Configuring the NGINX server blocks will be done once the rest of the server has been set up.

PHP

At the time of writing, the most recent version of PHP is 7.1. You'll notice that the default repositories only provide PHP 7.0. We'll fix this by adding a third party repository to the list.

sudo add-apt-repository -y ppa:ondrej/php
sudo apt-get update

Now you'll have access to the latest version of PHP. Here are the packages you'll want to install.

sudo apt-get install -y php7.1 php7.1-cli php7.1-fpm php7.1-curl php7.1-mysql php7.1-sqlite3 php7.1-gd php7.1-xml php7.1-json php7.1-mcrypt php7.1-mbstring php7.1-iconv php7.1-zip

After a moment or two you'll have everything installed on your system. Don't forget to set the proper timezone in php.ini. This has caused problems for me in the past, so I make sure to set it every time I install PHP.

To find which php.ini file your system is using, check the php command.

php -i | grep 'php.ini'

You should receive something similar to the following output:

Configuration File (php.ini) Path => /etc/php/7.1/cli
Loaded Configuration File => /etc/php/7.1/cli/php.ini

In this case /etc/php/7.1/cli/php.ini is the path to my configuration file. Run this command, with the proper timezone mind you, to set everything up. Note that PHP-FPM reads its own php.ini (here, /etc/php/7.1/fpm/php.ini), so set the timezone there as well if you're serving web requests.

echo 'date.timezone = America/Kentucky/Louisville' | sudo tee -a /etc/php/7.1/cli/php.ini

MySQL

MySQL is an easy one.

sudo apt-get install -y mysql-server

This will get you the most recent version of MySQL available for your system. Once this has completed installing, you'll want to run the secure installation to remove the test databases and disallow root login remotely.

mysql_secure_installation

Follow the prompts and you'll be on your way to a quick setup.

Configuring NGINX

Let's walk through an example of how to set up NGINX server blocks. We'll do this for single application configurations, as well as multiple application configurations.

Here's what a default server block looks like with the comments stripped out:

server {
   listen 80 default_server;
   listen [::]:80 default_server;

   root /var/www/html;
        
   index index.html index.htm index.nginx-debian.html;

   server_name _;
   
   location / {
      try_files $uri $uri/ =404;
   }
}

There are a couple different things going on here. Let's break everything down.

  • server: This is the base element that will define the way your server is constructed.
  • listen: Your server will need to know what port to listen on; here is where you define that. The default is port 80, and you generally won't have a reason to change this.
  • root: This is where your local files are stored. Point this to a directory of your choosing.
  • index: A list, separated by spaces, of the files to try and load, and in what order.
  • server_name: The address that will trigger this server block.
  • location: This block will tell the server what to do when accessing the file found from the index section.

Note: Notice the default_server flag. This will define which server block to serve by default. Do not place this on more than one block at a time.

Single Application Configuration

Static

NGINX does a pretty good job setting you up with a base template. If we wanted to host a static page, we'd only have to change a few things here.

Say we have a static HTML page we would like to host. Let's call it example_static. Assume we have the files already placed in /var/www/example_static, that our domain name is www.example.com, and our IP is 192.168.10.10. The only things we would have to change on the default server block are the root and the server_name. Create a new file in /etc/nginx/sites-available/ and drop the changes to the server block configuration.

sudo touch /etc/nginx/sites-available/example_static
sudo nano /etc/nginx/sites-available/example_static
server {
  listen 80 default_server;
  listen [::]:80 default_server;

  root /var/www/example_static;

  index index.html index.htm index.nginx-debian.html;

  server_name example.com www.example.com 192.168.10.10;

  location / {
  	try_files $uri $uri/ =404;
  }
}

Before the changes take effect, we'll need to enable the server block, test the configuration, and then restart the NGINX service.

sudo ln -s /etc/nginx/sites-available/example_static /etc/nginx/sites-enabled
sudo nginx -t
sudo systemctl restart nginx

Note: Be sure to disable the default NGINX server block. This can be done by deleting the link in /etc/nginx/sites-enabled.

If you have replaced the example information with your actual information, you should be able to navigate to your domain or IP and see your content.

PHP

What if we want to host a PHP project? This requires some additional settings on the server block.

For this example, let's again assume that we have the files placed in /var/www/example_dynamic, that our domain name is www.example.com, and our IP is 192.168.10.10. Our index.php is located in the /var/www/example_dynamic/public folder and we'll have to account for that as well.

Add a new server block to the /etc/nginx/sites-available directory, and configure it accordingly.

sudo touch /etc/nginx/sites-available/example_dynamic
sudo nano /etc/nginx/sites-available/example_dynamic
server {
   listen 80 default_server;
   listen [::]:80 default_server;
   
   root /var/www/example_dynamic/public;

   server_name example.com www.example.com 192.168.10.10;

   index index.php index.html index.htm index.nginx-debian.html;

   location / {
      try_files $uri $uri/ /index.php$is_args$args;
   }

   location ~ \.php$ {
     try_files $uri =404;
     fastcgi_index index.php;
     fastcgi_split_path_info ^(.+\.php)(/.+)$;
     fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
     fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
     include fastcgi_params;
   }
}

This looks a bit different from our last configuration, but not by too much.

  • root: We've changed this to the proper path for our PHP project.
  • index: Prepended here is index.php. This is to ensure that the file is loaded before any other files are checked.
  • location /: Requests that don't resolve to a file fall through to /index.php$is_args$args, which hands the request, along with its query string, to PHP without redirecting away from the server block.

There's a lot going on in the new location ~ \.php$ section, so let's go over that in its own list.

  • try_files: The same directive as in the previous block, though fewer fallbacks are needed here since PHP handles a good chunk of them.
  • fastcgi_index: Fairly straightforward, this is the name of the index file to pass to PHP.
  • fastcgi_split_path_info: Not quite as obvious; this splits the request URI into the script name and any trailing path info, so URLs like /index.php/foo are routed correctly.
  • fastcgi_pass: Here is where the magic happens. You'll want to point this to the unix socket that PHP is running on. In my case, and most others, it is set to unix:/var/run/php/php7.1-fpm.sock.
  • fastcgi_param and include: These pass the full script path and the standard set of CGI variables through to PHP.

As with the last server block, once you've saved the file you'll need to enable it by linking it to the /etc/nginx/sites-enabled directory. Test the configuration and restart NGINX.

sudo ln -s /etc/nginx/sites-available/example_dynamic /etc/nginx/sites-enabled
sudo nginx -t
sudo systemctl restart nginx

Multiple Application Configuration

Setting up multiple application configurations can be a bit tricky sometimes. For static pages it's a breeze, but for PHP applications such as Laravel projects, it's a bit confusing at first.

Static

Let's assume that we have two applications we want to host. The files will be stored under /var/www/first and /var/www/second. We want the first application to be on the root of the domain, while the second to be accessed via the /second route.

Create a server block for the configuration and drop the configuration into it.

sudo touch /etc/nginx/sites-available/multiple_static
sudo nano /etc/nginx/sites-available/multiple_static
server {
  listen 80 default_server;
  listen [::]:80 default_server;

  root /var/www/first;

  index index.html index.htm index.nginx-debian.html;

  server_name example.com www.example.com 192.168.10.10;

  location / {
  	try_files $uri $uri/ =404;
  }
  
  location /second {
  	alias /var/www/second;
  	try_files $uri $uri/ =404;
  }
}

You'll notice that not very much has changed from the default server block. We've pointed the root to the directory of the application we want hosted at the root of our domain. However, we've also added a new location /second block. This new block will be accessed whenever /second is detected in the URL request. Setting the alias to the location of the second application will allow NGINX to search that directory for one of the index files.

Save the file, enable and test the server block, then restart NGINX.

sudo ln -s /etc/nginx/sites-available/multiple_static /etc/nginx/sites-enabled
sudo nginx -t
sudo systemctl restart nginx

After a successful restart, you'll be able to see the first project at the root of the domain, and the second on the /second route.

PHP

This one takes a bit of trickery to get to work properly. We have to pass a URL rewrite to get NGINX to process the full request properly. Let's look at an example.

Using the stipulations from the static example again, we've got two applications that we'd like to host. One is located at /var/www/first, and the second at /var/www/second. Since this is a PHP example, each application's index.php will be served from its /public folder.

Spin up a new server block and paste in the configuration.

sudo touch /etc/nginx/sites-available/multiple_dynamic
sudo nano /etc/nginx/sites-available/multiple_dynamic
server {
    listen 80;
    listen [::]:80;

    root /var/www/first/public;

    server_name example.com www.example.com 192.168.10.10;

    index index.php index.html index.htm index.nginx-debian.html;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }
    
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_index index.php;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
   }
    
   location @second {
        rewrite ^/second/(.*)$ /second/index.php?/$1 last;
   }

   location ^~ /second {
        alias /var/www/second/public;
        try_files $uri $uri/ @second;

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            fastcgi_index index.php;
            include fastcgi_params;
        }
   }
}

Our first application is set up the same way as it was in the single application example, but our second application looks a bit different. The location ^~ /second block is aliased the same way as in the static example; however, under try_files we have a new parameter, @second. This calls the location @second block, which rewrites the URL so it can be properly parsed by NGINX. I won't go into detail about how the rewrite works; explaining regex is a beast in itself. Lastly, we have a location ~ \.php$ block inside the aliased application. This is required due to the rewritten nature of the request.
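To make the rewrite less mysterious, here is the same transformation mimicked with sed (purely illustrative; nginx performs this internally, and rewrite_second is just a name I made up):

```shell
# rewrite ^/second/(.*)$ /second/index.php?/$1 last;
# maps a request path onto the front controller while keeping the rest of the URI:
rewrite_second() {
  printf '%s\n' "$1" | sed -E 's|^/second/(.*)$|/second/index.php?/\1|'
}

rewrite_second "/second/users/42"   # -> /second/index.php?/users/42
```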

As always, serve the application and bask in the success!

sudo ln -s /etc/nginx/sites-available/multiple_dynamic /etc/nginx/sites-enabled
sudo nginx -t
sudo systemctl restart nginx

Misc Setup

Setting the time and date

This is an important step that is overlooked most of the time. In some cases you may need to set up time synchronization to a specific server. Let's go over how to set that up.

Ubuntu 16.04 ships with timedatectl as the default time management application. This is fine in most cases and does quite a good job at handling everything. You can check the status of timedatectl simply by issuing the command in terminal.

timedatectl
      Local time: Fri 2017-06-23 14:47:21 EDT
  Universal time: Fri 2017-06-23 18:47:21 UTC
        RTC time: Fri 2017-06-23 18:48:51
       Time zone: US/Eastern (EDT, -0400)
 Network time on: yes
NTP synchronized: yes
 RTC in local TZ: no

Out of the box, this is what the command should output. Most of the servers I've spun up have been configured with UTC as the timezone, so you may need to change this to your current locale. Do so by calling the set-timezone argument.

sudo timedatectl set-timezone America/Kentucky/Louisville

Note: If you need to look up the available timezones, you can pass the list-timezones argument.

How do we tell timedatectl to sync to a specific server though? Luckily, timedatectl has a fairly straightforward way of configuring timeservers. You'll want to edit the /etc/systemd/timesyncd.conf file and specify the NTP servers you'd like to connect to.

sudo nano /etc/systemd/timesyncd.conf
[Time]
NTP=hostname or IP
#FallbackNTP=ntp.ubuntu.com

Note: You can set multiple NTP servers by separating them by spaces.

After you've set any NTP servers you'd like to sync to, simply restart the timesyncd service.

sudo systemctl restart systemd-timesyncd

The time should now be synced properly!

Permissions

Once you deploy an application, you'll want to change the permissions to prevent malicious intent by users who can access these directories. Ideally the owner of these files should be the web server itself, and it should be set so that no other users have the innate ability to write or execute anything unless explicitly defined. Let's walk through an example Laravel application for this, although the same principles apply for other types of applications as well.

After dropping a fresh project onto the server, the permissions should be relatively intact. You'll want 775 for folders, and 664 for individual files.

  • 775: The owner and group may read, write, and enter (execute) the folder. All others may only read and enter it.
  • 664: The owner and group may read and write the file, while all others may only read it.

Here's an example of a fresh project set up on the server:

user@server:/var/www$ ls -la
total 12
drwxrwxrwx  3 root          root          4096 Jun 26 12:34 .
drwxr-xr-x 14 root          root          4096 May 23 16:00 ..
drwxrwxr-x 12 user          user          4096 Jun 26 12:35 project
user@server:/var/www/project$ ls -la
total 416
drwxrwxr-x 12 root			root		    4096 Jun 26 12:35 .
drwxrwxrwx  3 root			root		    4096 Jun 26 12:35 ..
drwxrwxr-x  6 user			user		    4096 Jun 26 12:34 app
-rw-rw-r--  1 user			user		    1646 Jun 26 12:34 artisan
drwxrwxr-x  3 user			user		    4096 Jun 26 12:34 bootstrap
-rw-rw-r--  1 user			user		    1300 Jun 26 12:34 composer.json
-rw-rw-r--  1 user			user		  122425 Jun 26 12:34 composer.lock
drwxrwxr-x  2 user			user		    4096 Jun 26 12:34 config
drwxrwxr-x  5 user			user		    4096 Jun 26 12:34 database
-rw-rw-r--  1 user			user		     572 Jun 26 12:35 .env
-rw-rw-r--  1 user			user		     521 Jun 26 12:34 .env.example
-rw-rw-r--  1 user			user		     111 Jun 26 12:34 .gitattributes
-rw-rw-r--  1 user			user		     131 Jun 26 12:34 .gitignore
-rw-rw-r--  1 user			user		    1063 Jun 26 12:34 package.json
-rw-rw-r--  1 user			user		    1043 Jun 26 12:34 phpunit.xml
drwxrwxr-x  4 user			user		    4096 Jun 26 12:34 public
drwxrwxr-x  5 user			user		    4096 Jun 26 12:34 resources
drwxrwxr-x  2 user			user		    4096 Jun 26 12:34 routes
-rw-rw-r--  1 user			user		     563 Jun 26 12:34 server.php
drwxrwxr-x  5 user			user		    4096 Jun 26 12:34 storage
drwxrwxr-x  4 user			user		    4096 Jun 26 12:34 tests
drwxrwxr-x 31 user			user		    4096 Jun 26 12:35 vendor
-rw-rw-r--  1 user			user		     549 Jun 26 12:34 webpack.mix.js
-rw-rw-r--  1 user			user		  211844 Jun 26 12:34 yarn.lock

Out of the box, our permissions are set to what we want. Great. If they are somehow changed in any way, you can run the following commands to change the permissions accordingly.

sudo find /path/to/root/directory -type d -exec chmod 775 {} \;
sudo find /path/to/root/directory -type f -exec chmod 664 {} \;

This will search for each folder, and set the permissions to 775, as well as set the individual files to 664.
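You can watch the sweep work on a scratch directory before pointing it at real files; everything below lives in a throwaway temp tree:

```shell
# Build a tiny tree, run the same find/chmod sweep, and inspect the result.
root=$(mktemp -d)
mkdir -p "$root/sub"
touch "$root/sub/file"

find "$root" -type d -exec chmod 775 {} \;
find "$root" -type f -exec chmod 664 {} \;

stat -c '%a %n' "$root" "$root/sub" "$root/sub/file"
rm -rf "$root"
```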

Only one thing left to do: change the ownership to the web server. By default, NGINX and most other web services run as the system user www-data and belong to the group of the same name. Changing ownership is easy.

sudo chown -R www-data: /path/to/root/directory

Now that all of the permissions are set properly, only the web server will be able to securely interact with the files.

If you need to run commands on a directory after giving permission to www-data, you can run them as www-data by using this command:

sudo -u www-data ${command}

Developer Tools

I've left these out of the main server setup because these are largely dependent on the types of projects that will be hosted. Here are the tools that I use in my everyday setup.

Git

Honestly, Git should be installed by default. If it's not, you'll want to install it.

sudo apt-get install -y git

That's simple enough.

Composer

Composer is a dependency management tool for PHP projects. It is also required to install any Laravel projects on the server. Let's start by installing some packages that are needed for Composer's functionality.

sudo apt-get install -y zip unzip

Once these are done installing, you can go ahead and set up Composer. Ideally you'll want to install this globally so you can call it from anywhere. Paste this command into the terminal to download Composer, and allow it to be called globally.

curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composer

Note: If you want to be able to call Composer packages globally, you'll need to add Composer's bin directory to your PATH.

sudo nano /etc/environment
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/${username}/.composer/vendor/bin"

Here we are appending /home/${username}/.composer/vendor/bin to the end of our PATH. Replace ${username} with the user who will be calling the command.
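The append itself can be sketched in plain shell; alice is a stand-in username:

```shell
# Start from a minimal PATH and append the per-user Composer bin directory.
p="/usr/local/bin:/usr/bin:/bin"
p="$p:/home/alice/.composer/vendor/bin"

# The last colon-separated entry is now the Composer bin directory.
printf '%s\n' "${p##*:}"   # -> /home/alice/.composer/vendor/bin
```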

NPM

NPM is another tool that you'll want in your arsenal. Installing this one is pretty easy.

sudo apt-get -y install nodejs
sudo apt-get -y install npm

With only those two commands, you'll be done.


Written by Rick Bennett.

Last updated on 06/27/2017
