- Default config: vim /etc/nginx/conf.d/default.conf
- Custom config: vim /etc/nginx/conf.d/example.com.conf
- Default index.html: /usr/share/nginx/html/index.html
- Custom index.html: /var/www/example.com/html/index.html
- Path explanation:
- Default path: /var/www/
- Virtual host name: example.com
-
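Putting the paths above together, a minimal virtual host block for example.com might look like this (a sketch; the listen port and index directive are assumptions, the server_name and root follow the paths above):

```nginx
# /etc/nginx/conf.d/example.com.conf -- minimal custom virtual host
server {
    listen 80;
    server_name example.com;
    root /var/www/example.com/html;
    index index.html;
}
```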
Test the server configuration
$ nginx -t
-
Test the output
$ curl localhost
-
Test the configuration of a different host/virtual server
$ curl --header "Host: example.com" localhost
-
Troubleshooting:
- Error 403:
$ curl --header "Host: example.com" localhost
<html> <head><title>403 Forbidden</title></head> <body bgcolor="white"> <center><h1>403 Forbidden</h1></center> <hr><center>nginx/1.14.2</center> </body> </html>
- Solution: Check the error logs
- Reason: SELinux is blocking access to the content
$ less /var/log/nginx/error.log
2019/03/19 21:30:49 [error] 30362#30362: *13 "/var/www/example.com/html/index.html" is forbidden (13: Permission denied), client: 127.0.0.1, server: example.com, request: "GET / HTTP/1.1", host: "example.com"
-
Check if the proper SELinux file context exists to display the content
$ semanage fcontext -l | grep /usr/share/nginx/html
$ semanage fcontext -a -t httpd_sys_content_t '/var/www(/.*)?'
$ semanage fcontext -l | grep /var/www
-
Apply the context so the virtual hosts can be served
$ restorecon -R -v /var/www
- To add custom error pages
$ vim /etc/nginx/conf.d/default.conf
server {
    listen 80 default_server;
    server_name _;
    root /usr/share/nginx/html;

    error_page 404 /404.html;
    error_page 500 501 502 503 504 /50x.html;
}
- Create html pages for errors at the below location
$ vim /usr/share/nginx/html/404.html
- HTTP basic authentication allows us to require a username and password before we allow a user to access a specific URL. To set up NGINX with HTTP basic auth, we’ll be using the auth_basic module. We can specify that specific location blocks utilize basic auth while others do not. Let’s create a new location block that will prevent a specific HTML file from being accessible:
$ vim /etc/nginx/conf.d/default.conf
server {
    listen 80 default_server;
    server_name _;
    root /usr/share/nginx/html;

    location = /admin.html {
        auth_basic "Login Required";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }

    error_page 404 /404.html;
    error_page 500 501 502 503 504 /50x.html;
}
- The location = /admin.html is how we specify what should happen if someone navigates to the /admin.html path and it matches exactly. From there, we’re able to specify a value for auth_basic using a string (any string will do), and we give the path to the file containing the username/password pairs that have access.
- Create the admin.html
$ echo "<h1>Admin Page</h1>" > /usr/share/nginx/html/admin.html
- Generating a Password File
- Our configuration is currently valid, but we don’t have a password file to work with yet. We’re going to install the httpd-tools to create our .htpasswd file:
$ yum install -y httpd-tools
- We now have access to the htpasswd utility, which can generate a file with encrypted passwords for our users
$ htpasswd -c /etc/nginx/.htpasswd admin
New password:
Re-type new password:
Adding password for user admin
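If httpd-tools isn't available, `openssl passwd -apr1` can produce an entry in the same Apache MD5 format that NGINX accepts (a sketch; the username `admin`, the salt, and the password here are placeholder values):

```shell
# Build an htpasswd-style line without httpd-tools.
# "demosalt" and "secret" are illustrative values only;
# append the line to /etc/nginx/.htpasswd on a real server.
ENTRY="admin:$(openssl passwd -apr1 -salt demosalt secret)"
echo "$ENTRY"
```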
- Test the configuration
$ systemctl reload nginx
$ curl localhost/admin.html
<html> <head><title>401 Authorization Required</title></head> <body bgcolor="white"> <center><h1>401 Authorization Required</h1></center> <hr><center>nginx/1.14.2</center> </body> </html>
- Use the -u flag for curl to pass in the username and password, separated by a colon
$ curl -u admin:****** localhost/admin.html
<h1>Admin Page</h1>
- Create SSL certificate using openssl
- Configure the host for SSL/TLS/HTTPS using the NGINX http module, the server directive, and the ssl module.
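The openssl step itself isn't shown above; a self-signed certificate/key pair can be generated like this (a sketch — the subject and the local ./ssl-demo output directory are placeholders; the server block below expects the files under /etc/nginx/ssl):

```shell
# Create a self-signed certificate/key pair.
# SSL_DIR is a placeholder; copy the files to /etc/nginx/ssl
# (or point ssl_certificate/ssl_certificate_key at them) for the real config.
SSL_DIR=./ssl-demo
mkdir -p "$SSL_DIR"
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/CN=localhost" \
  -keyout "$SSL_DIR/private.key" \
  -out "$SSL_DIR/public.pem"
```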
- Configure https server
server {
    listen 80 default_server;
    # Enable SSL
    listen 443 ssl;
    server_name _;
    root /usr/share/nginx/html;

    # Enable SSL: add the public and private certs
    ssl_certificate /etc/nginx/ssl/public.pem;
    ssl_certificate_key /etc/nginx/ssl/private.key;

    location = /admin.html {
        auth_basic "Login Required";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }

    error_page 404 /404.html;
    error_page 500 501 502 503 504 /50x.html;
}
- Test connection
$ curl -k https://localhost
-
Documentation
- NGINX http module
- NGINX server directive
- NGINX try_files directive
- NGINX rewrite directive
-
Remove the .html extension from the end of our URLs. To do this, we'll utilize the rewrite and try_files directives.
-
Convert URLs using a rewrite rule. Going to /admin.html should redirect to /admin.
-
Use try_files to ensure that we're rendering the proper page or rendering a 404.
$ vim /etc/nginx/conf.d/default.conf
server {
    listen 80 default_server;
    listen 443 ssl;
    server_name _;
    root /usr/share/nginx/html;

    ssl_certificate /etc/nginx/ssl/public.pem;
    ssl_certificate_key /etc/nginx/ssl/private.key;

    # added rewrite directives
    rewrite ^(/.*)\.html(\?.*)?$ $1$2 redirect;
    rewrite ^/(.*)/$ /$1 redirect;

    location = /admin.html {
        auth_basic "Login Required";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }

    error_page 404 /404.html;
    error_page 500 501 502 503 504 /50x.html;
}
-
Rewrite rules follow this pattern:
rewrite REGEX REPLACEMENT [flag];
-
Regex breakdown
^(/.*)\.html(\?.*)?$
- The starting ^ indicates that we're matching from the very beginning of the URI.
- We create a capture group using parentheses: the pattern /.* means starting with a / and including all characters after that.
- Next, we expect to find .html (notice that we needed to escape the dot with a backslash).
- Lastly, we create another capture group that gets an optional query string (anything after a ?) until the end of the URI (as indicated by the $ character).
- The capture groups are usable in the REPLACEMENT portion of the directive as $1, $2, etc. For example, /about.html becomes /about, and /docs/page.html?q=1 becomes /docs/page?q=1.
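The capture behavior can be sanity-checked outside NGINX with sed -E (a rough approximation — sed's extended regex is close enough to NGINX's PCRE for this particular pattern):

```shell
# Group 1 keeps the path, group 2 keeps the optional query string.
echo "/about.html" | sed -E 's|^(/.*)\.html(\?.*)?$|\1\2|'
echo "/docs/page.html?q=1" | sed -E 's|^(/.*)\.html(\?.*)?$|\1\2|'
```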
-
Reload the server
$ systemctl reload nginx
-
Test it by opening a browser and hitting the NGINX server's public IP.
-
We'll run into an infinite redirect loop. This happens because the rewrite also applies to the 404.html page, so we can't even render that properly. To fix this redirect loop, we'll need to use try_files.
$ vim /etc/nginx/conf.d/default.conf
server {
    listen 80 default_server;
    listen 443 ssl;
    server_name _;
    root /usr/share/nginx/html;

    ssl_certificate /etc/nginx/ssl/public.pem;
    ssl_certificate_key /etc/nginx/ssl/private.key;

    rewrite ^(/.*)\.html(\?.*)?$ $1$2 redirect;
    rewrite ^/(.*)/$ /$1 redirect;

    # added try_files
    location / {
        try_files $uri/index.html $uri.html $uri/ $uri =404;
    }

    location = /admin.html {
        auth_basic "Login Required";
        auth_basic_user_file /etc/nginx/.htpasswd;
        # added try_files
        try_files $uri/index.html $uri.html $uri/ $uri =404;
    }

    error_page 404 /404.html;
    error_page 500 501 502 503 504 /50x.html;
}
Explanation:
- We’ve added a try_files directive to both of our location blocks with slightly different rules. We want to be able to handle someone going to the index.html within potentially nested directories.
- The try_files call will look for the first file (or directory) that matches the request moving from left to right. If nothing is found, then the rightmost file or status code will be returned.
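The left-to-right lookup can be sketched as a loop over candidate paths (illustrative only — NGINX does this internally against the configured root; the temp directory and /blog URI are made up for the demonstration):

```shell
# Rough emulation of try_files $uri/index.html $uri.html $uri/ $uri
root=$(mktemp -d)          # stand-in for /usr/share/nginx/html
mkdir -p "$root/blog"
echo "hi" > "$root/blog/index.html"

uri=/blog                  # hypothetical request path
for candidate in "$root$uri/index.html" "$root$uri.html" "$root$uri/" "$root$uri"; do
  if [ -e "$candidate" ]; then
    echo "serve: $candidate"   # first match wins, like try_files
    break
  fi
done
```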
-
Reload the server configuration and navigate to the root URL or the status page; we will now see the content.
-
Unfortunately, we appear not to be hitting our HTTP basic authentication anymore.
-
The reason is that the rewrite changes the $uri before the location blocks are processed, so the $uri will never match /admin.html. We need to change that location check to match against /admin instead.
$ vim /etc/nginx/conf.d/default.conf
server {
    listen 80 default_server;
    listen 443 ssl;
    server_name _;
    root /usr/share/nginx/html;

    ssl_certificate /etc/nginx/ssl/public.pem;
    ssl_certificate_key /etc/nginx/ssl/private.key;

    rewrite ^(/.*)\.html(\?.*)?$ $1$2 redirect;
    rewrite ^/(.*)/$ /$1 redirect;

    location / {
        try_files $uri/index.html $uri.html $uri/ $uri =404;
    }

    location = /admin { # removed .html
        auth_basic "Login Required";
        auth_basic_user_file /etc/nginx/.htpasswd;
        try_files $uri/index.html $uri.html $uri/ $uri =404;
    }

    error_page 404 /404.html;
    error_page 500 501 502 503 504 /50x.html;
}
-
Documentation
- NGINX http module
- NGINX server directive
- NGINX return directive
- NGINX $request_uri variable
- NGINX $host variable
-
Separate the HTTP and HTTPS handling for our server by creating two separate server blocks. One will listen on port 80 and not display any content. The other will hold our current configuration but only listen on port 443.
-
Here’s what our configuration looks like with them divided out:
$ vim /etc/nginx/conf.d/default.conf
# Separate server block to listen on port 80 and not display any content
server {
    listen 80 default_server;
    server_name _;
    return 404;
}

server {
    listen 443 ssl default_server;
    server_name _;
    root /usr/share/nginx/html;

    ssl_certificate /etc/nginx/ssl/public.pem;
    ssl_certificate_key /etc/nginx/ssl/private.key;

    rewrite ^(/.*)\.html(\?.*)?$ $1$2 redirect;
    rewrite ^/(.*)/$ /$1 redirect;

    location / {
        try_files $uri/index.html $uri.html $uri/ $uri =404;
    }

    location = /admin {
        auth_basic "Login Required";
        auth_basic_user_file /etc/nginx/.htpasswd;
        try_files $uri/index.html $uri.html $uri/ $uri =404;
    }

    error_page 404 /404.html;
    error_page 500 501 502 503 504 /50x.html;
}
-
If we visit our IP address with http://, we should receive a 404 error, but if we visit the same page with https:// we'll see the content that we expect.
- To redirect HTTP traffic, we want to start redirecting the user to the proper page via HTTPS.
- We do this with the return directive, whose syntax is:
return STATUS_CODE URL;
- Since we’re doing a redirect, we’re going to use the 301 status code.
- As for the URL portion of the statement, we’re going to use variables that are present when processing the request to ensure that we go to the proper spot.
- To ensure that it goes to the proper server we’ll use the $host variable in our destination URL to pass along the domain name.
- Using $host will make this server redirect to the proper server after we have multiple HTTPS, virtual hosts.
- We’ll also use the $request_uri variable which represents the path portion of the request.
- The only other thing that we need to do is start the URL with https://.
$ vim /etc/nginx/conf.d/default.conf
server {
    listen 80 default_server;
    server_name _;
    # Changed to a redirect using the return directive
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl default_server;
    server_name _;
    root /usr/share/nginx/html;

    ssl_certificate /etc/nginx/ssl/public.pem;
    ssl_certificate_key /etc/nginx/ssl/private.key;

    rewrite ^(/.*)\.html(\?.*)?$ $1$2 redirect;
    rewrite ^/(.*)/$ /$1 redirect;

    location / {
        try_files $uri/index.html $uri.html $uri/ $uri =404;
    }

    location = /admin {
        auth_basic "Login Required";
        auth_basic_user_file /etc/nginx/.htpasswd;
        try_files $uri/index.html $uri.html $uri/ $uri =404;
    }

    error_page 404 /404.html;
    error_page 500 501 502 503 504 /50x.html;
}
- Reload our server and visit the /admin route using http://; we should be redirected to the https:// version of the URL and prompted for our password.
-
Documentation
- NGINX load_module directive
-
Since NGINX 1.9.11, there are dynamic modules that can be loaded when the configuration is loaded and don't require the NGINX binary to be recompiled.
-
To see the dynamic modules that are installed when we install NGINX, look in /usr/share/nginx/modules (The /etc/nginx/modules directory is a symbolic link to this path also).
$ ls -l /etc/nginx/modules/
-
There’s nothing here because the official precompiled build that we’re using has many of the first-party, optional modules compiled into the binary itself.
-
We can see the modules that are compiled into the binary by looking at how it was configured:
$ nginx -V
nginx version: nginx/1.14.2
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC)
built with OpenSSL 1.0.2k-fips  26 Jan 2017
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -pie'
-
To filter for just the module names:
$ nginx -V 2>&1 | tr -- - '\n' | grep _module
http_addition_module
http_auth_request_module
http_dav_module
http_flv_module
http_gunzip_module
http_gzip_static_module
http_mp4_module
http_random_index_module
http_realip_module
http_secure_link_module
http_slice_module
http_ssl_module
http_stub_status_module
http_sub_module
http_v2_module
mail_ssl_module
stream_realip_module
stream_ssl_module
stream_ssl_preread_module
-
This is a list of all of the additional modules that were compiled into the binary, and the directives defined in these modules are available to us without needing to load them into our configuration using the load_module directive.
- Documentation
- ModSecurity
- ModSecurity-nginx
- NGINX dynamic modules
- load_module directive
- Before we can build the dynamic NGINX module for ModSecurity, we need to build v3 of ModSecurity (also known as libModSecurity).
- Since we’re going to be compiling this package, we’ll install git and some other developer tools to allow us to get the source code.
$ yum groupinstall 'Development tools'
$ yum install -y \
    geoip-devel \
    gperftools-devel \
    libcurl-devel \
    libxml2-devel \
    libxslt-devel \
    libgd-devel \
    lmdb-devel \
    openssl-devel \
    pcre-devel \
    perl-ExtUtils-Embed \
    yajl-devel \
    zlib-devel
- Clone ModSecurity, compile and install the core libraries
$ cd /opt
$ git clone --depth 1 -b v3/master https://github.com/SpiderLabs/ModSecurity
$ cd ModSecurity
$ git submodule init
$ git submodule update
$ ./build.sh
$ ./configure
# The make step will take some time to compile (10-20 min)
$ make
$ make install
- Download the NGINX source code for our installed version of NGINX and use it to build a dynamic module using the ModSecurity-nginx project.
# Download ModSecurity-nginx
$ cd /opt
$ git clone --depth 1 https://github.com/SpiderLabs/ModSecurity-nginx.git
# Check the installed NGINX version
$ nginx -v
nginx version: nginx/1.12.2
# Download and unpack the matching NGINX source
$ wget http://nginx.org/download/nginx-1.12.2.tar.gz
$ tar zxvf nginx-1.12.2.tar.gz
# Configure and build the dynamic module
$ cd nginx-1.12.2
$ ./configure --with-compat --add-dynamic-module=../ModSecurity-nginx
$ make modules
$ cp objs/ngx_http_modsecurity_module.so /etc/nginx/modules/
- As we learned, load_module is used to load dynamic modules.
- Let's edit the /etc/nginx/nginx.conf file to load our new ModSecurity dynamic module.
$ vim /etc/nginx/nginx.conf

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

# Load ModSecurity dynamic module
load_module /etc/nginx/modules/ngx_http_modsecurity_module.so;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
- Create a rules file. The ModSecurity source code that we downloaded contains a recommended configuration.
- Create a new directory local to our NGINX and copy that configuration over
$ mkdir /etc/nginx/modsecurity
$ cp /opt/ModSecurity/modsecurity.conf-recommended /etc/nginx/modsecurity/modsecurity.conf
- Change the default configuration so that the audit log goes to /var/log/nginx, avoiding issues with SELinux.
- Open modsecurity.conf
$ vim /etc/nginx/modsecurity/modsecurity.conf
- Edit the below line in the modsecurity.conf file
SecAuditLog /var/log/nginx/modsec_audit.log
- Enable the modsecurity directive in the primary server block (the HTTPS block)
server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl default_server;
    server_name _;
    root /usr/share/nginx/html;

    # Enable the modsecurity directive
    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsecurity/modsecurity.conf;

    ssl_certificate /etc/nginx/ssl/public.pem;
    ssl_certificate_key /etc/nginx/ssl/private.key;

    rewrite ^(/.*)\.html(\?.*)?$ $1$2 redirect;
    rewrite ^/(.*)/$ /$1 redirect;

    location / {
        try_files $uri/index.html $uri.html $uri/ $uri =404;
    }

    location = /admin {
        auth_basic "Login Required";
        auth_basic_user_file /etc/nginx/.htpasswd;
        try_files $uri/index.html $uri.html $uri/ $uri =404;
    }

    error_page 404 /404.html;
    error_page 500 501 502 503 504 /50x.html;
}
- Check the nginx configuration
$ nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
$ systemctl reload nginx
- Reverse proxy: Sits between internet traffic and our servers.
- Proxy: Sits between our clients and the internet
Prereq:
Documentation:
- http_proxy NGINX module
- proxy_pass NGINX directive
- Set up the host photos.example.com listening on port 80 to proxy to the application that is running locally on port 3000.
- Create a new configuration file within /etc/nginx/conf.d
$ vim /etc/nginx/conf.d/photos.example.com.conf

server {
    listen 80;
    server_name photos.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
- Reload the NGINX server; you will see the below 502 error.
$ systemctl reload nginx
$ curl --header "Host: photos.example.com" localhost
<html> <head><title>502 Bad Gateway</title></head> <body bgcolor="white"> <center><h1>502 Bad Gateway</h1></center> <hr><center>nginx/1.14.2</center> </body> </html>
- Taking a look at the NGINX logs, we will see a permission denied error.
$ tail -50 /var/log/nginx/error.log
- The above error is due to SELinux.
- There are quite a few boolean values in SELinux that can be set regarding the httpd service (which it considers NGINX part of).
- If we list them out using getsebool -a | grep httpd, we'll find that there is one called httpd_can_network_connect that is set to off.
- Enable this boolean so our server can make network requests:
$ setsebool -P httpd_can_network_connect 1
- Now try to hit photos.example.com
$ curl --header "Host: photos.example.com" localhost
- Expected output
$ curl --header "Host: photos.example.com" localhost
<script src="//code.jquery.com/jquery-1.11.0.min.js"></script><script src="/javascripts/fileselect.js"></script><!DOCTYPE html><html><head><link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css"><title>S3 Photo Storage Demo</title></head><body><div class="container" style="margin-top: 20px;"></div><div class="row"><div class="col-lg-12 col-sm-12 col-12"><div class="jumbotron"><h1 style="margin-left: 20px;">S3 Photo Storage Demo</h1><p style="margin-left: 20px;">auto-generated table id: 4a39af83-40c8-435b-9457-347f23166220</p></div></div></div><div class="main-content"><div class="container" style="margin-top: 20px;"><div class="row"><div class="col-lg-12 col-sm-12 col-12"><h4>Upload new image</h4><div class="form-group page-header text-center"><form method="post" enctype="multipart/form-data" action="/photo"><div class="input-group"><label class="input-group-btn"><span class="btn btn-primary">Browse… <input type="file" name="uploadedImage" style="display: none;"></span></label><input class="form-control" type="text" readonly=""></div><span class="help-block">Select a file to upload</span><label class="btn btn-default btn-file">Submit<div class="hidden"><input type="submit" name="uploadImage"></div></label></form></div></div></div></div></div><div class="footer text-center"><p>Demo Service by Linux Academy and Cloud Assessments</p></div></body></html>
- To view in the web browser, add the below line to /etc/hosts
<public_IP_of_nginx_server> photos.example.com
- In the browser's developer tools --> network inspector, we can see that we're serving up static content successfully, but we're doing so through the Node.js application.
- Add the following location block to our configuration to take that load off of the application server itself.
$ vim /etc/nginx/conf.d/photos.example.com.conf
location ~* \.(js|css|png|jpe?g|gif) {
    root /var/www/photos.example.com;
}
- Reload nginx
$ systemctl reload nginx
- Inspect the page using developer tools --> network inspector. We can observe a 403 error for the JavaScript files. This is an SELinux issue: we're trying to serve content from a directory that is symbolically linked into /var/www.
- Add a rule to handle this for the public directories.
# Instead of creating a folder, link /srv/www/s3photoapp/apps/web-client/public to /var/www/photos.example.com
$ ln -s /srv/www/s3photoapp/apps/web-client/public /var/www/photos.example.com
# Create the file context
$ semanage fcontext -a -t httpd_sys_content_t '/srv/www/.*/public(/.*)?'
$ restorecon -R /srv/www
$ systemctl reload nginx
- Documentation
- NGINX http_upstream module
- NGINX upstream directive
- NGINX server directive from the upstream module
-
Run more than one instance of the application.
$ cp /etc/systemd/system/web-client{,2}.service
$ cp /etc/systemd/system/web-client{,3}.service
-
Add a PORT environment variable to each of the duplicate services we created above.
$ vim /etc/systemd/system/web-client2.service

[Unit]
Description=S3 Photo App Node.js service
After=network.target photo-filter.target photo-storage.target

[Service]
Restart=always
User=nobody
Group=nobody
Environment=NODE_ENV=production
Environment=AWS_ACCESS_KEY_ID=YOUR_AWS_KEY_ID
Environment=AWS_SECRET_ACCESS_KEY=YOUR_AWS_SECRET_KEY
# Environment variable for the port
Environment=PORT=3100
ExecStart=/srv/www/s3photoapp/apps/web-client/bin/www

[Install]
WantedBy=multi-user.target
$ vim /etc/systemd/system/web-client3.service

[Unit]
Description=S3 Photo App Node.js service
After=network.target photo-filter.target photo-storage.target

[Service]
Restart=always
User=nobody
Group=nobody
Environment=NODE_ENV=production
Environment=AWS_ACCESS_KEY_ID=YOUR_AWS_KEY_ID
Environment=AWS_SECRET_ACCESS_KEY=YOUR_AWS_SECRET_KEY
# Environment variable for the port (3101 here, so it doesn't clash with web-client2)
Environment=PORT=3101
ExecStart=/srv/www/s3photoapp/apps/web-client/bin/www

[Install]
WantedBy=multi-user.target
-
Reload the NGINX server and start/enable the duplicate apps.
$ systemctl reload nginx
$ systemctl start web-client2
$ systemctl enable web-client2
$ systemctl start web-client3
$ systemctl enable web-client3
- To load balance the traffic between the 3 identical apps/services, we will use the http_upstream module and the upstream directive.
- The upstream directive creates a new context where we configure all of the load-balancing behavior.
$ vim /etc/nginx/conf.d/photos.example.com.conf

# Server group
upstream photos {
    server 127.0.0.1:3000;
    server 127.0.0.1:3100;
    server 127.0.0.1:3101;
}

server {
    listen 80;
    server_name photos.example.com;
    client_max_body_size 5m;

    location / {
        # Instead of pointing at one server and its port:
        # proxy_pass http://127.0.0.1:3000;
        # We now have a server group to load balance, so point proxy_pass
        # at the upstream context defined above.
        proxy_pass http://photos;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location ~* \.(js|css|png|jpe?g|gif) {
        root /var/www/photos.example.com;
    }
}
- Test the load balancing by refreshing the page in the browser multiple times, then check the logs of each service.
$ systemctl status web-client
● web-client.service - photo-storage Node.js service
   Loaded: loaded (/etc/systemd/system/web-client.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2019-03-21 16:21:15 UTC; 2h 17min ago
 Main PID: 4087 (node)
   CGroup: /system.slice/web-client.service
           └─4087 node /srv/www/s3photoapp/apps/web-client/bin/www

Mar 21 18:19:32 xxxxx.mylabserver.com www[4087]: GET / 200 856.310 ms - 2030
Mar 21 18:19:32 xxxxx.mylabserver.com www[4087]: GET /javascripts/fileselect.js 200 ...5
Mar 21 18:19:33 xxxxx.mylabserver.com www[4087]: GET / 200 921.996 ms - 2034
Mar 21 18:19:33 xxxxx.mylabserver.com www[4087]: GET /javascripts/fileselect.js 200 ...5
Mar 21 18:19:34 xxxxx.mylabserver.com www[4087]: GET / 200 862.634 ms - 2030
Mar 21 18:19:34 xxxxx.mylabserver.com www[4087]: GET /javascripts/fileselect.js 200 ...5
Mar 21 18:19:36 xxxxx.mylabserver.com www[4087]: GET / 200 963.758 ms - 2032
Mar 21 18:19:36 xxxxx.mylabserver.com www[4087]: GET /javascripts/fileselect.js 200 ...5
Mar 21 18:21:05 xxxxx.mylabserver.com www[4087]: GET / 200 1030.965 ms - 2030
Mar 21 18:21:10 xxxxx.mylabserver.com www[4087]: GET / 200 922.742 ms - 2026
Hint: Some lines were ellipsized, use -l to show in full.

$ systemctl status web-client2
● web-client2.service - photo-storage Node.js service
   Loaded: loaded (/etc/systemd/system/web-client2.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2019-03-21 18:10:30 UTC; 28min ago
 Main PID: 18124 (node)
   CGroup: /system.slice/web-client2.service
           └─18124 node /srv/www/s3photoapp/apps/web-client/bin/www

Mar 21 18:10:30 xxxxx.mylabserver.com systemd[1]: Started photo-storage Node.js service.
Mar 21 18:10:30 xxxxx.mylabserver.com www[18124]: Listening on port 3100
Mar 21 18:21:07 xxxxx.mylabserver.com www[18124]: GET / 200 1197.122 ms - 2032
Mar 21 18:21:12 xxxxx.mylabserver.com www[18124]: GET / 200 980.748 ms - 2030
Hint: Some lines were ellipsized, use -l to show in full.

$ systemctl status web-client3
● web-client3.service - photo-storage Node.js service
   Loaded: loaded (/etc/systemd/system/web-client3.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2019-03-21 18:10:32 UTC; 28min ago
 Main PID: 18140 (node)
   CGroup: /system.slice/web-client3.service
           └─18140 node /srv/www/s3photoapp/apps/web-client/bin/www

Mar 21 18:10:32 xxxxx.mylabserver.com systemd[1]: Started photo-storage Node.js service.
Mar 21 18:10:32 xxxxx.mylabserver.com www[18140]: Listening on port 3101
Mar 21 18:21:08 xxxxx.mylabserver.com www[18140]: GET / 200 1183.243 ms - 2030
Mar 21 18:21:14 xxxxx.mylabserver.com www[18140]: GET / 200 972.307 ms - 2028
Hint: Some lines were ellipsized, use -l to show in full.
- The default load-balancing strategy, and the one configured in the steps above, is round-robin.
- Other available load-balancing methods are:
- hash: Specifies how a key should be built based on the request, mapping all requests with the same key to the same server.
$ vim /etc/nginx/conf.d/photos.example.com.conf
upstream photos {
    # hash method: requests with the same URI go to the same server
    hash $request_uri;
    server 127.0.0.1:3000;
    server 127.0.0.1:3100;
    server 127.0.0.1:3101;
}
- ip_hash: Specifies a client-server grouping based on the client IP.
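A sketch of the same upstream group using ip_hash (the server list mirrors the group above; ip_hash takes no arguments):

```nginx
upstream photos {
    # ip_hash method: requests from the same client IP go to the same server
    ip_hash;
    server 127.0.0.1:3000;
    server 127.0.0.1:3100;
    server 127.0.0.1:3101;
}
```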
- least_conn: Specifies that requests should be routed to the server with the least number of active connections. This approach also takes into account server weights.
$ vim /etc/nginx/conf.d/photos.example.com.conf
upstream photos {
    # least_conn method
    least_conn;
    server 127.0.0.1:3000;
    server 127.0.0.1:3100;
    server 127.0.0.1:3101;
}
- least_time: Routes traffic to the server with the lowest average response time and least number of active connections. Note: This is only available in Nginx Plus
- To prioritize sending traffic to server 1, give it a higher weight:
$ vim /etc/nginx/conf.d/photos.example.com.conf
upstream photos {
    server 127.0.0.1:3000 weight=2; # Server 1
    server 127.0.0.1:3100;          # Server 2
    server 127.0.0.1:3101;          # Server 3
}
- Set up passive health checks for server 2 and server 3:
$ vim /etc/nginx/conf.d/photos.example.com.conf
upstream photos {
    server 127.0.0.1:3000 weight=2;                     # Server 1
    server 127.0.0.1:3100 max_fails=3 fail_timeout=20s; # Server 2
    server 127.0.0.1:3101 max_fails=3 fail_timeout=20s; # Server 3
}
- The fail_timeout option sets both the span of time in which failures are counted and the duration for which a failing server is marked as “unavailable”.