This documentation describes configuration and deployment steps for Nextcloud on Google Cloud Platform (GCP).
Adapted for Google Cloud Platform (GCP) from a blog post by Carsten Rieger:
https://www.c-rieger.de/nextcloud-installation-guide/
Carsten Rieger has since published updated instructions that can be adapted for cloud instances:
https://www.c-rieger.de/nextcloud-installation-guide-advanced/
However, that page has reached end of support. Use the following one-script install instead:
https://www.c-rieger.de/spawn-your-nextcloud-server-using-one-shell-script/
DISCLAIMER: Following the steps below WILL INCUR CHARGES to your Google Cloud Platform account!
UPDATE: The following instructions exist for historical purposes only.
Choose a unique project name and ID for the Compute Engine in your GCP account. Select and provision the machine type that most closely matches your needs and budget.
https://cloud.google.com/compute/pricing
Check both "Allow HTTP" and "Allow HTTPS" when you provision the Compute Engine instance.
If your Google Compute Engine instance has sufficient resources to run both Nextcloud and a MySQL/MariaDB server, consider doing so to reduce your costs.
A Cloud SQL instance is essentially a Compute Engine instance with a SQL database server installed but without shell access. Google lets you change the database server's settings through the Developer Console, but there is no Cloud SQL command prompt per se.
Provision a new Cloud SQL instance with the machine type that most closely matches your needs and budget. Alternatively, use an existing Cloud SQL instance in your GCP account if it has sufficient resources to support Nextcloud as well.
https://cloud.google.com/sql/pricing
Make sure both the Compute Engine and Cloud SQL instance reside in the same Region and Zone:
https://cloud.google.com/compute/docs/regions-zones/regions-zones
Click the "defaults" link in the Networks section. Under Allowed IPs, add the external IP address of the Compute Engine instance you provisioned in the previous section.
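The same authorization can also be sketched with the gcloud CLI instead of the Developer Console. The instance name and IP below are hypothetical placeholders, and note that --authorized-networks replaces the whole existing list; the command itself is left commented out so you can review it first:

```shell
# Hypothetical names -- replace with your own Cloud SQL instance name
# and the external IP of your Compute Engine instance.
SQL_INSTANCE="nextcloud-sql"
COMPUTE_IP="203.0.113.10"

# NOTE: --authorized-networks replaces the entire list, so include any
# existing entries you still need. Uncomment to execute:
# gcloud sql instances patch "$SQL_INSTANCE" --authorized-networks="$COMPUTE_IP"
echo "would authorize $COMPUTE_IP on $SQL_INSTANCE"
```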
sudo -s
apt update && apt upgrade -y
apt install logrotate geoip-database libgeoip-dev libgeoip1 zip unzip -y
cd /usr/share/GeoIP
wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCountry/GeoIP.dat.gz
rsync -a GeoIP.dat GeoIP.dat.bak
gunzip -f GeoIP.dat.gz
cd /usr/local/src
Add the nginx key to your system
wget http://nginx.org/keys/nginx_signing.key
apt-key add nginx_signing.key
Add the mainline repo to your Ubuntu sources.
vi /etc/apt/sources.list.d/nginx.list
deb http://nginx.org/packages/mainline/ubuntu/ xenial nginx
deb-src http://nginx.org/packages/mainline/ubuntu/ xenial nginx
Update your software sources
apt update
Ignore errors.
Download the build dependencies and the source code for the new nginx
apt build-dep nginx -y
apt source nginx
Ignore errors.
Adjust the release number to the current one; at the time of writing it is 1.11.9 (nginx-1.11.9).
mkdir nginx-1.11.9/debian/modules
cd nginx-1.11.9/debian/modules
In the modules directory, we are going to download and extract the code for each of the modules we want to include (ngx_cache_purge 2.3).
wget https://github.com/FRiCKLE/ngx_cache_purge/archive/2.3.tar.gz
Extract the source archive.
tar -zxvf 2.3.tar.gz
Change back to the debian directory and edit the compiler flags
cd /usr/local/src/nginx-1.11.9/debian
vi rules
Add a space after "--with-http_flv_module" and insert
--with-http_geoip_module
Then add a space after --with-ld-opt="$(LDFLAGS)" and insert
--add-module="$(CURDIR)/debian/modules/ngx_cache_purge-2.3"
as shown below:
...
--with-http_flv_module --with-http_geoip_module --with-http_gunzip_module
...
--with-ld-opt="$(LDFLAGS)" --add-module="$(CURDIR)/debian/modules/ngx_cache_purge-2.3"
and
...
--with-http_flv_module --with-http_geoip_module --with-http_gunzip_module
...
--with-ld-opt="$(LDFLAGS)" --add-module="$(CURDIR)/debian/modules/ngx_cache_purge-2.3" --with-debug
Save and quit (:wq!) this file.
We will now build the Debian package; please ensure you are in the nginx source directory:
cd /usr/local/src/nginx-1.11.9
dpkg-buildpackage -uc -b
After the package build finishes (it may take around 10 minutes), change back to the src directory.
cd /usr/local/src
First remove any old nginx fragments
apt remove nginx nginx-common nginx-full
Then start installing the new nginx-webserver, choose the package depending on your environment:
Examples: 64-bit Ubuntu uses "amd64", the Raspberry Pi uses "arm64", netbooks use "armhf", etc.
dpkg --install nginx_1.11.9-1~xenial_amd64.deb
or
dpkg --install nginx_1.11.9-1~xenial_arm64.deb
or
dpkg --install nginx_1.11.9-1~xenial_armhf.deb
Note: Although the initial nginx installation works, a subsequent apt upgrade of nginx resulted in an 'unknown directive "geoip_country" in /etc/nginx/nginx.conf' error. To avoid this, run:
apt install nginx-module-geoip
and add the "load_module" directives to nginx.conf as shown in the next subsection.
Determine the number of CPUs and the process limits
grep ^processor /proc/cpuinfo | wc -l
Result: 4
ulimit -n
Result: 1024
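These two results map directly onto the worker_processes and worker_connections directives in nginx.conf below; a small sketch that derives both in one go (the getconf fallback is our addition for systems without /proc):

```shell
# Number of CPU cores -> worker_processes
CPUS=$(grep -c ^processor /proc/cpuinfo 2>/dev/null || getconf _NPROCESSORS_ONLN)
# Open-file limit -> worker_connections
CONNS=$(ulimit -n)
echo "worker_processes ${CPUS};"
echo "worker_connections ${CONNS};"
```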
Change nginx.conf according to the values above
rsync -a /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
vi /etc/nginx/nginx.conf
to
user www-data;
worker_processes 4; # result of `grep ^processor /proc/cpuinfo | wc -l`
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
load_module modules/ngx_http_geoip_module.so;
load_module modules/ngx_stream_geoip_module.so;
events {
worker_connections 1024; # result of `ulimit -n`
multi_accept on;
use epoll;
}
http {
geoip_country /usr/share/GeoIP/GeoIP.dat;
map $geoip_country_code $allowed_country {
default no;
# add two-letter country codes of your choice
#DE yes;
US yes;
}
geo $exclusions {
default 0;
192.168.2.0/24 1;
}
limit_req_zone $binary_remote_addr zone=noflood:10m rate=10r/s;
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
client_body_buffer_size 12800K;
client_body_timeout 3600;
client_header_buffer_size 25600k;
client_header_timeout 3600;
send_timeout 3600;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
#gzip on;
server_tokens off;
include /etc/nginx/conf.d/*.conf;
}
Restart nginx and verify that ngx_cache_purge and the GeoIP module are enabled
service nginx restart
nginx -V 2>&1 | grep ngx_cache_purge -o
nginx -V 2>&1 | grep http_geoip_module -o
Create the web folders, create or edit your nginx "nextcloud.conf", and move the original nginx "default.conf" aside.
mkdir -p /var/www/nextcloud && mkdir -p /var/nc_data && mkdir -p /var/www/letsencrypt
mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.bak
vi /etc/nginx/conf.d/nextcloud.conf
Paste the following lines, and set the values in angle brackets according to your environment
upstream php-handler {
server unix:/run/php/php7.1-fpm.sock;
}
fastcgi_cache_path /usr/local/tmp/cache levels=1:2 keys_zone=NEXTCLOUD:100m inactive=60m;
fastcgi_cache_key $scheme$request_method$host$request_uri;
map $request_uri $skip_cache {
default 1;
~*/thumbnail.php 0;
~*/apps/galleryplus/ 0;
~*/apps/gallery/ 0;
}
server {
listen 80;
server_name <yourcloud.yourdomain.com> <192.168.2.17>;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Robots-Tag none;
add_header X-Download-Options noopen;
add_header X-Permitted-Cross-Domain-Policies none;
root /var/www/nextcloud/;
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
location ^~ /.well-known/acme-challenge {
default_type text/plain;
root /var/www/letsencrypt;
}
if ($allowed_country = yes) {
set $exclusions 1;
}
if ($exclusions = "0") {
return 403;
}
location = /.well-known/carddav { return 301
$scheme://$host/remote.php/dav; }
location = /.well-known/caldav { return 301
$scheme://$host/remote.php/dav; }
client_max_body_size <10240M>;
fastcgi_buffers 256 16k;
fastcgi_buffer_size 128k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
gzip on;
gzip_vary on;
gzip_types application/javascript application/x-javascript text/javascript text/xml text/css;
# or set "gzip off;"
fastcgi_cache_key $http_cookie$request_method$host$request_uri;
fastcgi_cache_use_stale error timeout invalid_header http_500;
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
error_page 403 /core/templates/403.php;
error_page 404 /core/templates/404.php;
location / {
rewrite ^ /index.php$uri;
}
location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
deny all;
}
location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
deny all;
}
location ~ ^/(?:index|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+|core/templates/40[34])\.php(?:$|/) {
limit_req zone=noflood burst=15;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param modHeadersAvailable true;
fastcgi_param front_controller_active true;
fastcgi_pass php-handler;
fastcgi_intercept_errors on;
fastcgi_request_buffering off;
fastcgi_read_timeout 600;
fastcgi_send_timeout 600;
fastcgi_connect_timeout 600;
fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
fastcgi_cache NEXTCLOUD;
fastcgi_cache_valid 60m;
fastcgi_cache_methods GET HEAD;
}
location ~ ^/(?:updater|ocs-provider)(?:$|/) {
try_files $uri/ =404;
index index.php;
}
location ~* \.(?:css|js)$ {
try_files $uri /index.php$uri$is_args$args;
add_header Cache-Control "public, max-age=7200";
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Robots-Tag none;
add_header X-Download-Options noopen;
add_header X-Permitted-Cross-Domain-Policies none;
access_log off;
}
location ~* \.(?:svg|gif|png|html|ttf|woff|ico|jpg|jpeg)$ {
try_files $uri /index.php$uri$is_args$args;
access_log off;
}
}
Attention ('10240M'): on a 32-bit OS the maximum value is 2G (2048M).
Create the nginx-cache directory (for ngx_cache_purge)
mkdir -p /usr/local/tmp/cache
Check nginx:
nginx -t
If the following output appears
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
go ahead with the installation of PHP. Otherwise, review all previous changes.
If both ngx_cache_purge and http_geoip_module appear, everything works fine. You can mark nginx as "hold" to keep apt upgrade from updating it; you might want to do this later, once the 1.11.x branch no longer receives revision releases.
apt-mark hold nginx
Either remove the "nginx.list" source file you created:
rm /etc/apt/sources.list.d/nginx.list
or disable its content:
vi /etc/apt/sources.list.d/nginx.list
by prefixing each row with '#':
# deb http://nginx.org/packages/mainline/ubuntu/ xenial nginx
# deb-src http://nginx.org/packages/mainline/ubuntu/ xenial nginx
Install PHP directly from the Ubuntu repository
mkdir /upload_tmp
chown -R www-data:www-data /upload_tmp
apt install language-pack-en-base -y
sudo LC_ALL=en_US.UTF-8 add-apt-repository ppa:ondrej/php -y
If everything is OK, go ahead with the installation of PHP:
apt update && apt install php-fpm php-gd php-mysql php-curl php-xml php-zip php-intl php-mcrypt php-mbstring php-apcu php-imagick php-json php-bz2 -y
PHP 7 is now installed and must be configured for Nextcloud.
rsync -a /etc/php/7.1/fpm/pool.d/www.conf /etc/php/7.1/fpm/pool.d/www.conf.bak
vi /etc/php/7.1/fpm/pool.d/www.conf
Search for:
Pass environment variables like LD_LIBRARY_PATH. ALL $VARIABLES are taken from the current environment
and remove the semicolon at the beginning of the following lines. Without these changes an error message will appear in the Nextcloud admin panel.
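The uncommenting can also be scripted with sed. A sketch on a throw-away copy of typical env[] lines; on the real server, point CONF at /etc/php/7.1/fpm/pool.d/www.conf instead (the sample lines below are assumed, not copied from your file):

```shell
# Demonstration on a temporary file; the same sed call applies to
# /etc/php/7.1/fpm/pool.d/www.conf on the server.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
;env[HOSTNAME] = $HOSTNAME
;env[PATH] = /usr/local/bin:/usr/bin:/bin
;env[TMP] = /tmp
;env[TMPDIR] = /tmp
;env[TEMP] = /tmp
EOF
sed -i 's/^;env\[/env[/' "$CONF"   # strip the leading semicolon
grep '^env\[' "$CONF"
```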
For details on computing appropriate values for your environment, please refer to Carsten Rieger's blog post:
https://www.c-rieger.de/nextcloud-installation-guide/
Edit the php-fpm-configuration
vi /etc/php/7.1/fpm/pool.d/www.conf
and change the following lines, for example, to
...
pm.max_children = 240
...
pm.start_servers = 20
...
pm.min_spare_servers = 10
...
pm.max_spare_servers = 20
...
pm.max_requests = 500
...
These are the values we used for Ubuntu 16.04 LTS. Carsten Rieger's blog post includes sample values for the Raspberry Pi 3 and ODROID-C2.
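A common rule of thumb for pm.max_children (our assumption, not part of the original guide) is to divide the RAM you reserve for PHP by the average resident size of one php-fpm worker; both numbers below are illustrative only:

```shell
# Both values are assumptions -- measure your own workers on the running
# server, e.g. with: ps -o rss= -C php-fpm7.1
RAM_FOR_PHP_MB=1536   # memory reserved for PHP-FPM
AVG_WORKER_MB=64      # average resident size of one worker
echo "pm.max_children = $(( RAM_FOR_PHP_MB / AVG_WORKER_MB ))"   # -> pm.max_children = 24
```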
It might be necessary to edit the php.ini (cli) to avoid APCu messages like "Memcache \OC\Memcache\APCu not available for local cache". Run the following commands:
rsync -a /etc/php/7.1/cli/php.ini /etc/php/7.1/cli/php.ini.bak
vi /etc/php/7.1/cli/php.ini
...
post_max_size = 10240M
...
upload_tmp_dir = /upload_tmp
...
upload_max_filesize = 10240M
...
max_file_uploads = 100
...
max_execution_time = 1800
...
max_input_time = 3600
...
output_buffering = off
...
Attention ('10240M'): on a 32-bit OS the maximum value is 2G (2048M).
Note: If you have an open_basedir configured in your php.ini, make sure to include /dev/urandom.
https://docs.nextcloud.com/server/11/admin_manual/configuration_server/harden_server.html
Adjust the highlighted values accordingly and scroll to the end of this file. Paste "apc.enable_cli = 1" before the last row "; End:"
...
; Local Variables:
; tab-width: 4
apc.enable_cli = 1
; End:
Save and leave the file (:wq!), then modify the php.ini in the fpm directory as well:
rsync -a /etc/php/7.1/fpm/php.ini /etc/php/7.1/fpm/php.ini.bak
vi /etc/php/7.1/fpm/php.ini
...
post_max_size = 10240M
...
upload_tmp_dir = /upload_tmp
...
upload_max_filesize = 10240M
...
max_file_uploads = 100
...
max_execution_time = 1800
...
max_input_time = 3600
...
output_buffering = off
...
Note: If you have an open_basedir configured in your php.ini, make sure to include /dev/urandom.
https://docs.nextcloud.com/server/11/admin_manual/configuration_server/harden_server.html
Adjust the highlighted values accordingly, then save and leave the file (:wq!).
rsync -a /etc/php/7.1/fpm/php-fpm.conf /etc/php/7.1/fpm/php-fpm.conf.bak
vi /etc/php/7.1/fpm/php-fpm.conf
Set the following values
...
emergency_restart_threshold = 10
...
emergency_restart_interval = 1m
...
process_control_timeout = 10s
...
Leave the file (:wq!) and restart PHP. From now on, all changes are in place.
service php7.1-fpm restart && service nginx restart
You can mark PHP as "hold" to keep apt upgrade from updating it. Although this makes your environment stable against PHP version changes, keep in mind that you won't receive patches for PHP security vulnerabilities.
apt-mark hold php-fpm php-gd php-mysql php-curl php-xml php-zip php-intl php-mcrypt php-mbstring php-apcu php-imagick php-json php-bz2
If you choose not to freeze the PHP version and apt upgrade moves the default PHP from 7.1 to a newer version, you will need to apply the configuration changes from this section to the new version as well.
The folders for Nextcloud (sources: /var/www/nextcloud), the Nextcloud data (/var/nc_data) and Let's Encrypt (/var/www/letsencrypt) were already created.
Change to our working directory again
cd /usr/local/src
and download the current Nextcloud package.
wget https://download.nextcloud.com/server/releases/latest.tar.bz2
Extract the Nextcloud package to your web-folder
tar -xjf latest.tar.bz2 -C /var/www
and set the permissions, either manually
chown -R www-data:www-data /var/www/nextcloud && chown -R www-data:www-data /var/nc_data
or by running a script. Create a script called "permissions.sh"
vi ~/permissions.sh
and paste the following commands
#!/bin/bash
find /var/www/ -type f -print0 | xargs -0 chmod 0640
find /var/www/ -type d -print0 | xargs -0 chmod 0750
chown -R www-data:www-data /var/www/
chown -R www-data:www-data /upload_tmp/
# umount //<NAS>/<share>
chown -R www-data:www-data /var/nc_data/
# mount //<NAS>/<share>
chmod 0644 /var/www/nextcloud/.htaccess
chmod 0644 /var/www/nextcloud/.user.ini
# chmod 600 /etc/letsencrypt/live/<yourcloud.yourdomain.com>/fullchain.pem
# chmod 600 /etc/letsencrypt/live/<yourcloud.yourdomain.com>/privkey.pem
# chmod 600 /etc/letsencrypt/live/<yourcloud.yourdomain.com>/chain.pem
# chmod 600 /etc/letsencrypt/live/<yourcloud.yourdomain.com>/cert.pem
# chmod 600 /etc/ssl/certs/dhparam.pem
Save and quit this script (:wq!) and make it executable
chmod u+x ~/permissions.sh
You can execute this script by running
~/permissions.sh
Start your browser and open http://<yourcloud.yourdomain.com> or http://<your.ip.address.here>
Username*: cloudadmin
Password*: Your-very-Strong!-NC-PassWord
* hint: choose these values as you like
Data folder: /var/nc_data
Database user: nextcloud
Database password: nextcloud
Database name: nextcloud
Host: localhost
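The installer expects this database and user to exist. A sketch of the SQL, assuming a local MariaDB/MySQL server; for a Cloud SQL instance, run the same statements through your Cloud SQL connection and use its IP as Host instead of localhost. Replace the 'nextcloud' password with a strong one (CREATE ... IF NOT EXISTS needs MySQL 5.7+/MariaDB 10.1+):

```shell
# Write the statements to a file, then feed them to the server.
# The mysql call is commented out here; run it on the database host.
cat > /tmp/create_nextcloud_db.sql <<'EOF'
CREATE DATABASE IF NOT EXISTS nextcloud CHARACTER SET utf8mb4;
CREATE USER IF NOT EXISTS 'nextcloud'@'localhost' IDENTIFIED BY 'nextcloud';
GRANT ALL PRIVILEGES ON nextcloud.* TO 'nextcloud'@'localhost';
FLUSH PRIVILEGES;
EOF
# mysql -u root -p < /tmp/create_nextcloud_db.sql
```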
Click "Finish setup" and wait a few seconds; the installation will complete and you will be taken to the Nextcloud welcome screen.
Some smaller changes need to be applied to the Nextcloud config.php:
sudo -u www-data vi /var/www/nextcloud/config/config.php
# Open the Nextcloud-config as webuser (www-data)
...
array (
0 => 'yourcloud.yourdomain.com',
1 => 'your.ip.address.here',
),
...
'memcache.local' => '\OC\Memcache\APCu',
'loglevel' => 1,
'logtimezone' => 'UTC',
'logfile' => '/var/nc_data/nextcloud.log',
'log_rotate_size' => 10485760,
'cron_log' => true,
'filesystem_check_changes' => 1,
...
Save and quit (:wq!) the config.php.
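The same settings can also be applied from the shell with Nextcloud's occ tool instead of editing config.php by hand. A sketch (guarded so it is a no-op where Nextcloud is not installed at the assumed path):

```shell
NC=/var/www/nextcloud
if [ -f "$NC/occ" ]; then
  # Apply selected config.php values via occ, as the web user.
  sudo -u www-data php "$NC/occ" config:system:set loglevel --value=1 --type=integer
  sudo -u www-data php "$NC/occ" config:system:set logfile --value=/var/nc_data/nextcloud.log
  sudo -u www-data php "$NC/occ" config:system:set memcache.local --value='\OC\Memcache\APCu'
fi
```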
If you want to increase the maximum upload size for Nextcloud, you have to edit the file .user.ini as well.
sudo -u www-data vi /var/www/nextcloud/.user.ini
upload_max_filesize=10240M
post_max_size=10240M
memory_limit=512M
mbstring.func_overload=0
always_populate_raw_post_data=-1
default_charset='UTF-8'
output_buffering=0
Attention ('10240M'): on a 32-bit OS the maximum value is 2G (2048M).
“… Nextcloud comes with its own nextcloud/.htaccess file. Because php-fpm can’t read PHP settings in .htaccess these settings must be set in the nextcloud/.user.ini file. …”
Configure the Nextcloud cron-job running as Webuser (www-data)
crontab -u www-data -e
# Edit CRONTAB as User "www-data"
*/15 * * * * php -f /var/www/nextcloud/cron.php > /dev/null 2>&1
A cronjob will run every 15 minutes as webuser (www-data).
Log on to Nextcloud as administrator and change the cron setting from AJAX to Cron:
The change is saved as soon as you select another section or the focus changes; afterwards check the cron state, e.g. "Last cron job execution: seconds ago".
Install redis
apt-get update && apt install redis-server php-redis -y
Then edit the redis’ configuration-file
cp /etc/redis/redis.conf /etc/redis/redis.conf.bak
vi /etc/redis/redis.conf
Change both
a) the default port to ‘0’
# port 6379
port 0
and
b) the unixsocket-entries from
# unixsocket /var/run/redis/redis.sock
# unixsocketperm 700
to
unixsocket /var/run/redis/redis.sock
unixsocketperm 770
Now change the maxclients value from 10000 to an appropriate value, to avoid errors like:
# You requested maxclients of 10000 requiring at least 10032 max file descriptors.
# Redis can't set maximum open files to 10032 because of OS error: Operation not permitted
# Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.
Depending on your server, set the value appropriately, e.g. 512 for the ODROID-C2:
# maxclients 10000
maxclients 512
Save and quit (:wq!) the file, then grant the web user (e.g. www-data) the privileges needed for Redis in combination with Nextcloud
usermod -a -G redis www-data
To fix
# WARNING overcommit_memory is set to 0! Background save may fail under low memory condition.
in redis-server.log, add vm.overcommit_memory = 1 to /etc/sysctl.conf and then reboot, or run the command directly:
vi /etc/sysctl.conf
Add the following row at the end:
vm.overcommit_memory = 1
You may run this command
sysctl vm.overcommit_memory=1
for it to take effect immediately. Another warning may occur:
"# WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128."
To fix this warning, add the setting to /etc/rc.local so that it persists across reboots:
vi /etc/rc.local
and add
sysctl -w net.core.somaxconn=65535
After the next reboot, the setting allows 65535 connections instead of the previous 128.
shutdown -r now
Verify that both folders
ls -la /run/redis
ls -la /var/run/redis
contain the files redis-server.pid and redis.sock.
Once again, modify the Nextcloud configuration in config.php as the web user (www-data)
sudo -u www-data vi /var/www/nextcloud/config/config.php
and add the following lines
...
'filelocking.enabled' => 'true',
'memcache.locking' => '\\OC\\Memcache\\Redis',
'redis' =>
array (
'host' => '/var/run/redis/redis.sock',
'port' => 0,
'timeout' => 0.0,
),
...
Restart redis and Nextcloud (nginx)
service redis-server restart && service nginx restart
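To verify that Redis answers on the unix socket, ping it through redis-cli; the expected reply is PONG. The sketch degrades to SKIP where redis-cli or the socket is unavailable:

```shell
# PONG on a healthy setup; SKIP where redis-cli/socket is missing.
OUT=$(redis-cli -s /var/run/redis/redis.sock ping 2>/dev/null || echo SKIP)
echo "$OUT"
```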
Our example nextcloud/config/config.php (including SMTP mail configuration and enhanced preview behaviour):
<?php
$CONFIG = array (
'instanceid' => '...',
'passwordsalt' => '...',
'secret' => '...',
'trusted_domains' =>
array (
0 => 'yourcloud.yourdomain.com',
1 => 'your.ip.address.here',
),
'datadirectory' => '/var/nc_data',
'dbtype' => 'mysql',
'version' => '11.0.0.10',
'dbname' => 'nextcloud',
'dbhost' => 'localhost',
'dbtableprefix' => 'oc_',
'dbuser' => 'nextcloud',
'dbpassword' => 'nextcloud',
'htaccess.RewriteBase' => '/',
'overwrite.cli.url' => '/nextcloud',
'loglevel' => 1,
'logtimezone' => 'Europe/Berlin',
'logfile' => '/var/nc_data/nextcloud.log',
'log_rotate_size' => 10485760,
'cron_log' => true,
'installed' => true,
'filesystem_check_changes' => 1,
'quota_include_external_storage' => false,
'knowledgebaseenabled' => false,
'memcache.local' => '\\OC\\Memcache\\APCu',
'filelocking.enabled' => 'true',
'memcache.locking' => '\\OC\\Memcache\\Redis',
'redis' =>
array (
'host' => '/var/run/redis/redis.sock',
'port' => 0,
'timeout' => 0.0,
),
'mail_smtpmode' => 'smtp',
'mail_from_address' => '<you>',
'mail_domain' => 'mailserver.de',
'mail_smtpsecure' => 'tls',
'mail_smtpauthtype' => 'LOGIN',
'mail_smtpauth' => 1,
'mail_smtphost' => 'your.mailserver.de',
'mail_smtpport' => '587',
'mail_smtpname' => 'xyz',
'mail_smtppassword' => '...',
'maintenance' => false,
'theme' => '',
'integrity.check.disabled' => false,
'updater.release.channel' => 'stable',
'enable_previews' => true,
'enabledPreviewProviders' =>
array (
0 => 'OC\\Preview\\PNG',
1 => 'OC\\Preview\\JPEG',
2 => 'OC\\Preview\\GIF',
3 => 'OC\\Preview\\BMP',
4 => 'OC\\Preview\\XBitmap',
5 => 'OC\\Preview\\MarkDown',
6 => 'OC\\Preview\\MP3',
7 => 'OC\\Preview\\TXT',
8 => 'OC\\Preview\\Movie',
),
);
Attention: adjust the highlighted values to your environment.
Open /etc/fstab and add the following lines:
vi /etc/fstab
...
tmpfs /tmp tmpfs defaults,noatime,nosuid,nodev,noexec,mode=1777 0 0
tmpfs /var/tmp tmpfs defaults,noatime,nosuid,nodev,noexec,mode=1777 0 0
...
Save and quit (:wq!) the file and mount the tmpfs-filesystem manually.
mount -a
From now on, the tmpfs (ramdisk) is used by your server.
To also move cache directories to the ramdisk, create the file "/etc/profile.d/xdg_cache_home.sh"
vi /etc/profile.d/xdg_cache_home.sh
and paste the following two lines
#!/bin/bash
export XDG_CACHE_HOME="/dev/shm/.cache"
Save and quit (:wq!) the file and make it executable
chmod +x /etc/profile.d/xdg_cache_home.sh
Reboot your system.
Tips on hardening Ubuntu and other Linux servers can be found in Carsten Rieger's guide (https://www.c-rieger.de/nextcloud-installation-guide/) as well as other sources on the Internet. Nevertheless, we recommend using the GCP web console's firewall settings to control ingress and egress traffic of your Compute Engine instance, and avoiding ufw and iptables.
WARNING: If you enable ufw and/or iptables WITHOUT allowing access to the SSH port, YOU WILL LOSE ACCESS to your GCP Compute Engine instance!
Apache: https://nextcloud.com/collaboraonline/
nginx: https://icewind.nl/entry/collabora-online/
A note on SSL certs: I was unable to get the Docker image of the Collabora Office integration working with Nextcloud and self-signed certs; Nextcloud or the Collabora Docker instance throws permissions/authorization errors.
Things I tried without success:
- Various adjustments to Apache and nginx reverse proxy settings
- Modifying Nextcloud PHP source code to disable Guzzle HTTP client cert verification
- Compiling LibreOffice Online from source (https://github.com/LibreOffice/online)
Based on the following blog posts:
Nemskiller: https://help.nextcloud.com/t/howto-collabora-2-0-without-using-docker-not-for-prod/2546
Hector Herrero Hermida: http://www.tundra-it.com/en/integrando-collabora-online-con-nextcloud/
Pisoko: https://central.owncloud.org/t/howto-install-collabora-online-on-ubuntu-16-04-without-docker/3844
The Docker image has a limit of 10 different documents or 20 connections at the same time.
Requirements:
- Nextcloud
- Apache or nginx with HTTPS and certs from Let's Encrypt or other CA.
- Docker
Launch the Collabora Docker image, binding port 9980 to localhost only so that it is reachable solely through the reverse proxy. My Docker installation requires sudo before each docker command.
docker pull collabora/code
docker run -t -d -p 127.0.0.1:9980:9980 --restart always --cap-add MKNOD collabora/code
Docker should return a long hash string; its first part is the CONTAINER_ID. Now run
docker ps
CONTAINER_ID is the string for the Docker instance running on Port 9980.
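If you prefer not to copy the ID from docker ps by hand, it can be captured with a filter; a sketch, where the ancestor filter assumes the image name collabora/code used above (the sketch degrades gracefully where docker is absent):

```shell
# Capture the container ID non-interactively.
CONTAINER_ID=$(docker ps -q --filter ancestor=collabora/code 2>/dev/null || true)
echo "CONTAINER_ID=${CONTAINER_ID:-none}"
```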
The goal now is to extract the files we need from the Docker image. We will use this command a lot:
docker cp CONTAINER_ID:/path/inside/dockerimage /path/inside/harddrive
Here is the list of commands to run:
docker cp CONTAINER_ID:/opt/collaboraoffice5.1/ /opt/
docker cp CONTAINER_ID:/usr/bin/loolforkit /usr/bin/
docker cp CONTAINER_ID:/usr/bin/loolmap /usr/bin/
docker cp CONTAINER_ID:/usr/bin/loolmount /usr/bin/
docker cp CONTAINER_ID:/usr/bin/looltool /usr/bin/
docker cp CONTAINER_ID:/usr/bin/loolwsd /usr/bin/
docker cp CONTAINER_ID:/usr/bin/loolwsd-systemplate-setup /usr/bin/
docker cp CONTAINER_ID:/etc/loolwsd/ /etc/
docker cp CONTAINER_ID:/usr/share/loolwsd/ /usr/share/
docker cp CONTAINER_ID:/usr/lib/libPocoCrypto.so.45 /usr/lib/
docker cp CONTAINER_ID:/usr/lib/libPocoFoundation.so.45 /usr/lib/
docker cp CONTAINER_ID:/usr/lib/libPocoJSON.so.45 /usr/lib/
docker cp CONTAINER_ID:/usr/lib/libPocoNet.so.45 /usr/lib/
docker cp CONTAINER_ID:/usr/lib/libPocoNetSSL.so.45 /usr/lib/
docker cp CONTAINER_ID:/usr/lib/libPocoUtil.so.45 /usr/lib/
docker cp CONTAINER_ID:/usr/lib/libPocoXML.so.45 /usr/lib/
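The seven libPoco* copies can be collapsed into a loop. This sketch only prints the commands for review; drop the leading echo and substitute your real CONTAINER_ID to execute them:

```shell
# Prints the docker cp commands for the Poco libraries; remove `echo`
# (and replace CONTAINER_ID) to actually run them.
for lib in Crypto Foundation JSON Net NetSSL Util XML; do
  echo docker cp "CONTAINER_ID:/usr/lib/libPoco${lib}.so.45" /usr/lib/
done
```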
Now stop the Docker instance:
docker stop CONTAINER_ID
docker rm CONTAINER_ID
We need to install some libraries:
apt-get install libcups2 libgl1-mesa-glx libsm6 libpixman-1-0 libxcb-shm0 libxcb-render0 libxrender1 libcairo2-dev
Now we start the configuration. Open loolwsd.xml with your favorite text editor:
nano /etc/loolwsd/loolwsd.xml
On line 12, set the file server root path to /usr/share/loolwsd (no trailing slash after loolwsd) between the opening and closing tags.
Note: Make sure the "relative" parameter of the tags in loolwsd.xml is consistent with the path you specify for each tag. For example, "/usr/share/loolwsd" should have its "relative" parameter set to "false". If this setting is inconsistent, your Nextcloud Collabora Online integration will open LibreOffice-compatible files only for connections originating from the same server (IP or FQDN), no matter which user account you use. If the connection originates from another host that matches the allow patterns in loolwsd.xml, opening a LibreOffice-compatible file will result in a "temporarily moved or unavailable" error.
On lines 30 to 32, modify the paths to point to your cert files.
On line 38, add a line to allow your server to connect to Collabora Online:
your.fqdn.com
On line 48, do the same:
your.fqdn.com
On lines 53-54, configure a username and a password.
Then we create the user lool and set some ownership and permissions:
useradd lool
sudo setcap cap_fowner,cap_mknod,cap_sys_chroot=ep /usr/bin/loolforkit
sudo setcap cap_sys_admin=ep /usr/bin/loolmount
mkdir /var/cache/loolwsd/
mkdir /opt/lool/child-roots/
chown -R lool:lool /var/cache/loolwsd/
chown -R lool:lool /opt/lool/child-roots/
Now we launch the system template setup:
sudo /usr/bin/loolwsd-systemplate-setup /opt/lool/systemplate /opt/collaboraoffice5.1
sudo chown -R lool:lool /opt/lool/systemplate
To finish this step-by-step guide, we create the loolwsd service (thanks to @depawlur). CAUTION: this is only for systemd systems like Ubuntu 16.04; look up how to create a service on Fedora, CentOS, or other distributions.
nano /etc/systemd/system/loolwsd.service
[Unit]
Description=loolwsd as a service
[Service]
User=lool
ExecStart=/usr/bin/loolwsd --o:sys_template_path=/opt/lool/systemplate --o:lo_template_path=/opt/collaboraoffice5.1 --o:child_root_path=/opt/lool/child-roots --o:file_server_root_path=/usr/share/loolwsd
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
Now start LibreOffice Online as a service:
sudo systemctl enable /etc/systemd/system/loolwsd.service
sudo systemctl daemon-reload
sudo systemctl start loolwsd.service
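As a quick health check, loolwsd serves a WOPI discovery document over port 9980; fetching it confirms the daemon is listening. The sketch prints SKIP where nothing answers on 9980:

```shell
# Expect an XML discovery document on success; SKIP where unreachable.
OUT=$(curl -sk https://127.0.0.1:9980/hosting/discovery 2>/dev/null || echo SKIP)
echo "$OUT" | head -c 200
```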
Enable the Collabora Online integration for Nextcloud: log into the Nextcloud admin panel, go to the new Collabora Online section, and insert this value for the Collabora server (remember to click Apply):
https://your.fqdn.com:9980
LibreOffice Online should open inside Nextcloud when you open a supported document format.