SeaweedFS + OwnCloud

A quick write-up on SeaweedFS + OwnCloud

This guide takes a look at distributed, scalable SeaweedFS as the backend storage for an OwnCloud server. Why not NextCloud? Because NextCloud's S3 connector is outdated and unmaintained, and won't work with SeaweedFS. OwnCloud and NextCloud otherwise have the same support and the same plugins; the only concern anyone has is some weird split that happened between the two years ago. It's fine.

This was a giant pain in the ass, but eventually I got it working. This is how.

Filesystem/Hardware

All servers start from fresh, clean installs of Ubuntu 18.04.3.

This guide will use three separate boxes (one master/OwnCloud and two slaves).
I'm assuming there are spare, unformatted disks attached to the slaves. If not, ignore or change the section on mounting to fit your needs.
Each box will have an internet-facing connection and a LAN-facing one with no access to the internet.
For all intents and purposes, these will be named "public" and "private" for the rest of this guide.
We'll configure UFW to ensure good network security between the boxes.

SeaweedFS

GitHub

We'll be installing SeaweedFS on each of our nodes so they can link up together.
Here we run into our first problem. You can't actually download/install SeaweedFS as per the instructions (starting off on the right foot).
You'll still need to install Golang and mercurial as per the instructions, but to build the project we'll need to do an old-fashioned make.
I'd also like to run everything under a new user account for security, since all ports are > 1024.
Normally I wouldn't create a home directory for these types of users, but in this instance only pain and suffering will greet you if you don't.
Additionally we'll be putting SeaweedFS in /opt, because that's generally where these things go. Don't like it? Change it.
Finally, we'll be running SeaweedFS (both master and slaves) as systemd services because that'll make managing them a lot easier.

Master/Slave

We'll need to build SeaweedFS on each of our nodes (master and both slaves), so do that on all of them now.

Building SeaweedFS

From git clone:

sudo add-apt-repository ppa:longsleep/golang-backports
sudo apt update
sudo apt install golang-go mercurial build-essential
sudo useradd -d /home/seaweed -m -s /bin/false seaweed
cd /opt
sudo git clone https://github.com/chrislusf/seaweedfs.git
sudo chown -R seaweed:seaweed seaweedfs/
cd seaweedfs
sudo su -s /bin/bash -c 'make' seaweed
sudo su -s /bin/bash -c 'make 5_byte_linux_build' seaweed
cd build
sudo tar xvzf linux_amd64_large_disk.tar.gz
sudo chown seaweed:seaweed ./weed
sudo mv ./weed ../weed/

Alternatively, from releases (if git/master is broken):

weed_version=1.77
sudo add-apt-repository ppa:longsleep/golang-backports
sudo apt update
sudo apt install golang-go mercurial build-essential
sudo useradd -d /home/seaweed -m -s /bin/false seaweed
cd /opt
sudo wget https://github.com/chrislusf/seaweedfs/archive/$weed_version.tar.gz
sudo tar xvzf $weed_version.tar.gz
sudo mv seaweedfs-$weed_version seaweedfs
sudo chown -R seaweed:seaweed seaweedfs/
cd seaweedfs
sudo su -s /bin/bash -c 'make' seaweed
sudo su -s /bin/bash -c 'make 5_byte_linux_build' seaweed
cd build
sudo tar xvzf linux_amd64_large_disk.tar.gz
sudo chown seaweed:seaweed ./weed
sudo mv ./weed ../weed/

Wait a bit and you should have an executable at /opt/seaweedfs/weed/weed, which is what we'll use for everything.

Master

Work on the master for SeaweedFS is fairly light, so this won't be difficult. Really we'll just want to add a few services here.
In short, we'll be spinning up these SeaweedFS services:

  • Master (controller, communication between servers)
  • Filer (an API for other APIs; a go-between for the Master and the other systems. It's weird, I know.)
  • Amazon S3 API (the interface OwnCloud will actually be using to SeaweedFS)

Create the file /etc/systemd/system/seaweed.service
In it, we'll add the following:
take note of the -ip and -defaultReplication parameters

[Unit]
Description=Seaweed Master
After=network.target

[Service]
Type=simple
Restart=on-failure
User=seaweed
Group=seaweed
ExecStart=/opt/seaweedfs/weed/weed master -ip=<private> -defaultReplication=010
KillSignal=SIGTERM

[Install]
WantedBy=multi-user.target

There's an optimization guide in the Wiki that says you should add the -volumePreallocate flag to your master node. Don't do this unless you want your disk to fill up real quick.
Also, there are some parts of the Wiki that use ./weed server - don't, as we'll be using a more "manual" approach.

Quick note on defaultReplication

You'll want to change that to whatever you need. The value is three digits, with each digit representing a different replication "type".
The Wiki doesn't do a fantastic job of explaining it, so here goes:

On each slave server, you're going to create volumes (in this guide, two volume services per slave since each slave has two external disks attached).
We're going to pretend that we're professionals with rack-mounted servers, and that each slave sits on its own rack.
We're also going to pretend we're professionals and say that the three total servers we have are in one datacenter.
That's what each of those digits represents. When you store a file onto SeaweedFS, the digits tell it how many extra copies of that file to keep, and where to keep them.
010 = 0 copies in another datacenter, 1 copy on another rack, 0 copies on another server in the same rack.
Similarly, 102 = 1 copy in another datacenter, 0 copies on another rack, and 2 copies on other servers in the same rack (here, that would mean the other volume service on the same slave).
Get it? Yeah, it's a bit confusing. Oh well. Our current value of 010 will replicate every file across both slave servers for redundancy.
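
As a sanity check once the cluster is up, you can ask the master to assign a file id with an explicit replication value; if the topology can't satisfy it, the master returns an error instead of an assignment. A quick sketch, assuming the master is running on its default port 9333:

curl "http://localhost:9333/dir/assign?replication=010"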

Back on track, to Master

Reload the systemd daemon, enable the service on startup, and start the service:

sudo systemctl daemon-reload
sudo systemctl enable seaweed.service
sudo service seaweed start
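
To make sure the master actually came up, you can poke its cluster status endpoint (a quick check against the default master port, 9333):

curl http://<private>:9333/cluster/status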

Now we'll need a way to easily access SeaweedFS. There are multiple ways to do this, but we'll be using the Amazon S3 API.
Note that while you can use WebDAV, you absolutely should not. It's horrendously slow and will cause everyone uploading to your OwnCloud to shift uncomfortably in their chair while they wait for "processing" to finish, only to be met with errors at the end even when the upload was successful - and then wait some more as OwnCloud disconnects and reconnects to SeaweedFS while their files disappear and reappear. It's not pleasant.

Let's create the master and filer config files:

sudo mkdir /etc/seaweedfs
sudo /opt/seaweedfs/weed/weed scaffold -config=master -output=/etc/seaweedfs/
sudo /opt/seaweedfs/weed/weed scaffold -config=filer -output=/etc/seaweedfs/

Now edit the top of the file /etc/seaweedfs/filer.toml:

[leveldb2]
enabled = true
dir = "/home/seaweed"

Small note on filer.toml

You might see, somewhere down the auto-generated file, some lines like the following:

// allows reads from slave servers or the master, but all writes still go to the master
[...]
// automatically use the closest Redis server for reads

Replace the // with # instead, or else you'll get errors in starting the service we're about to create. Lovely, isn't it?
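
If you'd rather not hunt those down by hand, a one-liner works (a sketch, assuming the offending // comments sit at the start of their lines):

sudo sed -i 's|^//|#|' /etc/seaweedfs/filer.toml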

Setting dir to the seaweed user's home directory ensures the service has the proper access to the files it needs. Speaking of, let's create that service.
Create the file /etc/systemd/system/seaweedfiler.service
In it, we'll add the following:

[Unit]
Description=Seaweed Filer
After=network.target

[Service]
Type=simple
Restart=on-failure
User=seaweed
Group=seaweed
ExecStart=/opt/seaweedfs/weed/weed filer
KillSignal=SIGTERM

[Install]
WantedBy=multi-user.target

Pretty simple. Reload the systemd daemon again and enable/start the service.

sudo systemctl daemon-reload
sudo systemctl enable seaweedfiler.service
sudo service seaweedfiler start
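
The filer listens on port 8888 by default. A quick way to confirm it's answering is to request a directory listing as JSON:

curl -s -H "Accept: application/json" "http://localhost:8888/?pretty=y"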

And now the S3 API. Again, fairly straightforward.
Create the file /etc/systemd/system/seaweeds3.service
In it, we'll add the following:

[Unit]
Description=Seaweed S3 API
After=network.target

[Service]
Type=simple
Restart=on-failure
User=seaweed
Group=seaweed
ExecStart=/opt/seaweedfs/weed/weed s3
KillSignal=SIGTERM

[Install]
WantedBy=multi-user.target

Reload, etc.

sudo systemctl daemon-reload
sudo systemctl enable seaweeds3.service
sudo service seaweeds3 start
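
The S3 gateway listens on port 8333 by default; confirm something is bound there before moving on:

ss -tln | grep 8333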

Finally, FUSE. This one will create a mount so we can look at uploaded files in case things go wrong.

sudo mkdir /mnt/weed
sudo chown -R seaweed:seaweed /mnt/weed

Edit the file /etc/fuse.conf and uncomment or add the following:

user_allow_other


Create the file /etc/systemd/system/seaweedfuse.service
In it, we'll add the following:

[Unit]
Description=Seaweed FUSE
After=network.target

[Service]
Type=simple
Restart=on-failure
User=seaweed
Group=seaweed
ExecStart=/opt/seaweedfs/weed/weed mount -dir=/mnt/weed -filer.path=/
KillSignal=SIGTERM

[Install]
WantedBy=multi-user.target

One last time. Reload, enable/start, etc etc.

sudo systemctl daemon-reload
sudo systemctl enable seaweedfuse.service
sudo service seaweedfuse start
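
If the mount succeeded, /mnt/weed should show up as a fuse mount and be browsable:

mount | grep /mnt/weed
ls -la /mnt/weed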

Finally, we'll install the AWS CLI and create a bucket as per the Wiki. Thankfully, there's an apt for that.

sudo apt install awscli
aws configure
  -> AWS Access Key ID: none
  -> AWS Secret Access Key: none
  -> Default region name: local
  -> Default output format: [None] (leave blank)
aws configure set default.s3.signature_version s3v4
aws --endpoint-url http://localhost:8333 s3 mb s3://owncloud
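
Confirm the bucket actually exists by listing buckets through the same endpoint:

aws --endpoint-url http://localhost:8333 s3 ls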

We're done with Master for now. Time to move on to slaves.

Slaves

Each slave, in this scenario, has two drives. Thus, we'll be creating two new mounts and two new SeaweedFS volume services on each slave.
The slaves are mirrors of each other (as they should be), so just copy the following across both, with the single edit mentioned later.

First, we'll need to find and mount those drives.

sudo ls -l /dev | grep sd

You should see sda, sdb, and sdc, maybe more or less depending on your install and hardware. I'm assuming the two blank drives that we want to erase are /dev/sdb and /dev/sdc.
We'll create the appropriate file systems and mount them now:

sudo mkfs.ext4 /dev/sdb
sudo mkfs.ext4 /dev/sdc
sudo ls -l /dev/disk/by-uuid/

Now that we know what each disk's UUID is, we can mount them in fstab appropriately.
Modify /etc/fstab and add the following:

/dev/disk/by-uuid/<UUID-1> /mnt/data1 auto nosuid,nodev,nofail,x-gvfs-show 0 0
/dev/disk/by-uuid/<UUID-2> /mnt/data2 auto nosuid,nodev,nofail,x-gvfs-show 0 0

And finally, create the mount points and mount the drives:

sudo mkdir /mnt/data1
sudo mkdir /mnt/data2
sudo mount -a
sudo chown -R seaweed:seaweed /mnt/data1
sudo chown -R seaweed:seaweed /mnt/data2
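
Double-check that both mounts took before handing them to SeaweedFS:

df -h /mnt/data1 /mnt/data2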

Now we can create the services.
Create the file /etc/systemd/system/seaweed1.service
In it, we'll add the following:
take note of the -ip, -mserver, -dataCenter, and -rack parameters

[Unit]
Description=Seaweed Slave 1
After=network.target

[Service]
Type=simple
Restart=on-failure
User=seaweed
Group=seaweed
ExecStart=/opt/seaweedfs/weed/weed volume -index=leveldb -dir=/mnt/data1 -max=100 -mserver=<master_private>:9333 -ip=<private> -port=8081 -dataCenter=dc1 -rack=rack1
KillSignal=SIGTERM

[Install]
WantedBy=multi-user.target

Remember those flags from the master replication section? This is where we define the data center and rack for those.
When you're doing slave #2, make sure to modify rack1 to be rack2 instead.
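
On slave #2, once both unit files from this section are in place (the second one is created just below), a quick sed saves the manual editing - a sketch that just swaps the -rack value in both units:

sudo sed -i 's/-rack=rack1/-rack=rack2/' /etc/systemd/system/seaweed1.service /etc/systemd/system/seaweed2.service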

Anyway, on to the next service. We'll enable them in a bit.
Create the file /etc/systemd/system/seaweed2.service
In it, we'll add the following:
take note of the -ip, -mserver, -dataCenter, and -rack parameters, plus the changed -dir and -port

[Unit]
Description=Seaweed Slave 2
After=network.target

[Service]
Type=simple
Restart=on-failure
User=seaweed
Group=seaweed
ExecStart=/opt/seaweedfs/weed/weed volume -index=leveldb -dir=/mnt/data2 -max=100 -mserver=<master_private>:9333 -ip=<private> -port=8082 -dataCenter=dc1 -rack=rack1
KillSignal=SIGTERM

[Install]
WantedBy=multi-user.target

Now we reload/enable/etc the services:

sudo systemctl daemon-reload
sudo systemctl enable seaweed1.service
sudo systemctl enable seaweed2.service
sudo service seaweed1 start
sudo service seaweed2 start

If all goes well, your slaves should now be connected to your master. Check for errors in all of the services, but assuming the sun and moon align just right you should be fine.
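
A quick way to verify is to ask the master for its topology, which should list both racks and all four volume services:

curl "http://<master_private>:9333/dir/status?pretty=y"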

OwnCloud, on Master

Every install method in the official documentation is painful, outdated, or just flat-out won't work (some of the methods will even brick your system), EXCEPT the appliances and Docker. Even then, we'll have to edit some broken stuff. We're going with Docker.

Step one is to install docker and docker-compose.

sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
sudo apt update
sudo apt install docker-ce
sudo usermod -aG docker ${USER}
sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Log out and back in for the new group to take effect. Now we stand up OwnCloud internally.

owncloud_version=10.3.2
owncloud_user=<admin user>
owncloud_pass=<admin pass>
cd ~
wget https://raw.githubusercontent.com/owncloud/docs/master/modules/admin_manual/examples/installation/docker/docker-compose.yml
cat << EOF > .env
OWNCLOUD_VERSION=$owncloud_version
OWNCLOUD_DOMAIN=localhost
ADMIN_USERNAME=$owncloud_user
ADMIN_PASSWORD=$owncloud_pass
HTTP_PORT=8080
EOF
docker-compose up -d
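
Give the containers a minute to start, then check on them (status.php is OwnCloud's built-in status endpoint and should return a small JSON blob):

docker-compose ps
curl -s http://localhost:8080/status.php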

And finally, we proxy port 80 to 8080 using Apache. You can use Nginx as well; I just personally prefer Apache.

sudo apt install apache2
sudo a2enmod proxy proxy_html proxy_http proxy_http2 remoteip headers

And create a file, /etc/apache2/sites-available/owncloud.conf with the following:

Define HOST <yourhost.com>
Define SUBDOMAIN <yoursubdomain>

<VirtualHost *:80>
        ServerName ${SUBDOMAIN}.${HOST}
        ServerAdmin admin@${HOST}

        # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
        # error, crit, alert, emerg.
        # It is also possible to configure the loglevel for particular
        # modules, e.g.
        #LogLevel info ssl:warn
        ErrorLog ${APACHE_LOG_DIR}/error-${SUBDOMAIN}.${HOST}.log
        CustomLog ${APACHE_LOG_DIR}/access-${SUBDOMAIN}.${HOST}.log combined

        ProxyPass         /  http://localhost:8080/ nocanon
        ProxyPassReverse  /  http://localhost:8080/
        AllowEncodedSlashes NoDecode

        ProxyPreserveHost On
        ProxyRequests     Off

        RequestHeader set X-Real-IP X-Forwarded-For
        RemoteIPHeader X-Forwarded-For
</VirtualHost>

# If you have an SSL cert, use it here
#<VirtualHost *:443>
#        ServerName ${SUBDOMAIN}.${HOST}
#        ServerAdmin admin@${HOST}

#        ErrorLog ${APACHE_LOG_DIR}/error-${SUBDOMAIN}.${HOST}.log
#        CustomLog ${APACHE_LOG_DIR}/access-${SUBDOMAIN}.${HOST}.log combined

#        SSLEngine on
#        SSLCertificateFile /var/www/${HOST}/ssl/cert.pem
#        SSLCertificateKeyFile /var/www/${HOST}/ssl/key.pem
#        SSLCACertificateFile /var/www/ssl/your-ca.pem

#        ProxyPass         /  http://localhost:8080/ nocanon
#        ProxyPassReverse  /  http://localhost:8080/
#        AllowEncodedSlashes NoDecode

#        ProxyPreserveHost On
#        ProxyRequests     Off

#        RequestHeader set X-Forwarded-Proto "https"
#        RequestHeader set X-Forwarded-Port "443"
#        RequestHeader set X-Real-IP X-Forwarded-For
#        RemoteIPHeader X-Forwarded-For
#</VirtualHost>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet

And now we'll enable the site and restart Apache.

sudo a2dissite 000-default.conf
sudo a2ensite owncloud.conf
sudo service apache2 restart
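
You can test the proxy from the box itself before touching DNS (substitute your real subdomain and host for the hypothetical cloud.example.com):

curl -I -H "Host: cloud.example.com" http://localhost/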

Trust me, this was far, far easier than any of the other install methods. Go to your master's public IP and type in your admin username and password.
Next, go to the marketplace (drop down on the top left) and install the S3 Object Storage add-on.
Finally, we'll set up external storage for S3.

Back on the console, edit /var/lib/docker/volumes/${USER}_files/_data/config/config.php and add the following at the bottom, just before the closing );:

  'objectstore' => [
        'class' => 'OCA\Files_Primary_S3\S3Storage',
        'arguments' => [
            // replace with your bucket
            'bucket' => 'owncloud',
            'options' => [
                // version and region are required
                'version' => '2006-03-01',
                'region'  => 'local',
                // replace key, secret and bucket with your credentials
                'credentials' => [
                    // replace key and secret with your credentials
                    'key'    => 'none',
                    'secret' => 'none',
                ],
                // replace the ceph endpoint with your rgw url
                'endpoint' => 'http://localhost:8333/',
                // Use path style when talking to ceph
                'use_path_style_endpoint' => true,
            ],
        ],
    ],
  'skeletondirectory' => '',

Finally, restart the docker stack.

cd ~
docker-compose down
docker-compose up -d
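
If the config.php edit went wrong (a stray comma, an unmatched bracket), OwnCloud will fail to come back up, so check the logs and the status endpoint after the restart. Assuming the service is named owncloud, as in the official compose file:

docker-compose logs --tail=50 owncloud
curl -s http://localhost:8080/status.php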

Firewall

Last but not least, the firewall rules.

Master

Edit /etc/ufw/after.rules and add the following at the bottom of the file, after COMMIT:
Note the use of <public> in the file below. Replace this with your public interface's name.

# Put Docker behind UFW
*filter
:DOCKER-USER - [0:0]
:ufw-user-input - [0:0]

-A DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DOCKER-USER -m conntrack --ctstate INVALID -j DROP
-A DOCKER-USER -i <public> -j ufw-user-input
-A DOCKER-USER -i <public> -j DROP
COMMIT

Add UFW rules.

private_network=<private>   # the name of your LAN-facing ("private") network interface
sudo ufw disable
sudo ufw default deny incoming
sudo ufw default deny outgoing
sudo ufw allow ssh
sudo ufw allow out 80/tcp
sudo ufw allow out 443/tcp
sudo ufw allow out 53
sudo ufw allow out 123/udp
sudo ufw allow out 9418/tcp
sudo ufw allow in on $private_network
sudo ufw allow out on $private_network
sudo ufw allow in 80/tcp
sudo ufw allow in 443/tcp
sudo ufw enable

Slaves

sudo ufw disable
sudo ufw default deny incoming
sudo ufw default deny outgoing
sudo ufw allow ssh
sudo ufw allow out 80/tcp
sudo ufw allow out 443/tcp
sudo ufw allow out 53
sudo ufw allow out 123/udp
sudo ufw allow out 9418/tcp
sudo ufw allow in on <private>
sudo ufw allow out on <private>
sudo ufw enable
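
On both the master and the slaves, confirm the resulting rule set looks sane:

sudo ufw status verbose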

And voila, you're done!
