@zicklag
Last active May 25, 2021 00:31
SeaweedFS Swarm Stack
version: '3.5'
# WARNING: Haven't tested this version of this YAML exactly, but it *should* be correct.
services:
  master-1:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: "master -port=9333 -defaultReplication=001 -peers=localhost:9333,localhost:9334,localhost:9335"
    entrypoint: /usr/bin/weed
    # TODO: The storage mountpoint is /data for all services
    volumes:
      - master-1-data:/data
  master-2:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: "master -port=9334 -defaultReplication=001 -peers=localhost:9333,localhost:9334,localhost:9335"
    entrypoint: /usr/bin/weed
    volumes:
      - master-2-data:/data
  master-3:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: "master -port=9335 -defaultReplication=001 -peers=localhost:9333,localhost:9334,localhost:9335"
    entrypoint: /usr/bin/weed
    volumes:
      - master-3-data:/data
  volume-1:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: 'volume -mserver=localhost:9333,localhost:9334,localhost:9335 -port=8080'
    volumes:
      - volume-1-data:/data
  volume-2:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: 'volume -mserver=localhost:9333,localhost:9334,localhost:9335 -port=8081'
    volumes:
      - volume-2-data:/data
  filer:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: 'filer -master=localhost:9333,localhost:9334,localhost:9335 -port=8888'
    tty: true
    stdin_open: true
    volumes:
      - filer-data:/data
networks:
  hostnet:
    external: true
    name: host
volumes:
  # "driver: local" is implied on all of these volumes because driver is not specified
  master-1-data:
  master-2-data:
  master-3-data:
  volume-1-data:
  volume-2-data:
  filer-data:

opax7 commented Apr 2, 2019

Hey @zicklag, it looks like we're missing the volume mounts for the volume servers. You mind adding that to this gist?


zicklag commented Apr 2, 2019

There you go!


dkdndes commented Feb 1, 2020

Could you explain the setup of the volumes, in this context?


zicklag commented Feb 1, 2020

@dkdndes The YAML was actually missing the volumes section, which I have now added at the bottom. The data will be stored in stack-scoped Docker named volumes on the local filesystem.

For example, if you deployed this YAML in a stack named seaweedfs you would end up with the following volumes:

  • seaweedfs_master-1-data
  • seaweedfs_master-2-data
  • seaweedfs_master-3-data
  • seaweedfs_volume-1-data
  • seaweedfs_volume-2-data
  • seaweedfs_filer-data

The volume for each service will be stored locally on the server that the service is deployed on.

Edit:

This particular example was made assuming that all of the services were actually running on the same machine (even though it is a Docker Swarm YAML). You would want to structure the YAML a bit differently for running on a cluster:

If you were running in a cluster, you would more likely want to have one volume-server service that runs in global mode and a single volume named something like volume-server-data. That service would then run on every server in the cluster and store its data in the seaweedfs_volume-server-data volume on each host.

For the master servers you would need to use placement constraints (e.g. node labels) to pin them to specific hosts so that they don't lose their volumes when getting spun up on other hosts.
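A cluster-oriented layout along those lines might look roughly like this (an untested sketch extending the YAML at the top; the node.labels.seaweedfs-master label and service names are assumptions, and only one of the three masters is shown):

services:
  master-1:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: "master -port=9333 -defaultReplication=001 -peers=localhost:9333,localhost:9334,localhost:9335"
    entrypoint: /usr/bin/weed
    deploy:
      placement:
        constraints:
          # Assumed node label, set with e.g.:
          #   docker node update --label-add seaweedfs-master=1 <node>
          - node.labels.seaweedfs-master == 1
    volumes:
      - master-1-data:/data
  volume-server:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: "volume -mserver=localhost:9333,localhost:9334,localhost:9335 -port=8080"
    deploy:
      mode: global # one replica on every node in the cluster
    volumes:
      - volume-server-data:/data

With mode: global the volume server lands on every node, each writing into its own local seaweedfs_volume-server-data volume, while the placement constraint keeps each master on the host that owns its data.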


xirius commented Mar 9, 2020

@zicklag There is no filer redundancy? What happens if the filer node goes down? I guess it would affect all of the container storage. How do you make it resilient to that?


dkdndes commented Mar 9, 2020

@zicklag thank you for the update, any chance you could provide the updated Swarm version?


xirius commented Mar 9, 2020

@dkdndes I'm playing with this setup: Swarm Cluster on 3 nodes
The only problem I have is that only one filer works. That means that if the filer node goes offline, the mounted volumes become inaccessible. If anyone has an idea how to make the filers resilient, that would be awesome.


zicklag commented Mar 9, 2020

@dkdndes I haven't actually used SeaweedFS in a while and I don't have a Swarm cluster to test on at the moment. You are probably best off trying out @xirius's YAML.

@xirius, in order to scale the filer you have to set up an external filer store, such as Cassandra or one of the many other supported databases. Then you can scale to any number of filers as long as they all point at the same filer store. Of course, that means you now have to take into account how you are going to scale the chosen filer store as well.
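As a rough illustration (hedged; run weed scaffold -config=filer to generate the exact template for your SeaweedFS version), the store is configured in filer.toml and every filer replica points at the same backing database. For Cassandra, the relevant section looks something like the following, where the keyspace name and host addresses are placeholders:

[cassandra]
enabled = true
keyspace = "seaweedfs"
hosts = [
  "cassandra-1:9042",
  "cassandra-2:9042",
]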


xirius commented Mar 10, 2020

@zicklag oh I see, thanks. I had assumed the filer would replicate the metadata for redundancy.
