@zicklag
Last active May 25, 2021 00:31
SeaweedFS Swarm Stack
version: '3.5'
# WARNING: Haven't tested this version of this YAML exactly, but it *should* be correct.
services:
  master-1:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    # -defaultReplication=001 keeps one extra copy of each volume on another
    # server in the same rack; -peers lists all three masters so they can elect a leader.
    command: "master -port=9333 -defaultReplication=001 -peers=localhost:9333,localhost:9334,localhost:9335"
    entrypoint: /usr/bin/weed
    # TODO: The storage mountpoint is /data for all services
    volumes:
      - master-1-data:/data
  master-2:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: "master -port=9334 -defaultReplication=001 -peers=localhost:9333,localhost:9334,localhost:9335"
    entrypoint: /usr/bin/weed
    volumes:
      - master-2-data:/data
  master-3:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: "master -port=9335 -defaultReplication=001 -peers=localhost:9333,localhost:9334,localhost:9335"
    entrypoint: /usr/bin/weed
    volumes:
      - master-3-data:/data
  volume-1:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: 'volume -mserver=localhost:9333,localhost:9334,localhost:9335 -port=8080'
    volumes:
      - volume-1-data:/data
  volume-2:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: 'volume -mserver=localhost:9333,localhost:9334,localhost:9335 -port=8081'
    volumes:
      - volume-2-data:/data
  filer:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: 'filer -master=localhost:9333,localhost:9334,localhost:9335 -port=8888'
    tty: true
    stdin_open: true
    volumes:
      - filer-data:/data

networks:
  # All services share the node's host network, which is why they can reach
  # each other at localhost:<port>.
  hostnet:
    external: true
    name: host

volumes:
  # "driver: local" is implied on all of these volumes because driver is not specified
  master-1-data:
  master-2-data:
  master-3-data:
  volume-1-data:
  volume-2-data:
  filer-data:
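# A minimal deploy sketch (untested; assumes this file is saved as seaweedfs.yml
# and is run from a swarm manager node):
#
#   docker stack deploy --compose-file seaweedfs.yml seaweedfs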

xirius commented Mar 9, 2020

@zicklag There is no filer redundancy? What happens if the filer node goes down? I guess it would affect all of the container storage. How can it be made resilient to that?


dkdndes commented Mar 9, 2020

@zicklag thank you for the update. Any chance you could provide the updated Swarm version?


xirius commented Mar 9, 2020

@dkdndes I'm playing with this setup: Swarm Cluster on 3 nodes
The only problem I have is that only one filer works, which means that if the filer node goes offline, the mounted volumes become inaccessible. If anyone has an idea how to make the filers resilient, that would be awesome.


zicklag commented Mar 9, 2020

@dkdndes I haven't actually used SeaweedFS in a while and I don't have a Swarm cluster to test on at the moment. You are probably best off trying out @xirius's YAML.

@xirius, in order to scale the filer you have to set up an external filer store, such as Cassandra or one of the many other supported databases. Then you can scale to any number of filers, as long as they all point at the same filer store. Of course, that means you now have to think about how you are going to scale the chosen filer store as well.
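For reference, here is a rough, untested sketch of what that could look like in this stack. The Cassandra service, the second filer's port (8889), and the filer.toml mount path are illustrative, not something I've verified; the bind-mounted filer.toml has to exist on whichever node each filer lands on, and SeaweedFS expects the Cassandra keyspace and table to be created ahead of time.

  cassandra:
    image: cassandra:3.11
    networks:
      - hostnet
    volumes:
      - cassandra-data:/var/lib/cassandra
  filer-1:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    # Both filers read the same filer.toml, which switches the filer store from
    # the default embedded store to Cassandra (e.g. a [cassandra] section with
    # enabled = true, the keyspace name, and the Cassandra host addresses).
    command: 'filer -master=localhost:9333,localhost:9334,localhost:9335 -port=8888'
    volumes:
      - ./filer.toml:/etc/seaweedfs/filer.toml
  filer-2:
    image: chrislusf/seaweedfs:latest
    networks:
      - hostnet
    command: 'filer -master=localhost:9333,localhost:9334,localhost:9335 -port=8889'
    volumes:
      - ./filer.toml:/etc/seaweedfs/filer.toml

With that in place (plus a cassandra-data entry under the top-level volumes), a mount or gateway can point at either filer, so losing one filer node no longer takes the metadata with it.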


xirius commented Mar 10, 2020

@zicklag oh I see, thanks. I assumed the filer would replicate the metadata itself for redundancy.
