
@zelig
Last active July 5, 2016 16:13
swarm end to end automated testing from scratch

assumptions

  • setup according to wiki
  • nodes.lst lists the host nodes of the remote swarm cluster (format: cicada@3.3.0.1, one per line)
  • a directory swarm is available locally (633MB)
du -m -d 0 /tmp/swarm/
650     /tmp/swarm/
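Before kicking anything off it is cheap to sanity-check that nodes.lst really contains one user@host entry per line. A minimal sketch — the demo file path and the regex are assumptions based on the cicada@3.3.0.1 example above:

```shell
# check that every line looks like user@host (e.g. cicada@3.3.0.1);
# demoed against a throwaway sample file, point it at the real nodes.lst in practice
printf 'cicada@3.3.0.1\ncicada@3.3.0.2\n' > /tmp/nodes.lst
bad=$(grep -cEv '^[A-Za-z0-9_-]+@[0-9.]+$' /tmp/nodes.lst)
[ "$bad" -eq 0 ] && echo OK || echo "malformed lines: $bad"
```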
# let's have an absolutely clean start
swarm remote-run nodes.lst sudo reboot

# clean up directories
swarm remote-run nodes.lst rm -rf bin bzz

# update the control scripts on each host
swarm remote-update-scripts nodes.lst

# update the binary on each host node
swarm remote-update-bin nodes.lst

# spawn a 2-instance local cluster on each of our 10 host nodes
# will also launch network monitoring 
swarm remote-run nodes.lst 'swarm init 2; swarm netstatconf sworm; swarm netstatrun'

# collect enodes from all instances on all hosts
swarm remote-run nodes.lst 'swarm enode all' | tr -d '"' | grep -v running > enodes.lst
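The pipeline strips the JSON-style quotes and drops status chatter, leaving bare enode URLs. A standalone demo on assumed sample output of swarm enode all (the exact status wording is a guess, only the "running" filter matches the command above):

```shell
# quoted enode line plus a status line, as `swarm enode all` might emit them;
# tr strips the quotes, grep -v drops any line mentioning "running"
printf '"enode://abc@10.0.0.1:30301"\ninstance 00 running\n' \
  | tr -d '"' | grep -v running
# -> enode://abc@10.0.0.1:30301
```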

# copy enodes file to each host node
for node in `cat nodes.lst`; do scp enodes.lst $node:; done

# inject all enodes as peers into every instance on every host
swarm remote-run nodes.lst swarm addpeers all enodes.lst

# restart an instance on the first two host nodes to get the blockchain rolling
swarm remote-run <(head -2 nodes.lst) swarm restart 00 --mine
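The <(head -2 nodes.lst) argument is bash process substitution: the command's output is exposed as a file path, so remote-run reads a node list containing only the first two hosts. A standalone illustration (the sample host names are placeholders):

```shell
# bash process substitution: <(...) behaves like a file name whose contents
# are the command's output -- here, only the first two hosts
printf 'host1\nhost2\nhost3\n' > /tmp/hosts.lst
cat <(head -2 /tmp/hosts.lst)   # prints host1 and host2 only
```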

2. transfer the directory to a remote host first

time scp -r /tmp/swarm tron@5.1.88.50:

real    4m39.005s
user    0m7.320s
sys     0m2.628s
time rsync -vaz /tmp/swarm
sent 630,073,833 bytes  received 223 bytes  2,428,031.04 bytes/sec
total size is 680,877,183  speedup is 1.08

real    4m18.951s
user    1m3.864s
sys     0m1.892s
rsync -avz /tmp/swarm/  ubuntu@54.93.54.238:swarm
sending incremental file list
created directory /tmp/swarm
./
index.html
Swarm_files/
...
sent 633,345,343 bytes  received 634 bytes  2,488,589.30 bytes/sec
total size is 680,877,183  speedup is 1.08
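rsync's reported speedup is simply total size divided by bytes actually sent over the wire (the saving from -z compression). Checking the arithmetic for the second run's figures:

```shell
# speedup = total size / bytes sent; numbers taken from the rsync output above
awk 'BEGIN { printf "%.2f\n", 680877183 / 633345343 }'   # -> 1.08
```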
time swarm up 00 album/ index.html
Upload file 'album/' to node 00...
"4607832ca9343ec977d5b25d034c9587f4186fc5d5f46998494126c679de169d"

real    0m0.749s
user    0m0.218s
sys     0m0.022s
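swarm up prints the hash wrapped in double quotes, so capturing it into a variable for later commands means stripping them. A sketch that simulates the quoted output from the run above rather than calling swarm itself:

```shell
# stand-in for the quoted hash `swarm up` printed above
out='"4607832ca9343ec977d5b25d034c9587f4186fc5d5f46998494126c679de169d"'
hash=$(echo "$out" | tr -d '"')
echo "$hash"   # bare hash, ready for `swarm download 00 $hash ...`
```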

time swarm download 00 4607832ca9343ec977d5b25d034c9587f4186fc5d5f46998494126c679de169d /tmp/album-00
download '4607832ca9343ec977d5b25d034c9587f4186fc5d5f46998494126c679de169d' from node 00 to '/tmp/album-00'
real    0m0.465s
user    0m0.213s
sys     0m0.025s

diff -r album/ /tmp/album-00/ > /dev/null && echo PASS
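The diff-and-echo idiom above can be wrapped in a tiny helper so every comparison reports PASS or FAIL consistently. A sketch, demoed on two throwaway directories (the names are arbitrary):

```shell
# PASS if two trees are byte-identical, FAIL otherwise
verify() { diff -r "$1" "$2" > /dev/null && echo PASS || echo FAIL; }

mkdir -p /tmp/a /tmp/b
echo hello > /tmp/a/f
echo hello > /tmp/b/f
verify /tmp/a /tmp/b   # -> PASS
```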

time the upload:

hash=$(swarm up 00 ./swarm/examples/album index.html | tr -d '"')

test download on all nodes:

swarm remote-run nodes.lst "rm -rf album-*; time swarm download 00 $hash /tmp/album-00; time swarm download 01 $hash /tmp/album-01; diff -r /tmp/album-* >/dev/null && echo PASS || echo FAIL"

test download from every node:

swarm remote-run nodes.lst 'rm -rf /tmp/album*; time swarm download 00 4607832ca9343ec977d5b25d034c9587f4186fc5d5f46998494126c679de169d /tmp/album-00; time swarm download 01 4607832ca9343ec977d5b25d034c9587f4186fc5d5f46998494126c679de169d /tmp/album-01; diff -r /tmp/album-* >/dev/null && echo PASS || echo FAIL'
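Since remote-run streams each host's stdout back, the aggregated transcript can be summarised into a single pass count. A sketch working on a canned stand-in for the real output (assuming PASS/FAIL each appear on their own line, as the commands above emit them):

```shell
# count PASS lines in the aggregated remote-run output;
# the sample transcript is a stand-in for the real command's stdout
printf 'PASS\nPASS\nFAIL\nPASS\n' > /tmp/results.txt
pass=$(grep -c '^PASS' /tmp/results.txt)
total=$(grep -c '' /tmp/results.txt)
echo "passed: $pass / $total"   # -> passed: 3 / 4
```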
swarm up 00 /tmp/swarm/ index.html