Dragonchain Deployment: Post-Mortem

@Wingman4l7 · September 6, 2019

In this post-mortem, I run down some of the sticking points I hit while deploying a Dragonchain and getting it to successfully register on Dragon Net. I suspect some of these are common, so they could prove helpful when putting together more comprehensive documentation for deploying such chains, or a "Tips & Tricks for Success" document.

For my environment, I used minikube running in a VirtualBox VM on a Windows machine. The deployment was done primarily from a Cygwin console.

  • The first sticking point was pods crashing due to insufficient memory. Minikube defaults to allocating 2GB. I initially attempted to rectify this by adding a flag when starting minikube, which seemed to prevent some of the crashes: minikube start --memory 4096
  • However, the es-master-0 pod continued to crash. Help on the Slack channel indicated that I should try SSHing into the minikube VM and raising the kernel's memory-map limit, which Elasticsearch requires: sudo sysctl -w vm.max_map_count=262144. This resolved the remaining pod crashes. Stability achieved! (Both fixes are sketched just below.)
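
For reference, here is roughly how those two fixes look from the console. Two caveats worth knowing: the --memory flag only takes effect when the VM is first created (an existing minikube VM must be deleted or reconfigured first), and a sysctl set this way does not survive a minikube restart, so it has to be reapplied after each start.

# allocate more RAM to the minikube VM; only takes effect on a fresh VM
$ minikube start --memory 4096

# raise the kernel memory-map limit that Elasticsearch requires
# (equivalent to SSHing into the VM and running the sysctl by hand)
$ minikube ssh 'sudo sysctl -w vm.max_map_count=262144'

# confirm the pods have settled
$ kubectl get pods --all-namespaces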

Next up was to get the chain to successfully register with Dragon Net.

  • I was getting a 403 error at first, which caused some initial confusion. This turned out to be due to a mismatch between the chain level set in opensource-config.yaml (which defaults to L1) and the chain level I had created in the web interface (L2). Part of the confusion was caused by the labelling on the Configuration Details page (screenshot: L1_bug). The relevant snippet is sketched below.
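
For illustration, the relevant part of opensource-config.yaml looks something like the following. Treat this as a sketch rather than a verbatim excerpt -- the exact key names may vary between versions of the Helm chart -- but the point stands: the level set here must match the level chosen when registering the chain in the web interface.

# opensource-config.yaml (sketch; exact key names may vary by chart version)
global:
  environment:
    LEVEL: "2"    # must match the chain level selected in the web interface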

  • Once that was fixed, all that was left was to ensure that the chain was reachable from the Internet. This command proved helpful for checking that the VM was correctly port-forwarding to minikube: netstat -na | grep ":30000" (a couple of follow-up checks are sketched below).
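
Beyond netstat, it can also help to confirm that Kubernetes is actually exposing the chain on the expected NodePort, and that the port answers from outside your network. Something along these lines (port 30000 is from my setup, and <your-public-ip> is a placeholder):

# confirm a service is exposed on the expected NodePort
$ kubectl get svc --all-namespaces | grep 30000

# from a machine outside your LAN, check that the port answers at all
$ curl -v http://<your-public-ip>:30000/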

  • Some pesky mucking about with port-forwarding through a WiFi router ensued, and then -- success!

$ curl https://matchmaking.api.dragonchain.com/registration/verify/23oQwsNzXnHdajoD3RvYrp61Mz2D6wzi47KDU1dQFh5Zx
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    88  100    88    0     0     88      0  0:00:01 --:--:--  0:00:01   401
{"success":"Dragon Net configuration is valid and chain is reachable. No issues found."}

The only general advice I would add for other users attempting this is to be careful with how the pods are restarted / recreated after updating settings in opensource-config.yaml; debugging confusion can ensue if components keep running, or restart, with the old settings. The helm flag --recreate-pods may be able to do this correctly; an example invocation is sketched below.
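
Something like the invocation below; the release name and chart path are illustrative placeholders, not canonical names. (Note that --recreate-pods is a Helm 2 flag; it was removed in Helm 3, where restarting the deployments via kubectl rollout restart serves the same purpose.)

# re-apply updated settings and force the pods to be recreated with them
$ helm upgrade my-dragonchain ./dragonchain-k8s \
    --values opensource-config.yaml \
    --recreate-pods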
