DevTalk: What does it take to make CoronaVis?

CoronaVis is a tool developed by volunteers of the Data Analysis and Visualization Group (Twitter: @dbvis).

Background

On Saturday, March 21st 2020, we received an email from a local physician asking us if we could build a tool to help physicians and other decision-makers better distribute patients within Germany and beyond. Two hours later, we had a group call and decided to start immediately, as time is a critical factor in this pandemic. In total, 20 people (Ph.D. students, postdocs, and professors) volunteered to help develop the tool.

Framework & Infrastructure

In our GIS lectures, we use a stack of Angular with Leaflet, Python/Flask, and PostgreSQL with the PostGIS extension to teach our students how to create useful GIS visualizations. We decided to build upon this stack as it was readily available. We also host our own GitLab instance with GitLab CI runners already configured. A month earlier, we had started building our own bare-metal Kubernetes cluster, which is intended to replace the already available but smaller k8s cluster used for lingvis.io.
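
To give an idea of how these pieces fit together, here is a minimal sketch of a Flask endpoint that serves GeoJSON from a PostGIS-enabled PostgreSQL database. The table, columns, and connection settings are purely illustrative and not taken from the CoronaVis code:

```python
# Minimal sketch: Flask endpoint returning GeoJSON from a PostGIS table.
# Table/column names and credentials are hypothetical, not from CoronaVis.
import json

import psycopg2
from flask import Flask, jsonify

app = Flask(__name__)


def get_connection():
    # Connection parameters are placeholders.
    return psycopg2.connect(host="localhost", dbname="gis", user="gis", password="gis")


@app.route("/hospitals")
def hospitals():
    sql = """
        SELECT name, ST_AsGeoJSON(geom) AS geometry
        FROM hospitals;
    """
    with get_connection() as conn, conn.cursor() as cur:
        cur.execute(sql)
        features = [
            {
                "type": "Feature",
                "geometry": json.loads(geometry),
                "properties": {"name": name},
            }
            for name, geometry in cur.fetchall()
        ]
    return jsonify({"type": "FeatureCollection", "features": features})


if __name__ == "__main__":
    app.run(debug=True)
```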

The early development process

Since we did not want to risk running into problems with the new cluster, we decided to first deploy everything to the older and well-tested lingvis.io cluster, which has been running for over two years now. We quickly split up into subteams to parallelize the development and DevOps process, and the teams formed as shown on our team page in the CoronaVis tool. One team wrote Python crawlers to gather the bed-capacity data from DIVI, which is now hosted here. Other crawlers were built to gather the COVID-19 case data provided by the Robert Koch Institut. Two other teams started developing the REST API services as well as the frontend. One team was and still is in charge of external communication with the hospital and other entities, which later also included press and Twitter. This lets the rest of us focus on development.

Of course, everything was quite chaotic, but daily conference calls for everyone and hanging out all day in subteam calls definitely helped. To speed up the process and tighten the feedback loop with the stakeholders, we configured GitLab CI to release on every push to master, while all development was and is done in feature branches. The first running version was already up and deployed on Monday, March 23rd. What an incredible team effort!
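
As an illustration of what such a crawler can look like, here is a minimal sketch of a periodic crawler that fetches a JSON endpoint and stores timestamped snapshots. The URL, the hourly schedule, and the file-based storage are placeholders; the actual CoronaVis crawlers target the DIVI and RKI sources and write into the database:

```python
# Minimal sketch of a periodic crawler, assuming a JSON endpoint.
# The URL, schedule, and storage are placeholders, not the actual
# DIVI/RKI endpoints or pipeline used by CoronaVis.
import json
import time
from datetime import datetime, timezone

import requests

DATA_URL = "https://example.org/covid19/cases.json"  # placeholder endpoint


def fetch_cases():
    response = requests.get(DATA_URL, timeout=30)
    response.raise_for_status()
    return response.json()


def store_snapshot(records):
    # In CoronaVis the data ends up in PostgreSQL/PostGIS; here we simply
    # write a timestamped file to keep the sketch self-contained.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    with open(f"cases_{stamp}.json", "w") as f:
        json.dump(records, f)


if __name__ == "__main__":
    while True:
        store_snapshot(fetch_cases())
        time.sleep(60 * 60)  # re-crawl hourly; the source data changes frequently
```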

New development process

Within the first week, we had 60 to 100 commits every day. Afterward, the pace slowed down a little, and the tool became more stable. We therefore changed our release process to three environments: review (feature branches), staging (master branch), and production (tags). Every branch is deployed under its own URL, which helps us discuss new features, especially when it comes to UI/UX.

Meanwhile, the new k8s cluster was finished, and we switched over without any downtime. Now we have a cluster with three etcd nodes, two master nodes, and six worker nodes. Each worker node has eight cores, 64 GB of memory, and 1 TB of disk space available. To prevent any licensing issues and to be able to style our map freely, we decided to host our own tile server, of course also in k8s. In the first version, we used a WebGL renderer, which proved to be really slow, especially on mobile devices. The tile server only covers Germany, as this alone already requires 20 GB of disk space, and the Docker images grow large quickly. In the production environment, everything is replicated at least twice to prevent any downtime. Today we can proudly state: no downtime, and 26,000 users in a single day (04/12/2020) are no problem at all.

We try to publish a new feature release roughly every two to three days. In between, we publish bugfix releases and refactor our code to keep it maintainable. For error tracking, we use Sentry, which helps us deal with the crazy number of browsers, versions, and devices out there. Today, with version 1.4.0 just released, we count 1,555 commits and 230 merged pull requests.
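
For completeness, here is a minimal sketch of how Sentry error tracking can be wired into a Flask service using the official Python SDK. The DSN, release string, and sample rate are placeholders, and the browser-side setup that handles the many devices mentioned above is not shown here:

```python
# Minimal sketch: wiring Sentry error tracking into a Flask service.
# The DSN and release string are placeholders, not CoronaVis values.
import sentry_sdk
from flask import Flask
from sentry_sdk.integrations.flask import FlaskIntegration

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    integrations=[FlaskIntegration()],
    release="coronavis@1.4.0",  # tag reported errors with the deployed version
    traces_sample_rate=0.1,     # sample a fraction of transactions for performance data
)

app = Flask(__name__)


@app.route("/healthz")
def healthz():
    return "ok"


@app.route("/boom")
def boom():
    # Unhandled exceptions are captured and reported to Sentry automatically.
    raise RuntimeError("test error for Sentry")
```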

What about testing?

We decided not to spend our time writing unit tests or any other automated tests. Time is precious, and developers are rare. Plus, with data that changes hourly (and APIs that change from time to time), unit testing becomes quite an effort, as the tests have to be continuously maintained. Instead, two people are in charge of rigorously testing the tool and cross-checking everything against the data sources we use as well as external data sources. Furthermore, they point out UI/UX problems and check every version on different devices.

The future

It is quite hard to make any certain predictions and plans during this pandemic. What is certain is that we are all committed to continuing the development of CoronaVis. We have countless ideas on how to extend and improve the tool. We are also currently discussing how we can open-source it. Furthermore, we want to add more data from other countries to the tool, as we believe that this pandemic can only be solved by standing together.

For any questions, please message me (@wjentner) or us (@dbvis) on Twitter. You can also reach us via email at support@dbvis.inf.uni-konstanz.de.
