@ForbesLindesay
Created April 17, 2014 12:16

Building a MongoDB GUI

A small team at Red Gate recently set out to build a graphical interface for querying and interacting with MongoDB in just one week. As it was a new project, we got to experiment with some of the newer libraries and techniques for building rich web apps in node.js. Here are some of the more interesting things we found.

Architecture

We used express as the framework for the server side application, and browserify (via browserify-middleware) to help us structure the client side application. The beauty of browserify is that it allows us to use node.js style modules on the client side. Rather than choose any specific framework (e.g. angular or ember) we were able to mix and match just the things we needed. This makes it possible to pick the best tool for each individual job.
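To give an idea of the setup, here's a minimal sketch of the server side wiring (the paths and the 'app' view name are illustrative, not the project's actual code):

'use strict';

var express = require('express');
var browserify = require('browserify-middleware');

var app = express();

// Compile and serve the client side entry point as a single bundle,
// so the client code can use node.js style require() calls.
app.get('/client.js', browserify(__dirname + '/client/index.js'));

// Every other GET renders the single page app shell and leaves
// routing to the client ('app' is an assumed view name).
app.get('/*', function (req, res) {
  res.render('app');
});

app.listen(3000);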

The client side of the application consists of a main entry point, which uses page.js to handle routing on the client side. This uses full HTML5 push state, and we were careful to make sure that all the relevant state was stored in the URL. The advantage of using this technique in a client side application is that all the browser features work as users expect them to (e.g. back and forwards buttons and opening pages in new tabs).
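A sketch of what that entry point might look like (the route shapes, DatabaseViewModel, and the setCurrentViewModel helper are our illustrations, not the project's actual code):

'use strict';

var page = require('page');
var ConnectionsViewModel = require('./view-models/connections');
var DatabaseViewModel = require('./view-models/database'); // hypothetical

page('/', function () {
  setCurrentViewModel(new ConnectionsViewModel());
});

page('/connections/:id', function (ctx) {
  // Everything needed to render the page travels in the URL, so
  // back/forward buttons and opening in new tabs behave as expected.
  setCurrentViewModel(new DatabaseViewModel(ctx.params.id));
});

// Start routing using HTML5 push state.
page();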

We created a "View Model" for each page, and for each component that recurred on multiple pages. For example, the ConnectionsViewModel looked like this:

'use strict';

var ko = require('knockout');
var data = require('../data');
var ConnectionViewModel = require('./connection');

function ConnectionsViewModel() {
  this.templateName = 'connections';

  // Fields for creating a connection
  this.name = ko.observable();
  this.connectionString = ko.observable();
  this.saving = ko.observable(false);

  // Field to store the list of saved connections
  this.connections = ko.observableArray([]);

  // Populate the list of saved connections
  data.connections.get().done(function (connections) {
    this.connections(connections.map(function (connection) {
      return new ConnectionViewModel(connection, this);
    }.bind(this)));
  }.bind(this));
}

ConnectionsViewModel.prototype.add = function() {
  //save the connection to the database
  this.saving(true);
  data.connections.create({
    name: this.name(),
    connectionString: this.connectionString()
  }).done(function (connection) {
    // update the UI to show the saved connection without refreshing
    this.connections.push(new ConnectionViewModel(connection, this));
    // clear the name and connection string
    this.name("");
    this.connectionString("");
    this.saving(false);
  }.bind(this), function (err) {
    this.saving(false);
    alert('There was an error saving the connection');
    throw err;
  }.bind(this));
};

module.exports = ConnectionsViewModel;

As you can see, we are using knockout to do the data binding, largely because the team had some prior experience with it. The first property, templateName, is a constant that appears on every view model and determines which view to use when displaying that view model in the user interface.
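For instance, the main entry point might consume templateName like this (the currentPage observable and the markup in the comment are our illustration):

'use strict';

var ko = require('knockout');

// The page shell contains a single bound element, something like:
//
//   <div data-bind="with: currentPage">
//     <div data-bind="template: {name: templateName, data: $data}"></div>
//   </div>
//
var app = {currentPage: ko.observable(null)};

// The router calls this whenever the URL changes; knockout then swaps
// in the template named by the view model's templateName property.
function setCurrentViewModel(viewModel) {
  app.currentPage(viewModel);
}

ko.applyBindings(app);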

Another design decision you can see here is that we placed all the code that interacts with the network in the data module. Keeping all this code in one place helps with maintainability, but the biggest advantage is that it makes it easier to mock that code out and unit test just the view models.

The final thing worth noting in this sample is that we use promises as the return value for all asynchronous functions. This helps keep the code tidy, and makes it much easier to keep the error handling logic working as expected.
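To illustrate both points, here's a sketch of the data module's shape, assuming a small XMLHttpRequest wrapper and the promise library (the URLs are made up):

'use strict';

var Promise = require('promise');

// Tiny ajax helper: every call returns a promise for the parsed JSON.
function getJSON(method, url, body) {
  return new Promise(function (resolve, reject) {
    var xhr = new XMLHttpRequest();
    xhr.open(method, url);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onload = function () {
      if (xhr.status >= 200 && xhr.status < 300) {
        resolve(JSON.parse(xhr.responseText));
      } else {
        reject(new Error(method + ' ' + url + ' failed with ' + xhr.status));
      }
    };
    xhr.onerror = function () { reject(new Error('network error')); };
    xhr.send(body ? JSON.stringify(body) : null);
  });
}

// All network access goes through this module, so tests can swap it
// out and exercise the view models in isolation.
exports.connections = {
  get: function () { return getJSON('GET', '/api/connections'); },
  create: function (connection) {
    return getJSON('POST', '/api/connections', connection);
  }
};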

Pagination

One thing we knew we'd need to address was pagination of results. We knew this is something that's often not done very well, and we wanted to experiment with better ways of doing it. After a lengthy session at a whiteboard drawing all sorts of crazy ideas, we decided to try using the browser's native ability to scroll, loading only the elements that would actually be displayed.

We fixed the location of all the UI elements on the page, so that they wouldn't move as you scrolled. We then created a blank <div> element that was the height of all the rows combined. When you scrolled, you were in fact scrolling this background <div>, not the UI. We used JavaScript to detect the rows that should be visible, and just rendered those in a table in the UI.
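The core of that windowing logic is small; a sketch, assuming a fixed row height and loadRows/renderRows helpers that fetch and display a range of rows:

'use strict';

var ROW_HEIGHT = 30; // pixels per row (assumed fixed)
var viewport = document.getElementById('viewport');
var spacer = document.getElementById('spacer');

// Size the blank spacer <div> as if every row were rendered, so the
// native scrollbar reflects the full result set.
function init(totalRows) {
  spacer.style.height = (totalRows * ROW_HEIGHT) + 'px';
}

viewport.addEventListener('scroll', function () {
  // Work out which rows the scroll position makes visible...
  var first = Math.floor(viewport.scrollTop / ROW_HEIGHT);
  var visible = Math.ceil(viewport.clientHeight / ROW_HEIGHT);
  // ...and render just those into the table.
  loadRows(first, first + visible).done(renderRows);
});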

This worked really well on small data sets; it felt simple and fast to use. We added a few niceties, like a box containing the page number, so you could see where you were and jump to a specific page, and some caching to keep scrolling smooth. We had a problem with memory usage and download speeds, though. It was possible to scroll through very fast, and we had to keep downloading new data from the user's database to keep up. In the end, it just wasn't possible to keep enough data in memory for this UI to feel fast rather than glitchy.

Since we were running out of time to try anything clever, we reluctantly went back to the tried and tested method of next/previous buttons, plus a box to let you jump to a specific page. We also removed all the caching, loading the current page as and when it was asked for. This actually worked remarkably well: pagination was fast enough that it rarely became a significant frustration. The lesson seems to be that if you need to implement pagination, do something simple, but then try really hard to make it really fast.
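The simple version needs little more than MongoDB's skip and limit; a sketch, assuming a promise-returning driver and an illustrative page size:

'use strict';

var PAGE_SIZE = 50; // illustrative

// Fetch exactly one page, as and when it's asked for; no caching.
function getPage(collection, pageNumber) {
  return collection.find({})
    .skip(pageNumber * PAGE_SIZE)
    .limit(PAGE_SIZE)
    .toArray();
}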

Authentication

We used passport for the authentication. This makes it incredibly easy to have users log in via practically any means. For the prototype we used Mozilla's Persona. This was great for getting up and running fast, but we might want to consider an alternative provider if this system moves into production.
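A sketch of the Persona wiring via the passport-persona strategy (the audience URL and the findOrCreateUser lookup are placeholders, not the project's actual code):

'use strict';

var passport = require('passport');
var PersonaStrategy = require('passport-persona').Strategy;

passport.use(new PersonaStrategy({
  audience: 'http://localhost:3000'
}, function (email, done) {
  // findOrCreateUser is a placeholder for the real user lookup
  findOrCreateUser(email).done(function (user) {
    done(null, user);
  }, done);
}));

// The client posts the Persona assertion here after login
// (app is the express application from earlier).
app.post('/auth/persona',
  passport.authenticate('persona'),
  function (req, res) {
    res.redirect('/');
  });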

To store settings and connection details, we used MongoDB on the backend. This helped us to dogfood the application as much as possible.

Parsing mongo queries and adding autocomplete

We used the mongod npm package to connect to the database. Of course, we needed to be very careful to avoid running untrusted code on our server, so we opted to parse the MongoDB queries and then re-build them against mongod. This gave us better flexibility, and was also useful when it came to building auto-complete support.

We used esprima to parse the query and then built that into our own data model that was easy to evaluate against mongod. This lets you use most of the JavaScript syntax you'd be used to, as long as you don't include functions, loops, etc. The key is that this query syntax is the same as the one used by the command line tools and the documentation.
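A much-simplified sketch of how that step could work: esprima produces an AST, and only the handful of node types a plain query needs are evaluated, with everything else rejected:

'use strict';

var esprima = require('esprima');

// Recursively turn a whitelisted subset of the AST back into values.
function evaluate(node) {
  switch (node.type) {
    case 'ObjectExpression':
      var obj = {};
      node.properties.forEach(function (prop) {
        // keys may be identifiers ({age: 1}) or literals ({'age': 1})
        var key = prop.key.name !== undefined ? prop.key.name : prop.key.value;
        obj[key] = evaluate(prop.value);
      });
      return obj;
    case 'ArrayExpression':
      return node.elements.map(evaluate);
    case 'Literal':
      return node.value;
    default:
      // Functions, loops etc. never get executed, just rejected.
      throw new Error('Unsupported syntax: ' + node.type);
  }
}

// e.g. parseQuery('{age: {$gt: 21}}') => {age: {$gt: 21}}
function parseQuery(source) {
  var ast = esprima.parse('(' + source + ')');
  return evaluate(ast.body[0].expression);
}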

As well as auto-completing the methods for the MongoDB query, we wanted to auto-complete the properties you could filter on. To do this we needed a schema, which seems like an odd thing to suggest in a MongoDB world; the reality, though, is that most collections do have a schema. We created a tool that can infer a schema from a JSON object, then went a step further and supported combining two such schemas to get a schema that matches both objects. If a property exists in one schema but not the other, it gets marked as optional. If it has the same type in both schemas, we pass it straight through. If it has different types in the two schemas, we simply mark it down as being one thing or the other. We can then query the first few rows of each collection to build up a schema that's suitable for auto-complete and for displaying a tree view to users. Of course this will miss a lot when the schemas are very varied, but it's a good approximation.
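A sketch of what that inference and merging could look like for flat documents (the real tool presumably handles nesting and more detail):

'use strict';

// Infer a (flat) schema from a single document.
function inferSchema(doc) {
  var schema = {};
  Object.keys(doc).forEach(function (key) {
    schema[key] = {
      types: [Array.isArray(doc[key]) ? 'array' : typeof doc[key]],
      optional: false
    };
  });
  return schema;
}

// Combine two schemas into one that matches both documents.
function mergeSchemas(a, b) {
  var merged = {};
  Object.keys(a).concat(Object.keys(b)).forEach(function (key) {
    if (merged[key]) return; // already handled via the other schema
    if (!a[key] || !b[key]) {
      // present in only one document, so mark it optional
      var present = a[key] || b[key];
      merged[key] = {types: present.types, optional: true};
    } else {
      // present in both: keep the union of the observed types
      var types = a[key].types.slice();
      b[key].types.forEach(function (t) {
        if (types.indexOf(t) === -1) types.push(t);
      });
      merged[key] = {
        types: types,
        optional: a[key].optional || b[key].optional
      };
    }
  });
  return merged;
}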
