CarolR

The aim of CarolR is to play synchronised, real-time, polyphonic music from an orchestra of devices in the same area, to create an immersive, collaborative, Christmas music experience.

Challenges

In the design of such a system, we had several challenges to overcome:

  • Live performance music output
  • Encoding and programming of synchronised, multi-track music
  • Live communication of what to play to a collection of devices
  • Time synchronisation of the music being played across devices to create an orchestrated experience

Given the time constraint of the hack day, we quickly settled on developing a browser-based application as it would allow us to easily run the application on a range of mobile devices without the need for digging into multiple native platform APIs.

The structure of what we wanted to create can be described as a conductor device, which decides which notes should be played and when, and any number of musician devices, each following the conductor's lead and emitting notes from their instruments.

[REPLACE ME: Picture of architecture of orchestra]

Music Encoding and Programming

We experimented with creating a custom low-bandwidth protocol for sending the instructions from the conductor device to the instrument devices, but decided that harnessing the MIDI protocol would give us several desirable properties:

  • Support for real-time music generation for an interactive experience
  • The bandwidth required was going to be acceptably low
  • There is an abundance of music with permissive rights available

We found the MIDI.js open source library, which contained two key components for our implementation: programmable MIDI instruments and a MIDI file decoder. The decoder could be run on the conductor's device to load an existing piece of music, then the notes could be sent to the musician devices, which could play them via their instruments.

Using the MIDI Instruments

The MIDI.js library exposed an API which allowed us to:

  1. Initialise an instrument in the browser (loading the sounds required to mimic a specific instrument, handling cross-browser issues)
  2. Start or stop playing a note on that instrument at a specific pitch and volume
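
As a rough sketch rather than our exact code, those two calls look roughly like this with the MIDI.js API: `MIDI.loadPlugin` fetches the soundfont for an instrument, and `MIDI.noteOn`/`MIDI.noteOff` start and stop a note on a channel (the soundfont path and instrument name below are illustrative values).

```javascript
// Sketch of initialising a MIDI.js instrument in the browser and playing one note.
MIDI.loadPlugin({
  soundfontUrl: "./soundfont/",
  instrument: "acoustic_grand_piano",
  onsuccess: function () {
    MIDI.setVolume(0, 127);        // channel 0, full volume
    MIDI.noteOn(0, 60, 127, 0);    // channel, MIDI note (middle C), velocity, delay (s)
    MIDI.noteOff(0, 60, 0.75);     // stop the note 0.75 seconds later
  }
});
```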

Decoding MIDI files

The MIDI.js decoder passes the decoded instructions to methods defined on a generalised interface, where the implementation is chosen depending on the browser being used. To intercept the decoded messages, we created our own implementation of the interface which, rather than executing the instruction locally, can send the MIDI instructions to the musicians.
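
The exact shape of that interface varies between MIDI.js versions, so the following is only an illustrative sketch: an object exposing `noteOn`/`noteOff` handlers that forward each decoded instruction over the network, via a hypothetical `broadcast` helper, instead of synthesising sound locally.

```javascript
// Hypothetical conductor-side "output" that relays decoded MIDI instructions
// to the musicians rather than playing them on this device.
var forwardingOutput = {
  noteOn: function (channel, note, velocity, delay) {
    broadcast({ type: "noteOn", channel: channel, note: note, velocity: velocity, delay: delay });
  },
  noteOff: function (channel, note, delay) {
    broadcast({ type: "noteOff", channel: channel, note: note, delay: delay });
  }
};

// Stand-in for whichever transport is in use (ultrasonic or web sockets, below).
function broadcast(message) {
  socket.emit("midi", message);
}
```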

Communication and Synchronisation

Ultrasonic Broadcasting

Our first line of investigation was to use the sonicnet.js library for ultrasonic networking to deliver instructions from a single device to any devices in audible range. This would allow us to run the orchestra without the need for any GSM or other wireless network.

We managed to get a simple demonstration working with a reduced MIDI protocol that allowed us to play a single octave of notes on a single instrument. However, the bandwidth was too low: sending too many instructions at once caused concurrency issues, and we were limited to a monophonic tune at approximately 60 BPM, essentially one distinct note per second.
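
For illustration, a minimal version of that reduced protocol might look like the following, assuming sonicnet.js's `SonicSocket`/`SonicServer` API: each of the twelve notes in the octave is mapped to one character of a shared alphabet, so a whole instruction fits in a single character.

```javascript
// One character per semitone of a single octave keeps ultrasonic messages tiny.
var alphabet = "abcdefghijkl";

// Conductor side: transmit a single character for each note to play.
var sonicSocket = new SonicSocket({ alphabet: alphabet });
sonicSocket.send("a"); // e.g. middle C

// Musician side: listen for characters and trigger the local instrument.
var sonicServer = new SonicServer({ alphabet: alphabet });
sonicServer.on("message", function (message) {
  var note = 60 + alphabet.indexOf(message); // offset from middle C
  MIDI.noteOn(0, note, 127, 0);
  MIDI.noteOff(0, note, 0.5);
});
sonicServer.start();
```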

Web Sockets

Our backup plan was to use web sockets to send the instructions for which notes to play. We looked at using Microsoft's SignalR stack, as it was familiar to both of us; however, it seemed like quite a heavy requirement for something that was only going to push out a single type of event (MIDI data). In the spirit of a 'hack' day, we decided to go with something off our beaten track and settled on Node.js and the excellent socket.io node package.
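
The relay itself is tiny. As a sketch, assuming a socket.io 1.x-style API and an arbitrary port, the server simply rebroadcasts every 'midi' event from the conductor to all connected musicians:

```javascript
// Minimal Node.js relay: whatever MIDI instruction the conductor emits is
// rebroadcast to every other connected socket (the musicians).
var io = require("socket.io")(3000);

io.on("connection", function (socket) {
  socket.on("midi", function (message) {
    socket.broadcast.emit("midi", message);
  });
});
```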

Bringing It All Together

Having created musician devices that could be sent notes to play via web sockets, built a conductor device that could read notes from a MIDI file, and set up a Node server with socket.io, we had everything in place to start co-ordinating music.

After this, to make things more interesting and to highlight that this is not simply all devices starting playback of the same track at the same time, we decided to split the music into its individual parts as defined in the MIDI file: one device playing the bass line, another the melody, and so on. This turned out to be really easy, as all we needed to do was filter playback of the MIDI notes on each client by channel (sketched below). We added a text field and a button to change the channel on the device, and a little Santa gif to give it the famous 'graphics once over'.
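
A sketch of that musician-side filter, assuming the message shape used in the earlier sketches, with the channel taken from the text field on the page:

```javascript
// Musician client: connect, pick a part (MIDI channel), and only play the
// notes addressed to that channel.
var socket = io();
var myChannel = 0; // updated from the text field and button on the page

socket.on("midi", function (message) {
  if (message.channel !== myChannel) return; // not our part
  if (message.type === "noteOn") {
    MIDI.noteOn(message.channel, message.note, message.velocity, message.delay);
  } else {
    MIDI.noteOff(message.channel, message.note, message.delay);
  }
});
```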

This took us to a little after 5, and with the demo due shortly, we started asking people to connect their devices. A small let-down was that the iPhones seemed to have some issues with MIDI.js playback, but we still managed a good demo using Android phones and our work laptops (which had the benefit of better speakers), even if the synchronisation could have done with a little more work.

We are both really impressed with the maturity and ease of use of the whole JavaScript stack, particularly the Node package manager. We basically got from inception to a deployed audio web app in the course of a working day. Node.js is an awesome technology, and we highly recommend that anyone who hasn't done so yet go and have a play with it.

It was an awesome hack day, and we were quite proud to come away with second place for our little implementation of the distributed device orchestra.

Extensions

Where do we want to go from here? We see a lot of potential in the technologies we've put together here, and we hope to get some more time to polish the overall experience as well as looking more into:

  1. Bluetooth LE + iBeacon: relative, location-aware grouping and low-level heartbeat synchronisation
  2. Adding more instruments to create a richer sound
  3. Integrating visualisations with the sounds being played
  4. Creating next-level music immersion