@owengc
Last active August 29, 2015 13:57
MAT240E Final Project Documentation
{
"metadata": {
"name": ""
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**MAT 240E Final Project Documentation**\n",
"===\n",
"\n",
"https://github.com/owengc/afv\n",
"\n",
"Project Description\n",
"---\n",
"\n",
"The goal of my project was to extend an existing real-time music visualization system I had written using OpenFrameworks. The original visualizer was controlled entirely by the RMS of the input signal, so I wanted to incorporate a number of the other low-level audio features we covered in class in order to add more depth to the visuals. I used the OpenFrameworks addon for Aubio written by Paul Reimer (https://github.com/paulreimer/ofxAudioFeatures), which, despite its lack of documentation, still helped me integrate Aubio with my existing code. Aubio also offers a suite of higher-level feature detection functions, including onset and pitch detection. However, because of the way my existing code was structured, I chose to stick with the low-level features for now. \n",
"\n",
"My strategy for expanding the number of control parameters was to break down the behavior of the visualizer and reassign the aspects that had previously been driven by thresholds and by short- and long-term histories of the RMS. For each new feature (overall spectral energy, high-frequency content, and the flux, spread, skewness, kurtosis, and slope of the spectrum) I stored the same information I had been collecting for RMS. These features update on every FFT frame, synced to the audio callback in OpenFrameworks. A large part of the still-ongoing process of using these features is determining useful ranges for their output and encapsulating that information in a way that makes it easy to swap features between different characteristics of the visuals. So far, the visual parameters I'm mapping the audio features to are the lifetime and size of the particles, a spawning control, a trigger to scatter the particles, and color, opacity, and 'texture' parameters that I haven't fully implemented. The texture parameter determines the number of sides of each particle, shifting them from triangles to squares to pentagons, all the way up to full circles. However, the OpenFrameworks addon I used to achieve this effect (https://github.com/openframeworks/openFrameworks/tree/master/addons/ofxVectorGraphics) does not seem to support alpha blending, which is a key improvement on the aesthetic of the original visualizer. \n",
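"\n",
"As a rough sketch of the per-frame mapping described above (pseudocode; the names here are illustrative, not taken from the actual project), each mapping boils down to normalizing a feature against its observed range and scaling a visual parameter by the result:\n",
"\n",
"    on each FFT frame:\n",
"        f = computeFeature(spectrum)                       # e.g. spectral flux\n",
"        update fMin, fMax from short- and long-term history\n",
"        fNorm = clamp((f - fMin) / (fMax - fMin), 0, 1)    # map into [0, 1]\n",
"        particle.size = minSize + fNorm * (maxSize - minSize)\n",
"\n",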
"\n",
"Future Directions\n",
"---\n",
"\n",
"The process of 'gluing' together the ofxAudioFeatures code and my heavily restructured code has had mixed results so far. The responsiveness of the visualizer to percussive sounds has decreased somewhat compared to my old code, which is a major problem I need to overcome as I continue to improve this system. Furthermore, the process introduced a number of bugs having to do with memory management. I have yet to determine whether these stem from bugs in the ofx addons, my own code, or some sort of compatibility issue among the various components. The problem manifests as intermittent 'sig abort' errors, but when the crash does not occur immediately upon launching the executable, the code appears stable. \n",
"\n",
"I hope to refine this system to the point that the feature abstraction is reliable enough to map to arbitrary visualizations, not just the one I've developed. I also hope that, in time, the low-level features can be combined with higher-level audio features that map more intuitively to note-level musical events. \n",
"\n",
"\n",
"\n",
"\n"
]
}
],
"metadata": {}
}
]
}