About a month ago I made a small audio-reactive composition that tries to achieve visual richness through a simple concept: brownian motion. The piece was written with a particular soundtrack in mind, Kriespiel by Patrick Wolf. Although the composition reacts to any audio input, its feel and timing might not be appropriate for other audio sources.
The piece is complex enough that it has to be rendered offline; the most prominent moments, in particular, are too heavy to render in real time. The composition simply reacts to the energy of several sound frequencies, analyzed with Minim, Processing's sound library. Since Minim has no mechanism to synchronize its analysis with Processing's actual rendering frame rate, the audio is first pre-processed and the result is then fed to a second sketch that simply saves each frame of the composition. Audio and video were mixed together outside Processing.
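To make the idea concrete, here is a minimal sketch of the pre-processing stage, not the original process_audio code: it plays audio.mp3, runs an FFT over the mix buffer once per draw() call, and logs the band energies as one line per frame. The frame rate and the exact file format of fft.txt are assumptions for illustration.

```java
import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioPlayer player;
FFT fft;
PrintWriter out;

void setup() {
  size(100, 100);
  frameRate(30);                        // one analysis line per rendered frame (assumed rate)
  minim = new Minim(this);
  player = minim.loadFile("audio.mp3", 1024);
  fft = new FFT(player.bufferSize(), player.sampleRate());
  out = createWriter("fft.txt");
  player.play();
}

void draw() {
  if (!player.isPlaying()) {            // the song finished: flush the file and quit
    out.flush();
    out.close();
    exit();
    return;
  }
  fft.forward(player.mix);              // analyze the buffer that is currently playing
  StringBuilder line = new StringBuilder();
  for (int i = 0; i < fft.specSize(); i++) {
    line.append(fft.getBand(i)).append(' ');
  }
  out.println(line.toString().trim());  // one line of band energies per frame
}
```

Because the analysis is captured per draw() call and written to a plain text file, the rendering sketch can later consume exactly one line per frame at its own pace.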
That said, you are free to scratch, reuse or even rip to pieces the code behind the composition. Please note that the code was never meant for distribution: it's buggy, undocumented and not very elegant.
A note about the distribution: you'll find two sketches. The process_audio sketch takes an audio.mp3 (not distributed) from the data folder and generates a fft.txt with the analyzed sound; just let it run until the sound finishes. The brown3 sketch takes a fft.txt and starts the rendering! An example fft.txt is already included, analyzed from A Boy and a Portrait by Yoko Kanno, though the particular song doesn't really matter.
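For reference, this is a rough sketch of the rendering side, not the actual brown3 code: it reads fft.txt, treats each line as one frame of band energies, draws something driven by those values (a stand-in bar display here instead of the brownian composition), and saves every frame as a numbered image for muxing with the audio afterwards. The output path and the one-line-per-frame format are assumptions.

```java
String[] frames;   // each entry is one analysis frame from fft.txt
int current = 0;

void setup() {
  size(640, 480);
  frameRate(30);   // should match the rate used by the analysis sketch
  stroke(255);
  frames = loadStrings("fft.txt");
}

void draw() {
  if (current >= frames.length) {
    exit();        // no more analysis data: the song is over
    return;
  }
  float[] bands = float(split(frames[current], ' '));
  background(0);
  // Stand-in for the actual composition: one bar per frequency band.
  for (int i = 0; i < bands.length; i++) {
    float x = map(i, 0, bands.length, 0, width);
    line(x, height, x, height - bands[i] * 10);
  }
  saveFrame("frames/frame-####.png");  // hypothetical output path; frames are assembled into a video later
  current++;
}
```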
Another small note: you may notice that the original Kriespiel video has some flickering. This wasn't intentional, although it happens to suit the piece very well. The problem was that I structurally modified the data structures (adds and removes) while I was drawing the shapes corresponding to their elements. The solution: always update all your data structures first, and only then draw the shapes, as the sketch below illustrates. Pretty straightforward.
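A tiny illustration of the fix, with made-up particle logic rather than the composition's own: all structural changes to the list happen in an update pass, and only once the list is stable does a separate pass draw its elements, so nothing is drawn from a half-modified structure.

```java
ArrayList<PVector> particles = new ArrayList<PVector>();

void setup() {
  size(400, 400);
  stroke(255);
}

void draw() {
  background(0);

  // 1. Update pass: mutate the data structure freely (adds and removes).
  for (int i = particles.size() - 1; i >= 0; i--) {
    PVector p = particles.get(i);
    p.add(random(-2, 2), random(-2, 2));        // brownian step
    if (p.x < 0 || p.x > width || p.y < 0 || p.y > height) {
      particles.remove(i);                      // structural change, but no drawing yet
    }
  }
  if (particles.size() < 100) {
    particles.add(new PVector(width / 2, height / 2));
  }

  // 2. Draw pass: the list is stable now, so every element is drawn exactly once.
  for (PVector p : particles) {
    point(p.x, p.y);
  }
}
```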