The Data Lenses is a visual exploration tool for several attributes of the same dataset: buses in Singapore. As described in the video, it lets the user select various types of data while speeding up and slowing down the simulation. The data it reads is not pre-processed; it is aggregated on the fly at the level of individual buses.
Nevertheless, what interests me more here is the experiment that this tool performs on visualization itself. I wanted to provide a wide range of zoom levels without the classical pan-and-zoom interaction that often gets me lost. The classical solution for this is the fish-eye lens. The problem is that a typical fish-eye cannot carry a zoom range as great as this one: from the overview of the island down to the close contemplation of a single street. Other solutions simply overlay a magnified circle on the point of interest, but this obviously occludes the periphery of the magnified location, destroying the experience of browsing oriented by one's surroundings.
After trying to distort the space around a point in all sorts of ways, I came up with a distortion strategy that implements a lens equation of a somewhat surreal nature. A point is distorted as a function of its current radius to the center of the lens. This distortion rate varies with an arctangent and a square root (after trying all sorts of combinations of Gaussians and quadratic functions). In the end, what I was looking for was a function with a nice, sexy shape: one that starts at zero, rises to a maximum, decreases with an almost Gaussian falloff, and then quickly approaches zero like a hyperbola. My quest resulted in the following function, which expresses the distorted radius to the lens center (as opposed to the non-distortion in blue: y = x).
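The exact coefficients of the function are not reproduced in the text, so the following is only a hypothetical sketch of a radial mapping with the same qualitative shape: it combines an arctangent and a square root, the displacement from the identity y = x starts at zero at the lens center, rises to a maximum, and decays quickly so that distant points are left untouched. The names `lens_radius`, `strength`, and `falloff` are my own illustration, not the actual implementation.

```python
import math

def lens_radius(r, strength=0.6, falloff=2.0):
    """Map a radius r (distance to the lens center) to a distorted radius.

    Hypothetical example of the technique: the displacement
    atan(sqrt(r)) * exp(-falloff * r^2) is zero at the center,
    peaks partway out, and decays rapidly, so the mapping
    converges back to the identity y = x far from the lens.
    """
    displacement = strength * math.atan(math.sqrt(r)) * math.exp(-falloff * r * r)
    return r + displacement
```

With moderate `strength`, the mapping stays monotonic, so nearby points are pushed apart (magnified) without folding over one another.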
This implementation results in a much more diverting and interesting way of browsing the space, where I can direct the lens at a cluster of points and fluidly unveil and understand each of its constituent parts.
A purely technical aspect of this tool is that even the map is not pixel based but node based, meaning that every distortion is applied to a vertex rather than a pixel coordinate. This makes it possible to apply high magnification levels without any pixelation of the map. Since we are dealing with large quantities of vertices, the map is pushed to the graphics card's memory using Vertex Buffer Objects, and the distortions are computed in the GPU's vertex shader, written in GLSL and using GLGraphics.
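To make the per-vertex idea concrete, here is a CPU-side Python sketch of what the vertex shader computes for each map vertex (the real version runs in GLSL on the GPU). The lens function used here is a hypothetical arctangent-and-square-root mapping of the kind described above, not the actual shader code.

```python
import math

def distort_vertex(x, y, cx, cy, strength=0.6, falloff=2.0):
    """Apply the radial lens distortion to one vertex (x, y).

    (cx, cy) is the lens center. The radius to the center is
    remapped with a hypothetical lens function, and the vertex
    is moved along its offset vector accordingly -- exactly the
    per-vertex operation a GLSL vertex shader would perform.
    """
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    if r == 0.0:
        return x, y  # the center itself is a fixed point
    new_r = r + strength * math.atan(math.sqrt(r)) * math.exp(-falloff * r * r)
    scale = new_r / r
    return cx + dx * scale, cy + dy * scale
```

Because the distortion operates on geometry rather than pixels, the map can be redrawn at full resolution at any magnification; the GPU simply evaluates this function for every vertex each frame.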
This tool was developed as part of the visual explorations of urban mobility for the LIVE Singapore! project. Special thanks go to Kristian Kloeckl for the supervision and trust, to Till Nagel and Inês Dias for the untangling suggestions, and to Nicholas Marchesi and Prudence Robinson for bearing with the headaches of producing the showcase video.