This isn’t an advert for the Kinect; I think they are pushing a new way of playing and interacting.
For some time I have been interested in how our senses other than sight work, and what we would see if we could visualise them. A dog goes for a walk and can tell another dog passed along just a few minutes earlier. We recognise people mostly from appearance; if we could function like a dog and get a glimpse of what had happened a while ago, what might that look like? Maybe a ghosted image trail?
The Kinect can see in 3D: it sees depth. Closer parts of an object are brighter; objects further away are darker. It’s simple for us to understand the greyscale imagery that is formed, but it sets me wondering about how we would see if other sensors were wired to our visual cortex. What would it mean to be able to see something other than light?
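As a rough Python sketch of that brightness mapping (not the code from this project; the depth frame here is synthetic, and the 11-bit 0–2047 range comes from the original Kinect’s raw depth output):

```python
import numpy as np

def depth_to_greyscale(depth, max_depth=2047):
    """Invert and scale raw depth so closer pixels come out brighter."""
    d = np.clip(depth, 0, max_depth).astype(np.float32)
    # 0 (closest) maps to 255 (brightest); max_depth maps to 0 (darkest).
    return (255 * (1.0 - d / max_depth)).astype(np.uint8)

# Synthetic 480x640 frame standing in for a real sensor read
# (a real one would come from a library such as libfreenect).
frame = np.random.randint(500, 2000, size=(480, 640))
grey = depth_to_greyscale(frame)
```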

One of the first things that surprised me when first accessing the Kinect’s depth imagery was just how organic it is. I was expecting something far more blocky and pixelated, not raggedy, pulsing edges. I was captivated by this aesthetic, and by the idea that this computer’s way of seeing was very different to those that had gone before it. If I was going to embrace this, I should embrace its differences. One issue I had was that even though the image looked organic, if I wanted to enlarge it I soon came upon pixelation, which went against the look that I liked. At this stage I decided that I was going to enhance the ragged look and build another layer of texture on top of what the Kinect gave me.
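One possible way to exaggerate that ragged look, sketched in Python (the function, threshold and noise amounts are all hypothetical, not the actual texture pass used here), is to add noise only along the detected edges of the depth image:

```python
import numpy as np

rng = np.random.default_rng(1)

def roughen_edges(grey, threshold=30, noise=40):
    """Add noise only where the depth image changes sharply,
    exaggerating the raggedy edges rather than smoothing them."""
    gy, gx = np.gradient(grey.astype(np.float32))
    edges = np.hypot(gx, gy) > threshold          # crude edge mask
    out = grey.astype(np.int16)
    out[edges] += rng.integers(-noise, noise, size=int(edges.sum()))
    return np.clip(out, 0, 255).astype(np.uint8)
```

The idea is that the interiors of shapes stay clean while the contours stay alive and noisy, even when scaled up.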
At this iteration I was happy with the detail I could create. It wasn’t quite the aesthetic I was after, but it was a starting point.
One of the issues with generating this texture was the time it took. For this to work in the way I intended, it would have to update at a good frame rate, and that wasn’t the case yet. I decided to render the image in strips from top to bottom to get more of an idea of how long it took to draw.
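To give an idea of the strip approach, here is a hedged Python sketch (the strip count and the draw_strip callback are placeholders, not the real rendering code):

```python
import time
import numpy as np

STRIPS = 16  # hypothetical number of horizontal bands

def render_in_strips(grey, draw_strip):
    """Draw the frame in horizontal strips, top to bottom, timing each
    band so it is clear where the drawing spends its time."""
    step = grey.shape[0] // STRIPS
    for i in range(STRIPS):
        t0 = time.perf_counter()
        draw_strip(grey[i * step:(i + 1) * step])
        print(f"strip {i:2d}: {time.perf_counter() - t0:.4f}s")

# Stand-in for the real drawing call; here it just copies the band.
render_in_strips(np.zeros((480, 640), np.uint8), lambda band: band.copy())
```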
