The Kinect uses an infrared projector / infrared camera pair to map out its depth data. It turns out the sun (along with all the other light it puts out) also emits a good (well, bad) amount of infrared, so the Kinect performed very badly in sunlight (I guess playing on the Xbox is usually an indoors kind of thing). I needed to wait for night time before giving my roving Kinect a proper workout.
The night test went much better. To do it properly, though, I would need some sort of rig: Kinect in one hand, laptop in the other, battery in my pocket, stumbling around in the dark while losing any night vision by looking at the glaring laptop screen wasn't the best approach. But it was working really well. I could have used the Kinect for night vision if I wasn't busy capturing other things; the base images were coming through just fine.
It was interesting to see which things stayed interesting when dropped down to just their depth information and which didn't. Trees really opened up, enhancing the three-dimensional structure of their inner branches, but tightly foliaged bushes just became dull blobs. An old rusty concrete mixer became clean and smooth. Framing an image was far more difficult, as elements too far away would drop out of the image leaving dark holes; what you would have assumed was a good focal point was lost once light was traded for depth.
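For anyone curious what that dropout looks like in practice, here's a minimal sketch (assuming the libfreenect Python bindings plus NumPy and Pillow, and the default 11-bit depth mode where the value 2047 marks pixels the sensor couldn't resolve) that renders a single depth frame as a grayscale image, leaving the unresolvable pixels black, the "dark holes" described above. It's not the exact pipeline I used, just an illustration of the idea.

```python
# A minimal sketch, assuming the libfreenect Python bindings, NumPy and Pillow.
# In the default 11-bit depth mode, 2047 means "no reading" (too far, shadowed,
# or washed out by stray IR), which appears as the dark holes in the frame.
import freenect
import numpy as np
from PIL import Image

NO_READING = 2047  # sentinel value for pixels the Kinect couldn't resolve

def depth_to_image(depth):
    """Map an 11-bit Kinect depth frame to an 8-bit grayscale image,
    with nearer surfaces brighter and unresolved pixels left pure black."""
    valid = depth < NO_READING
    img = np.zeros(depth.shape, dtype=np.uint8)
    if valid.any():
        d = depth[valid].astype(np.float32)
        # Normalise the valid depths, then invert so close = bright.
        scaled = (d - d.min()) / max(d.max() - d.min(), 1.0)
        img[valid] = ((1.0 - scaled) * 255).astype(np.uint8)
    return Image.fromarray(img)

if __name__ == "__main__":
    depth, _timestamp = freenect.sync_get_depth()  # grab one frame
    depth_to_image(depth).save("kinect_depth.png")
```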
With a few images captured I decided to have a play with them and see what would eventuate.