
experiments with point clouds, a Kinect and a 3D printer
Around March last year I started working with Alizée de Pin on experiments with 3D printing, using a Microsoft Kinect as the source. In a previous article, I wrote about generating shapes from an image or an audio waveform; this post focuses specifically on importing point clouds from the Kinect, displaying them, and printing them with a 3D printer.
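Getting from a Kinect depth frame to a point cloud is a standard pinhole back-projection. Here is a minimal sketch in Python with NumPy; the intrinsics below are commonly quoted ballpark values for the original Kinect’s 640×480 depth camera, not calibrated values from our setup, and the sketch assumes the driver already reports depth in millimetres (most Kinect drivers can be configured to).

```python
import numpy as np

# Approximate intrinsics for the original Kinect depth camera (640x480).
# Ballpark values, not the result of a calibration.
FX, FY = 575.8, 575.8   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point

def depth_to_point_cloud(depth_mm):
    """Back-project a depth frame (millimetres, 0 = no reading)
    into an (N, 3) array of points in metres."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32) / 1000.0   # mm -> m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.dstack((x, y, z)).reshape(-1, 3)
    return points[points[:, 2] > 0]            # drop missing readings
```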
One of the first things I wanted to try was concatenating time into a shape: is it possible to create a single 3D shape that, over a definite period of time, expresses the intensity of the movements happening in front of it? This would be akin to a long exposure in photography, where light keeps hitting the film or sensor for seconds, minutes or even hours.
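To make the long-exposure idea concrete, here is one possible recipe, a minimal sketch rather than our actual pipeline: accumulate, pixel by pixel, how often the depth reported by the Kinect changes between consecutive frames over the exposure window.

```python
import numpy as np

def accumulate_motion(frames, threshold_mm=20):
    """Collapse a sequence of depth frames (each an HxW array in mm)
    into a single 'long exposure' of movement: each pixel counts how
    often its depth changed by more than threshold_mm between frames."""
    exposure = np.zeros(frames[0].shape, dtype=np.float32)
    prev = frames[0].astype(np.float32)
    for frame in frames[1:]:
        cur = frame.astype(np.float32)
        valid = (cur > 0) & (prev > 0)           # ignore missing readings
        moved = np.abs(cur - prev) > threshold_mm
        exposure += (valid & moved)
        prev = cur
    return exposure / max(exposure.max(), 1.0)   # normalise to [0, 1]
```

Feeding it a window of frames yields one grey-scale image per window, in the spirit of the exposures shown below.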
I have always wanted to see how our body reacts to dreams, and what (if anything) can be inferred from these movements about our state of mind the following morning. Are there patterns linking the two? And is it true that our body “acts out” our dreams? Starting from these hypotheses, it seemed almost natural to use a Kinect to record one night’s worth of movements and see what insights could be gathered. The Kinect also uses infrared light to sense depth, which means I could keep the disturbance caused by its presence to a minimum.
The idea is to develop a series of prints depicting many nights from the same person, along with their accompanying dreams. The system is still cumbersome and a bit intrusive, so I have only experimented on myself so far.

I recorded these images on 2013-03-12, from 2:33 AM to 8:03 AM. Each image was exposed for 15 minutes, and they are shown in chronological order. The head is always in the top corner, and on some of them you can make out the arms, shoulders, chest and legs.

This is the dream as I wrote it right after waking up:
I am applying to get into a university in Japan with Max. He is filling in the form with a quill pen in black ink, but since the sheet is glossy, the ink starts dripping everywhere. I am showing my personal website to a man with dreadlocks who is registering candidates. He is using Internet Explorer 5 or 6, so it doesn’t load, and I explain to him that you cannot build websites with IE 5 or 6 in mind, otherwise you lose too much time. I am now in my parents’ hometown. I keep losing the braces on my teeth because I keep moving them around. Walking down the street, I meet my ex-girlfriend and her friends, and we head to the pool. There is a bomb alert in the building, and since she is the closest to the lifeguard’s office, they suspect her of being responsible. I don’t know whether it is a real bomb or a prank, but I am present for the interrogation.
And that’s when I woke up. The dream doesn’t make much sense, though (does it?)…

From this first set, Alizée selected one image and printed it in various sizes with different settings. The aim was to see how small we could go while still being able to pick out a human face in the finished object.
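Going from one of these images to something a printer accepts means turning the intensity map into a mesh. As an illustration of the kind of conversion involved (a hypothetical sketch, not the exact tools we used), one can treat pixel intensity as height and write the resulting relief surface as an ASCII STL:

```python
import numpy as np

def heightmap_to_stl(img, path, scale_xy=1.0, scale_z=10.0):
    """Turn a 2D intensity array into a relief surface written as
    ASCII STL, two triangles per pixel quad.  NOTE: this produces an
    open surface with dummy normals; a real print still needs the mesh
    closed (base and walls), and most tools recompute normals from the
    vertex winding anyway."""
    h, w = img.shape
    z = img.astype(np.float32) * scale_z
    with open(path, "w") as f:
        f.write("solid exposure\n")
        for i in range(h - 1):
            for j in range(w - 1):
                a = (j * scale_xy, i * scale_xy, z[i, j])
                b = ((j + 1) * scale_xy, i * scale_xy, z[i, j + 1])
                c = (j * scale_xy, (i + 1) * scale_xy, z[i + 1, j])
                d = ((j + 1) * scale_xy, (i + 1) * scale_xy, z[i + 1, j + 1])
                for tri in ((a, b, c), (b, d, c)):   # counter-clockwise
                    f.write("facet normal 0 0 1\nouter loop\n")
                    for vx, vy, vz in tri:
                        f.write(f"vertex {vx} {vy} {vz}\n")
                    f.write("endloop\nendfacet\n")
        f.write("endsolid exposure\n")
```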

A second idea we explored was finding an equivalent to bokeh in 3D models. Photographers use bokeh (out-of-focus blur) to make their subjects stand out from the background. Some lenses and brands are known for their soft or harsh bokeh; the bokeh of the first version of the Leica Noctilux, for example, is particularly soft and has a very distinctive texture.
What we tried was to create variable levels of detail within a single mesh. Instead of using all the points the Kinect outputs, we can skip some of them depending on what we are trying to bring into focus, as in the sketch below.
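A minimal sketch of the idea, assuming the cloud is an (N, 3) NumPy array: keep every point near a chosen focus depth, and subsample more and more aggressively away from it.

```python
import numpy as np

def bokeh_decimate(points, focus_z, in_focus=0.15, min_keep=0.05):
    """Thin a point cloud as a function of distance from the focus plane.
    Points within `in_focus` metres of focus_z are all kept; beyond that,
    the keep probability falls off linearly down to `min_keep`."""
    dist = np.abs(points[:, 2] - focus_z)
    keep_prob = np.clip(1.0 - (dist - in_focus), min_keep, 1.0)
    rng = np.random.default_rng(0)
    mask = rng.random(len(points)) < keep_prob
    return points[mask]
```

Combined with the back-projection sketch above, bokeh_decimate(depth_to_point_cloud(frame), focus_z=1.2) would keep whatever sits around 1.2 m sharp while thinning everything behind it; focus_z here is just an illustrative value.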

These tests are promising, but for now they are just tools that haven’t been applied to real projects. That will come later, hopefully.
