
experiments in parametric 3d printing
About a year ago, I got a great book on using the Kinect for art and design work: “Making Things See” by Greg Borenstein↓. This book gradually introduces working in Processing with the Kinect's infrared and RGB cameras, creating a point cloud in space and even tracking people and body gestures. The last chapter is dedicated to exporting STL files with a MakerBot 3D printer in mind.
Taking printable 3D pictures with a Kinect is actually quite simple thanks to a library for Processing called Modelbuilder, made by Marius Watz as part of his residency with MakerBot Industries. This detail is important, because there are different types of 3D printing, each with specific constraints regarding precision, cost, speed, size and shape. MakerBot specializes in fused deposition modeling (FDM), the most common technology in the home market. In FDM printing, a thin filament is heated and pushed out of the machine's suspended, moving extruder, building the model layer by layer from the bottom up. This is one of the defining characteristics of FDM: since each layer of filament rests on the one below, parts can't “float” or expand outside the base shape without some sort of structure to support them. There are a few workarounds, but it is still a bit limiting.
Using Modelbuilder with a Kinect the Borenstein way is straightforward: get the point cloud, then for each point create a triangle from it to two of its immediate neighbors. This way you get a closed surface, not unlike a sheet draped over an object. The process is much simpler than it could be thanks to the way the Kinect works: it reports a regular grid of pixels that are just waiting to be connected into triangles. Creating a mesh from an arbitrary 3D shape, on the other hand, is much more complicated: first you would need to generate a batch of random points on the surface of the shape, and then connect them by triangulating to the most appropriate points around them, probably with a Delaunay triangulation. See Nervous System's excellent article on the subject for more information↓.
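As a rough sketch of that idea (my own reconstruction, not Borenstein's or Modelbuilder's exact code), here is how a regular grid of points can be stitched into triangles in plain Processing, with depthPoints standing in for the Kinect's point cloud, stored row by row:

// Sketch: stitch a regular grid of 3D points into a triangle mesh,
// two triangles per grid cell (assumes a P3D renderer).
int w = 640;
int h = 480;
PVector[] depthPoints = new PVector[w * h];

void drawMesh() {
  beginShape(TRIANGLES);
  for (int y = 0; y < h - 1; y++) {
    for (int x = 0; x < w - 1; x++) {
      // the four corners of the current grid cell
      PVector a = depthPoints[y * w + x];
      PVector b = depthPoints[y * w + x + 1];
      PVector c = depthPoints[(y + 1) * w + x];
      PVector d = depthPoints[(y + 1) * w + x + 1];
      vertex(a.x, a.y, a.z);  // first triangle: a, b, c
      vertex(b.x, b.y, b.z);
      vertex(c.x, c.y, c.z);
      vertex(b.x, b.y, b.z);  // second triangle: b, d, c
      vertex(d.x, d.y, d.z);
      vertex(c.x, c.y, c.z);
    }
  }
  endShape();
}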

Working from these first programs (I have a bunch of Kinect experiments I'll show in another article), it is possible to imagine new parameters to use when generating shapes. For example, any bitmap image is just a list of values corresponding to the color of each pixel. Here, instead of using the pixel's color as the basis for drawing an image, we can use it as a reference to draw in space: dark areas will be at the top of the model and light ones at the bottom.

The code for generating this shape is derived from chFAB_scanner on Borenstein's GitHub, and the translation of the image's pixels to a height is done with this line:
values[j][i] = int( brightness( img.pixels[i*img.width + j] )) / hauteurMax;
The brightness() function gets the brightness of a specific pixel in the image called img. Brightness ranges from 0 to 255, so it needs to be scaled down to an appropriate interval, which is hauteurMax's role (the name is French for “maximum height”). It works pretty nicely with hauteurMax = 3, by the way.
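For context, the surrounding loop might look something like this (again a reconstruction, not the exact code from the repository; the file name is a placeholder):

// Load an image and store one scaled brightness value per pixel.
PImage img;
float[][] values;
int hauteurMax = 3;  // divides brightness (0-255) down to roughly 0-85

void setup() {
  img = loadImage("source.png");  // placeholder file name
  img.loadPixels();
  values = new float[img.width][img.height];
  for (int i = 0; i < img.height; i++) {
    for (int j = 0; j < img.width; j++) {
      values[j][i] = int( brightness( img.pixels[i*img.width + j] )) / hauteurMax;
    }
  }
}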
From this moment on, I worked with Pauline, and we opted to project images onto a volume instead of a flat surface. The next step was to keep the image values but place the points along a cylinder in space. There is a great example of the formulas needed to map points into a cylindrical space in Processing here. The formulas look like this:
// polar coordinates: a point's height value becomes its distance
// from the cylinder's axis, y * angle its position around it
float tubeX = cos( radians(y * angle) ) * values[x][y];
float tubeY = sin( radians(y * angle) ) * values[x][y];
// x runs along the cylinder's vertical axis
depthPoints[i] = new PVector( tubeY, x, tubeX );
The variable angle is the angular step between one column and the next. This is what the original image looks like, and underneath is the rendered image wrapped into a cylindrical shape.
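Put together, the whole mapping could read like this (my own sketch, assuming values comes from the previous step and that the shape makes one full revolution):

// Wrap every column of the height field around a cylinder.
int cols = values.length;
int rows = values[0].length;
float angle = 360.0 / rows;  // assumption: one full turn over all columns
PVector[] depthPoints = new PVector[cols * rows];

for (int x = 0; x < cols; x++) {
  for (int y = 0; y < rows; y++) {
    float tubeX = cos( radians(y * angle) ) * values[x][y];
    float tubeY = sin( radians(y * angle) ) * values[x][y];
    depthPoints[x * rows + y] = new PVector( tubeY, x, tubeX );
  }
}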


For printing, this shape needed some support to hold up the “cliffs” on its sides. The printer creates very thin, beehive-like structures next to the shape, in anticipation of the forms expanding above them. At the end of the printing process, it is easy to break away these supports and keep only the solid part. All of this happens transparently, without any parameters to play with, which is a bit frustrating, but so far it has worked every time.





Transforming sound into shape is a bit more complex than doing so with an image. There are several possible approaches: sorting by frequency, loudness, harmonics, etc. I am using the FFT class inside Minim (a Processing audio library); FFT stands for Fast Fourier Transform. From what I can gather, the Fourier transform is an algorithm that decomposes a signal over time into the frequencies it contains↓. It applies to audio analysis but also to a whole range of engineering disciplines. In the context of audio recordings, the Fourier transform can display the spectrum of the sound across frequency bands defined by the program. The bands I use are 512 Hz wide; this way I get just enough information across a 44 kHz sample.
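To make the setup concrete, here is a minimal Minim FFT sketch (the file name is a placeholder, and the drawing is just for visual feedback; the real program stores the band amplitudes instead of drawing them):

import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioPlayer player;
FFT fft;

void setup() {
  size(512, 200);
  minim = new Minim(this);
  player = minim.loadFile("track.mp3", 1024);  // placeholder file
  player.play();
  // one FFT frame per buffer of 1024 samples
  fft = new FFT(player.bufferSize(), player.sampleRate());
}

void draw() {
  background(0);
  fft.forward(player.mix);  // analyze the buffer currently playing
  for (int i = 0; i < fft.specSize(); i++) {
    // draw each band as a vertical line, amplitude as height
    line(i, height, i, height - fft.getBand(i) * 4);
  }
}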
What follows is a video of a first version of the program. The lateral relief of the shape corresponds to the amplitude of each band. From one end to the other is time, so one complete shape spans about 20 seconds of sound.
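The bridge from sound to shape is then the same as with the image: each FFT frame becomes one row of the values array, with the band index on one axis and time on the other, ready to feed the same mesh code. A sketch of that accumulation, assuming the fft and player objects from above (numFrames is an arbitrary choice):

// Accumulate one row of band amplitudes per animation frame.
int numFrames = 600;  // roughly 20 seconds at 30 frames per second
float[][] values = new float[numFrames][];
int currentFrame = 0;

void captureFrame() {
  if (currentFrame >= numFrames) return;  // the shape is complete
  fft.forward(player.mix);
  float[] row = new float[fft.specSize()];
  for (int i = 0; i < fft.specSize(); i++) {
    row[i] = fft.getBand(i);  // amplitude of band i at this moment
  }
  values[currentFrame++] = row;
}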




This project is not exactly finished, but it is really exciting to see how easy it is to generate and export a shape and have it printed in 3D. Every new generation of 3D printers is cheaper, faster and overall more capable. At some point, 3D printing is going to become a commodity for most people, affordable and useful enough to justify its initial acquisition cost. The potential is huge, not just for functional replacement parts for washing machines but also for a whole class of custom, singular, and beautiful objects: a home-made musical instrument, a personalized jewelry set or a one-of-a-kind cremation urn.
↑ — Making Things See, by Greg Borenstein