This project came to life during some experimentation in Blender with displacement mapping. So what’s Blender? And what’s displacement mapping?

Blender is a free 3D modelling program: super awesome if you want to make crazy 3D stuff, unfortunately absurdly un-user-friendly if you want to learn 3D modelling. Displacement mapping is the process of using an image, projected on a surface, to displace that surface. In more detail: the black and white values of the image determine how much, and in which direction, the surface is displaced.
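The core idea boils down to something like this (a minimal sketch of the concept, not Blender's actual implementation; the function and parameter names are my own):

```python
def displace(vertex, normal, pixel_value, strength=0.3, mid_level=0.5):
    """Move a vertex along its normal based on the brightness of a pixel.

    pixel_value is the image brightness in [0, 1]: values above
    mid_level push the vertex outward, values below pull it inward.
    """
    offset = strength * (pixel_value - mid_level)
    return tuple(v + offset * n for v, n in zip(vertex, normal))
```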

An example:

In the example above the base model, a sphere, has a ‘rocky’ texture applied to it. Note that this texture is not modelled by hand; it is generated from the image applied to the model. This is an easy and quick way to add ‘sub-detail’ to a 3D model. Here is another example:

So in the most basic sense displacement mapping lets you define a 3D shape with a 2D drawing. This is what I found most interesting: anyone can make a 2D drawing using pen and paper, so anyone should be able to produce a 3D drawing, right? What if I used a webcam to take a picture of a drawing, sent it to Blender, displaced a 3D model with it, and displayed the output on a screen? Once the 3D model was satisfactory I could 3D print it. During some further experimentation in Blender I realized that all the separate steps, from taking a picture to seeing its effect on the 3D model, took quite a while. Was there any way I could speed things up and, let’s say, make this real-time?
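In Blender this setup comes down to a Displace modifier driven by an image texture. A minimal sketch (the image path, names and strength values here are placeholders, not my exact settings):

```python
import bpy

# Assumes the active object is the model to displace, e.g. a UV sphere.
obj = bpy.context.active_object

# Create an image texture from a drawing (placeholder path).
tex = bpy.data.textures.new("DrawingTex", type='IMAGE')
tex.image = bpy.data.images.load("/path/to/drawing.png")

# Add a Displace modifier driven by that texture.
mod = obj.modifiers.new("Displace", type='DISPLACE')
mod.texture = tex
mod.texture_coords = 'UV'  # project the image using the model's UV map
mod.strength = 0.3         # how far white areas push the surface outward
mod.mid_level = 0.5        # grey value that leaves the surface untouched
```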

Screenshot 2015-10-27 at 09.07.37

So this is a screenshot from Blender showing all the complicated bits and pieces. On the left is a Python script that is executed every frame: basically it grabs a new image, maps it onto the Ikea lampshade, displaces the 3D model and turns it a couple of degrees. In the centre you see the displaced 3D model (and drawing). What you don’t see is a Processing sketch working in the background, talking to the webcam to take pictures and getting those to Blender. To produce a 3D file that is ready for 3D printing, the entire program is paused and the most recent displaced 3D model is exported to an .STL file. This all works at a reasonable framerate (considering I don’t own a supercomputer). It’s fun to see your own pen (and sometimes hand) also being displaced on the surface. Anyway, below are some pictures of the installation in action.
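The per-frame update boils down to something like this (a minimal sketch, assuming the webcam image datablock and lampshade object already exist in the scene; the names are placeholders, not my exact code):

```python
import math
import bpy

IMAGE_NAME = "webcam_frame.png"  # image the Processing sketch keeps overwriting
OBJECT_NAME = "Lampan"           # the Ikea lampshade model (placeholder name)

def update_model(scene):
    # Re-read the image file the webcam just saved; the Displace
    # modifier that uses it picks up the new pixels automatically.
    bpy.data.images[IMAGE_NAME].reload()
    # Turn the model a couple of degrees around its vertical axis.
    bpy.data.objects[OBJECT_NAME].rotation_euler[2] += math.radians(2)

# Run the update before every frame change.
bpy.app.handlers.frame_change_pre.append(update_model)

# When the result looks good, pause and export the displaced mesh:
# bpy.ops.export_mesh.stl(filepath="/tmp/drawing.stl")
```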


So what could I improve?

Well, anybody reading this with a bit of spatial visualization skill will realize that a flat image is mapped onto a spherical shape, meaning the image gets distorted. This actually turned out not to be much of a problem; it’s more like a fun challenge. I also did some experimentation using kaleidoscope effects to map the image onto the Lampan lampshade, but that didn’t lead to anything concrete. Maybe something for a future project!