Digital Fabrication Final: Clay VR Future Nudes

Continuing from our explorations with Base64 and clay, Kim and I created a VR/physical sculpture that imagines cyborg art of the future.


We spent the whole semester meditating on the relationship between the physical and the digital body, specifically on the space between physical and digital objects. We tried to deconstruct the notion of sculpture as a form of human memory into the blueprint of code - only to re-imagine it in VR.

We called our piece “future nudes”, as only cyborgs in the future will be able to see the real body behind the sculpture without the help of VR goggles - we imagine them being able to decipher code in real time.


A depiction of the artist’s body is laser-etched in binary code into wet clay, spread across 12 clay tablets. The tablets are fired in a traditional kiln, like the earliest form of human written media, Babylonian cuneiform. Images of the tablets are placed in an immersive VR environment, and when audience members touch a real clay tablet, they see their hands interacting with the physical object of the tablet - which is itself the depiction of a real person converted into a digital blueprint, binary code. In the final version, the pixels of the body parts on each tablet pop up in the VR environment when the fingers touch it.

As a first step, the heavily pixelated image (pixelated to minimize the binary file size) is converted to binary via a Python script (one part of the image is shown here):
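The conversion step can be sketched roughly like this - a minimal, illustrative stand-in for our script, not the script itself (the byte string and line width below are made-up examples; a real run would read the actual image file's bytes):

```python
# Hedged sketch of the image-to-binary step: render the raw bytes of an
# (already heavily pixelated, hence small) image file as a string of
# 0s and 1s, then wrap it into fixed-width lines for the etching layout.

def bytes_to_bits(data: bytes) -> str:
    """Render each byte as its 8-bit binary representation."""
    return "".join(f"{byte:08b}" for byte in data)

def wrap(bits: str, width: int) -> list:
    """Split the bit string into lines of fixed width for layout."""
    return [bits[i:i + width] for i in range(0, len(bits), width)]

# Tiny made-up example input standing in for image bytes:
demo = bytes_to_bits(b"hi")  # 2 bytes -> 16 bits
print(demo)                  # 0110100001101001
print(wrap(demo, 8))         # ['01101000', '01101001']
```

The resulting lines of 0s and 1s are what eventually gets rastered onto the tablets.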


After that we prepared the low-firing raku clay for laser-etching and kiln-firing. After setting the manual focus on the 75-watt laser, we used the following settings:

  • 600 dpi raster etching

  • 90 % speed

  • 30 % power

We got the best results using fresh out-of-the-packet clay without adding any water. The clay fired well at cone 04 for the bisque firing.

Here are a few pics from our process:


And finally, the experimental and still unfinished port into VR using Unity, an HTC Vive and Leap Motion (in the final version, the pixels of the body part pop up in the VR environment when the audience’s fingers touch the tablet):


Machine Learning for the Web: Code Bodies with tensorflow.js


We are code. At least for a great part of our lives. How can we relate to this reality/non-reality with our bodies in the browser?

tech setup

I experimented with tensorflow.js and its person-segmentation model running the smaller MobileNet architecture. I used multiple canvases to display the camera stream’s base64 code in real time behind the silhouette of the person detected from the webcam. Given that I do pixel manipulation on two different canvases and run a tensorflow.js model at the same time, it still runs relatively fast in the browser - although the frame rate is visibly slower than a regular stream with just the model.


A brief screen recording:

Another version with a bigger screen:



Digital Fabrication Final: Future Nudes

For our final project, Kim and I were experimenting with laser-etching wet clay this week - a poetic and exciting exploration!

To make it short - so far the results have been surprisingly great. That said, we still need to fire the clay in a kiln; then we can give a final verdict.

Here’s the basic setup for cutting the clay:


We used low-firing white raku clay, as we will use a kiln that fires at cone 06. It had a nice plasticity and was easy to work with. The tricky part was getting a consistent height for the clay slab. To achieve that, we used a kitchen roller and two pieces of plywood of similar height as guides to roll evenly over the clay. We then cut it to shape.


To keep clay particles from falling through the laser bed, we used felt.

After setting the manual focus on the 75-watt laser, we used the following settings:

  • 600 dpi raster etching

  • 90 % speed

  • 30 % power

and it worked well in two passes. Now we just have to let it dry for two days, do a first firing, glaze one part and then fire it again. It’s still not clear how it will turn out - something to look forward to!


Digital Fabrication Final Proposal: Future Nudes

Kim and I want to create nude portraits/sculptures for the future (see the more detailed blogpost on Kim’s website).

After laser-etching our portraits in Base64 code on mat board for our midterm, we decided to continue with this topic and play more with form and materials:

We want to use either Base64 or binary code for encoding the image. As the base material for the project, we want to iterate with etching wet clay. It seems to be the best material, as it preserves our portraits pretty much forever (the earliest form of human writing, cuneiform, was “hand-etched” into clay). It has the advantage that we could later form it into a sculpture or play with the shapes before firing. And it carries a lot of symbolic meaning regarding the human body in different contexts (Bible, Kabbalah, …).
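A quick back-of-the-envelope comparison of the two encodings (illustrative numbers, not our actual image): Base64 packs 3 bytes into 4 characters, while a plain 0/1 rendering needs 8 characters per byte, so a binary etching comes out roughly six times longer than a Base64 one for the same image.

```python
import base64

# Compare how many characters each encoding needs for the same data.
# The 30 bytes here are a stand-in for image bytes.
data = bytes(range(30))

b64_text = base64.b64encode(data).decode("ascii")       # 4 chars per 3 bytes
bin_text = "".join(f"{b:08b}" for b in data)            # 8 chars per byte

print(len(data), len(b64_text), len(bin_text))          # 30 40 240
```

That length difference matters for etch time, which is why the binary version needs a heavily pixelated source image.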

Etching into clay and then forming it is pretty experimental; only very few sources online have tried etching wet clay, with mixed success. So we gotta play!

Why nude portraits? We like the idea that the portraits of our bodies will only be visible in a distant future - once humans are able to decipher code naturally, maybe as human-machine hybrids. We abstract our bodies into code and preserve them for a time when our bodies will have changed a lot: we might be half human, half machine by then. This aspect of preservation for a distant future reminds us of the SKOR Codex from a few years ago.

For the look of the ceramics, we are inspired by the rough surfaces of dark stoneware, dark clay and playful sculptural explorations.

An interesting iteration would be collecting a couple of different nudes, then cutting the pieces into rows of beaded curtains that people could walk through - so they could interact with our bodies in code.

Digital Fabrication Midterm: Future Portraits

“In another reality I am half-human, half-machine. I can read Base64 and see you.”

final iteration

Kim and I created self-portraits for the future.


The self-portraits are Base64 representations of images taken by a web-camera. The ideal viewer is a human-machine hybrid/cyborg that is capable of both decoding etched Base64 and recognizing the human element of the artifact.


We went through several iterations with felt, honey and rituals: our initial idea of capturing a moment in time and etching or cutting it into an artifact morphed into an exploration of laser-cut materials with honey. We were interested in the symbolic meaning of honey as a natural healing material. Our goal was to incorporate it into a ritual to heal our digital selves, represented by etched Base64 portraits taken with a webcam and encoded. We used felt, as it is made of wool fibers that are not woven but stick to each other through applied heat and pressure. This chaotic structure seemed like a great fit for honey and clean code:


Soon we dropped the idea of using felt, as it seemed too much of a conceptual add-on, and reduced the piece to honey and digital etchings - the healing should be applied directly onto the material in a ritual.


After gathering feedback from peers and discussing the ritualistic meaning, we struggled to justify the honey as well: most of the people we talked to preferred the idea that only human-machine hybrids of a near or far future would technically be able to see the real human portrait behind the code. After a few discussions about the two variations of our concept dealing with digital identities and moments in time, we favored the latter and dropped the honey.

So we finally settled on etching timeless digital code onto a physical medium that ages over time - self-portraits for the future: maybe we’ll look back at them in 20 years and see, through the code, our own selves from back in the day?

A little bit about the technical process: the image is taken with a Node.js chat app that I created for LiveWeb. It takes a picture with the user’s webcam every three seconds and shows the Base64 code on the page - again an example of an interface for human-machine hybrids or cyborgs of the future.
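In the browser this comes down to something like `canvas.toDataURL("image/png")`, which yields a `data:image/png;base64,…` string. A small illustrative Python equivalent of that encoding step (the `fake_png` bytes below are just the PNG file signature, not a real image):

```python
import base64

def to_data_url(image_bytes: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as a data URL, like canvas.toDataURL does."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"

fake_png = b"\x89PNG\r\n\x1a\n"  # PNG signature only, standing in for a file
print(to_data_url(fake_png))     # data:image/png;base64,iVBORw0KGgo=
```

The Base64 part after the comma is exactly the text we later etched.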


After taking portraits with my web app, we copy-pasted the code for each of us into macOS TextEdit, exported it as a PDF, copy-pasted its contents into Adobe Illustrator and outlined the font, Helvetica. We chose this font as it is very legible, even at very small sizes. Our laser computer did not have Helvetica installed, so we outlined all the letters.


The files were very large, as the portraits varied between 160,000 and 180,000 characters.

These digital preparations were followed by two days of test etchings on Crescent medium gray and Crescent black mounting boards, and experiments with different laser settings (speed, power and focus).


We discovered that getting the focus right was difficult: the laser beam seems to get weaker as it travels to the far right of the bed, making the etching illegible on that side, whereas the far left is fine. Focusing a bit deeper into the material with the manual focus fixed this issue on the white cardboard; on the black cardboard, however, the fonts then looked blurry on the left side - it was etching too deep. Finding a focus that fit both sides equally well took a long time and a lot of trial and error.


Once we got the focus right, we started the final etchings: speed 90, power 55, 600 dpi and focus slightly closer to the material than usual produced the best results on our 75-watt laser.

Each portrait took 1 hour 33 minutes to etch; in total we completed four portraits.

We see them as a starting point for further iterations, as “exploration stage one”: The concept of creating physical artifacts for the future that preserve personal digital moments in time with code and laser is very appealing to us.

We will probably iterate on our portraits for the future for the final.