Live Web / Machine Learning for the Web Finals: Peer to Peer Meditations

For this final I combined the techniques I learned in “Live Web” with the ones from “Machine Learning for the Web” and built a peer-based meditation piece:

Users can experience the Base64 code from a remote webcam streaming through their own body.

Try it with two browser windows or two computers/mobiles: Peer to Peer Meditations

All of this runs on WebRTC, tensorflow.js and node.js as the main building blocks: peer.js handles the WebRTC peer connection via a remote peer server, node.js organizes the automatic connection to a random user in the peer network, and tensorflow.js runs person segmentation (since renamed to “body-pix”) on the local user's webcam. All the canvas handling and pixel manipulation is done in plain JavaScript.
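A rough sketch of how these pieces fit together (simplified and not the exact project code: the peer server host, the random-peer helper and the drawing function are placeholders, and I'm assuming the body-pix 1.x estimatePersonSegmentation() call):

    // sketch: connect to a random peer with peer.js, segment the local webcam with body-pix
    const peer = new Peer({ host: peerServerHost, port: 9000, path: '/peerjs' });

    navigator.mediaDevices.getUserMedia({ video: true }).then(async (localStream) => {
      const video = document.getElementById('localVideo');
      video.srcObject = localStream;
      await video.play();

      // the node.js backend hands out the id of a random user in the peer network
      const call = peer.call(getRandomPeerId(), localStream);
      call.on('stream', (remoteStream) => handleRemoteStream(remoteStream));
      peer.on('call', (incoming) => incoming.answer(localStream));   // answer incoming calls too

      // run person segmentation on the local webcam, frame by frame
      const net = await bodyPix.load();
      async function segmentLoop() {
        const segmentation = await net.estimatePersonSegmentation(video);
        drawCodeThroughMask(segmentation, latestRemoteFrameAsBase64);  // canvas / pixel work
        requestAnimationFrame(segmentLoop);
      }
      segmentLoop();
    });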

The result is a meditation experience reminiscent of the calming effect of watching static noise on a TV screen. Because it is peer-based, the user's body becomes a vessel for the blueprint and essential idea of something or somebody else. This hopefully creates a sense of connectedness and community while meditating.

While staring at a screen might not be everyone's preferred way to meditate, it is worth exploring from a futuristic perspective: we will probably be able to stream information directly onto our retinas, so an overlaid, peer-based meditation experience might be worth considering in the future.

Here is a short video walkthrough streaming video frames from my mobile phone camera as Base64 code through my own silhouette (the silhouette is picked up by the camera on my laptop and run through a body detection algorithm). All web-based, with machine learning in the browser powered by tensorflow.js and WebRTC peer-to-peer streaming (code):

LiveWeb / Machine Learning for the Web - Final Proposal


For my final I want to start building an interactive, decentralized and web-based devotional space for human-machine hybrids.


The heart of the project is the talking stone, a piece I have been working on since last semester and showed at the ITP Spring Show 2018. Now I would like to iterate on it in a virtual space, with a built-in decentralized structure as a “community” backbone via the blockchain, and with an interface geared towards future human entities.

system parts

The connection to universal randomness: a granite rock.


Rough draft of user interface (website):


And here is the full system as an overview (also a rough draft):


iteration with tensorflow.js

I built a web interface as an iteration on this idea: it is fed by the camera input and displays it as Base64 code within the silhouette of the user's body:
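The pixel masking works roughly like this (a simplified sketch, assuming the canvas and the segmentation mask share the same dimensions; the Base64 string and element id are placeholders):

    // sketch: fill the canvas with tiny Base64 text, then keep only the pixels inside the person mask
    const canvas = document.getElementById('output');
    const ctx = canvas.getContext('2d');

    function drawCodeThroughMask(segmentation, frameAsBase64) {
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      ctx.font = '6px monospace';
      ctx.fillStyle = '#fff';

      const charsPerLine = 120;                              // arbitrary, tweak to taste
      for (let i = 0, y = 6; y < canvas.height; i += charsPerLine, y += 7) {
        ctx.fillText(frameAsBase64.slice(i, i + charsPerLine), 0, y);
      }

      const img = ctx.getImageData(0, 0, canvas.width, canvas.height);
      for (let p = 0; p < segmentation.data.length; p++) {
        if (segmentation.data[p] === 0) {                    // 0 = background in the body-pix mask
          img.data[p * 4 + 3] = 0;                           // make background pixels transparent
        }
      }
      ctx.putImageData(img, 0, 0);
    }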

project outline

Here is the basic setup: the rock is displayed via sockets as a Base64 stream on the website for logged-in users. Users have to buy “manna” tokens with Ethereum (in our case on the Rinkeby testnet blockchain); they then get an audio-visual stream of the rock's decaying particles to start their meditation/devotion. According to quantum theory, the rhythm of the particle decay is truly random, so the core of this devotional practice of the future is listening to an inherent truth. The duration of each “truth” session is 10 minutes.

I will have to see how “meditative” the random decay actually is - I remember it as quite soothing to just listen to the particle decay being hammered into the rock with a solenoid, but I will have to find a more digital audio coloring for each decaying particle. That said, this piece is pure speculative design. I am interested in the future of devotional practice, especially for people who consider themselves non-religious, so trial and error is probably the way to go with this project as well.
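On the hardware side, the rock part could look roughly like this (a loose sketch, assuming a Geiger counter pulse wired to a GPIO pin, the 'onoff' npm library and socket.io; the pin number, server URL and event name are placeholders):

    // sketch: Raspberry Pi side, forwarding each decay pulse to the server via socket.io
    const io = require('socket.io-client');
    const socket = io('https://my-remote-server.example');    // placeholder URL
    const Gpio = require('onoff').Gpio;

    const counter = new Gpio(17, 'in', 'rising');              // pin number is an assumption
    counter.watch((err, value) => {
      if (err) return console.error(err);
      socket.emit('decay', Date.now());   // browser clients turn each event into a sound
    });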

If the user moves significantly from the prescribed meditation pose (Easy Pose, Sukhasana) during those 10 minutes, tokens are automatically consumed. The same happens if the user does not open a new “truth” session within 24 hours of finishing the last one.
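The pose check is not built yet; a minimal sketch of how it could work with posenet (the interval, the drift threshold and the consumeManna() call are assumptions, not finished code):

    // sketch: compare current keypoints against a reference pose taken at session start
    async function watchPose(video) {
      const net = await posenet.load();
      const referencePose = await net.estimateSinglePose(video);

      setInterval(async () => {
        const pose = await net.estimateSinglePose(video);
        let drift = 0;
        pose.keypoints.forEach((kp, i) => {
          const ref = referencePose.keypoints[i].position;
          drift += Math.hypot(kp.position.x - ref.x, kp.position.y - ref.y);
        });
        if (drift / pose.keypoints.length > 40) {   // pixels; arbitrary threshold
          consumeManna();                           // placeholder: token logic on the contract
        }
      }, 5000);
    }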

On the screen, little dots display other active users who are currently in a session. The size of the dots changes according to how much time they have left in their daily meditation practice.

The goal of this experimental devotional space is to give users an audio stimulus that is scientifically true and therefore easier to identify with - in a sense, a random rhythm of the universe to meditate with. Because users buy into the practice with tokens, only dedicated users use the space, and their motivation to actually perform the meditation is higher because they paid for it. They only have to pay again if they do not perform the practice regularly or interrupt a session.

The visuals will be dominated by code - as this is a devotional meeting place for future human-machine-hybrids that find peace and solitude in true randomness (the opposite of their non-random existence).

tech specs

  • granite rock with a Geiger counter attached to it

  • Raspberry Pi with camera, running sockets to display the rock and send random decay patterns

  • remote server running node.js and sockets

  • tensorflow.js running posenet

  • Ethereum contract (Solidity-based) running on the Rinkeby testnet (dev blockchain)

  • MetaMask Chrome extension (to access the blockchain from the browser with a secure wallet)


That sounds like a lot to do in 6 weeks, but I want to give it a try. I experimented with blockchain and true randomness last semester in two different projects, and the current LiveWeb / Machine Learning for the Web classes seem like a great framework to give this a virtual and AI-guided setting. I am still a bit uncertain about the blockchain backbone, as this is the part where I feel least at home at the moment. I only remember fragments of Solidity, web3.js and MetaMask; connecting all the layers together was tricky and the documentation sparse. Well, we'll see!
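For my own memory, roughly the browser-to-contract wiring I have in mind, assuming MetaMask injects a provider and the manna contract exposes a payable buyManna() function (the ABI, address, price and function name are placeholders, not a finished implementation):

    // sketch: buy manna tokens on Rinkeby through MetaMask with web3.js 1.x
    async function buyManna() {
      if (!window.ethereum) return alert('MetaMask not found');
      const web3 = new Web3(window.ethereum);
      await window.ethereum.enable();                     // ask MetaMask for account access
      const [account] = await web3.eth.getAccounts();

      const manna = new web3.eth.Contract(mannaAbi, mannaAddress);  // from the Rinkeby deployment
      await manna.methods.buyManna().send({
        from: account,
        value: web3.utils.toWei('0.01', 'ether')          // token price is a placeholder
      });
    }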

LiveWeb: Midterm Project Idea

Quite often the rapid prototyping process that we celebrate here at ITP (and which can be really difficult to get used to, as I very much like to spend more time with a project before moving on to the next one …) has the great side effect that you go through a lot of different ideas, topics and digital narratives that can be told. Sometimes this means you end up finding a hidden gem that really sticks with you for a while.

Last semester it was the rock-project and its devotional aspects that kept me occupied (and still does).

This semester I am fascinated by creating apps for human-machine hybrids for a probably not so distant future.

For the last Live Web class I developed an image chat app that shows bare Base64 chat images instead of decoded (human-readable) images. This means only a human-machine hybrid can fully enjoy the chat: a machine by itself is limited to the non-conscious decoding of data and can't enjoy the visual image, and a human can't enjoy it either, as the other participants are hidden behind a wall of code (Base64). Only a human with machine-like capabilities could fully participate and see who else is in the chat.

So far so good.

But the prototype was very rushed and lacks a few key features that are conceptually important:

  • real continuous live stream instead of image stream

  • hashing/range of number participants/pairing

  • audio

  • interface

  • further: reason to actually be in such a chat/topic/moderation

I would love to try to address these with the following questions for my midterm (and possibly the final as well, as this seems to be a huge task):

  • can this be live-streamed with WebRTC (as code) instead of sending images every 3 seconds?

  • how, and by whom, can the stream be encrypted so that it is only readable by a select circle that possesses a key? Is the rock coming back as a possible (truly random) moderator?

  • what would the audio side sound like? Or is there something in the future that is kind of like audio, just a pure data stream that opens up new sounds? And what does data sound like?

  • how should the arrangement of interfaces on the screen be composed?

  • which design aesthetics can be applied to pure code?

  • a bit further out there: what would a chat look/feel/sound like in the future with human-machine hybrids? What lies beyond VR/AR/screen/mobile as interfaces?

Let’s get started!

Another iteration on this for Digital Fabrication:


LiveWeb: MachineFaceTime

I created an image chat application that can only be fully used or seen by machines - or you need to be fluent in Base64 to see who you are connected to. Every three seconds an image from each chat participant is sent around via sockets and displayed; it is kept in Base64, the original data format for image encoding on the web, and shown in a very small font. The eye's focus shifts to the code as an entity, which creates rhythmically restructuring patterns. The sender's user id is displayed on top of each received image.
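In essence, the round trip looks roughly like this (a simplified sketch: element ids and event names are placeholders, and it assumes the server attaches the sender's id before rebroadcasting):

    // sketch: grab a webcam frame every three seconds, send it as a raw Base64 data URL
    const socket = io.connect();
    const video = document.getElementById('camera');
    const grab = document.createElement('canvas');
    grab.width = 320;
    grab.height = 240;

    setInterval(() => {
      grab.getContext('2d').drawImage(video, 0, 0, grab.width, grab.height);
      socket.emit('image', grab.toDataURL('image/jpeg', 0.5));
    }, 3000);

    socket.on('image', (msg) => {
      // deliberately no <img> decoding: print the sender id and the Base64 string in a tiny font
      document.getElementById('feed').textContent = msg.id + '\n' + msg.base64;
    });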

To make it work I had to hide various elements on the page that still transmit data via the sockets. It also works on mobile - just in case you need some calming Base64 visuals on the go.


Live Web: Collective Drawing in Red and Black

I modified our example from class a little bit and added a button to change the collective drawing color: it is either red or black; everybody is forced to adapt to the new color but can choose which canvas they want to work in. As the canvas is split between a red and a black background, the “pen” can be used either as an obscurer (black on black / red on red) or as a highlighter (red on black / black on red). The results look strangely beautiful, as the dots seem to have depth or resemble a point cloud:
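Stripped down, the color toggle is just one extra socket event on top of the class drawing example (a sketch; element ids and event names are placeholders):

    // sketch: everybody draws through the socket; one button flips the shared color
    const socket = io.connect();
    const canvas = document.getElementById('canvas');
    const ctx = canvas.getContext('2d');
    let penColor = 'red';

    document.getElementById('toggle').addEventListener('click', () => {
      socket.emit('colorchange', penColor === 'red' ? 'black' : 'red');
    });
    socket.on('colorchange', (c) => { penColor = c; });      // everyone adapts to the new color

    canvas.addEventListener('mousemove', (e) => {
      if (e.buttons !== 1) return;                           // only draw while the mouse is pressed
      socket.emit('draw', { x: e.offsetX, y: e.offsetY, color: penColor });
    });
    socket.on('draw', (d) => {
      ctx.fillStyle = d.color;
      ctx.fillRect(d.x, d.y, 2, 2);                          // the dots that build the point-cloud look
    });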


… and here is some code. And more random drawings:
