PCOMP FINAL: KNOB @ ITP WINTER SHOW 2017

After a few weeks of fabrication, code, and iteration, our team (Anthony, Brandon, and I) showed our KNOB project to the public at the ITP Winter Show 2017. Here is a little update on the latest developments and iterations since our submission to the PComp finals.

code / pcomp

The code/pcomp setup stayed more or less the same for this last iteration: We used an ESP8266 NodeMCU wifi module in combination with a Raspberry Pi inside the knob, which picked up the data from the rotary encoder at the bottom of the metal pole and counted the rotations of the knob. The rotations were sent to a remote server (in our case a Heroku server) via a Node script (which proved to be easier than Python for this use). On the server side we mapped the rotations so that 1000 rotations correspond to full brightness of the LED. The PWM data was sent to another Raspberry Pi inside a white pedestal with the LED on top of it; this Pi controlled the brightness of the LED directly. For all remote connections we used itp-sandbox as the wifi network.
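To make the mapping concrete, here is a minimal sketch of the server side, assuming an Express app on Heroku; the route names and the 0-255 PWM range are illustrative, not our exact production code.

    // Minimal sketch of the rotation-to-brightness mapping on the server.
    // Route names and the 0-255 PWM range are illustrative placeholders.
    const express = require('express');
    const app = express();
    app.use(express.json());

    const FULL_BRIGHTNESS_ROTATIONS = 1000;
    let rotations = 0;

    // The Pi inside the knob POSTs the current rotation count here.
    app.post('/rotations', (req, res) => {
      rotations = Math.min(FULL_BRIGHTNESS_ROTATIONS, Math.max(0, req.body.count));
      res.sendStatus(200);
    });

    // The LED Pi (and the iPad counter page) poll the mapped value here.
    app.get('/brightness', (req, res) => {
      // 0..1000 rotations map linearly to a 0..255 PWM duty cycle.
      const pwm = Math.round((rotations / FULL_BRIGHTNESS_ROTATIONS) * 255);
      res.json({ rotations, pwm });
    });

    app.listen(process.env.PORT || 3000);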

fabrication

For the fabrication part of the project we iterated mostly on the outer materials of the knob and on the resistance of the rotational movement of the bearings. After priming it with white primer we initially wanted to paint it in rose gold - as a nod to Jeff Koons and Apple product fetishism. Then we realized that this might be too distracting and cynical a take on a generally playful concept and decided to use felt as the main outer material of the knob. The audience should want to touch it and immediately feel comfortable with the soft material. It proved to be a good choice - the feedback from the audience was great; everybody liked the softness and feel of the felt. We chose black as it seemed to show the wear from many hands less quickly than grey or a brighter color. It also accented the general black-and-white color scheme of the arrangement.

IMG_2253.JPG
IMG_2307.JPG
IMG_2256.JPG
IMG_2259.JPG

For the LED we built a wooden pedestal to house the Raspberry Pi and battery pack and painted it white.

IMG_2249.JPG

 

To add more resistance to the movement we added four casters to the bottom of the rotating part of the knob and padded the wooden rail in the base with thick foam. The casters rolled on the foam, and the compression of the foam produced a slight resistance per caster; multiplied by four, the resistance was big enough to keep the rotating part from spinning freely when a lot of force was applied. We were initially worried about the creaking noise of one caster, but during the show this was irrelevant as the general noise of the audience covered it.

IMG_2250.JPG
IMG_2254.JPG

concept & show feedback

We changed our concept fundamentally: On Sunday, the first day of the show, we turned the LED on to full brightness with 1000 clockwise rotations of the knob. On Monday, the second show day, we reversed this tedious process and slowly turned it off with 1000 counterclockwise rotations.

On a separate iPad running a webpage, the audience could keep track of the number of rotations. Why a thousand? We just felt it was the right number. The direction of rotation for the day was printed on a simple black foamboard suspended from the ceiling - it was meant to look as simple, intuitive, and familiar as a daily menu in a restaurant.
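The counter page itself can stay tiny; a minimal sketch, assuming the illustrative /brightness endpoint from the server sketch above:

    <!-- Minimal sketch of the rotation counter shown on the iPad. -->
    <h1 id="count">0</h1>
    <script>
      // Poll the (illustrative) /brightness endpoint once per second.
      setInterval(async () => {
        const res = await fetch('/brightness');
        const { rotations } = await res.json();
        document.getElementById('count').textContent = rotations;
      }, 1000);
    </script>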

IMG_2300.JPG

We felt that this scaling of the interaction itself was a natural fit for the scaling of the knob: not only the physical scale changed, but also the procedural one. This focused the audience's perception more strongly on the core of the concept: to enjoy an interaction for the sake of the interaction itself - to invoke a meditative and communal state of action, as the knob is usually turned by a group of people.

In the show this iteration was well received - not only because of its conceptually balanced approach to the timing of the interaction's reward; above all, the audience described a feeling of comfort in talking to strangers while performing a repetitive manual task together. One group compared the experience to a fidget spinner for groups that could be used for brainstorming activities in a boardroom. Another participant recounted childhood memories of sorting peas together.

While a few participants, mostly children, tried to raise the number of rotations - and therefore also watched the iPad showing the current count - the LED as the main output was generally perceived as a minor part of the process of creating a communal experience.

Our installation definitely touched on an important aspect of technology: an interaction can create a meaningful and satisfying experience in itself when it helps create a sense of belonging and community - even without instant gratification or a short-term purpose. [link to video]

We decided not to use AR as an output because the user would still need a device. That shifts the focus of the audience to the output and distracts from the physicality of the object and the interaction with it - something we wanted to avoid. AR is still conceptually stronger as an output, as it is in itself non-existent and weightless. It was a difficult decision, but in the end a simple output like an LED and the exaggerated scale of the interaction over 1000 rotations proved to be stronger in the context of the Winter Show and its abundance of interactive pieces in a small space.

We felt very honored to be part of the show and the ITP community. Thanks to the whole team behind it, especially the curators Gabe and Mimi. And big thanks to our PComp and IFab professors Jeff Feddersen and Ben Light for their great guidance and feedback during the whole process.

Here are a few impressions from the show:

PCOMP FINAL: KNOB_AR / SIMPLE UNREALITIES SUMMARY

 
knob_pyramid.png
 

concept

A giant physical knob controls a tiny virtual light sculpture in AR. The two are separated from each other; the audience has to make an intellectual connection between them.

keywords

weight, presence, physicality, virtuality, space, augmentation, contrast, minimalism, reduction, haptic interaction, visual perception

technical setup

glossolalia_schematics (1) (1).png

code

We used a GitHub repo to share our code - I had never used it for sharing before, only as a backup for my own code. It worked pretty well for collaborative coding. The code was written in C++ (Arduino) & JavaScript (Node.js, three.js, AR.js).
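As an illustration of the JS side, here is a minimal three.js sketch of the virtual LED whose brightness follows the knob; the AR.js marker tracking is omitted, and the fetch URL and geometry are assumptions, not our exact code.

    // Minimal three.js sketch of the virtual LED (AR.js marker tracking
    // omitted); fetch URL and geometry are illustrative assumptions.
    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 100);
    const renderer = new THREE.WebGLRenderer({ alpha: true });
    renderer.setSize(innerWidth, innerHeight);
    document.body.appendChild(renderer.domElement);

    // The virtual "LED": a sphere whose emissive intensity tracks the knob.
    const led = new THREE.Mesh(
      new THREE.SphereGeometry(0.2, 32, 32),
      new THREE.MeshStandardMaterial({ emissive: 0xffffff, emissiveIntensity: 0 })
    );
    scene.add(led);
    camera.position.z = 2;

    // Poll the server for the current value and update the brightness.
    setInterval(async () => {
      const { pwm } = await (await fetch('/brightness')).json();
      led.material.emissiveIntensity = pwm / 255; // 0..1
    }, 1000);

    (function animate() {
      requestAnimationFrame(animate);
      renderer.render(scene, camera);
    })();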

user feedback

The term "user" might be misleading, we think more of an audience rather than users for our project. Key takeaways from two play-tests:

  • audience likes to turn the piece
  • sheer physicality/scale of the input object is regarded as a plus
  • the piece turns very fast - this can dilute the original concept of an oversized knob
  • audience does not make an immediate connection between the AR object and the physical object
  • the AR object is very generic, with no clear formal connection to the physical object
  • separating the two objects into different spaces is necessary to avoid one piece overshadowing the other
  • if the audience is initially confused and understands the connection between the two objects only later, the whole piece is stronger, as the impact of delayed gratification is more powerful
  • the idea of a physical output is generally welcomed as well, though it is seen more as an ironic statement than a concept - AR as output fits better conceptually but is formally more difficult to execute and understand (AR is itself so new that its formal language is not yet set)
  • the audience initially expects a greater variety in the output and understands the contrast between the two objects only after the concept is explained

process

key takeaways

  1. Wood is an organic material; it does not compute.
  2. A change in scale affects pre- and post-production of a piece. They mutually influence each other.
  3. Stick to the original concept, keep iterations in mind for future projects.
  4. Collaboration works best when everybody is on the same page and knows what to do.
  5. Browser based AR is still in its early stages but already very promising.

Our project was at a bigger scale than we were used to and involved a lot of fabrication. We learned a lot in terms of project management, sourcing materials, and fabrication (e.g. using the CNC router). The tiny, fragile PComp parts had to be integrated into the large-scale mechanics of the knob and read its movements correctly. Server-side programming and the JS frontend took a lot less time than fabrication.

Our goal was to create an interactive installation piece for a gallery environment at a larger scale that raises questions about physicality, technology and our joy of play. 

The collaboration proved to be the best way to create such a piece in a limited timeframe. We plan to exhibit the piece at the ITP Winter Show 2017 - there are still a few improvements to make for this event: re-do the CNC cuts in a hexagon shape, re-engineer the outer layer of the knob in wood, create a backup mechanism for the rotary encoder in case of mechanical failure, slow down the movement of the knob, and improve the AR model.

 

 

 

PCOMP FINAL: KNOB-AR

concept

Users turn a giant knob on the 4th floor of ITP to control the brightness of a virtual LED in AR, positioned on a physical column in the Tisch Lounge.

electronics

Inside the knob will be a rotary encoder to track the exact position of the element. These values are read and continuously sent to a remote AWS server by a NodeMCU 8266 wifi module inside the knob via serial/Python/websockets. On the server they are stored in a txt file and read into a publicly accessible HTML file serving the AR LED in an AR version of three.js. We will have to test how much delay this communication introduces. A public server seems to be the only way for the audience to access the AR site from outside the local NYU network. The delay might still be acceptable, as the knob and the AR column are situated on different floors. With iOS 11, AR on mobile is now possible on all platforms using getUserMedia/camera access.
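A minimal sketch of that relay, assuming a plain Node server on AWS; the file name and route are placeholders:

    // Minimal sketch: store the latest encoder value in a txt file and
    // serve it to the public AR page. File name and route are placeholders.
    const http = require('http');
    const fs = require('fs');
    const FILE = 'rotation.txt';

    http.createServer((req, res) => {
      if (req.url === '/value' && req.method === 'POST') {
        // The NodeMCU/knob side POSTs the current encoder value.
        let body = '';
        req.on('data', chunk => (body += chunk));
        req.on('end', () => { fs.writeFileSync(FILE, body.trim()); res.end('ok'); });
      } else if (req.url === '/value') {
        // The AR page polls the stored value from here.
        res.setHeader('Access-Control-Allow-Origin', '*');
        res.end(fs.existsSync(FILE) ? fs.readFileSync(FILE, 'utf8') : '0');
      } else {
        res.statusCode = 404;
        res.end();
      }
    }).listen(process.env.PORT || 3000);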

Here is a quick test with the example library from GitHub, a marker displayed on a screen, and an iPhone running iOS 11:

IMG_2101.PNG

 

mechanics

We found a 3 ft long rotating steel arm with bearings and an attachment ring on the shop's junk shelf. We will use it as the core rotating element inside the knob.

IMG_8318.JPG
IMG_8319.JPG

This rotating element will be mounted on a combination of wooden (plywood) bases that will stabilize the knob when it is moved. The weight of the knob will rest on wheels running in a rail/channel in the wide base; the rail features a wave structure so that the 'click' of the knob feels more accurate. Inside the knob we will build a structure with two solid wooden rings that match the diameter of the knob and are attached to the rotating element. On the outside we will cover the knob with lightweight wooden planks or cardboard tubes.

IMG_2103.JPG

fabrication

We worked on the wooden base for the knob/metal pole using multi-layered plywood to keep the rotary encoder within the same wooden element as the pole - this prevents damage to the electronics/mechanics when the knob is pushed or tilted to the sides.

IMG_2116.JPG
IMG_2114.JPG

collaboration

In collaboration with Brandon Newberg & Anthony Bui.

BOM

Screenshot from 2017-11-20 11-13-35.png

 

 

timeline

  • Tue. 21 Nov. 2017: tech communication finished, fabrication parts ordered

work on automation + fabrication (knob), work on AR script

  • Tue. 28 Nov. 2017: all communications work, knob basic fabrication finished

work on fabrication finish, column, code cleaning, documentation

  • Tue. 5 Dec. 2017: final presentation

PCOMP FINAL UPDATE: AN INSIGHT, A FINISHED PROJECT & NEW IDEAS

An Insight

After play-testing the original concept, my focus for my PComp final has changed a lot - and the project looks entirely different.

Two things came to my mind after the feedback session:

  • Is this a project that engages the audience in a direct interaction?
  • Is this a project that engages me in direct interaction with somebody else while building it? 

The answer in both cases tends towards "no" - I am doing this pretty much solo, and there is no direct interaction with the audience. While this is still a PComp project, I thought about switching the topic completely: I would like it to be interactive and to build it in collaboration with somebody else. I think I can learn more from that experience for my studies than from flying solo with a non-interactive piece - no matter how great it is.

A Finished Project

In my cuneiform project I realized that my original plan to laser-cut the neural-network-generated tablets was conceptually the strongest: the journey from human-made cuneiform on clay tablets would be completed with the creation of a contemporary physical object that is itself an artifact of the digital, machine-generated cuneiform. Both versions of the cuneiform are now physical manifestations of dreams - one human, the other artificial. Cuneiform was never more alive than now.

Finishing the project with this physical representation of the output of a neural network made much more sense than trying to squeeze in some human interaction / intervention. 

Here are a few results:

IMG_2090.JPG
IMG_2091.JPG

New Ideas

I spoke to my classmates Anthony and Brandon about my latest insights and offered to collaborate with them on a project. We came up with the idea of an oversized (bigger than human) knob situated on the 4th floor, adjusting the brightness of a tiny LED installed in the Tisch window downstairs. An audience member has to use her full body to turn the knob. While this interaction requires a lot of effort, the result is minuscule: the interaction itself becomes the main focus. It's a classic PComp interaction in an entirely different setting. We would use serial via local wifi for the communication. Brandon would help with the fabrication part, as he is already involved in another project; the core team would be Anthony and me.

turn_knob_led.png

 

While thinking about this more over the past few days, I tweaked the idea a bit and came up with a possible modification / iteration:

What if the big knob were a piece of furniture for a few people to sit on? And what if the output were not an LED but a live audio broadcast of the conversation in the room - with the knob regulating how "public" it is by mixing in more or less audio noise (text-to-speech of the latest tweets)? The focus of the audience would balance between the object itself (a giant knob to sit/lounge on) and the question of how public we want to make our conversations. The Twitter "noise" should highlight the fragmentation of conversations online - if we make something public, how public is it when everybody is public? By focusing on audio, the audience would not be distracted by playing with a video feed; the visual stimulus comes from the physical object of the giant furniture knob instead. We would use WebRTC to stream the audio on a webpage.
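A minimal Web Audio sketch of that mix, with white noise standing in for the text-to-speech tweets and a 0..1 knob value as an assumption:

    // Minimal Web Audio sketch of the privacy mix. White noise stands in
    // for the text-to-speech tweet "noise"; knob value is assumed 0..1.
    const ctx = new AudioContext();
    const voiceGain = ctx.createGain(); // live conversation from the mic
    const noiseGain = ctx.createGain(); // tweet "noise" stand-in

    // Route the room microphone into the mix.
    navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
      ctx.createMediaStreamSource(stream).connect(voiceGain);
    });

    // Looping white-noise buffer as the noise source.
    const buffer = ctx.createBuffer(1, ctx.sampleRate * 2, ctx.sampleRate);
    const data = buffer.getChannelData(0);
    for (let i = 0; i < data.length; i++) data[i] = Math.random() * 2 - 1;
    const noise = ctx.createBufferSource();
    noise.buffer = buffer;
    noise.loop = true;
    noise.connect(noiseGain);
    noise.start();

    // In the piece these would feed the outgoing WebRTC stream;
    // here they go to the local output for testing.
    voiceGain.connect(ctx.destination);
    noiseGain.connect(ctx.destination);

    // knob = 1 -> fully public (all voice), knob = 0 -> private (all noise).
    function setKnob(knob) {
      voiceGain.gain.value = knob;
      noiseGain.gain.value = 1 - knob;
    }
    setKnob(0.5);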

turn_knob.png

PCOMP & ICM: FINAL PROJECT OUTLINE

update:

laser cut tablet of neural network generated cuneiform

IMG_2091.JPG
 
glossolalia setup.png
 

title

glossolalia / speaking in tongues

question

Are neural networks our new gods?

concept 

My installation piece aims to create a techno-spiritual experience with a neural network trained on ancient cuneiform writing for an exhibition audience in an art context. 

context

The title 'glossolalia / speaking in tongues' refers to a phenomenon in which people seem to speak in a language unknown to them - mostly in a spiritual context. In the art piece, both "speaker" (the machine) and "recipient" (the audience) are unaware of the language of their communication: the unconscious machine is constantly dreaming up new cuneiform tablets that the audience cannot translate. Two things are worth mentioning: First, after 3000 years, one of the oldest forms of human writing (that is, the encoding of thoughts) becomes "alive" again with the help of a neural network. Second, it is difficult to evaluate how accurate the new cuneiform is - only a few scholars can fully decode and translate cuneiform today.

original cuneiform tablet (wikimedia)

part of original Sumerian cuneiform tablet (paper impression)

cuneiform generated by neural network

Observing the machine creating these formerly human artifacts in its "deep dream" is in itself a spiritual experience: it is, in a way, a time machine. By picking up on the thoughts of the existing cuneiform corpus, the machine breathes new life into the culture of Sumerian times. The moment the machine finished training on about 20 000 tablets (paper impressions) and dreamed up its first new tablet, the 2000-3000 year hiatus became irrelevant - for the neural network, old Babylon is the only world that exists.

In the installation piece, the audience gets the opportunity to observe the neural network the moment they kneel down on a small bench and look at the top of an acrylic pyramid. This activates the transmission of generated images from the network in the cloud to the audience; the images hover as an extruded hologram over the pyramid.

cunei_bstract [Recovered].png
side, above: digital abstractions of generated cuneiform

The audience can pick up headphones with ambient sounds that intensify the experience (optional).

It is important to mention that the network is not activated by the audience: the audience merely gets the opportunity to observe its constant, ongoing dream. The network is not a slave to the audience; it is regarded as a new kind of entity in itself. When an audience member gets up from the bench, the transmission stops - the spiritual moment is over.
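On the pcomp side, the bench can simply act as a switch that gates the transmission; a minimal sketch assuming a Raspberry Pi with the onoff npm package (GPIO pin 17 is a placeholder):

    // Minimal sketch: a switch under the bench gates the transmission.
    // Assumes a Raspberry Pi with the onoff package; pin 17 is a placeholder.
    const { Gpio } = require('onoff');
    const bench = new Gpio(17, 'in', 'both', { debounceTimeout: 50 });

    // 1 = someone kneels on the bench, 0 = they got up again.
    bench.watch((err, value) => {
      if (err) throw err;
      if (value === 1) {
        console.log('kneeling detected - start showing generated tablets');
      } else {
        console.log('audience member got up - stop the transmission');
      }
    });

    // Release the pin on exit.
    process.on('SIGINT', () => { bench.unexport(); process.exit(); });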

 

BOM

-- "H" for "have", "G" for "get" --

PCOMP parts:

  • raspberry pi or nvidia jetson tx2 -- G
  • if raspberry pi: phat DAC for audio out -- G
  • 8266 wifi module -- G
  • 2 x buttons / switches -- H
  • headphones -- G
  • audio cables -- G
  • screen -- G
  • hdmi cable -- H
  • local wifi-network (phone) -- H

ICM parts: 

  • GPU cloud server trained on 20 000 cuneiform tablets (scientific drawings, monochrome) -- H
  • processing sketch -- H

Fabrication parts:

  • wood for kneeling bench -- G
  • wood for column -- G
  • acrylic for pyramid -- H
  • wooden stick for headphone holder -- G
 

system diagram

glossolalia_schematics.png

 

timeline

  • Tue. 7 Nov. 2017: playtesting with paper prototype / user feedback

work on server-local comm, processing sketch, order/buy fab parts

  • Tue. 14 Nov. 2017: server-local comm works, all fabrication parts received

work on input-local comm, processing sketch, fabrication (Pepper's ghost)

  • Tue. 21 Nov. 2017: processing sketch finished, input-local comm works

work on automation, fabrication (bench, column)

  • Tue. 28 Nov. 2017: all comms work, basic fabrication finished

work on fabrication finish, code cleaning, documentation

  • Tue. 5 Dec. 2017: final presentation