For my talk at the RunwayML x Paperspace ml4a meeting I created a special rave version of the EDEN project with the AI agent as DJ/VJ partner. I control the beats in Ableton Live while the agent triggers sounds and visuals in Unity.
I spent the last days of thesis working on my presentation, tying everything together and making it understandable for the outside world:
In the last weeks of thesis I iterated one more time on the agent simulation and on how we as humans can perceive autonomous AI: I finally settled on sonifying the experiences of the agent in a quadraphonic speaker setup. Music and sounds are strong emotional components of an experience; focusing on them without showing a screen with the simulation creates a stronger bond to the AI agent walking through its paradise. Because music affects our feelings directly, in a way your heart “visualizes” the simulation better than your eyes:
“And now here is my secret, a very simple secret: It is only with the heart that one can see rightly; what is essential is invisible to the eye.”
(Antoine de Saint-Exupéry, “The Little Prince”)
Therefore the audience can hear exactly what the AI hears in real time: the simulation of paradise is mapped onto a real 3D space. Each element in the Unity simulation emits a unique sound the moment the agent sees it (using raycasting to detect its surroundings). Unity translates this into a surround sound experience as the agent moves through the simulation environment. The audience listens to it through an audio listener situated on the agent's head. While listening, the audience sits in a physical booth with four speakers, one in each corner of the room. The audience chair is placed on an inflatable boat to create a third space and the experience of floating through sounds.
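A minimal sketch of how this raycast-triggered sonification could look on the Unity side; the component name, sight range, and layer setup are assumptions for illustration, not the exact thesis code:

```csharp
using UnityEngine;

// Sketch (hypothetical names): cast a ray from the agent's eyes each frame;
// when it hits a paradise element, play that element's unique sound.
// The AudioListener sits on the agent's head, so Unity spatializes the sound around it.
public class AgentSight : MonoBehaviour
{
    public float sightRange = 20f;    // how far the agent can "see" (assumed value)
    public LayerMask elementMask;     // layer containing the paradise elements

    void Update()
    {
        RaycastHit hit;
        if (Physics.Raycast(transform.position, transform.forward, out hit, sightRange, elementMask))
        {
            // Each element carries its own AudioSource with a unique, spatialized clip.
            AudioSource voice = hit.collider.GetComponent<AudioSource>();
            if (voice != null && !voice.isPlaying)
            {
                voice.Play();   // heard in 3D relative to the AudioListener on the agent's head
            }
        }
    }
}
```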
To create an even more intimate connection to the agent, the agent can only explore its paradise if there is human life in the installation: I wrote a little Apple Watch app that detects the heartbeat of the audience (or artist) in real time, sends it via a Node.js server to the Unity server (using a simple RESTful API I set up) and ties the steps of the agent to it. As an extra feature and control mechanism, the watch also displays the element currently detected in the simulation, just as a simple word, for example “birds” (useful for the artist, though distracting for the audience - something to improve …).
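A rough sketch of how the Unity side of this could work, assuming the Node.js server exposes the latest heart rate at a hypothetical /heartbeat endpoint returning a plain-text BPM value; the URL, response format, and movement mapping are my assumptions, not the actual API:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch: poll the Node.js server for the audience's heart rate and let the agent
// walk at a pace tied to that heartbeat. No heartbeat, no movement.
public class HeartbeatDriver : MonoBehaviour
{
    public string serverUrl = "http://localhost:3000/heartbeat"; // assumed address
    public float pollInterval = 1f;
    float bpm = 0f;

    void Start()
    {
        StartCoroutine(PollHeartbeat());
    }

    IEnumerator PollHeartbeat()
    {
        while (true)
        {
            using (UnityWebRequest req = UnityWebRequest.Get(serverUrl))
            {
                yield return req.SendWebRequest();
                if (!req.isNetworkError)
                {
                    float.TryParse(req.downloadHandler.text, out bpm);
                }
            }
            yield return new WaitForSeconds(pollInterval);
        }
    }

    void Update()
    {
        // The agent only explores its paradise while a human heart is present in the installation.
        if (bpm > 0f)
        {
            float stepsPerSecond = bpm / 60f;
            transform.Translate(Vector3.forward * stepsPerSecond * Time.deltaTime);
        }
    }
}
```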
Here is an overview of the tech setup:
And a short video:
This week I explored digital locations for my installation piece: I wanted not only to build a smaller real-life version of the honey-pond cloth-dragging frontend, but also to experiment with the notion of the pure "digitality" of an art piece.
After trying to port it into Decentraland (a VR space running entirely on the blockchain) and running into trouble porting the PLA animation from C4D to the Decentraland editor (it only accepts armature animations at the moment), I successfully ported the animation, including a shader baked from Arnold for testing, into A-Frame and WebVR (ignore the bare-bones walls and the extra cloth in the web version ... something I need to fix).
The Babylon.js sandbox was really helpful for playing with the different elements of the animation.
Baking textures in C4D takes a very, very long time on my Mac, so I baked out only one texture (which comes with its own troubles regarding exact lighting). To get a hyper-real animation on the web, I would technically need to bake out the cloth texture for every frame of the animation - which would make it way too big for web use. So for now I will stick to the workaround; it looks OK.
These experiments have the goal of hosting the entire agent simulation and the installation piece online: accessible 24/7, explorable from every angle with WebVR.
Another idea was to port it to iOS and the Apple Watch. That would mean the simulation and installation would always be on a human wrist - which is conceptually very interesting. A sort of wearable installation that deals with our expulsion from paradise ...
While doing a lot of digital work, I am finalizing my orders at McMaster-Carr for a small railing system for linear motion. I want to build an exact small-scale replica of the bigger mechanics so that I can show that they would also work at a larger scale.
Setup

At our ITP Quick & Dirty Show I tried a new setup that involved some user interaction. This was a different direction from the original setup as an art piece. I wanted to see if people would be interested in interacting with the piece more directly or whether they wanted to be silent observers.
In that case, the users were required to drag the piece of fabric over the honey pond instead of a robot arm doing it.
Feedback/Reactions/Challenges

*leaking is an issue - at the end of the 2 hours the pond leaked on 3 sides, as the honey water pressure from moving the cloth affected the seals
*if I want user interaction, there has to be immediate feedback for the users - people did not understand how their actions affected the simulation
*people wanted to see the animation as well
*some people thought the honey water was a digital simulation
*the question came up if there is a way to let the agents "break out" of paradise - how do they perceive the world then?
*nobody understood why I would deconstruct the simulation
*a long artist statement would be necessary and wanted (one visitor referred to the long statements of Ian Cheng and his simulations)
*people liked dragging things over a digital screen through thicker liquids
Other Developments

No lead on the venue yet (Barak, Tong & I contacted a space in Manhattan about two weeks ago, so far no feedback)
Takeaways/Next Steps

*Building a full-size structure doesn't make sense without a venue to house it -> I will continue working on a small version of the installation piece, as a small devotional device in a nice housing, more of a design piece + render out a digital version of it
*I want user interaction for both versions now: I have to improve the feedback for both the digital version and the small physical devotional object - waiting for "exile" to pop up takes too long and people get bored.
To think about and decide asap:
*I have to decide if I want an extension of the piece into real life: based on the user feedback, should the agents be able to break free, meaning generate their own worlds after being exiled? I could build image generation into the piece, or let them roam as "robot" cars on the floor
*I might think about using a screen + a honey simulation instead of real honey for the devotional device - users would drag a cloth over it, or swipe a real cloth, which would trigger the simulation