We invite users to contemplate their online selves and identities through a semi-realtime chat built from pre-recorded videos and actors - and the possibility for users to participate in this fake setup.
- how do we establish trust online with strangers?
- how do we perceive ourselves in a group environment online?
- what are the rules of communication in a video chat with strangers?
- how is time related to communication?
inspiration / early sketches
We started with early sketches that played with video feeds in VR. Initially we wanted to give users the possibility to exchange their personalities with other users: we had the idea of a group chat setup where you could exchange "body parts" (video cutouts) with different chat group participants. This should be a playful and explorative experience for participants. How does it feel to be in parts of another identity? How do we rely on our own perception of our body to establish our identities? How do we feel about digitally augmented and changed bodies? How does that relate to our real body perception? Does it change our feeling for our own bodies and therefore our identities? How close are we to our own bodies? Do we believe in body and mind as separate entities? How are body and mind separated in a virtual and bodyless VR experience?
Staying close to our initial sketches, we constructed an online environment that should fulfill the following requirements:
- group experience with person-to-person interaction
- video feeds like in a webcam chat
- insecurity, vagueness and unpredictability as guiding feelings for the interaction with strangers
- fake elements and identities to explore the role of trust in communication
- a self-sustainable environment that has the potential to grow or feed itself
To achieve that we built a welcome page that simulates a video-chat network. The aesthetics were kept partially retro and cheap: the website should not look completely trustworthy and should already invoke a feeling of insecurity - but still stimulate interest in the unknown.
The main page features 3 screens with the user's webcam feed in the center - right between two users who seem to be establishing a communication. The user should feel in between those two other users and feel the pressure to join in on this conversation. The users on the left and right are both actors; their video feeds are pre-recorded and not live. The user in the middle does not know this - the feeds should look like realtime chat videos.
While trying to establish a conversation with the fake video-chat partners, the user's webcam feed gets recorded via WebRTC. After 40s in this environment the recording of this feed suddenly pops up and replaces the actor's video feed on the left side. The user should now realise that reality is not what it seems in this video chat: it is fake, and even time seems unpredictable. After 5s of looking at their own recorded feed, a popup on top of the screen asks the user if they want to feed this recording of themselves into the next round to confuse other people. The question here is why a user would do that. In user testing, most users wanted to participate in the setup in the next round. As the users dynamically replace the videos on the left and right, this could become a self-feeding chat that is never real time - you are always talking to strangers from another time. But for a few seconds they exist for you in real time in this world - until you realize that this was not true. At least according to our concept of time and being.
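The self-feeding rotation described above can be sketched as a small piece of server-side logic. This is a minimal illustration, not the project's actual code: the function name `advance_round` and the filenames are hypothetical, and the real setup swaps actual video files rather than plain identifiers.

```python
from collections import deque

def advance_round(slots, queue, new_recording):
    """Push the current user's recording into the pool, retire the
    oldest one, and refill the left/right slots - so every new visitor
    chats with recordings of earlier visitors, never in real time."""
    queue.append(new_recording)  # user agreed to feed the next round
    queue.popleft()              # oldest recording rotates out
    slots["left"], slots["right"] = queue[0], queue[1]
    return slots

# Seeded with the two pre-recorded actor videos (hypothetical names).
queue = deque(["actor_left.webm", "actor_right.webm"])
slots = {"left": queue[0], "right": queue[1]}
advance_round(slots, queue, "visitor_001.webm")
```

After one round the first visitor's recording occupies a side slot, and each further round pushes the actors out entirely - the chat sustains itself on its own visitors.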
As mentioned before, we used three.js as the main framework. On top of that we used WebRTC extensively, especially the recording and download functions for webcam feeds. On the backend, Python helped us move the recorded and downloaded files dynamically from the download location on the local machine to a JS-accessible folder. Python also helped us keep track of the position of the videos (left or right) when the browser window gets reloaded between different users. This was a hack - a Node server would probably have been better for this task - but Python was simply quicker.
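A minimal sketch of what such a Python helper might look like - the folder paths, the JSON state file and the function name are assumptions for illustration, not the project's actual script (which lives in the repo):

```python
import json
import shutil
from pathlib import Path

# Hypothetical paths - adjust to the real download and web-served folders.
DOWNLOADS = Path("downloads")     # where the browser saves recorded feeds
PUBLIC = Path("public/videos")    # folder the JS front end can access
STATE = PUBLIC / "slots.json"     # remembers which video sits left/right

def publish_recording(filename, slot):
    """Move a recorded webm into the JS-accessible folder and persist
    its slot ("left" or "right") so it survives a page reload."""
    PUBLIC.mkdir(parents=True, exist_ok=True)
    shutil.move(str(DOWNLOADS / filename), str(PUBLIC / filename))
    state = json.loads(STATE.read_text()) if STATE.exists() else {}
    state[slot] = filename
    STATE.write_text(json.dumps(state))
    return state
```

Persisting the slot assignment in a small JSON file is the simplest way to let each fresh page load pick up where the previous visitor left off.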
We did not use the app in a live setup, as we felt we needed to refine the code and also experiment further with porting it to VR.
So far it has been very rewarding for me, as I could explore a lot of three.js while working with Chian on the project. WebRTC proved again to be a great way to set up a live video chat - with a little help from Python on the backend it worked as a prototype. The VR version will probably have to run exclusively in Unity, which is mainly C# - a new adventure for the next semesters!
Here is a video walkthrough:
On the code side we used a GitHub repo to back up our files. There you can find all files, including a short code documentation as a readme.