I decided to write a little bit about my thesis process every day and put everything together in a weekly post.
day 1 / night 1
After yesterday’s session with our advisor Kat Sullivan, it’s time to go through things for thesis today: a little bit of concept, a little bit of research and a bit of planning for this week. Not too detailed, as my concept depends heavily on prototyping - I will have to set milestones after each week and decide how to continue.
How can art and technology enrich our spiritual search?
“Even the most perfect reproduction of a work of art is lacking in one element: its presence in time and space, its unique existence at the place where it happens to be.”
(Walter Benjamin, The Work of Art in the Age of Mechanical Reproduction)
real time, real space art performance
create a spiritual performance for robots around a central artistic AI: robots and AI try to save humankind with art
based on Joseph Beuys and Greek mythology, my thesis tries to “heal” - honey is used as a metaphorical tool for this
stage 1: create a viral AI that helps humans to be artistic every day
stage 2: destroy any digital code of the virus, only encrypted code on clay tablets survives
stage 3: create a physical sculpture as a devotional object made out of tablets and AI
stage 4: create a ritual for commercial mini-robots (Anki, Alexa, …) where they try to decrypt the tablets (e.g. do a Fluxus performance around the sculpture) to save humankind
This week is devoted to choosing and building the right network architecture, gathering training data (performance art in video form, mostly Fluxus) and training a first network as a draft (probably image-based, frame-by-frame object detection / transfer learning with ImageNet as a starter).
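Since the training data is video footage but the network works frame by frame, I first need to sample frames out of the clips. A minimal sketch of the sampling logic I have in mind - the helper names, the one-frame-per-second default and the OpenCV calls are my own choices here, not a finished pipeline (OpenCV via the opencv-python package):

```python
def sample_frame_indices(total_frames, fps, every_n_seconds=1.0):
    """Pick evenly spaced frame indices, e.g. one frame per second."""
    step = max(1, int(round(fps * every_n_seconds)))
    return list(range(0, total_frames, step))


def extract_frames(video_path, out_dir, every_n_seconds=1.0):
    """Dump sampled frames as jpgs for the image-based classifier."""
    import os
    import cv2  # opencv-python

    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    os.makedirs(out_dir, exist_ok=True)
    for i in sample_frame_indices(total, fps, every_n_seconds):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i)
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(os.path.join(out_dir, f"frame_{i:06d}.jpg"), frame)
    cap.release()
```

One frame per second should keep the dataset small enough for quick Colab experiments; the rate is easy to tune later.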
Here is the very basic transfer-learning network structure I am experimenting with (based on the Udacity “AI Programming with Python” online class I took last year):
```python
from collections import OrderedDict

from torch import nn

classifier = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(25088, 200)),
    ('relu1', nn.ReLU()),
    ('dropout1', nn.Dropout(p=0.5)),
    ('fc2', nn.Linear(200, 120)),
    ('relu2', nn.ReLU()),
    ('dropout2', nn.Dropout(p=0.5)),
    ('fc3', nn.Linear(120, 102)),
    ('output', nn.LogSoftmax(dim=1)),
]))
```
I am currently hosting everything on Google Colab for quick prototyping. Back to watching the loss go down:
While doing this, playing with particles in c4d offers superb distractions:
If there is additional time, I will try to research more on computer viruses / worms.
day 2 / night 2
Today was a day for classes at ITP, so less time to play with the network - but time to hear some amazing things in Gene Kogan’s machine learning class “Autonomous Artificial Artists”. The goal of this class is to create exactly what the title implies: an autonomous artificial artist - a neural network “living” and “working” on the blockchain. This also means that the training will be distributed, and ideally the network will continuously retrain itself. This sounds very much up in the air, but it is worth pursuing in this 6-week class. I am very excited about this, especially in light of my thesis. Here I want to create a more evolutionary, distributed neural network as a form of positively viral entity. Maybe I’ll bring it onto the blockchain as well - this might distribute power more evenly. Something to consider.
An interesting and refreshing look at my thesis from a performance (and technology) perspective is this lecture I recently stumbled upon: “From Fluxus to Functional: A Journey Through Interactive Art” by Allison Carter.
Looking at my own little network, I have to be a lot more humble: the architecture I chose does not seem to be working well - I cannot get the error down fast enough. Here is my architecture for the classifier:
And this is the result after an hour of training and 18 epochs:
This is not great. The loss is still high, stuck around 2.6 - something needs to be fixed, either in my code or in my hyperparameters. Maybe I just need to train longer? Or switch the pre-trained network that I use as a basis for transfer learning? So far I have been using vgg16; I want to try densenet tomorrow and see if I get better results. I will need to experiment over the weekend to get the accuracy to around 80%.
I still want to avoid just taking an off-the-shelf working network from GitHub and training it with my data. I have done that too often in the past. This time I want to build it from scratch on my own.
day 3 & 4
On Friday I had Designing Meaningful Interactions with Katherine Dillon. Another great class and another great (and maybe different) perspective on my thesis - this time through the lens of design. I have always seen my thesis project as an art project, not a design project. Since that class, this has changed a bit. At least I am trying to think of how it would take shape as a design endeavor. Well, what would it look like? I am trying to think of users now: potential owners of intelligent & connected objects, performers of artistic actions. And I am somehow back to the rock that I created last year: there is something to the idea of designing an object of devotion for people who are not religious but spiritual. I have been thinking a lot about that in the past two days. Will it be for humans only? Or would it be for robots? How would such an object look, and what would it actually do?
Well, lots of questions come to mind, and my original plan is put on hold a bit so I can answer them first. In the design class, the main point of designing an interaction is to think about how an object makes the user feel. I like this way of thinking. Maybe it will also help me write the fantasy-dream review we are supposed to write for Tuesday.
So I came up with a few emotions that my object and the connected action should trigger:
sensuality (digital and physical)
In that light, I just tried to sketch a few objects that would be interconnected and maybe triggered by true randomness? Maybe controlled by one centralized neural network? Ideally those objects would change their shape; the cubes would move around, maybe glow - like a clock running on true randomness instead of seconds/minutes/hours. Ideal size: it should fit into the palm of your hand.
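To get a feel for the "random clock" idea, a tiny sketch of what the trigger logic could look like. Everything here is a placeholder of mine - the `cube_*` names, the 60-second cap - and `secrets` draws from the OS entropy pool, which is only a stand-in: actual true randomness would need a hardware random number generator.

```python
import secrets


def next_trigger_delay(max_seconds=60):
    """Seconds until an object's next 'tick', drawn from the OS entropy
    pool (secrets) rather than a seeded pseudo-random generator.
    A hardware RNG would replace this for true randomness."""
    # secrets.randbelow is uniform over [0, max_seconds * 1000) milliseconds.
    return secrets.randbelow(max_seconds * 1000) / 1000.0


def schedule(n_objects, max_seconds=60):
    """One independent random delay per connected object (cube)."""
    return {f"cube_{i}": next_trigger_delay(max_seconds) for i in range(n_objects)}
```

A centralized neural network could later replace `next_trigger_delay` as the thing deciding when each cube moves or glows.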
On top of this I was reading about the meaning of devotional objects in Renaissance Italy, the ubiquity and use of these animated objects in domestic devotion and a (possible) history of atheism - while moving to a new studio. What a journey so far!
day 5 & 6
After mixed results with vgg16 I tried a different pre-trained network, densenet121, for image recognition, to do transfer learning on my experimental flowers test set. Eventually I would switch this test set to frames from Fluxus performance footage. I still wonder if this is the right approach and have to talk to Gene Kogan, our resident here at ITP, about the ideal training setup. For now, the test example gives me a better understanding of the network architecture and of transfer learning in general - something I wanted to explore a bit more in depth this semester. I changed the classifier architecture a bit and used only 1 hidden layer:
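A sketch of what that slimmer classifier looks like on top of densenet121's 1024-dimensional features - note that the hidden size of 256 is my illustrative placeholder; the only fixed point is that there is a single hidden layer and 102 flower classes:

```python
from collections import OrderedDict

import torch
from torch import nn

# densenet121's feature extractor outputs 1024 features, so the classifier
# head shrinks a lot compared to vgg16's 25088 inputs.
# Hidden size 256 is illustrative; only "1 hidden layer" is fixed.
classifier = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(1024, 256)),
    ('relu1', nn.ReLU()),
    ('dropout1', nn.Dropout(p=0.5)),
    ('fc2', nn.Linear(256, 102)),
    ('output', nn.LogSoftmax(dim=1)),
]))
```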
This led to a faster drop of the loss at first, but then the loss got stuck at around 1.2 (65% accuracy) - maybe due to this very minimal classifier structure:
On top of that, I continued with my c4d renders, ideating on a design framework for a physical representation of my spiritual art AI. I just got a postcard from a good friend of mine in Germany that summarizes this activity (diving into color and shapes, happiness, obsession and unconditionality):