How Deep is Your Dream
Lecture series at Merz Akademie, Stuttgart
curated and moderated by Prof. Olia Lialina
Artificial Intelligence has proved to be a very unstable field: its 60-year history is a chain of periods filled with excitement and anticipation of the singularity, alternating with times of total neglect known as AI winters, when donors lose hope of replacing people with machines, and when the general public is satiated with chat- and chess- and fridge-bots.
But this time* everything is different. The decade started with a lavish AI spring, flourished with Big Data and the Internet of Things, and doped the world with the almost forgotten scent of smart homes and smart cities. This set the stage for something bigger: a new AI summer, a hot and deep summer of Neural Networks. NN-based AI, or ANN, is also referred to today as "real" and "strong" AI: manufactured systems imitate a network of brain neurons, algorithms rewrite themselves, computers learn from their mistakes and show off the first successes in science, business, security ... and art.
In July 2015 Google released Deep Dream, a restless algorithm that learned to stare at an image until it sees the dog inside. The images it produces are ridiculous, scary and recognizable. Pattern recognition software made itself visible. It provoked developers and artists to exploit and explore neural networks: to dive deeper into their layers, looking for the source; to uncover and reverse-engineer the styles of epochs, art movements and particular names; and to come back to the surface with visuals that would not be possible without an artificial brain.
How Deep is Your Dream is an invitation to talk about machine learning, about educating machines and, most importantly, about how to resist algorithmic authority; about the merging of AI and the arts, and the role of the media artist in teaching computers to become one.
* as every time
You may also want to read an extended and polemical version of the announcement by Kyle McDonald.
In German and with phenomenal posters designed for the event by Stefanie Ackermann
8 November 19:30
Trapped In The Body Of A Machine
Sebastian Schmieg, Berlin
artist, author of Segmentation.Network
It's becoming increasingly hard to tell computers and humans apart. While this might mean that we're entering an exciting age of exponentially growing intelligence and eternal life, I will approach the blurring of this distinction by focusing on the invisible manual labor and the constant exploitation of our cognitive and bodily capacities that are necessary for building and running so-called artificial intelligence. In doing so, it might turn out that telling artificial intelligence apart from an efficiently run mega-corporation is becoming even more difficult.
My dreams are haunting me. But I like them. I like them a lot. I am giving speeches in my dreams, with my actual voice. As we all know, reality now appears in two modes, virtuality and actuality. Well, they have always existed side by side. Only now the semiotic machine has entered the game. And this makes a lot of difference. The semiotic animal, the human, now believes in its other self. The development of computers makes people believe in Artificial Intelligence. Its problem is really those who claim to bring it about.
What I want to tell you is this: the first moments of the birth of algorithmic art, which happened on the 5th of February, 1965, in Stuttgart; and an example of truly Artificial Art, Harold Cohen's expert system AARON. I'll embed this in an indication of the principles of Generative Aesthetics. We'll see that interdisciplinarity exists in individuals only.
The field of Artificial Intelligence is not a black box so much as a box filled with our fears, hopes, expectations and many more unexpected feelings. We all really believe in “technology”. What I have discovered through a process of interviewing people and during a trip to Silicon Valley is that AI reveals our underlying tendency to bridge semantic gaps, to fill in what we don’t understand with mystic beliefs, powers and deep-rooted metaphorical links, and to equate AI's operations with inexplicable magic.
Artificial intelligence, machine learning, and deep learning go back 60, 30, and 10 years, respectively. But their antecedents go much further back in time, drawing upon a centuries-old tradition of philosophical inquiry into the nature of intelligence, as well as a more controversial one over what is and is not a machine. But epistemology is only part of the problem; humans have often aggravated this gap in understanding by shrouding scientific knowledge behind ivory towers, paywalls, proprietary software, and private firms.
This talk will present a short history of this knowledge gap. It will peel off the glitter from these powerful machine learning algorithms, and show what kinds of (sometimes humorous, and sometimes dangerous) mistakes they are capable of, presenting case studies of neural nets in the wild: their interactions with people, and perhaps more interestingly, their interactions with each other. Adversarial nets, indeed!
Imaging and capture technology has digitised our bodies at a resolution never attained before, and this tendency is only accelerating, be it for the purpose of digitally ageing Brad Pitt on screen or for India’s citizen registry program, now the largest biometric database worldwide. As imaging technology progresses, what are the future uses of these collected and stored digitised bodies for narrative creation? Avatars @ Echo Lake is the first episode of the fact-fiction project The Contents, following a cluster of digital and real characters on their quest through real-time engines, stories and bodies.