Gene Kogan is an artist and programmer exploring autonomous systems, collective intelligence, generative art, and computer science. He is interested in advancing scientific literacy through creativity and play, and in building educational spaces that are as open and accessible as possible. His work is all free and open-source, and he records many of his lectures and tutorials for free distribution. Currently, he is leading an open project to create an autonomous artificial artist, as well as compiling a free educational toolkit on machine learning for art.
WORKSHOP: Interactive Machine Learning


Machine learning is a versatile and extremely general technology, with broad applications across the sciences and humanities. It has reignited long-standing debates about the nature of intelligence and the limits of technology, reshaped notions of authorship and originality, and facilitated recent innovations in human-computer interaction. It has numerous creative applications and is rapidly becoming a default faculty in much of the software we interact with daily. Until recently, it was a relatively obscure technology, requiring uncommon computer skills to work with in a hands-on way. Libraries like ml5 lower the barrier of entry for programmers without specialized knowledge of AI, while tools like Runway lower the barrier still further for non-programmers and people in other fields.

This hands-on workshop introduces techniques from machine learning for real-time interactive applications. We will be using Runway, a tool which makes it easy to install and run models inside your existing workflows, as well as ml5.js, a JavaScript library which wraps neural networks into an intuitive high-level API. Each of these tools, in its own way, makes AI more accessible to non-specialists and people in creative fields who want to apply state-of-the-art machine learning models to their own craft. We will cover a wide array of vision, sound, and language-based models which do everything from extracting structured meaning from raw data to generating photorealistic images and paragraphs of coherent text. Each model will be presented along with its use cases and stubs of ideas for unguided exploration. The class is beginner-friendly and targeted towards artists, designers, and other creatives. Prior experience with programming is helpful but not required.

SKILL LEVEL: Intro / Intermediate

- An introduction to deep learning and how neural networks actually work.
- A review of general use cases and core applications across different types of media and interactive contexts.
- A tutorial on Runway, an application which makes it easy to run open-source deep learning models found on the internet.
- A tutorial on ml5.js, a JavaScript library which wraps GPU-accelerated deep learning into an intuitive interface.
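As a taste of the style of code the ml5.js tutorial builds toward, here is a minimal sketch of image classification with a pretrained MobileNet model. This is a sketch, not workshop material: it assumes ml5.js is loaded in the page via a script tag, that an `<img id="myImage">` element exists, and that your ml5 version uses the `(error, results)` callback signature (newer releases vary slightly).

```javascript
// Minimal ml5.js sketch: classify an image with a pretrained MobileNet.
// Assumes ml5.js is loaded in the page, e.g.:
//   <script src="https://unpkg.com/ml5/dist/ml5.min.js"></script>
// and that an <img id="myImage"> element exists in the document.

const classifier = ml5.imageClassifier('MobileNet', () => {
  console.log('Model loaded');

  // Classify an image element already on the page.
  const img = document.getElementById('myImage');
  classifier.classify(img, (error, results) => {
    if (error) {
      console.error(error);
      return;
    }
    // results is an array of { label, confidence } guesses,
    // sorted from most to least confident.
    console.log(results[0].label, results[0].confidence);
  });
});
```

The whole pipeline, from loading a GPU-accelerated model to getting labeled predictions, fits in a dozen lines, which is the point of the high-level API the workshop introduces.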

• A personal laptop

• The most recent version of RunwayML

• Optionally, also install Docker
*All participants will be given free RunwayML credit for the workshop.


TALK: Collective Imagination

After years of being a niche interest of AI scientists and code-based artists, "AI art" has spilled over into the mainstream with the arrival of "foundation models" like CLIP and GPT-3. These models are trained on crowd-sourced datasets containing hundreds of millions of images or billions of words, are capable of rendering realistic and compelling images and text, and can be "guided" by a human through a natural language interface. Whether we know it or not, we are building a global brain that may soon account for a significant fraction of all the media generated on and for the internet. If humans can be said to have something resembling a collective mind, I'm convinced these technologies provide a window into its imagination. This talk will summarize the state of the art, speculate on the future applications and ramifications of these techniques, and explore what they tell us about ourselves.