READ MORE ABOUT GENE:
On Motherboard - Artists Used Machine Learning to Turn Hand Drawn Maps Into Satellite Images
On Artsy - Open Source Could Let Computers Create the Next Mona Lisa
This workshop introduces the theory and application of machine learning for creative and artistic practice. It will focus on core algorithms used for parsing, visualizing, and discovering patterns in complex multimedia data, including images, sounds, and text. We will learn how to use neural networks to create real-time, cross-modal interactions for use in video and installation, as well as live music performance. We will also provide tools and code for clustering, visualizing, and searching through large collections of multimedia.
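To give a flavor of the "clustering" mentioned above: grouping media by similarity usually means running an algorithm such as k-means over feature vectors extracted from images or sounds. The sketch below is a minimal, illustrative k-means in pure Python on made-up 2-D vectors; it is not the workshop's actual code, and the data and function names are invented for the example.

```python
def kmeans(points, k, iters=10):
    """Naive k-means: group feature vectors into k clusters.

    `points` is a list of equal-length tuples (stand-ins for
    image/audio feature descriptors). Initial centers are spread
    evenly through the input list -- a simplistic choice for demo
    purposes only.
    """
    if k > 1:
        idx = [round(i * (len(points) - 1) / (k - 1)) for i in range(k)]
    else:
        idx = [0]
    centers = [points[i] for i in idx]
    for _ in range(iters):
        # Assign each point to its nearest center (squared distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
            clusters[nearest].append(p)
        # Move each center to the mean of its assigned points.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(dim) / len(cl) for dim in zip(*cl))
    return centers, clusters

# Two obvious groups of 2-D "feature vectors" (toy data).
data = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),
        (5.0, 5.1), (4.9, 5.0), (5.2, 4.8)]
centers, clusters = kmeans(data, k=2)
```

In practice the feature vectors would come from a trained network rather than being hand-written, and a library implementation would replace this toy loop, but the grouping idea is the same.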
SKILL LEVEL: Intermediate
WHAT WILL BE COVERED:
• An intro to machine learning and a survey of critical issues demonstrating its relevance.
• Science and theory: how neural networks are designed, how they are trained, and what their low-level applications are.
• High-level applications to artistic and creative practice within design, visual art, sound, and physical computing.
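As a taste of the "how neural networks are trained" topic above: training means nudging a model's weights to reduce prediction error, via gradient descent. The sketch below trains a single artificial neuron (y = w·x + b) to fit a line; it is an illustrative toy, not workshop material, and the target function and names are invented for the example.

```python
def train(samples, lr=0.05, epochs=500):
    """Fit one neuron y_hat = w*x + b by stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            y_hat = w * x + b          # forward pass: the prediction
            err = y_hat - y            # how wrong we are
            # Gradients of the squared error 0.5 * err**2:
            w -= lr * err * x          # update weight
            b -= lr * err              # update bias
    return w, b

# Toy target mapping: y = 2x + 1.
data = [(x, 2 * x + 1) for x in [0, 1, 2, 3]]
w, b = train(data)
```

After training, w and b land close to 2 and 1. Real networks stack many such neurons with nonlinearities between them, but the update loop is the same idea at scale.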
WHAT TO BRING:
A laptop (Mac, Windows, or Linux).
Any interactive sensors you wish to bring are welcome, e.g. Kinect, Leap Motion, brainwave scanner, Myo armband, etc.
WHAT SOFTWARE TO HAVE INSTALLED:
Processing, openFrameworks, and Wekinator.
*Artists and musicians should come with any software they use installed and ready, e.g. Ableton Live, Audio Units, other DAWs, Resolume, VDMX, etc. Those interested in physical computing may bring Arduinos or other microcontrollers.
Prior coding experience in a text-based (Python, Java/Processing, C++/openFrameworks) or patch-based (Max/MSP, Pure Data, vvvv) programming environment is helpful but not necessary.