David is a Research Scientist at Google Brain. His research interests include Recurrent Neural Networks, Creative AI, and Evolutionary Computing. Prior to joining Google, he worked at Goldman Sachs as a Managing Director, where he co-ran the fixed-income trading business in Japan. He obtained undergraduate and graduate degrees in Engineering Science and Applied Math from the University of Toronto.

In his most recent research paper, he presents sketch-rnn, a recurrent neural network (RNN) able to construct stroke-based drawings of common objects. The model is trained on thousands of crude human-drawn images representing hundreds of classes. He outlines a framework for conditional and unconditional sketch generation, and describes new robust training methods for generating coherent sketch drawings in a vector format.
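To make the vector format concrete: the sketch-rnn paper represents a drawing as a sequence of pen offsets with one-hot pen states (the "stroke-5" format), rather than as pixels. The sketch below illustrates that representation; the helper name `strokes_to_points` and the toy drawing are illustrative, not from the paper.

```python
# Stroke-5 format: each step is (dx, dy, p1, p2, p3), where dx, dy are
# pen offsets from the previous point and p1/p2/p3 are one-hot pen states:
# p1 = pen touching paper, p2 = pen lifted, p3 = drawing finished.

def strokes_to_points(strokes):
    """Convert relative stroke-5 steps to absolute (x, y, pen_down) points."""
    x, y = 0.0, 0.0
    points = []
    for dx, dy, p1, p2, p3 in strokes:
        x += dx
        y += dy
        points.append((x, y, bool(p1)))
        if p3:  # end-of-drawing marker: stop decoding
            break
    return points

# A tiny "drawing": two pen-down steps tracing a horizontal line,
# then a pen lift, then the end-of-drawing marker.
drawing = [
    (5.0, 0.0, 1, 0, 0),
    (5.0, 0.0, 1, 0, 0),
    (0.0, 0.0, 0, 1, 0),
    (0.0, 0.0, 0, 0, 1),
]
print(strokes_to_points(drawing))
```

Because the model predicts offsets and pen states step by step, the same decoder can draw at any scale, which is what makes the output a vector sketch rather than a raster image.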

Learning Abstraction with Neural Networks

In this talk I will discuss some of my experience getting neural networks to do interesting things without clear, useful goals in mind. For example, I will show how we can make a simple dataset interesting by getting a neural network to enlarge images without being explicitly trained to do so. I will also discuss the use of neural networks to generate vector sketches and, finally, to generate entire worlds.