Dr. Jeff Clune — Research Team Leader, OpenAI & Associate Professor Computer Science, University of British Columbia

On the opening day of the International Conference on Machine Learning (ICML) 2019, I attended a tutorial on Population Based Methods by Jeff and his colleagues Ken Stanley and Joel Lehman. Some amazing research was presented, all centered around the importance of quality diversity and open-endedness in our algorithms. Consider an algorithm that does not just solve a single problem within a single environment, but one that also simultaneously creates more complex and diverse problems to solve. It definitely seems that this would be a fundamental ingredient to reaching general artificial intelligence. As the conference went on, it became apparent to me that a general theme in reinforcement learning research was the idea of not setting specific goals, but rather rewarding curiosity or learning rate amongst a wide range of possible tasks. By day 4, Jeff's tutorial was still the standout highlight for me and so I reached out to him to be on my podcast. Jeff is super passionate about his work, has an inspiring story about pursuing your dreams and was awesome to talk to. He also recently won the Presidential Early Career Award for Scientists and Engineers from the White House. Enjoy the podcast!

Contents

🎧 Listen In

📖 Read Along

💡 Key Messages

🎧 Listen In

Here is the podcast.

📖 Read Along

As usual, footnotes like the one you see here 1 will contain fun extra notes. I have also included some of the raw speech-to-text bloopers, which are far better than the real content itself. These footnotes are slightly more boring 1 and will contain definitions and additional information.

All right Jeff! So I went through LinkedIn, and I saw that you started with an undergraduate degree in philosophy?
Yes
But then you took a hard left and went into a PhD in computer science. I was hoping you could speak about that transition from doing your masters in philosophy to going into computer science, and what your PhD topic actually was?
Yeah sure. So I think that my whole life I've been on a quest to understand two things. One of them is how thinking happens, like how does the human brain work? How does it think about complex things? Where did intelligence come from? And a very related concept is kind of where did this explosion of complexity and amazing engineering designs on Earth come from? You know, not only the human brain but also jaguars and hawks and three-toed sloths and whales, and how could a process create all of these amazing engineering marvels? And so I went to university and I thought, you know, who has the market cornered on thinking? If I want to learn about thinking where should I go? And I said oh it's probably philosophy. And so I went and I took a lot of philosophy classes and the first couple of years were fantastic, but then I started running into a wall because you never get to test your ideas and you never get to iterate and improve. So you never get to find out if you were right, to some extent. And so I increasingly kind of learned that I wanted to do science
Something more quantitative?
Yeah more quantitative and empirical and iterative. I wanted to test the ideas out in models. There is this wonderful quote from Richard Feynman which is "What I cannot create, I do not understand"
Yeah I know the one
And I feel like you understand best by building and so I wanted to switch from just talking about intelligence to trying to build it as a way to better understand it. And I think that that's paid off in spades, I actually think the best place to study intelligence is in machine learning because you learn by doing and you learn by building and it really forces you to think deeply about intelligence in general. So that is kind of the reason for the switch from philosophy to machine learning
Did you have to teach yourself how to code from scratch at the start of your PhD?
Yeah look it's somewhat of an interesting story
Hahah
So when I finally realized that I wanted to get into machine learning I contacted a whole bunch of computer science programs and said I want in, I want to do this kind of research, can I join. And they said well you have an undergraduate degree in philosophy, that's not really how it works. But I refused to take no for an answer, so I applied to 80 different universities and I think I got 78 no's, but two places were interested in potentially-
Taking you in?
Yeah yeah and one of them said well you can't just go right to a PhD program in computer science and machine learning, but you could come do a master's in philosophy, we have philosophers here that are working with the computer scientists on these subjects. And so I kind of made an all-in bet that I would go get a master's in philosophy basically as a gateway into the PhD program. So I did, I showed up at Michigan State, I did a master's in philosophy and the whole time, in addition to the master's in philosophy, I was also taking computer science and machine learning classes, doing research, joining the research group, meeting all the professors, you know trying to ace the classes. And eventually after the masters I had a couple of publications, I had aced all the classes and I said okay now can I get into that PhD program? And they said sure and so I did that. And actually I skipped one chapter of the story, which is that initially when I contacted all of these different universities, that was actually stage two. Stage one was I had read about a particular researcher who was doing research that I found fascinating, because he was both into AI but also into studying how evolution can produce complexity explosions. And his name is Hod Lipson at Cornell. So I contacted him and miraculously he actually responded. And he said I'd love to have you in my lab but you can't skip into the Cornell PhD program without an undergrad in computer science. And so I said okay that's sad. So then I did the masters, I got the PhD and then after that I called him back up and I said hey now I have a PhD in computer science, can I join your lab? He said I'd love to have you but I don't have funding. So I went and wrote my own grant, got it and said now I have funding, and he said come on in
Hahah
So it was eight years after I initially read about his work that I was entering his lab as a postdoc, and two years later I was starting as a professor and getting those emails myself from people wanting to join machine learning. So it's been a bit of a long journey
Yeah wow, so eight years from the inception of your dream to actually getting to where you wanted to be. Is that when you came to the University of Wyoming?
So I originally read about his work around 2001 in San Francisco, quit my job, travelled the world for 15 months, came back and said I want to do that. Contacted him, didn't get in, got the master's at Michigan State, got the PhD at Michigan State, and then in 2010 I joined his lab as a postdoc, and then in 2012 I accepted a job at the University of Wyoming as a professor
And is that when you started the Evolving AI Lab?
Yep, in January 2013
When you created the lab, what was your initial vision? What did you want the lab to achieve when you first started out?
Really it's the original vision, this twin goal of simultaneously trying to understand how to produce intelligence and thinking machines, but also trying to understand how evolution produced all the complexity that we see on Earth, and whether we could harness those ideas in the production of AI. I think that it's really inspiring and interesting and powerful to look at what happened on Earth and see if we can capture those ingredients that made nature produce jaguars, hawks and the human brain, and get them into our algorithms. And that was the main subject of our tutorial
So for me that tutorial has been the biggest standout so far, it was awesome
Thank you
I really enjoyed it, perfect way to start the conference. You were obviously able to show some amazing results, like the hexapod robot that loses a limb and then learns how to walk differently 2. And you did this by encouraging diversity throughout your algorithms rather than a fixed goal or fixed solution. Which I suppose is what you were talking about with Earth and how all these different animals could be created. When did this inspiration of looking to life and Earth start, was it back in your undergrad in philosophy?
I think it started even before that. Just growing up, looking around at the natural world and trying to figure out where all this complexity comes from and where do we come from, how were we made and designed. But then at undergrad I did start looking into evolutionary theory and I was absolutely blown away by the simplicity behind the Darwinian algorithm 2 and the complexity of the results. So there seems to be kind of a discord there, how could such a simple unintelligent algorithm-
Produce such complexity?
And the reality is, as we know, it's not just this simple algorithm, that's not enough. We code up simple Darwinian genetic algorithms all the time and they don't do anything interesting 3. So there's extra stuff in there, secret ingredients in the recipe that we don't know yet, and that's what we have to figure out. And so what you saw on Monday was basically the result of over a decade and a half of research by myself, Ken, Joel and others in the community, trying to tease out what these secret ingredients are that lead to complexity explosions, so that we can make our algorithms similarly generate endless diversity and capture serendipity. I think one thing that's very interesting that Ken helped me see is that it's not just evolutionary processes that have a lot of these principles, it's also human processes, cultural evolution, science, technology and art. They're all very similar in the sense that they have a set of current building blocks that are interesting and different and high quality, and then they perturb them, change them a little bit, mix and match and combine them. This generates new sets, some are found to be interesting, some are found not to be, and the ones that are interesting get added to the collection and they keep going. You know how every scientist talks about standing on the shoulders of giants, you don't get Jackson Pollock if you don't have Monet and you don't get Monet if you don't have Renoir, and back and back and back 3. So everything is building on what came before and kind of riffing off of it. And so we want our algorithms to similarly have that spirit
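For readers who have not coded one up, the "simple Darwinian genetic algorithm" Jeff mentions really does fit in a few lines. The sketch below is my own toy illustration (the classic OneMax problem, not anything from the tutorial): a population of bitstrings is evolved toward all 1s using nothing but tournament selection and per-bit mutation.

```python
import random

def one_max(genome):
    # Fitness: the number of 1s in the bitstring (a classic toy objective)
    return sum(genome)

def evolve(genome_len=32, pop_size=50, generations=100, mutation_rate=0.02, seed=0):
    rng = random.Random(seed)
    # Random initial population of bitstrings
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: tournament of two, keep the fitter individual
        parents = [max(rng.sample(pop, 2), key=one_max) for _ in range(pop_size)]
        # Variation: copy each parent, flipping each bit with small probability
        pop = [[1 - g if rng.random() < mutation_rate else g for g in p]
               for p in parents]
    return max(pop, key=one_max)

best = evolve()
```

This loop reliably solves OneMax, but, exactly as the conversation says, an objective-chasing loop like this does nothing interesting on its own; the "secret ingredients" (diversity pressure, open-endedness, capturing serendipity) are what the rest of the discussion is about.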
In the tutorial you spoke about the idea of creating new problems and then simultaneously solving the problems that you are creating. I mean that definitely seems like a fundamental step towards, as you said, a goal that we are all sort of implicitly joined in, which is general artificial intelligence. You seemed to pose a lot of questions, almost like a call to action for the research community. What were the key messages that you wanted people to leave that talk with?
Oh there's so many
Hahah
Let's see, some of them would be that we want algorithms that can endlessly generate new, interesting, high-quality solutions. We call those algorithms open ended. We want algorithms that can capture serendipity when it happens. So if the algorithm is trying to solve one problem and ends up producing a solution to another problem, don't throw that out. Capture that and start working on that problem too. I think we want algorithms that generate their own problems while they solve them, because we believe that if you are not generating new problems inside of your algorithm, your algorithm will ultimately run out of steam. The only examples we have, which are human evolution and cultural evolution, are algorithms that ultimately solve extremely ambitious problems and keep innovating because they keep making up new challenges for themselves to solve. So if you think about it, if all of science and technology froze the current set of problems that we're working on and just worked on those for however long, eventually we'd be done right?
Haha yep
It would run out, it would end. But what's interesting about science and tech and art is that we keep inventing new problems on the fly as we go. And we want our algorithms to do that. So the final message that I wanted to leave the audience with is that I think that - it's possible - the current machine learning community is going about trying to build general AI the wrong way. It's not a guarantee but it's possible. The current dominant paradigm seems to be what I call the manual path to AI, which is that we're going to identify, by hand, all the building blocks needed for AI. We need convolution and batch norm and resnets and...
Bayesian everything?
Yeah 4. And then we're gonna get all these pieces, we're gonna identify them and then at some point we're gonna take on the herculean task of putting them all together into some giant complex thinking machine
Like stitching it all together?
Yeah, like building this Frankenstein Rube Goldberg thinking machine 4. And that may work and be the right way to do it, and even if it's not the fastest way to do it we should do it anyway because it's interesting. But I think there's another path out there that not that many people have appreciated and not that many people are working on, and that's what I call AI generating algorithms. This is an algorithm that you turn on and initially it doesn't have much interesting going on, but it bootstraps itself up into tremendous complexity and ultimately it produces general AI as one of its artifacts
One of many
One of many. And so this is what happened on Earth, we had a simple algorithm that produced all this stuff and eventually produced the human mind. So could we have an algorithm like that? I think that if we want that to work there are three pillars that would be required for it to be successful. The first one is meta-learning the architectures, because a fixed architecture is going to be hopeless. The second is that you should learn the learning algorithm itself, which is heretical in this community because we spend so much time trying to create learning algorithms. But I think there's an increasing group of people looking into meta-learning who are pretty persuaded that we can do better by learning. And the final one is automatically generating the learning environments as you go. I think that this view is actually very much in line with the lessons of machine learning. If you look at the trends in the history of machine learning, the overarching trend is that hand-designed pipelines give way to learned pipelines. We saw this with computer vision features, we are increasingly seeing it with architectures, we are increasingly seeing it with hyper-parameters and data augmentation, and with meta-learning we are now seeing it in the learning algorithms themselves. So this is an all-in bet that we should stop trying to basically make the cog and shaft pieces of the machine and stick them together. Let's just learn the whole damn thing
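The "capture serendipity" idea from earlier in this exchange has a concrete form in quality diversity algorithms such as MAP-Elites, the method behind the hexapod result mentioned above. The sketch below is my own toy stand-in (a made-up 2D problem, not code from any of the papers): every candidate is binned into a niche, and the archive keeps the best solution ever seen in each niche, regardless of which niche we were aiming for when we generated it.

```python
import random

def niche(point, cells=5):
    # Which grid cell (problem variant) does this candidate fall into?
    x, y = point
    return (min(int(x * cells), cells - 1), min(int(y * cells), cells - 1))

def quality(point):
    # Toy quality measure: distance from the origin (higher is better)
    return (point[0] ** 2 + point[1] ** 2) ** 0.5

def map_elites_sketch(evaluations=2000, seed=0):
    rng = random.Random(seed)
    archive = {}  # niche -> best (quality, point) seen so far
    for _ in range(evaluations):
        if archive and rng.random() < 0.5:
            # Perturb a random elite already in the archive
            _, (px, py) = rng.choice(list(archive.values()))
            point = (min(max(px + rng.gauss(0, 0.1), 0.0), 0.999),
                     min(max(py + rng.gauss(0, 0.1), 0.0), 0.999))
        else:
            # Or sample a fresh random candidate
            point = (rng.random(), rng.random())
        q = quality(point)
        cell = niche(point)
        # Capture serendipity: keep the candidate if it is the best in
        # whatever niche it landed in, not only the niche we targeted
        if cell not in archive or q > archive[cell][0]:
            archive[cell] = (q, point)
    return archive

archive = map_elites_sketch()
```

The archive is the "collection of interesting building blocks" Jeff describes: new candidates are perturbations of existing elites, and anything that turns out to be good at something gets kept and built upon.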
Yeah, doing it all at once. I liked the quote that you had, that we should be trying to create an algorithm that in a billion years would still be interesting
Yeah. So that actually comes originally from John, a longtime friend and colleague, and Ken was the one who said it in the tutorial, but I totally believe in it. There is no algorithm that we have today that you would want to run for a billion years and come back and find it interesting. But we believe that we're getting a little bit closer. If you think about the algorithm of POET, which was generating its own challenges and solutions
That was the walking agent?
Yeah, the 2D walker
Where the walker goes and solves a complex environment with elevated objects and then when you return to the simpler worlds it performs better?
Yeah, and we're not doing that by hand. The system is just generating increasingly difficult sets of problems and then letting agents transfer between the problems if they're better. That algorithm is getting close to being something that might be interesting. It's ultimately not going to work because it's in a particular environment with a particular set of obstacles. But if you had a little bit more of an expressive environment encoding, something that could encode stairs and doors and elevators and lakes and trees and mountain ranges. If you could express a huge set of environments then maybe that thing starts to get to be something you'd want to see run for a million years. So it's a step in the right direction
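POET itself co-evolves procedurally generated 2D terrains with neural-network walkers, but the shape of the loop Jeff describes, mutate an environment into a harder one, transfer the best existing agent into it, then optimize, can be shown with a deliberately tiny stand-in. In this sketch of mine (not the actual implementation), an environment is just a difficulty number and an agent is just a scalar skill.

```python
import random

def optimize(skill, difficulty, steps=20, rng=None):
    # Hill-climb the agent's skill toward its environment's difficulty
    for _ in range(steps):
        candidate = skill + rng.gauss(0, 0.1)
        if abs(candidate - difficulty) < abs(skill - difficulty):
            skill = candidate
    return skill

def poet_sketch(iterations=30, seed=0):
    rng = random.Random(seed)
    pairs = [(0.0, 0.0)]  # list of (environment difficulty, agent skill)
    for _ in range(iterations):
        # 1. Mutate an existing environment into a slightly harder one
        env, _ = rng.choice(pairs)
        new_env = env + abs(rng.gauss(0, 0.2))
        # 2. Transfer: seed the new environment with the most skilled agent so far
        best_skill = max(skill for _, skill in pairs)
        # 3. Optimize that agent inside its newly paired environment
        pairs.append((new_env, optimize(best_skill, new_env, rng=rng)))
    return pairs

pairs = poet_sketch()
```

Even in this toy form, the key property is visible: the system manufactures its own curriculum of ever-harder problems, and solutions found in one environment get reused as starting points in others.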
Yeah definitely, that's awesome. This might be a hard one, but when you look back on your career, and I don't mean now, I mean in the future, and you seem quite young so there might be a fair bit left
Haha, hopefully
Is there one thing that you would hope to say that you were able to achieve?
Hmm, that's a good question. Obviously I am very interested in making general AI and making massive progress in understanding how to make thinking machines. I would also be very interested if we could create open ended algorithms. So this is a running theme, and there are a couple of different reasons why I think those are fascinating things. One is that if we had algorithms that could be open ended and could be applied in a variety of different domains, you could have something that endlessly generates poems or art or virtual creatures or even general AI itself. And what's nice about that is that you can understand the space of variety that's possible in those domains. So just take the example of general AI. Imagine you had an algorithm that every time you ran it, it gave you a generally intelligent species or agent, but if you ran it multiple times it gave you different ones
Which could all do different things, potentially better or worse?
True, and that's interesting, but if you ran it a thousand times or a million times, you'd now have a million different flavours of intelligence. So think about how limited we are, the only example of intelligence we know of is ours. The only example of language we know of is ours. What do alien cultures do when they go to the theater? Do they even go to the theater? What does their art look like? What do their languages look like? If we got to travel to and meet all of the different races of alien intelligence in this universe, we would have such an expanded notion of the idea of intelligence
Which you believe in, by the way?
Absolutely
Alright, just wanted to get that haha
I mean when you say believe, I believe that there is intelligent life elsewhere in the universe, but it's almost certainly not going to be possible in our lifetimes, or even our children's or our grandchildren's, to meet all of these alien cultures. So creating algorithms that could create general AI allows you to effectively create the ability to go visit many of the different alien races, because inside of these virtual worlds they might grow up on water worlds and in gaseous cloud worlds, or be entirely disembodied. I have no idea. And so it goes back to my original quest as a young child, which is to understand how thinking happens and the space of thinking, and what it might be like to meet entirely different alien civilizations that have totally different cultures and ways of thinking about the world. So this is a way to do that. I know it's extremely ambitious and I have no idea if we will be able to pull it off in our lifetime, but what could be more fascinating?
I suppose the point about that is that in that way, we aren't just creating one general AI, which at the end of the day is just based on our own, right? It will just be based off us
Yeah, and so I make this argument in this paper that I have called AI generating algorithms, that the manual path - while very interesting if it works - is still likely to be made in our own mould, and so it'll probably resemble us and it'll be a single point. But if you had an AI generating algorithm, it could generate wildly divergent flavors and now you've got a million points, now we understand the full space of possible intelligent life. So I think that's interesting
Awesome. If you could talk to a younger version of yourself starting out their career today, what would you tell yourself?
Yeah, I would say identify throughout your life every time there's something you do and you say, that was awesome, I really enjoyed that. Find a way to keep going, because what happened to me is that it took me a long time to find my ultimate calling and passion, which is being a machine learning scientist and professor, and in hindsight there were all these moments where I was like, see, you loved that, remember that? You absolutely loved that. And then somehow I kind of gave up on it. So find a way to keep going, find people who can help you nurture this spark of curiosity and turn it into a forest fire. I would also say the world loves to tell you to make very practical choices. Like go into this career because it's stable and it's a growing field that pays well or whatever. I think that's just all bad advice. You get one crack at life, so don't do something boring just because it has low variance and a high mean salary. Find the things that you can't stop telling people on a bus about, or your parents about, or your friends about. Things that you want to talk other people's ears off about because you're so passionate and interested in them. Things that you want to just gorge yourself on because you're so curious, and just dive in. There is a career that allows you to do that. Just go for it
Just find it
Yeah, and I think what's so amazing is that all of the people that have been in this field for 10 to 15 years, we all got into it because we were interested, and it didn't work and nobody was paying us to do this and nobody was interested in funding this research. So we were just kind of out there in the desert, hiking around because we loved it. And then look what has happened. But I don't think you should justify our choice just because it worked, because some other field might not experience that explosion
Yeah, you can't rely on hindsight
But the idea is that if you're passionate about something then it will be rewarding whether it works or doesn't, because you get to do what you love. And I don't know about most people, but I think that's one of the most rewarding things in life, to find something you absolutely love and just do it, and the rest will figure itself out
Definitely. And I suppose you obviously have this passion. But I saw on your website that you have, not just a passion for artificial life, but real life as well, ice climbing, kite surfing and all that. I suppose one of the things for me is that I feel like I'm always fighting to keep my head above water with how fast this field is moving. How do you go about keeping the balance between staying cutting edge in this field but also being able to experience life and traveling and all those things?
Yeah it's even harder when you have another source of real life with your small children
Hahah
Which are taking an increasing amount of time. It's very difficult. It's just hard to balance your passions. I was given good advice as a junior professor by a more senior professor, who said: if you're trying to decide whether or not to go surfing or kite surfing or for a mountain bike ride, always go, because there's going to be an infinite amount of work on the other side. And so if you wait for the storm to clear and there to be no work in the queue, you'll never go
Yeah, there is always going to be more work
That's right, and you know you have to be balanced and you have to do what you love. Some of my best ideas come when I'm out in the middle of the San Francisco Bay kite surfing, maybe because the storm quiets and my brain can actually think about fun research as opposed to the next e-mail I have to respond to. Getting away from your computer and getting away from Twitter and these things is valuable for creative exercises. The Greeks, I believe, said moderation in all things, so you can't only do research or you probably won't be as good of a researcher
Perfect. Last question, if you could recommend one book to someone, what would it be?
... that's a great question
Haha you can do more if you really want to
So, two books that really moved the needle for me were The Selfish Gene by Richard Dawkins and Darwin's Dangerous Idea by Daniel Dennett. And I love modern machine learning and AI so much, but most of my reading of that has been in paper form. I mean there are obviously the textbooks on AI, Russell and Norvig is a wonderful read. But I think the ones that would probably best inspire people are those two books
All right. Jeff thank you so much, it was awesome
Thank you

💡 Key Messages

Here are my key takeaways from the podcast.

  • Don't take no for an answer - Jeff knew from an early age that his dream and quest was to understand how the complexity of life happened on planet Earth. After studying philosophy he realised he had to transition to machine learning if he was going to be able to continue chasing this dream. He contacted dozens and dozens of universities, who all told him that he did not have an undergraduate degree in computer science and so would not be allowed into their PhD programs. But he persisted and found a way into a PhD program in machine learning, which has led him to be able to perform the research he loves today.
  • Draw inspiration from nature - The Earth is an amazing place that has produced many marvels: the human brain, the jaguar, the hawk. We should pursue algorithms that can produce a diverse range of high-quality solutions, thus capturing amazing complexity like we see on Earth.
  • Capture serendipity - If you are trying to train an agent to walk and it learns to crawl, capture that learning and pursue that as well, do not throw it out.
  • Maybe there is an alternative path to general AI - At the moment, research is going into building all the individual building blocks of AI, then one day we will undertake the herculean task of tying it all together. This may work, but if it does then what we create will be in our own mould. There might be another way, through open ended algorithms which keep innovating and creating new problems for themselves to solve. This way, we can build an algorithm that eventually creates general AI as just one of its artifacts. Beyond that, we could run it over and over to create endless flavours of general AI.
  • Identify the things that you love and find a way to keep going - When you do something and love it, capture that spark and turn it into a wildfire. Find the people around you that will help nurture that passion.
  • Don't take the safe option in your career - Do not pursue a career just because it has a high mean, low variance salary. You only get one shot at life, so pursue something that you are so passionate about that you can't stop talking to everyone about it. There is a career out there that allows you to do that.
  • If you are debating whether to go for a ride, surf or whatever, always go - There will always be an infinite amount of work on the other side of you taking that time to pursue your hobbies and other passions. We need to be balanced and sometimes taking those breaks to do other things we love can lead to new ideas and new perspectives. As the Greeks said - moderation in all things.
  • Read The Selfish Gene by Richard Dawkins - A book that explores the ideas of evolution and Darwinism. Forty years later, its insights remain as relevant today as on the day it was published.
  • Read Darwin's Dangerous Idea by Daniel Dennett - A book in which Dennett focuses his unerringly logical mind on the theory of natural selection, showing how Darwin's great idea transforms and illuminates our traditional view of humanity's place in the universe.
  • Read Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig - The long-anticipated revision of this best-selling text offers the most comprehensive, up-to-date introduction to the theory and practice of artificial intelligence.