WEBVTT
00:00.000 --> 00:03.120
The following is a conversation with Jeremy Howard.
00:03.120 --> 00:07.080
He's the founder of Fast AI, a research institute dedicated
00:07.080 --> 00:09.760
to making deep learning more accessible.
00:09.760 --> 00:12.560
He's also a distinguished research scientist
00:12.560 --> 00:14.600
at the University of San Francisco,
00:14.600 --> 00:17.600
a former president of Kaggle, as well as a top-ranking
00:17.600 --> 00:18.800
competitor there.
00:18.800 --> 00:21.680
And in general, he's a successful entrepreneur,
00:21.680 --> 00:25.240
educator, researcher, and an inspiring personality
00:25.240 --> 00:27.000
in the AI community.
00:27.000 --> 00:28.680
When someone asks me, how do I get
00:28.680 --> 00:30.240
started with deep learning?
00:30.240 --> 00:33.360
Fast AI is one of the top places I point them to.
00:33.360 --> 00:34.120
It's free.
00:34.120 --> 00:35.520
It's easy to get started.
00:35.520 --> 00:37.600
It's insightful and accessible.
00:37.600 --> 00:40.960
And if I may say so, it has very little BS,
00:40.960 --> 00:44.160
which can sometimes dilute the value of educational content
00:44.160 --> 00:46.720
on popular topics like deep learning.
00:46.720 --> 00:49.440
Fast AI has a focus on practical application
00:49.440 --> 00:51.600
of deep learning and hands on exploration
00:51.600 --> 00:53.880
of the cutting edge that is, incredibly,
00:53.880 --> 00:57.960
both accessible to beginners and useful to experts.
00:57.960 --> 01:01.360
This is the Artificial Intelligence Podcast.
01:01.360 --> 01:03.760
If you enjoy it, subscribe on YouTube,
01:03.760 --> 01:06.920
give it five stars on iTunes, support it on Patreon,
01:06.920 --> 01:09.040
or simply connect with me on Twitter.
01:09.040 --> 01:13.280
Lex Fridman, spelled F R I D M A N.
01:13.280 --> 01:18.560
And now, here's my conversation with Jeremy Howard.
01:18.560 --> 01:21.680
What's the first program you've ever written?
01:21.680 --> 01:24.800
First program I wrote that I remember
01:24.800 --> 01:29.200
would be at high school.
01:29.200 --> 01:31.240
I did an assignment where I decided
01:31.240 --> 01:36.240
to try to find out if there were some better musical scales
01:36.240 --> 01:40.640
than the normal 12 tone, 12 interval scale.
01:40.640 --> 01:43.680
So I wrote a program on my Commodore 64 in BASIC
01:43.680 --> 01:46.080
that searched through other scale sizes
01:46.080 --> 01:48.440
to see if it could find one where there
01:48.440 --> 01:51.880
were more accurate harmonies.
01:51.880 --> 01:53.040
Like meantone?
01:53.040 --> 01:56.520
Like you want an actual exactly 3 to 2 ratio,
01:56.520 --> 01:59.400
whereas with a 12 interval scale,
01:59.400 --> 02:01.480
it's not exactly 3 to 2, for example.
02:01.480 --> 02:05.080
So that's well tempered, as they say.
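(A minimal Python sketch of the search Jeremy describes, for the curious: in 12-tone equal temperament the fifth is 2**(7/12), roughly 1.4983, rather than a just 3:2; the scale-size scan mirrors his Commodore 64 BASIC program, though the details here are illustrative.)

```python
# Sketch: for each candidate scale size n, find the equal-tempered step
# 2**(k/n) closest to a just 3:2 fifth, and rank scale sizes by purity.
results = []
for n in range(5, 54):                                  # notes per octave
    err = min(abs(2 ** (k / n) - 1.5) for k in range(1, n))
    results.append((err, n))

for err, n in sorted(results)[:5]:
    print(f"{n:2d}-tone scale: fifth off from 3:2 by {err:.5f}")

print(2 ** (7 / 12))   # 12-tone fifth ~ 1.49831, slightly flat of 1.5
```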
02:05.080 --> 02:07.680
And BASIC on a Commodore 64.
02:07.680 --> 02:09.440
Where was the interest in music from?
02:09.440 --> 02:10.480
Or is it just technical?
02:10.480 --> 02:14.640
I did music all my life, so I played saxophone and clarinet
02:14.640 --> 02:18.120
and piano and guitar and drums and whatever.
02:18.120 --> 02:22.200
How does that thread go through your life?
02:22.200 --> 02:24.160
Where's music today?
02:24.160 --> 02:28.320
It's not where I wish it was.
02:28.320 --> 02:30.200
For various reasons, couldn't really keep it going,
02:30.200 --> 02:32.560
particularly because I had a lot of problems with RSI,
02:32.560 --> 02:33.480
with my fingers.
02:33.480 --> 02:37.360
And so I had to cut back anything that used hands
02:37.360 --> 02:39.360
and fingers.
02:39.360 --> 02:43.920
I hope one day I'll be able to get back to it health wise.
02:43.920 --> 02:46.240
So there's a love for music underlying it all.
02:46.240 --> 02:47.840
Sure, yeah.
02:47.840 --> 02:49.480
What's your favorite instrument?
02:49.480 --> 02:50.360
Saxophone.
02:50.360 --> 02:51.000
Sax.
02:51.000 --> 02:52.840
Baritone saxophone.
02:52.840 --> 02:57.440
Well, probably bass saxophone, but they're awkward.
02:57.440 --> 03:00.120
Well, I always love it when music is
03:00.120 --> 03:01.760
coupled with programming.
03:01.760 --> 03:03.800
There's something about a brain that
03:03.800 --> 03:07.520
utilizes both that emerges with creative ideas.
03:07.520 --> 03:11.200
So you've used and studied quite a few programming languages.
03:11.200 --> 03:15.120
Can you give an overview of what you've used?
03:15.120 --> 03:17.920
What are the pros and cons of each?
03:17.920 --> 03:21.960
Well, my favorite programming environment almost certainly
03:21.960 --> 03:26.520
was Microsoft Access back in the earliest days.
03:26.520 --> 03:29.080
So that used Visual Basic for Applications, which
03:29.080 --> 03:30.720
is not a good programming language,
03:30.720 --> 03:33.080
but the programming environment is fantastic.
03:33.080 --> 03:40.120
It's like the ability to create user interfaces and tie data
03:40.120 --> 03:43.720
and actions to them and create reports and all that.
03:43.720 --> 03:46.800
I've never seen anything as good.
03:46.800 --> 03:48.920
So things nowadays like Airtable, which
03:48.920 --> 03:56.200
are like small subsets of that, which people love for good reason.
03:56.200 --> 04:01.160
But unfortunately, nobody's ever achieved anything like that.
04:01.160 --> 04:03.320
What is that, if you could pause on that for a second?
04:03.320 --> 04:03.840
Oh, Access.
04:03.840 --> 04:04.340
Access.
04:04.340 --> 04:06.320
Is it a fundamental database?
04:06.320 --> 04:09.600
It was a database program that Microsoft produced,
04:09.600 --> 04:13.440
part of Office, and it kind of withered.
04:13.440 --> 04:16.320
But basically, it lets you in a totally graphical way
04:16.320 --> 04:18.480
create tables and relationships and queries
04:18.480 --> 04:24.720
and tie them to forms and set up event handlers and calculations.
04:24.720 --> 04:28.680
And it was a very complete, powerful system designed
04:28.680 --> 04:35.000
for not massive scalable things, but for useful little applications
04:35.000 --> 04:36.400
that I loved.
04:36.400 --> 04:40.240
So what's the connection between Excel and Access?
04:40.240 --> 04:42.160
So very close.
04:42.160 --> 04:47.680
So Access was the relational database equivalent,
04:47.680 --> 04:48.360
if you like.
04:48.360 --> 04:51.080
So people still do a lot of that stuff
04:51.080 --> 04:54.120
that should be in Access in Excel because they know it.
04:54.120 --> 04:56.680
Excel's great as well.
04:56.680 --> 05:01.760
But it's just not as rich a programming model as VBA
05:01.760 --> 05:04.680
combined with a relational database.
05:04.680 --> 05:07.320
And so I've always loved relational databases.
05:07.320 --> 05:11.080
But today, programming on top of relational databases
05:11.080 --> 05:13.840
is just a lot more of a headache.
05:13.840 --> 05:16.680
You generally either need to kind of,
05:16.680 --> 05:19.040
you need something that connects, that runs some kind
05:19.040 --> 05:21.560
of database server, unless you use SQLite, which
05:21.560 --> 05:25.000
has its own issues.
05:25.000 --> 05:26.320
Then you kind of often, if you want
05:26.320 --> 05:27.760
to get a nice programming model, you
05:27.760 --> 05:30.440
need to create an ORM on top.
05:30.440 --> 05:34.360
And then, I don't know, there's all these pieces tied together.
05:34.360 --> 05:37.000
And it's just a lot more awkward than it should be.
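(To make the headache concrete, a minimal sketch using Python's built-in sqlite3; the table and columns are made up. The SQL lives in strings, so nothing checks names or types until runtime, which is the gap ORMs and F#-style type providers try to fill.)

```python
import sqlite3

# Everything is stringly typed: a typo in a table or column name
# only surfaces when the query actually runs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers (name) VALUES (?)", ("Ada",))

for row in conn.execute("SELECT id, name FROM customers"):
    print(row)   # rows come back as bare tuples, not objects
```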
05:37.000 --> 05:39.200
There are people that are trying to make it easier,
05:39.200 --> 05:44.480
so in particular, I think of F#'s Don Syme, who
05:44.480 --> 05:49.320
with his team has done a great job of making something
05:49.320 --> 05:51.640
like a database appear in the type system,
05:51.640 --> 05:54.960
so you actually get tab completion for fields and tables
05:54.960 --> 05:57.840
and stuff like that.
05:57.840 --> 05:59.280
Anyway, so that was kind of, anyway,
05:59.280 --> 06:01.880
so that whole VBA Office thing, I guess,
06:01.880 --> 06:04.560
was a starting point, which I still miss.
06:04.560 --> 06:07.800
And I got into standard Visual Basic.
06:07.800 --> 06:09.840
That's interesting, just to pause on that for a second.
06:09.840 --> 06:12.600
And it's interesting that you're connecting programming
06:12.600 --> 06:18.200
languages to the ease of management of data.
06:18.200 --> 06:20.600
So in your use of programming languages,
06:20.600 --> 06:24.880
you always had a love and a connection with data.
06:24.880 --> 06:28.640
I've always been interested in doing useful things for myself
06:28.640 --> 06:31.880
and for others, which generally means getting some data
06:31.880 --> 06:34.600
and doing something with it and putting it out there again.
06:34.600 --> 06:38.400
So that's been my interest throughout.
06:38.400 --> 06:41.560
So I also did a lot of stuff with Apple script
06:41.560 --> 06:43.880
back in the early days.
06:43.880 --> 06:47.960
So it's kind of nice being able to get the computer
06:47.960 --> 06:52.960
and computers to talk to each other and to do things for you.
06:52.960 --> 06:56.600
And then I think the programming language
06:56.600 --> 06:59.960
I most loved then would have been Delphi, which
06:59.960 --> 07:05.960
was Object Pascal, created by Anders Hejlsberg, who previously
07:05.960 --> 07:08.840
did Turbo Pascal and then went on to create .NET
07:08.840 --> 07:11.080
and then went on to create TypeScript.
07:11.080 --> 07:16.720
Delphi was amazing because it was like a compiled, fast language
07:16.720 --> 07:20.200
that was as easy to use as Visual Basic.
07:20.200 --> 07:27.480
Delphi, what is it similar to in more modern languages?
07:27.480 --> 07:28.840
Visual Basic.
07:28.840 --> 07:29.680
Visual Basic.
07:29.680 --> 07:32.320
Yeah, but a compiled, fast version.
07:32.320 --> 07:37.080
So I'm not sure there's anything quite like it anymore.
07:37.080 --> 07:42.520
If you took C Sharp or Java and got rid of the virtual machine
07:42.520 --> 07:45.040
and replaced it with something where you could compile a small, tight
07:45.040 --> 07:46.520
binary.
07:46.520 --> 07:51.680
I feel like it's where Swift could get to with the new
07:51.680 --> 07:56.640
SwiftUI and the cross platform development going on.
07:56.640 --> 08:01.600
That's one of my dreams is that we'll hopefully get back
08:01.600 --> 08:02.840
to where Delphi was.
08:02.840 --> 08:08.520
There is actually a free Pascal project nowadays
08:08.520 --> 08:10.320
called Lazarus, which is also attempting
08:10.320 --> 08:13.960
to recreate Delphi.
08:13.960 --> 08:16.080
They're making good progress.
08:16.080 --> 08:21.000
So OK, Delphi, that's one of your favorite programming languages?
08:21.000 --> 08:22.360
Well, it's programming environments.
08:22.360 --> 08:26.280
Again, say Pascal's not a nice language.
08:26.280 --> 08:27.880
If you wanted to know specifically
08:27.880 --> 08:30.360
about what languages I like, I would definitely
08:30.360 --> 08:35.480
pick J as being an amazingly wonderful language.
08:35.480 --> 08:37.000
What's J?
08:37.000 --> 08:39.600
J? Are you aware of APL?
08:39.600 --> 08:43.520
I am not, except from doing a little research on the work
08:43.520 --> 08:44.080
you've done.
08:44.080 --> 08:47.280
OK, so not at all surprising you're not
08:47.280 --> 08:49.040
familiar with it because it's not well known,
08:49.040 --> 08:55.480
but it's actually one of the main families of programming
08:55.480 --> 08:57.920
languages going back to the late 50s, early 60s.
08:57.920 --> 09:01.720
So there was a couple of major directions.
09:01.720 --> 09:04.440
One was the kind of lambda calculus,
09:04.440 --> 09:08.640
Alonzo Church direction, which I guess kind of Lisp and Scheme
09:08.640 --> 09:12.040
and whatever, which has a history going back
09:12.040 --> 09:13.440
to the early days of computing.
09:13.440 --> 09:17.360
The second was the kind of imperative slash
09:17.360 --> 09:23.240
OO, Algol, Simula, going on to C, C++, and so forth.
09:23.240 --> 09:26.960
There was a third, the array oriented languages,
09:26.960 --> 09:31.720
which started with a paper by a guy called Ken Iverson, which
09:31.720 --> 09:37.480
was actually a math theory paper, not a programming paper.
09:37.480 --> 09:41.520
It was called Notation as a Tool for Thought.
09:41.520 --> 09:45.320
And it was the development of a new type of math notation.
09:45.320 --> 09:48.560
And the idea is that this math notation was much more
09:48.560 --> 09:54.480
flexible, expressive, and also well defined than traditional
09:54.480 --> 09:56.440
math notation, which is none of those things.
09:56.440 --> 09:59.160
Math notation is awful.
09:59.160 --> 10:02.840
And so he actually turned that into a programming language.
10:02.840 --> 10:06.720
Because this was the late 50s, all the names were available.
10:06.720 --> 10:10.520
So he called his programming language A Programming Language, or APL.
10:10.520 --> 10:11.160
APL, what?
10:11.160 --> 10:15.360
So APL is an implementation of notation
10:15.360 --> 10:18.280
as a tool for thought, by which he means math notation.
10:18.280 --> 10:22.880
And Ken and his son went on to do many things,
10:22.880 --> 10:26.720
but eventually they actually produced a new language that
10:26.720 --> 10:28.440
was built on top of all the learnings of APL.
10:28.440 --> 10:32.800
And that was called J. And J is the most
10:32.800 --> 10:41.040
expressive, composable, beautifully designed language
10:41.040 --> 10:42.400
I've ever seen.
10:42.400 --> 10:44.520
Does it have object oriented components?
10:44.520 --> 10:45.520
Does it have that kind of thing?
10:45.520 --> 10:46.240
Not really.
10:46.240 --> 10:47.720
It's an array oriented language.
10:47.720 --> 10:51.400
It's the third path.
10:51.400 --> 10:52.760
Are you saying array?
10:52.760 --> 10:53.720
Array oriented.
10:53.720 --> 10:54.200
Yeah.
10:54.200 --> 10:55.480
What does it mean to be array oriented?
10:55.480 --> 10:57.480
So array oriented means that you generally
10:57.480 --> 10:59.520
don't use any loops.
10:59.520 --> 11:02.240
But the whole thing is done with kind
11:02.240 --> 11:06.360
of an extreme version of broadcasting,
11:06.360 --> 11:09.880
if you're familiar with that NumPy slash Python concept.
11:09.880 --> 11:14.240
So you do a lot with one line of code.
11:14.240 --> 11:17.520
It looks a lot like math.
11:17.520 --> 11:20.280
Notation is basically highly compact.
11:20.280 --> 11:22.800
And the idea is that you can kind of,
11:22.800 --> 11:24.760
because you can do so much with one line of code,
11:24.760 --> 11:27.720
a single screen of code is very unlikely to,
11:27.720 --> 11:31.080
you very rarely need more than that to express your program.
11:31.080 --> 11:33.240
And so you can kind of keep it all in your head.
11:33.240 --> 11:36.000
And you can kind of clearly communicate it.
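(A small NumPy illustration of the broadcasting idea mentioned above; the arrays are made up. Array-oriented code expresses the whole computation at once instead of looping.)

```python
import numpy as np

x = np.arange(12.0).reshape(3, 4)        # a toy matrix
factors = np.array([1.0, 10.0, 100.0])   # one scale factor per row

# Loop style: visit every element explicitly.
out = np.empty_like(x)
for i in range(x.shape[0]):
    for j in range(x.shape[1]):
        out[i, j] = x[i, j] * factors[i]

# Array-oriented style: broadcasting does the same in one line,
# which is the flavor (in miniature) of APL/J code.
assert np.allclose(out, x * factors[:, None])
```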
11:36.000 --> 11:41.560
It's interesting that APL created two main branches, K and J.
11:41.560 --> 11:47.920
J is this kind of like open source niche community of crazy
11:47.920 --> 11:49.360
enthusiasts like me.
11:49.360 --> 11:52.120
And then the other path, K, was fascinating.
11:52.120 --> 11:56.600
It's an astonishingly expensive programming language,
11:56.600 --> 12:01.920
which many of the world's most ludicrously rich hedge funds
12:01.920 --> 12:02.840
use.
12:02.840 --> 12:06.640
So the entire K machine is so small,
12:06.640 --> 12:09.320
it sits inside level three cache on your CPU.
12:09.320 --> 12:14.040
And it easily wins every benchmark I've ever seen
12:14.040 --> 12:16.440
in terms of data processing speed.
12:16.440 --> 12:17.840
But you don't come across it very much,
12:17.840 --> 12:22.640
because it's like $100,000 per CPU to run it.
12:22.640 --> 12:26.240
But it's like this path of programming languages
12:26.240 --> 12:29.760
is just so much, I don't know, so much more powerful
12:29.760 --> 12:33.840
in every way than the ones that almost anybody uses every day.
12:33.840 --> 12:37.400
So it's all about computation.
12:37.400 --> 12:38.360
It's really focusing on it.
12:38.360 --> 12:40.640
Pretty heavily focused on computation.
12:40.640 --> 12:44.320
I mean, so much of programming is data processing
12:44.320 --> 12:45.640
by definition.
12:45.640 --> 12:49.000
And so there's a lot of things you can do with it.
12:49.000 --> 12:51.320
But yeah, there's not much work being
12:51.320 --> 12:57.080
done on making user interface toolkits or whatever.
12:57.080 --> 12:59.400
I mean, there's some, but they're not great.
12:59.400 --> 13:03.160
At the same time, you've done a lot of stuff with Perl and Python.
13:03.160 --> 13:08.320
So where does that fit into the picture of J and K and APL
13:08.320 --> 13:08.880
and Python?
13:08.880 --> 13:12.400
Well, it's just much more pragmatic.
13:12.400 --> 13:13.960
In the end, you kind of have to end up
13:13.960 --> 13:17.960
where the libraries are.
13:17.960 --> 13:21.320
Because to me, my focus is on productivity.
13:21.320 --> 13:23.800
I just want to get stuff done and solve problems.
13:23.800 --> 13:27.360
So Perl was great.
13:27.360 --> 13:29.760
I created an email company called Fastmail.
13:29.760 --> 13:35.200
And Perl was great, because back in the late 90s, early 2000s,
13:35.200 --> 13:38.160
it just had a lot of stuff it could do.
13:38.160 --> 13:41.840
I still had to write my own monitoring system
13:41.840 --> 13:43.840
and my own web framework and my own whatever,
13:43.840 --> 13:45.760
because none of that stuff existed.
13:45.760 --> 13:50.280
But it was a super flexible language to do that in.
13:50.280 --> 13:52.720
And you used Perl for Fastmail.
13:52.720 --> 13:54.520
You used it as a back end.
13:54.520 --> 13:55.800
So everything was written in Perl?
13:55.800 --> 13:56.520
Yeah.
13:56.520 --> 13:58.720
Yeah, everything was Perl.
13:58.720 --> 14:04.480
Why do you think Perl hasn't succeeded or hasn't dominated
14:04.480 --> 14:07.120
the market where Python really takes over a lot of the
14:07.120 --> 14:08.200
tasks?
14:08.200 --> 14:09.640
Well, I mean, Perl did dominate.
14:09.640 --> 14:13.080
It was everything, everywhere.
14:13.080 --> 14:19.920
But then the guy that ran Perl, Larry Wall,
14:19.920 --> 14:22.280
just didn't put the time in anymore.
14:22.280 --> 14:29.680
And no project can be successful if there isn't.
14:29.680 --> 14:32.640
Particularly one that started with a strong leader that
14:32.640 --> 14:35.040
loses that strong leadership.
14:35.040 --> 14:38.040
So then Python has kind of replaced it.
14:38.040 --> 14:45.040
Python is a lot less elegant language in nearly every way.
14:45.040 --> 14:48.880
But it has the data science libraries.
14:48.880 --> 14:51.240
And a lot of them are pretty great.
14:51.240 --> 14:58.280
So I kind of use it because it's the best we have.
14:58.280 --> 15:01.800
But it's definitely not good enough.
15:01.800 --> 15:04.040
What do you think the future of programming looks like?
15:04.040 --> 15:06.880
What do you hope the future of programming looks like if we
15:06.880 --> 15:10.200
zoom in on the computational fields on data science
15:10.200 --> 15:11.800
and machine learning?
15:11.800 --> 15:19.440
I hope Swift is successful because the goal of Swift,
15:19.440 --> 15:21.000
the way Chris Lattner describes it,
15:21.000 --> 15:22.640
is to be infinitely hackable.
15:22.640 --> 15:23.480
And that's what I want.
15:23.480 --> 15:26.920
I want something where me and the people I do research with
15:26.920 --> 15:30.360
and my students can look at and change everything
15:30.360 --> 15:32.000
from top to bottom.
15:32.000 --> 15:36.240
There's nothing mysterious and magical and inaccessible.
15:36.240 --> 15:38.600
Unfortunately, with Python, it's the opposite of that
15:38.600 --> 15:42.640
because Python is so slow, it's extremely unhackable.
15:42.640 --> 15:44.840
You get to a point where it's like, OK, from here on down
15:44.840 --> 15:47.320
it's C. So your debugger doesn't work in the same way.
15:47.320 --> 15:48.920
Your profiler doesn't work in the same way.
15:48.920 --> 15:50.880
Your build system doesn't work in the same way.
15:50.880 --> 15:53.760
It's really not very hackable at all.
15:53.760 --> 15:55.600
What's the part you like to be hackable?
15:55.600 --> 16:00.120
Is it for the objective of optimizing training
16:00.120 --> 16:02.600
of neural networks, inference of neural networks?
16:02.600 --> 16:04.360
Is it performance of the system?
16:04.360 --> 16:08.440
Or is there some nonperformance related, just creative idea?
16:08.440 --> 16:09.080
It's everything.
16:09.080 --> 16:15.480
I mean, in the end, I want to be productive as a practitioner.
16:15.480 --> 16:18.440
So at the moment, our understanding of deep learning
16:18.440 --> 16:20.080
is incredibly primitive.
16:20.080 --> 16:21.520
There's very little we understand.
16:21.520 --> 16:24.200
Most things don't work very well, even though it works better
16:24.200 --> 16:26.200
than anything else out there.
16:26.200 --> 16:28.760
There's so many opportunities to make it better.
16:28.760 --> 16:34.360
So you look at any domain area like speech recognition
16:34.360 --> 16:37.720
with deep learning or natural language processing
16:37.720 --> 16:39.440
classification with deep learning or whatever.
16:39.440 --> 16:41.960
Every time I look at an area with deep learning,
16:41.960 --> 16:44.480
I always see like, oh, it's terrible.
16:44.480 --> 16:47.560
There's lots and lots of obviously stupid ways
16:47.560 --> 16:50.000
to do things that need to be fixed.
16:50.000 --> 16:53.320
So then I want to be able to jump in there and quickly
16:53.320 --> 16:54.880
experiment and make them better.
16:54.880 --> 16:59.320
Do you think the programming language has a role in that?
16:59.320 --> 17:00.280
Huge role, yeah.
17:00.280 --> 17:07.080
So currently, Python has a big gap in terms of our ability
17:07.080 --> 17:11.880
to innovate particularly around recurrent neural networks
17:11.880 --> 17:16.840
and natural language processing because it's so slow.
17:16.840 --> 17:20.200
The actual loop where we actually loop through words,
17:20.200 --> 17:23.760
we have to do that whole thing in CUDA C.
17:23.760 --> 17:27.600
So we actually can't innovate with the kernel, the heart,
17:27.600 --> 17:31.560
of that most important algorithm.
17:31.560 --> 17:33.680
And it's just a huge problem.
17:33.680 --> 17:36.600
And this happens all over the place.
17:36.600 --> 17:40.080
So we hit research limitations.
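(To make the bottleneck concrete, a minimal NumPy sketch of the per-word recurrent loop he means; the shapes and names are made up. Each trip around this Python loop pays interpreter overhead, which is why the production version has to live in CUDA C.)

```python
import numpy as np

def rnn_forward(xs, W_xh, W_hh, b):
    """Naive RNN: one Python iteration per word -- the loop that is
    too slow to hack on in pure Python."""
    h = np.zeros(W_hh.shape[0])
    for x in xs:                                   # per-timestep loop
        h = np.tanh(W_xh @ x + W_hh @ h + b)
    return h

T, d_in, d_h = 100, 8, 16                          # toy sizes
xs = np.random.randn(T, d_in)
h = rnn_forward(xs, np.random.randn(d_h, d_in),
                np.random.randn(d_h, d_h), np.zeros(d_h))
print(h.shape)
```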
17:40.080 --> 17:42.840
Another example, convolutional neural networks, which
17:42.840 --> 17:46.800
are actually the most popular architecture for lots of things,
17:46.800 --> 17:48.920
maybe most things in deep learning.
17:48.920 --> 17:50.360
We almost certainly should be using
17:50.360 --> 17:54.600
sparse convolutional neural networks, but only like two
17:54.600 --> 17:56.800
people are because to do it, you have
17:56.800 --> 17:59.920
to rewrite all of that CUDA C level stuff.
17:59.920 --> 18:04.520
And yeah, researchers and practitioners just don't.
18:04.520 --> 18:09.240
So there's just big gaps in what people actually research on,
18:09.240 --> 18:11.640
what people actually implement because of the programming
18:11.640 --> 18:13.240
language problem.
18:13.240 --> 18:17.560
So you think it's just too difficult
18:17.560 --> 18:23.480
to write in CUDA C, and that a higher level programming language
18:23.480 --> 18:30.520
like Swift should enable easier
18:30.520 --> 18:33.160
fooling around, creating stuff with RNNs,
18:33.160 --> 18:34.920
or sparse convolutional neural networks?
18:34.920 --> 18:35.920
Kind of.
18:35.920 --> 18:38.520
Who is at fault?
18:38.520 --> 18:42.320
Who is in charge of making it easy for a researcher to play around?
18:42.320 --> 18:43.520
I mean, no one's at fault.
18:43.520 --> 18:45.120
Just nobody's got around to it yet.
18:45.120 --> 18:47.080
Or it's just it's hard.
18:47.080 --> 18:51.800
And I mean, part of the fault is that we ignored that whole APL
18:51.800 --> 18:55.640
kind of direction, or nearly everybody did for 60 years,
18:55.640 --> 18:57.720
50 years.
18:57.720 --> 18:59.920
But recently, people have been starting
18:59.920 --> 19:04.840
to reinvent pieces of that and kind of create some interesting
19:04.840 --> 19:07.400
new directions in the compiler technology.
19:07.400 --> 19:11.760
So the place where that's particularly happening right now
19:11.760 --> 19:14.920
is something called MLIR, which is something that, again,
19:14.920 --> 19:18.000
Chris Lattner, the Swift guy, is leading.
19:18.000 --> 19:20.080
And because it's actually not going
19:20.080 --> 19:22.160
to be Swift on its own that solves this problem.
19:22.160 --> 19:24.880
Because the problem is that currently writing
19:24.880 --> 19:32.360
an acceptably fast GPU program is too complicated,
19:32.360 --> 19:33.680
regardless of what language you use.
19:36.480 --> 19:38.680
And that's just because if you have to deal with the fact
19:38.680 --> 19:43.160
that I've got 10,000 threads and I have to synchronize between them
19:43.160 --> 19:45.360
all, and I have to put my thing into grid blocks
19:45.360 --> 19:47.040
and think about warps and all this stuff,
19:47.040 --> 19:50.720
it's just so much boilerplate that to do that well,
19:50.720 --> 19:52.240
you have to be a specialist at that.
19:52.240 --> 19:58.200
And it's going to be a year's work to optimize that algorithm
19:58.200 --> 19:59.720
in that way.
19:59.720 --> 20:04.640
But with things like Tensor Comprehensions, and Tile,
20:04.640 --> 20:08.880
and MLIR, and TVM, there's all these various projects which
20:08.880 --> 20:11.840
are all about saying, let's let people
20:11.840 --> 20:16.080
create domain specific languages for tensor
20:16.080 --> 20:16.880
computations.
20:16.880 --> 20:19.120
These are the kinds of things we do generally
20:19.120 --> 20:21.640
on the GPU for deep learning, and then
20:21.640 --> 20:28.280
have a compiler which can optimize that tensor computation.
20:28.280 --> 20:31.440
A lot of this work is actually sitting on top of a project
20:31.440 --> 20:36.040
called Halide, which is a mind blowing project
20:36.040 --> 20:38.880
where they came up with such a domain specific language.
20:38.880 --> 20:41.240
In fact, two: one domain specific language for expressing,
20:41.240 --> 20:43.840
this is what my tensor computation is.
20:43.840 --> 20:46.320
And another domain specific language for expressing,
20:46.320 --> 20:50.320
this is the way I want you to structure
20:50.320 --> 20:53.040
the compilation of that, and do it block by block
20:53.040 --> 20:54.960
and do these bits in parallel.
20:54.960 --> 20:57.760
And they were able to show how you can compress
20:57.760 --> 21:02.880
the amount of code by 10x compared to optimized GPU
21:02.880 --> 21:05.600
code and get the same performance.
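(A toy imitation of that algorithm/schedule split in plain Python, not Halide's actual API: the algorithm says what to compute, and the schedule says how to walk the data, without changing the result.)

```python
import numpy as np

def algorithm(a, b):
    return a * 2.0 + b                   # WHAT: the tensor computation

def run(a, b, schedule="naive", block=256):
    out = np.empty_like(a)
    if schedule == "naive":              # HOW, option 1: one big pass
        out[:] = algorithm(a, b)
    elif schedule == "blocked":          # HOW, option 2: cache-sized tiles
        for i in range(0, a.size, block):
            s = slice(i, i + block)
            out[s] = algorithm(a[s], b[s])
    return out

a, b = np.random.randn(10_000), np.random.randn(10_000)
assert np.allclose(run(a, b, "naive"), run(a, b, "blocked"))
```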
21:05.600 --> 21:08.480
So these are the things that are sitting on top
21:08.480 --> 21:12.240
of that kind of research, and MLIR
21:12.240 --> 21:15.160
is pulling a lot of those best practices together.
21:15.160 --> 21:17.160
And now we're starting to see work done
21:17.160 --> 21:21.400
on making all of that directly accessible through Swift
21:21.400 --> 21:25.040
so that I could use Swift to write those domain specific
21:25.040 --> 21:25.880
languages.
21:25.880 --> 21:29.520
And hopefully we'll get then Swift CUDA kernels
21:29.520 --> 21:31.720
written in a very expressive and concise way that
21:31.720 --> 21:36.280
looks a bit like J in APL, and then Swift layers on top
21:36.280 --> 21:38.360
of that, and then a Swift UI on top of that,
21:38.360 --> 21:42.600
and it'll be so nice if we can get to that point.
21:42.600 --> 21:48.560
Now does it all eventually boil down to CUDA and NVIDIA GPUs?
21:48.560 --> 21:50.120
Unfortunately at the moment it does,
21:50.120 --> 21:52.600
but one of the nice things about MLIR,
21:52.600 --> 21:56.120
if AMD ever gets their act together, which they probably
21:56.120 --> 21:59.040
won't, is that they or others could
21:59.040 --> 22:05.000
write MLIR backends for other GPUs
22:05.000 --> 22:10.320
or rather tensor computation devices, of which today
22:10.320 --> 22:15.520
there are increasing number like Graphcore or Vertex AI
22:15.520 --> 22:18.840
or whatever.
22:18.840 --> 22:22.600
So yeah, being able to target lots of backends
22:22.600 --> 22:23.960
would be another benefit of this,
22:23.960 --> 22:26.680
and the market really needs competition,
22:26.680 --> 22:28.680
because at the moment NVIDIA is massively
22:28.680 --> 22:33.640
overcharging for their kind of enterprise class cards,
22:33.640 --> 22:36.720
because there is no serious competition,
22:36.720 --> 22:39.280
because nobody else is doing the software properly.
22:39.280 --> 22:41.400
In the cloud there is some competition, right?
22:41.400 --> 22:45.080
But not really, other than TPUs perhaps,
22:45.080 --> 22:49.040
but TPUs are almost unprogrammable at the moment.
22:49.040 --> 22:51.080
TPUs have the same problem that you can't.
22:51.080 --> 22:51.760
It's even worse.
22:51.760 --> 22:54.800
So TPUs, Google actually made an explicit decision
22:54.800 --> 22:57.200
to make them almost entirely unprogrammable,
22:57.200 --> 22:59.960
because they felt that there was too much IP in there,
22:59.960 --> 23:02.640
and if they gave people direct access to program them,
23:02.640 --> 23:04.360
people would learn their secrets.
23:04.360 --> 23:09.720
So you can't actually directly program
23:09.720 --> 23:12.120
the memory in a TPU.
23:12.120 --> 23:16.360
You can't even directly create code that runs on
23:16.360 --> 23:19.080
and that you look at on the machine that has the TPU.
23:19.080 --> 23:20.920
It all goes through a virtual machine.
23:20.920 --> 23:23.680
So all you can really do is this kind of cookie cutter
23:23.680 --> 23:27.760
thing of like plugging high level stuff together,
23:27.760 --> 23:31.440
which is just super tedious and annoying
23:31.440 --> 23:33.920
and totally unnecessary.
23:33.920 --> 23:40.960
So tell me, if you could, the origin story of fast AI.
23:40.960 --> 23:45.760
What is the motivation, its mission, its dream?
23:45.760 --> 23:50.040
So I guess the founding story is heavily
23:50.040 --> 23:51.840
tied to my previous startup, which
23:51.840 --> 23:53.960
was a company called Enlitic, which
23:53.960 --> 23:58.280
was the first company to focus on deep learning for medicine.
23:58.280 --> 24:03.240
And I created that because I saw there was a huge opportunity
24:03.240 --> 24:07.960
to, there's about a 10x shortage of the number of doctors
24:07.960 --> 24:12.120
in the world, in the developing world, that we need.
24:12.120 --> 24:13.840
I expected it would take about 300 years
24:13.840 --> 24:16.120
to train enough doctors to meet that gap.
24:16.120 --> 24:20.760
But I guessed that maybe if we used
24:20.760 --> 24:23.760
deep learning for some of the analytics,
24:23.760 --> 24:25.760
we could maybe make it so you don't need
24:25.760 --> 24:27.320
as highly trained doctors.
24:27.320 --> 24:28.320
For diagnosis?
24:28.320 --> 24:29.840
For diagnosis and treatment planning.
24:29.840 --> 24:33.440
Where's the biggest benefit, just before we get to fast AI?
24:33.440 --> 24:37.280
Where's the biggest benefit of AI and medicine that you see
24:37.280 --> 24:39.440
today and in the future?
24:39.440 --> 24:41.960
Not much happening today in terms of stuff that's actually
24:41.960 --> 24:42.440
out there.
24:42.440 --> 24:43.160
It's very early.
24:43.160 --> 24:45.320
But in terms of the opportunity, it's
24:45.320 --> 24:51.080
to take markets like India and China and Indonesia, which
24:51.080 --> 24:58.120
have big populations, Africa, small numbers of doctors,
24:58.120 --> 25:02.440
and provide diagnostic, particularly treatment
25:02.440 --> 25:05.160
planning and triage kind of on device
25:05.160 --> 25:10.360
so that if you do a test for malaria or tuberculosis
25:10.360 --> 25:12.800
or whatever, you immediately get something
25:12.800 --> 25:14.840
that even a health care worker that's
25:14.840 --> 25:20.360
had a month of training can get a very high quality
25:20.360 --> 25:23.480
assessment of whether the patient might be at risk
25:23.480 --> 25:27.480
and say, OK, we'll send them off to a hospital.
25:27.480 --> 25:31.720
So for example, in Africa, outside of South Africa,
25:31.720 --> 25:34.080
there's only five pediatric radiologists
25:34.080 --> 25:35.320
for the entire continent.
25:35.320 --> 25:37.200
So most countries don't have any.
25:37.200 --> 25:39.240
So if your kid is sick and they need something
25:39.240 --> 25:41.200
diagnosed through medical imaging,
25:41.200 --> 25:44.040
the person, even if you're able to get medical imaging done,
25:44.040 --> 25:48.920
the person that looks at it will be a nurse at best.
25:48.920 --> 25:52.480
But actually, in India, for example, and China,
25:52.480 --> 25:54.760
almost no x rays are read by anybody,
25:54.760 --> 25:59.400
by any trained professional, because they don't have enough.
25:59.400 --> 26:02.880
So if instead we had an algorithm that
26:02.880 --> 26:10.080
could take the most likely high risk 5% for triage, and
26:10.080 --> 26:13.280
basically say, OK, somebody needs to look at this,
26:13.280 --> 26:16.240
it would massively change
26:16.240 --> 26:20.640
what's possible with medicine in the developing world.
26:20.640 --> 26:23.680
And remember, increasingly, they have money.
26:23.680 --> 26:24.800
They're the developing world.
26:24.800 --> 26:26.160
They're not the poor world, the developing world.
26:26.160 --> 26:26.920
So they have the money.
26:26.920 --> 26:28.480
So they're building the hospitals.
26:28.480 --> 26:31.960
They're getting the diagnostic equipment.
26:31.960 --> 26:34.880
But there's no way for a very long time
26:34.880 --> 26:38.480
will they be able to have the expertise.
26:38.480 --> 26:39.760
Shortage of expertise.
26:39.760 --> 26:42.720
OK, and that's where the deep learning systems
26:42.720 --> 26:46.040
can step in and magnify the expertise they do have.
26:46.040 --> 26:47.840
Exactly.
26:47.840 --> 26:54.160
So you do see, just to linger a little bit longer,
26:54.160 --> 26:58.520
the interaction, do you still see the human experts still
26:58.520 --> 26:59.840
at the core of the system?
26:59.840 --> 27:00.480
Yeah, absolutely.
27:00.480 --> 27:01.720
Is there something in medicine that
27:01.720 --> 27:03.760
could be automated almost completely?
27:03.760 --> 27:06.360
I don't see the point of even thinking about that,
27:06.360 --> 27:08.480
because we have such a shortage of people.
27:08.480 --> 27:12.160
Why would we want to find a way not to use them?
27:12.160 --> 27:13.840
Like, we have people.
27:13.840 --> 27:17.200
So the idea of, even from an economic point of view,
27:17.200 --> 27:19.800
if you can make them 10x more productive,
27:19.800 --> 27:21.600
getting rid of the person doesn't
27:21.600 --> 27:23.880
impact your unit economics at all.
27:23.880 --> 27:26.680
And it totally ignores the fact that there are things
27:26.680 --> 27:28.760
people do better than machines.
27:28.760 --> 27:33.120
So it's just, to me, that's not a useful way
27:33.120 --> 27:34.120
of framing the problem.
27:34.120 --> 27:36.440
I guess, just to clarify, I guess I
27:36.440 --> 27:40.560
meant there may be some problems where you can avoid even
27:40.560 --> 27:42.160
going to the expert ever.
27:42.160 --> 27:46.160
Sort of maybe preventative care or some basic stuff,
27:46.160 --> 27:47.800
the low hanging fruit, allowing the expert
27:47.800 --> 27:51.320
to focus on the things that are really hard.
27:51.320 --> 27:52.960
Well, that's what the triage would do, right?
27:52.960 --> 28:00.760
So the triage would say, OK, 99% sure there's nothing here.
28:00.760 --> 28:04.040
So that can be done on device.
28:04.040 --> 28:05.920
And they can just say, OK, go home.
28:05.920 --> 28:10.520
So the experts are being used to look at the stuff which
28:10.520 --> 28:12.240
has some chance it's worth looking at,
28:12.240 --> 28:15.720
which most things is not.
28:15.720 --> 28:16.280
It's fine.
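(A minimal sketch of that triage logic; the 5% figure is from earlier in the conversation, and the model scores here are made up. The point is just that a threshold routes a small high-risk slice to the expert and clears the rest.)

```python
import numpy as np

def triage(risk_scores, review_fraction=0.05):
    """Send the highest-risk fraction of cases to an expert;
    clear the rest. risk_scores: model-estimated risk per case."""
    n_review = max(1, int(len(risk_scores) * review_fraction))
    order = np.argsort(risk_scores)[::-1]     # highest risk first
    return order[:n_review], order[n_review:]

scores = np.random.rand(1000)                 # made-up model outputs
flagged, cleared = triage(scores)
print(f"{len(flagged)} for the expert, {len(cleared)} sent home")
```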
28:16.280 --> 28:19.840
Why do you think we haven't quite made progress on that yet
28:19.840 --> 28:27.480
in terms of the scale of how much AI is applied in medicine?
28:27.480 --> 28:28.400
There's a lot of reasons.
28:28.400 --> 28:29.640
I mean, one is it's pretty new.
28:29.640 --> 28:32.040
I only started in late 2014.
28:32.040 --> 28:35.920
And before that, it's hard to express
28:35.920 --> 28:37.760
to what degree the medical world was not
28:37.760 --> 28:40.720
aware of the opportunities here.
28:40.720 --> 28:45.520
So I went to RSNA, which is the world's largest radiology
28:45.520 --> 28:46.240
conference.
28:46.240 --> 28:50.040
And I told everybody I could, like,
28:50.040 --> 28:51.800
I'm doing this thing with deep learning.
28:51.800 --> 28:53.320
Please come and check it out.
28:53.320 --> 28:56.880
And no one had any idea what I was talking about.
28:56.880 --> 28:59.640
No one had any interest in it.
28:59.640 --> 29:05.040
So we've come from absolute zero, which is hard.
29:05.040 --> 29:09.920
And then the whole regulatory framework, education system,
29:09.920 --> 29:13.400
everything is just set up to think of doctoring
29:13.400 --> 29:14.920
in a very different way.
29:14.920 --> 29:16.400
So today, there is a small number
29:16.400 --> 29:22.040
of people who are deep learning practitioners and doctors
29:22.040 --> 29:22.960
at the same time.
29:22.960 --> 29:25.040
And we're starting to see the first ones come out
29:25.040 --> 29:26.520
of their PhD programs.
29:26.520 --> 29:33.960
So Zak Kohane over in Boston, Cambridge
29:33.960 --> 29:41.040
has a number of students now who are data science experts,
29:41.040 --> 29:46.400
deep learning experts, and actual medical doctors.
29:46.400 --> 29:49.480
Quite a few doctors have completed our fast AI course
29:49.480 --> 29:54.920
now and are publishing papers and creating journal reading
29:54.920 --> 29:58.040
groups in the American College of Radiology.
29:58.040 --> 30:00.280
And it's just starting to happen.
30:00.280 --> 30:02.840
But it's going to be a long process.
30:02.840 --> 30:04.920
The regulators have to learn how to regulate this.
30:04.920 --> 30:08.720
They have to build guidelines.
30:08.720 --> 30:12.120
And then the lawyers at hospitals
30:12.120 --> 30:15.080
have to develop a new way of understanding
30:15.080 --> 30:18.680
that sometimes it makes sense for data
30:18.680 --> 30:24.880
to be looked at in raw form in large quantities
30:24.880 --> 30:27.000
in order to create world changing results.
30:27.000 --> 30:30.080
Yeah, there's a regulation around data, all that.
30:30.080 --> 30:33.840
It sounds probably the hardest problem,
30:33.840 --> 30:36.760
but it sounds reminiscent of autonomous vehicles as well.
30:36.760 --> 30:38.760
Many of the same regulatory challenges,
30:38.760 --> 30:40.560
many of the same data challenges.
30:40.560 --> 30:42.160
Yeah, I mean, funnily enough, the problem
30:42.160 --> 30:44.880
is less the regulation and more the interpretation
30:44.880 --> 30:48.200
of that regulation by lawyers in hospitals.
30:48.200 --> 30:52.560
So HIPAA was actually designed.
30:52.560 --> 30:56.400
The P in HIPAA does not stand for privacy.
30:56.400 --> 30:57.640
It stands for portability.
30:57.640 --> 31:01.200
It's actually meant to be a way that data can be used.
31:01.200 --> 31:04.400
And it was created with lots of gray areas
31:04.400 --> 31:06.560
because the idea is that would be more practical
31:06.560 --> 31:10.480
and it would help people to use this legislation
31:10.480 --> 31:13.680
to actually share data in a more thoughtful way.
31:13.680 --> 31:15.320
Unfortunately, it's done the opposite
31:15.320 --> 31:18.880
because when a lawyer sees a gray area, they see, oh,
31:18.880 --> 31:22.440
if we don't know we won't get sued, then we can't do it.
31:22.440 --> 31:26.360
So HIPAA is not exactly the problem.
31:26.360 --> 31:30.080
The problem is more that hospital lawyers
31:30.080 --> 31:34.720
are not incented to make bold decisions
31:34.720 --> 31:36.520
about data portability.
31:36.520 --> 31:40.480
Or even to embrace technology that saves lives.
31:40.480 --> 31:42.440
They more want to not get in trouble
31:42.440 --> 31:44.280
for embracing that technology.
31:44.280 --> 31:47.840
Also, it also saves lives in a very abstract way,
31:47.840 --> 31:49.840
which is like, oh, we've been able to release
31:49.840 --> 31:52.360
these 100,000 anonymous records.
31:52.360 --> 31:55.360
I can't point at the specific person whose life that's saved.
31:55.360 --> 31:57.760
I can say like, oh, we've ended up with this paper
31:57.760 --> 32:02.200
which found this result, which diagnosed 1,000 more people
32:02.200 --> 32:04.200
than we would have otherwise, but it's like,
32:04.200 --> 32:07.360
which ones were helped, it's very abstract.
32:07.360 --> 32:09.400
Yeah, and on the counter side of that,
32:09.400 --> 32:13.080
you may be able to point to a life that was taken
32:13.080 --> 32:14.360
because of something that was...
32:14.360 --> 32:18.240
Yeah, or a person whose privacy was violated.
32:18.240 --> 32:20.360
It's like, oh, this specific person,
32:20.360 --> 32:25.480
you know, was reidentified.
32:25.480 --> 32:27.360
Just a fascinating topic.
32:27.360 --> 32:28.360
We're jumping around.
32:28.360 --> 32:32.880
We'll get back to fast AI, but on the question of privacy,
32:32.880 --> 32:38.160
data is the fuel for so much innovation in deep learning.
32:38.160 --> 32:39.840
What's your sense on privacy,
32:39.840 --> 32:44.080
whether we're talking about Twitter, Facebook, YouTube,
32:44.080 --> 32:48.720
just the technologies like in the medical field
32:48.720 --> 32:53.440
that rely on people's data in order to create impact?
32:53.440 --> 32:58.840
How do we get that right, respecting people's privacy
32:58.840 --> 33:03.360
and yet creating technology that is learned from data?
33:03.360 --> 33:11.480
One of my areas of focus is on doing more with less data,
33:11.480 --> 33:15.000
which so most vendors, unfortunately, are strongly
33:15.000 --> 33:20.000
incented to find ways to require more data and more computation.
33:20.000 --> 33:24.000
So Google and IBM being the most obvious...
33:24.000 --> 33:26.000
IBM.
33:26.000 --> 33:30.600
Yeah, so Watson, you know, so Google and IBM both strongly push
33:30.600 --> 33:35.400
the idea that they have more data and more computation
33:35.400 --> 33:37.800
and more intelligent people than anybody else,
33:37.800 --> 33:39.840
and so you have to trust them to do things
33:39.840 --> 33:42.600
because nobody else can do it.
33:42.600 --> 33:45.360
And Google's very upfront about this,
33:45.360 --> 33:48.680
like Jeff Dean has gone out there and given talks and said,
33:48.680 --> 33:52.840
our goal is to require 1,000 times more computation,
33:52.840 --> 33:55.120
but less people.
33:55.120 --> 34:00.600
Our goal is to use the people that you have better
34:00.600 --> 34:02.960
and the data you have better and the computation you have better.
34:02.960 --> 34:06.000
So one of the things that we've discovered is,
34:06.000 --> 34:11.080
or at least highlighted, is that you very, very, very often
34:11.080 --> 34:13.360
don't need much data at all.
34:13.360 --> 34:16.160
And so the data you already have in your organization
34:16.160 --> 34:19.240
will be enough to get state of the art results.
34:19.240 --> 34:22.600
So like my starting point would be to kind of say around privacy
34:22.600 --> 34:25.760
is a lot of people are looking for ways
34:25.760 --> 34:28.120
to share data and aggregate data,
34:28.120 --> 34:29.920
but I think often that's unnecessary.
34:29.920 --> 34:32.160
They assume that they need more data than they do
34:32.160 --> 34:35.240
because they're not familiar with the basics of transfer
34:35.240 --> 34:38.440
learning, which is this critical technique
34:38.440 --> 34:42.000
for needing orders of magnitude less data.
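(A minimal transfer-learning sketch in PyTorch, assuming a recent torchvision; this is the generic recipe, not fast AI's exact one: reuse a pretrained backbone and train only a small new head on your own, much smaller dataset.)

```python
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone and freeze its features.
model = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False

# Replace the final layer with a fresh head for a made-up 10-class task;
# only this layer's parameters will be trained.
model.fc = nn.Linear(model.fc.in_features, 10)
trainable = [p for p in model.parameters() if p.requires_grad]
```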
34:42.000 --> 34:44.680
Is your sense, one reason you might want to collect data
34:44.680 --> 34:50.440
from everyone is like in the recommender system context,
34:50.440 --> 34:54.520
where your individual, Jeremy Howard's individual data
34:54.520 --> 34:58.600
is the most useful for providing a product that's
34:58.600 --> 34:59.880
impactful for you.
34:59.880 --> 35:02.240
So for giving you advertisements,
35:02.240 --> 35:07.640
for recommending to you movies, for doing medical diagnosis.
35:07.640 --> 35:11.720
Is your sense we can build with a small amount of data,
35:11.720 --> 35:16.040
general models that will have a huge impact for most people,
35:16.040 --> 35:19.120
that we don't need to have data from each individual?
35:19.120 --> 35:20.560
On the whole, I'd say yes.
35:20.560 --> 35:26.400
I mean, there are things like, recommender systems
35:26.400 --> 35:30.960
have this cold start problem, where Jeremy is a new customer.
35:30.960 --> 35:33.280
We haven't seen him before, so we can't recommend him things
35:33.280 --> 35:36.520
based on what else he's bought and liked with us.
35:36.520 --> 35:39.440
And there's various workarounds to that.
35:39.440 --> 35:41.160
A lot of music programs will start out
35:41.160 --> 35:44.920
by saying, which of these artists do you like?
35:44.920 --> 35:46.800
Which of these albums do you like?
35:46.800 --> 35:49.800
Which of these songs do you like?
35:49.800 --> 35:51.040
Netflix used to do that.
35:51.040 --> 35:55.320
Nowadays, people don't like that because they think, oh,
35:55.320 --> 35:57.400
we don't want to bother the user.
35:57.400 --> 36:00.560
So you could work around that by having some kind of data
36:00.560 --> 36:04.240
sharing where you get my marketing record from Acxiom
36:04.240 --> 36:06.360
or whatever and try to guess from that.
36:06.360 --> 36:12.360
To me, the benefit to me and to society
36:12.360 --> 36:16.520
of saving me five minutes on answering some questions
36:16.520 --> 36:23.520
versus the negative externalities of the privacy issue
36:23.520 --> 36:24.800
doesn't add up.
36:24.800 --> 36:26.600
So I think a lot of the time, the places
36:26.600 --> 36:30.520
where people are invading our privacy in order
36:30.520 --> 36:35.360
to provide convenience is really about just trying
36:35.360 --> 36:36.880
to make them more money.
36:36.880 --> 36:40.760
And they move these negative externalities
36:40.760 --> 36:44.360
into places that they don't have to pay for them.
36:44.360 --> 36:48.120
So when you actually see regulations
36:48.120 --> 36:50.560
appear that actually cause the companies that
36:50.560 --> 36:52.360
create these negative externalities to have
36:52.360 --> 36:54.320
to pay for it themselves, they say, well,
36:54.320 --> 36:56.160
we can't do it anymore.
36:56.160 --> 36:58.240
So the cost is actually too high.
36:58.240 --> 37:02.280
But for something like medicine, the hospital
37:02.280 --> 37:06.440
has my medical imaging, my pathology studies,
37:06.440 --> 37:08.920
my medical records.
37:08.920 --> 37:11.920
And also, I own my medical data.
37:11.920 --> 37:16.960
So I help a startup called doc.ai.
37:16.960 --> 37:19.760
One of the things DocAI does is that it has an app.
37:19.760 --> 37:26.120
You can connect to Sutter Health and LabCorp and Walgreens
37:26.120 --> 37:29.840
and download your medical data to your phone
37:29.840 --> 37:33.560
and then upload it, again, at your discretion
37:33.560 --> 37:36.040
to share it as you wish.
37:36.040 --> 37:38.440
So with that kind of approach, we
37:38.440 --> 37:41.160
can share our medical information
37:41.160 --> 37:44.840
with the people we want to.
37:44.840 --> 37:45.720
Yeah, so control.
37:45.720 --> 37:48.240
I mean, really being able to control who you share it with
37:48.240 --> 37:49.760
and so on.
37:49.760 --> 37:53.080
So that's a beautiful, interesting tangent,
37:53.080 --> 37:59.360
but to return back to the origin story of FastAI.
37:59.360 --> 38:02.520
Right, so before I started FastAI,
38:02.520 --> 38:07.160
I spent a year researching where are the biggest
38:07.160 --> 38:10.400
opportunities for deep learning.
38:10.400 --> 38:14.080
Because I knew from my time at Kaggle in particular
38:14.080 --> 38:17.960
that deep learning had hit this threshold point where it was
38:17.960 --> 38:20.520
rapidly becoming the state of the art approach in every area
38:20.520 --> 38:21.600
that looked at it.
38:21.600 --> 38:25.400
And I'd been working with neural nets for over 20 years.
38:25.400 --> 38:27.440
I knew that from a theoretical point of view,
38:27.440 --> 38:30.760
once it hit that point, it would do that in just about every
38:30.760 --> 38:31.600
domain.
38:31.600 --> 38:34.480
And so I spent a year researching
38:34.480 --> 38:37.120
what are the domains it's going to have the biggest low hanging
38:37.120 --> 38:39.400
fruit in the shortest time period.
38:39.400 --> 38:43.880
I picked medicine, but there were so many I could have picked.
38:43.880 --> 38:47.640
And so there was a level of frustration for me of like, OK,
38:47.640 --> 38:50.840
I'm really glad we've opened up the medical deep learning
38:50.840 --> 38:53.880
world and today it's huge, as you know.
38:53.880 --> 38:58.280
But we can't do, you know, I can't do everything.
38:58.280 --> 39:00.400
I don't even know like, like in medicine,
39:00.400 --> 39:02.760
it took me a really long time to even get a sense of like,
39:02.760 --> 39:05.080
what kind of problems do medical practitioners solve?
39:05.080 --> 39:06.400
What kind of data do they have?
39:06.400 --> 39:08.520
Who has that data?
39:08.520 --> 39:12.480
So I kind of felt like I need to approach this differently
39:12.480 --> 39:16.200
if I want to maximize the positive impact of deep learning.
39:16.200 --> 39:19.480
Rather than me picking an area and trying
39:19.480 --> 39:21.720
to become good at it and building something,
39:21.720 --> 39:24.480
I should let people who are already domain experts
39:24.480 --> 39:29.240
in those areas and who already have the data do it themselves.
39:29.240 --> 39:35.520
So that was the reason for fast AI: to basically try
39:35.520 --> 39:38.840
and figure out how to get deep learning
39:38.840 --> 39:41.800
into the hands of people who could benefit from it
39:41.800 --> 39:45.400
and help them to do so in as quick and easy and effective
39:45.400 --> 39:47.080
a way as possible.
39:47.080 --> 39:47.560
Got it.
39:47.560 --> 39:50.240
So sort of empower the domain experts.
39:50.240 --> 39:51.320
Yeah.
39:51.320 --> 39:54.200
And like partly it's because like,
39:54.200 --> 39:56.280
unlike most people in this field,
39:56.280 --> 39:59.960
my background is very applied and industrial.
39:59.960 --> 40:02.480
Like my first job was at McKinsey & Company.
40:02.480 --> 40:04.640
I spent 10 years in management consulting.
40:04.640 --> 40:10.240
I spent a lot of time with domain experts.
40:10.240 --> 40:12.800
You know, so I kind of respect them and appreciate them.
40:12.800 --> 40:16.440
And I know that's where the value generation in society is.
40:16.440 --> 40:21.560
And so I also know that most of them can't code.
40:21.560 --> 40:26.320
And most of them don't have the time to invest, you know,
40:26.320 --> 40:29.320
three years in a graduate degree or whatever.
40:29.320 --> 40:33.520
So it's like, how do I upskill those domain experts?
40:33.520 --> 40:36.080
I think that would be a super powerful thing,
40:36.080 --> 40:40.200
you know, the biggest societal impact I could have.
40:40.200 --> 40:41.680
So yeah, that was the thinking.
40:41.680 --> 40:45.680
So so much of fast AI, the students and researchers
40:45.680 --> 40:50.120
and the things you teach are programmatically minded,
40:50.120 --> 40:51.520
practically minded,
40:51.520 --> 40:55.840
figuring out ways to solve real problems, and fast.
40:55.840 --> 40:57.480
So from your experience,
40:57.480 --> 41:02.040
what's the difference between theory and practice of deep learning?
41:02.040 --> 41:03.680
Hmm.
41:03.680 --> 41:07.520
Well, most of the research in the deep learning world
41:07.520 --> 41:09.840
is a total waste of time.
41:09.840 --> 41:11.040
Right. That's what I was getting at.
41:11.040 --> 41:12.200
Yeah.
41:12.200 --> 41:16.240
It's it's a problem in science in general.
41:16.240 --> 41:19.600
Scientists need to be published,
41:19.600 --> 41:21.480
which means they need to work on things
41:21.480 --> 41:24.040
that their peers are extremely familiar with
41:24.040 --> 41:26.200
and can recognize an advance in that area.
41:26.200 --> 41:30.040
So that means that they all need to work on the same thing.
41:30.040 --> 41:33.040
And so, for the thing they work on, there
41:33.040 --> 41:35.640
is nothing to encourage them to work on things
41:35.640 --> 41:38.840
that are practically useful.
41:38.840 --> 41:41.120
So you get just a whole lot of research,
41:41.120 --> 41:43.200
which is minor advances in stuff
41:43.200 --> 41:44.600
that's been very highly studied
41:44.600 --> 41:49.280
and has no significant practical impact.
41:49.280 --> 41:50.840
Whereas the things that really make a difference
41:50.840 --> 41:52.760
like I mentioned transfer learning,
41:52.760 --> 41:55.560
like if we can do better at transfer learning,
41:55.560 --> 41:58.160
then it's this like world changing thing
41:58.160 --> 42:02.880
where suddenly like lots more people can do world class work
42:02.880 --> 42:06.760
with fewer resources and less data.
42:06.760 --> 42:08.480
But almost nobody works on that.
42:08.480 --> 42:10.760
Or another example, active learning,
42:10.760 --> 42:11.880
which is the study of like,
42:11.880 --> 42:15.880
how do we get more out of the human beings in the loop?
42:15.880 --> 42:17.120
That's my favorite topic.
42:17.120 --> 42:18.520
Yeah. So active learning is great,
42:18.520 --> 42:21.160
but it's almost nobody working on it
42:21.160 --> 42:23.800
because it's just not a trendy thing right now.
42:23.800 --> 42:27.040
If I may, sorry to interrupt.
42:27.040 --> 42:29.720
You were saying that nobody is publishing
42:29.720 --> 42:31.520
on active learning, right?
42:31.520 --> 42:33.440
But there's people inside companies,
42:33.440 --> 42:36.800
anybody who actually has to solve a problem,
42:36.800 --> 42:39.600
they're going to innovate on active learning.
42:39.600 --> 42:42.080
Yeah. Everybody kind of reinvents active learning
42:42.080 --> 42:43.760
when they actually have to work in practice
42:43.760 --> 42:46.360
because they start labeling things and they think,
42:46.360 --> 42:49.280
gosh, this is taking a long time and it's very expensive.
42:49.280 --> 42:51.200
And then they start thinking,
42:51.200 --> 42:52.640
well, why am I labeling everything?
42:52.640 --> 42:54.840
I'm only, the machine's only making mistakes
42:54.840 --> 42:56.040
on those two classes.
42:56.040 --> 42:56.880
They're the hard ones.
42:56.880 --> 42:58.840
Maybe I'll just start labeling those two classes
42:58.840 --> 43:00.360
and then you start thinking,
43:00.360 --> 43:01.560
well, why did I do that manually?
43:01.560 --> 43:03.000
Why can't I just get the system to tell me
43:03.000 --> 43:04.760
which things are going to be hardest?
43:04.760 --> 43:06.200
It's an obvious thing to do.
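(A minimal sketch of that idea, known as uncertainty sampling; the predictions are made up. The system surfaces the examples it is least sure about, and those are labeled next.)

```python
import numpy as np

def most_uncertain(probs, k=100):
    """Pick the k unlabeled examples the model is least confident on.
    probs: (n_examples, n_classes) predicted class probabilities."""
    uncertainty = 1.0 - probs.max(axis=1)     # low top-prob = unsure
    return np.argsort(uncertainty)[::-1][:k]  # label these next

probs = np.random.dirichlet(np.ones(5), size=10_000)  # fake predictions
print(most_uncertain(probs, k=100)[:10])
```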
43:06.200 --> 43:11.400
But yeah, it's just like transfer learning.
43:11.400 --> 43:14.120
It's understudied and the academic world
43:14.120 --> 43:17.440
just has no reason to care about practical results.
43:17.440 --> 43:18.360
The funny thing is, like,
43:18.360 --> 43:19.920
I've only really ever written one paper.
43:19.920 --> 43:21.520
I hate writing papers.
43:21.520 --> 43:22.760
And I didn't even write it.
43:22.760 --> 43:25.480
It was my colleague, Sebastian Ruder, who actually wrote it.
43:25.480 --> 43:28.040
I just did the research for it.
43:28.040 --> 43:31.640
But it was basically introducing successful transfer learning
43:31.640 --> 43:34.200
to NLP for the first time.
43:34.200 --> 43:37.000
And the algorithm is called ULMFiT.
43:37.000 --> 43:42.320
And I actually wrote it for the course,
43:42.320 --> 43:43.720
for the fast AI course.
43:43.720 --> 43:45.360
I wanted to teach people NLP.
43:45.360 --> 43:47.520
And I thought I only want to teach people practical stuff.
43:47.520 --> 43:50.560
And I think the only practical stuff is transfer learning.
43:50.560 --> 43:53.360
And I couldn't find any examples of transfer learning in NLP.
43:53.360 --> 43:54.560
So I just did it.
43:54.560 --> 43:57.320
And I was shocked to find that as soon as I did it,
43:57.320 --> 44:01.080
which, you know, the basic prototype took a couple of days,
44:01.080 --> 44:02.520
smashed the state of the art
44:02.520 --> 44:04.760
on one of the most important data sets in a field
44:04.760 --> 44:06.720
that I knew nothing about.
44:06.720 --> 44:10.400
And I just thought, well, this is ridiculous.
44:10.400 --> 44:13.800
And so I spoke to Sebastian about it.
44:13.800 --> 44:17.680
And he kindly offered to write up the results.
44:17.680 --> 44:21.360
And so it ended up being published in ACL,
44:21.360 --> 44:25.560
which is the top computational linguistics conference.
44:25.560 --> 44:28.880
So like, people do actually care once you do it.
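For reference, the ULMFiT recipe, pretrain a language model and then fine-tune a classifier on top of it, looks roughly like this with today's fastai text API (current library names, not the original 2018 code):

    from fastai.text.all import *

    # Start from an AWD-LSTM language model pretrained on Wikipedia,
    # then fine-tune it as a sentiment classifier on IMDb.
    path = untar_data(URLs.IMDB)
    dls = TextDataLoaders.from_folder(path, valid='test')
    learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
    learn.fine_tune(4, 1e-2)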
44:28.880 --> 44:34.160
But I guess it's difficult for maybe junior researchers.
44:34.160 --> 44:37.720
I don't care whether I get citations or papers or whatever.
44:37.720 --> 44:39.640
There's nothing in my life that makes that important,
44:39.640 --> 44:41.240
which is why I've never actually
44:41.240 --> 44:43.040
bothered to write a paper myself.
44:43.040 --> 44:44.400
But for people who do, I guess they
44:44.400 --> 44:50.960
have to pick the kind of safe option, which is like,
44:50.960 --> 44:52.720
yeah, make a slight improvement on something
44:52.720 --> 44:55.160
that everybody's already working on.
44:55.160 --> 44:59.040
Yeah, nobody does anything interesting or succeeds
44:59.040 --> 45:01.240
in life with the safe option.
45:01.240 --> 45:02.960
Well, I mean, the nice thing is nowadays,
45:02.960 --> 45:05.320
everybody is now working on NLP transfer learning.
45:05.320 --> 45:12.240
Because since that time, we've had GPT and GPT-2 and BERT.
45:12.240 --> 45:15.400
So yeah, once you show that something's possible,
45:15.400 --> 45:17.680
everybody jumps in, I guess.
45:17.680 --> 45:19.320
I hope to be a part of it.
45:19.320 --> 45:21.600
I hope to see more innovation in active learning
45:21.600 --> 45:22.160
in the same way.
45:22.160 --> 45:24.560
I think transfer learning and active learning
45:24.560 --> 45:27.360
are fascinating open problems.
45:27.360 --> 45:30.160
I actually helped start a startup called Platform.ai, which
45:30.160 --> 45:31.760
is really all about active learning.
45:31.760 --> 45:34.200
And yeah, it's been interesting trying
45:34.200 --> 45:36.920
to kind of see what research is out there
45:36.920 --> 45:37.800
and make the most of it.
45:37.800 --> 45:39.200
And there's basically none.
45:39.200 --> 45:41.040
So we've had to do all our own research.
45:41.040 --> 45:44.240
Once again, and just as you described,
45:44.240 --> 45:47.640
can you tell the story of the Stanford competition,
45:47.640 --> 45:51.520
DAWNBench, and fast AI's achievement on it?
45:51.520 --> 45:51.960
Sure.
45:51.960 --> 45:55.560
So something which I really enjoy is that I basically
45:55.560 --> 45:59.000
teach two courses a year, the practical deep learning
45:59.000 --> 46:02.120
for coders, which is kind of the introductory course,
46:02.120 --> 46:04.280
and then cutting edge deep learning for coders, which
46:04.280 --> 46:08.080
is the kind of research level course.
46:08.080 --> 46:14.320
And while I teach those courses, I basically
46:14.320 --> 46:18.440
have a big office at the University of San Francisco.
46:18.440 --> 46:19.800
It'd be enough for like 30 people.
46:19.800 --> 46:22.960
And I invite any student who wants to come and hang out
46:22.960 --> 46:25.320
with me while I build the course.
46:25.320 --> 46:26.640
And so generally, it's full.
46:26.640 --> 46:30.880
And so we have 20 or 30 people in a big office
46:30.880 --> 46:33.880
with nothing to do but study deep learning.
46:33.880 --> 46:35.880
So it was during one of these times
46:35.880 --> 46:38.640
that somebody in the group said, oh, there's
46:38.640 --> 46:41.480
a thing called DAWNBench that looks interesting.
46:41.480 --> 46:42.800
And I say, what the hell is that?
46:42.800 --> 46:44.120
Somebody's set up some competition
46:44.120 --> 46:46.440
to see how quickly you can train a model.
46:46.440 --> 46:50.080
It seems kind of not exactly relevant to what we're doing,
46:50.080 --> 46:51.440
but it sounds like the kind of thing
46:51.440 --> 46:52.480
which you might be interested in.
46:52.480 --> 46:53.960
And I checked it out and I was like, oh, crap.
46:53.960 --> 46:55.840
There's only 10 days till it's over.
46:55.840 --> 46:58.120
It's pretty much too late.
46:58.120 --> 47:01.000
And we're kind of busy trying to teach this course.
47:01.000 --> 47:05.640
But we're like, oh, it would make an interesting case study
47:05.640 --> 47:08.200
for the course, like it's all the stuff we're already doing.
47:08.200 --> 47:11.120
Why don't we just put together our current best practices
47:11.120 --> 47:12.480
and ideas.
47:12.480 --> 47:16.880
So me and I guess about four students just decided
47:16.880 --> 47:17.560
to give it a go.
47:17.560 --> 47:19.880
And we focused on this small one called
47:19.880 --> 47:24.640
CIFAR 10, which is little 32 by 32 pixel images.
47:24.640 --> 47:26.160
Can you say what DAWNBench is?
47:26.160 --> 47:29.560
Yeah, so it's a competition to train a model as fast as possible.
47:29.560 --> 47:31.000
It was run by Stanford.
47:31.000 --> 47:32.480
And as cheap as possible, too.
47:32.480 --> 47:34.320
That's also another one for as cheap as possible.
47:34.320 --> 47:38.160
And there's a couple of categories, ImageNet and CIFAR 10.
47:38.160 --> 47:42.080
So ImageNet's this big 1.3 million image thing
47:42.080 --> 47:45.400
that took a couple of days to train.
47:45.400 --> 47:51.240
I remember a friend of mine, Pete Warden, who's now at Google.
47:51.240 --> 47:53.760
I remember he told me how he trained ImageNet a few years
47:53.760 --> 47:59.440
ago when he basically had this little granny flat out
47:59.440 --> 48:01.920
the back that he turned into his ImageNet training center.
48:01.920 --> 48:04.240
And after a year of work, he figured out
48:04.240 --> 48:07.040
how to train it in 10 days or something.
48:07.040 --> 48:08.480
It's like that was a big job.
48:08.480 --> 48:10.640
Whereas CIFAR 10, at that time, you
48:10.640 --> 48:13.040
could train in a few hours.
48:13.040 --> 48:14.520
It's much smaller and easier.
48:14.520 --> 48:18.120
So we thought we'd try CIFAR 10.
48:18.120 --> 48:23.800
And yeah, I'd really never done that before.
48:23.800 --> 48:27.920
Like, things like using more than one GPU at a time
48:27.920 --> 48:29.800
was something I tried to avoid.
48:29.800 --> 48:32.160
Because to me, it's very against the whole idea
48:32.160 --> 48:35.080
of accessibility; you should be able to do things with one GPU.
48:35.080 --> 48:36.480
I mean, have you asked in the past
48:36.480 --> 48:39.680
before, after having accomplished something,
48:39.680 --> 48:42.520
how do I do this faster, much faster?
48:42.520 --> 48:43.240
Oh, always.
48:43.240 --> 48:44.680
But it's always, for me, it's always,
48:44.680 --> 48:47.640
how do I make it much faster on a single GPU
48:47.640 --> 48:50.400
that a normal person could afford in their day to day life?
48:50.400 --> 48:54.760
It's not, how could I do it faster by having a huge data
48:54.760 --> 48:55.280
center?
48:55.280 --> 48:57.240
Because to me, it's all about, like,
48:57.240 --> 48:59.560
as many people should be able to use something as possible
48:59.560 --> 49:04.160
without fussing around with infrastructure.
49:04.160 --> 49:06.080
So anyway, so in this case, it's like, well,
49:06.080 --> 49:10.240
we can use 8 GPUs just by renting an AWS machine.
49:10.240 --> 49:11.920
So we thought we'd try that.
49:11.920 --> 49:16.560
And yeah, basically, using the stuff we were already doing,
49:16.560 --> 49:20.360
we were able to get the speed.
49:20.360 --> 49:25.360
Within a few days, we had the speed down to a very small
49:25.360 --> 49:26.040
number of minutes.
49:26.040 --> 49:28.800
I can't remember exactly how many minutes it was,
49:28.800 --> 49:31.440
but it might have been like 10 minutes or something.
49:31.440 --> 49:34.200
And so yeah, we found ourselves at the top of the leaderboard
49:34.200 --> 49:38.720
easily for both time and money, which really shocked me.
49:38.720 --> 49:40.160
Because the other people competing in this
49:40.160 --> 49:41.880
were like Google and Intel and stuff,
49:41.880 --> 49:45.360
who know a lot more about this stuff than I think we do.
49:45.360 --> 49:46.800
So then we were emboldened.
49:46.800 --> 49:50.640
We thought, let's try the ImageNet one too.
49:50.640 --> 49:53.280
I mean, it seemed way out of our league.
49:53.280 --> 49:57.120
But our goal was to get under 12 hours.
49:57.120 --> 49:59.280
And we did, which was really exciting.
49:59.280 --> 50:01.440
And we didn't put anything up on the leaderboard,
50:01.440 --> 50:03.080
but we were down to like 10 hours.
50:03.080 --> 50:10.000
But then Google put in like five hours or something,
50:10.000 --> 50:13.360
and we're just like, oh, we're so screwed.
50:13.360 --> 50:16.880
But we kind of thought, well, keep trying.
50:16.880 --> 50:17.880
If Google can do it in five hours.
50:17.880 --> 50:20.760
I mean, Google did it on five hours on like a TPU pod
50:20.760 --> 50:24.280
or something, like a lot of hardware.
50:24.280 --> 50:26.360
But we kind of like had a bunch of ideas to try.
50:26.360 --> 50:28.920
Like a really simple thing was, why
50:28.920 --> 50:30.480
are we using these big images?
50:30.480 --> 50:36.280
They're like 224 or 256 by 256 pixels.
50:36.280 --> 50:37.640
Why don't we try smaller ones?
50:37.640 --> 50:41.360
And just to elaborate, there's a constraint on the accuracy
50:41.360 --> 50:43.080
that your trained model is supposed to achieve.
50:43.080 --> 50:45.760
Yeah, you've got to achieve 93%.
50:45.760 --> 50:47.640
I think it was for ImageNet.
50:47.640 --> 50:49.160
Exactly.
50:49.160 --> 50:50.240
Which is very tough.
50:50.240 --> 50:51.240
So you have to beat that.
50:51.240 --> 50:52.120
Yeah, 93%.
50:52.120 --> 50:54.680
Like they picked a good threshold.
50:54.680 --> 50:58.920
It was a little bit higher than what the most commonly used
50:58.920 --> 51:03.320
ResNet 50 model could achieve at that time.
51:03.320 --> 51:08.080
So yeah, so it's quite a difficult problem to solve.
51:08.080 --> 51:09.920
But yeah, we realized if we actually just
51:09.920 --> 51:16.200
use 64 by 64 images, it trained a pretty good model.
51:16.200 --> 51:17.960
And then we could take that same model
51:17.960 --> 51:19.560
and just give it a couple of epochs
51:19.560 --> 51:21.880
to learn 224 by 224 images.
51:21.880 --> 51:24.440
And it was basically already trained.
51:24.440 --> 51:25.480
It makes a lot of sense.
51:25.480 --> 51:27.200
Like if you teach somebody, like here's
51:27.200 --> 51:30.240
what a dog looks like, and you show them low res versions,
51:30.240 --> 51:33.640
and then you say, here's a really clear picture of a dog.
51:33.640 --> 51:36.000
They already know what a dog looks like.
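In code, that progressive resizing trick is roughly the following sketch (torchvision-based; the data path, model, and epoch counts are placeholders):

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms, models

    def make_loader(root, size, bs=64):
        tfms = transforms.Compose([transforms.RandomResizedCrop(size),
                                   transforms.ToTensor()])
        return DataLoader(datasets.ImageFolder(root, tfms),
                          batch_size=bs, shuffle=True)

    def train(model, loader, epochs, lr=1e-2):
        opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        loss_fn = nn.CrossEntropyLoss()
        model.train()
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()

    model = models.resnet50(num_classes=1000)
    train(model, make_loader("imagenet/train", 64), epochs=30)  # cheap low-res phase
    train(model, make_loader("imagenet/train", 224), epochs=2)  # brief full-res phase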
51:36.000 --> 51:39.920
So with that, we just jumped to the front,
51:39.920 --> 51:46.400
and we ended up winning parts of that competition.
51:46.400 --> 51:49.680
We actually ended up doing a distributed version
51:49.680 --> 51:51.960
over multiple machines a couple of months later
51:51.960 --> 51:53.560
and ended up at the top of the leaderboard.
51:53.560 --> 51:55.440
We had 18 minutes.
51:55.440 --> 51:56.280
ImageNet?
51:56.280 --> 52:00.560
Yeah, and people have just kept on blasting through again
52:00.560 --> 52:02.320
and again since then.
52:02.320 --> 52:06.760
So what's your view on multi GPU or multiple machine
52:06.760 --> 52:11.960
training in general as a way to speed code up?
52:11.960 --> 52:13.680
I think it's largely a waste of time.
52:13.680 --> 52:15.880
Both multi GPU on a single machine and?
52:15.880 --> 52:17.640
Yeah, particularly multi machines,
52:17.640 --> 52:18.880
because it's just clunky.
52:21.840 --> 52:25.320
Multi GPUs is less clunky than it used to be.
52:25.320 --> 52:28.520
But to me, anything that slows down your iteration speed
52:28.520 --> 52:31.800
is a waste of time.
52:31.800 --> 52:36.960
So you could maybe do your very last perfecting of the model
52:36.960 --> 52:38.960
on multi GPUs if you need to.
52:38.960 --> 52:44.560
But so for example, I think doing stuff on ImageNet
52:44.560 --> 52:46.000
is generally a waste of time.
52:46.000 --> 52:48.240
Why test things on 1.3 million images?
52:48.240 --> 52:51.040
Most of us don't use 1.3 million images.
52:51.040 --> 52:54.360
And we've also done research that shows that doing things
52:54.360 --> 52:56.840
on a smaller subset of images gives you
52:56.840 --> 52:59.280
the same relative answers anyway.
52:59.280 --> 53:02.120
So from a research point of view, why waste that time?
53:02.120 --> 53:06.200
So actually, I released a couple of new data sets recently.
53:06.200 --> 53:08.880
One is called Imagenette.
53:08.880 --> 53:12.920
The French ImageNet, which is a small subset of ImageNet,
53:12.920 --> 53:15.200
which is designed to be easy to classify.
53:15.200 --> 53:17.320
How do you spell Imagenette?
53:17.320 --> 53:19.200
It's got an extra T and E at the end,
53:19.200 --> 53:20.520
because it's very French.
53:20.520 --> 53:21.640
Imagenette, OK.
53:21.640 --> 53:24.720
And then another one called Imagewoof,
53:24.720 --> 53:29.840
which is a subset of ImageNet that only contains dog breeds.
53:29.840 --> 53:31.120
But that's a hard one, right?
53:31.120 --> 53:32.000
That's a hard one.
53:32.000 --> 53:34.360
And I've discovered that if you just look at these two
53:34.360 --> 53:39.120
subsets, you can train things on a single GPU in 10 minutes.
53:39.120 --> 53:42.040
And the results you get are directly transferrable
53:42.040 --> 53:44.320
to ImageNet nearly all the time.
53:44.320 --> 53:46.600
And so now I'm starting to see some researchers start
53:46.600 --> 53:48.960
to use these smaller data sets.
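Both subsets ship with fastai, so a quick experiment looks something like this (current fastai names, as an assumption; swap URLs.IMAGEWOOF in for the hard dog-breeds version):

    from fastai.vision.all import *

    path = untar_data(URLs.IMAGENETTE)  # small, easy-to-classify ImageNet subset
    dls = ImageDataLoaders.from_folder(path, valid='val', item_tfms=Resize(160))
    learn = vision_learner(dls, resnet18, metrics=accuracy)
    learn.fit_one_cycle(5)  # minutes on a single GPU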
53:48.960 --> 53:51.120
I so deeply love the way you think,
53:51.120 --> 53:57.000
because I think you might have written a blog post saying
53:57.000 --> 54:00.200
that going with these big data sets
54:00.200 --> 54:03.920
is encouraging people to not think creatively.
54:03.920 --> 54:04.560
Absolutely.
54:04.560 --> 54:08.320
So yeah, it sort of constrains you
54:08.320 --> 54:09.840
to train on large resources.
54:09.840 --> 54:11.280
And because you have these resources,
54:11.280 --> 54:14.040
you think more research will be better.
54:14.040 --> 54:17.760
And then you start to like somehow you kill the creativity.
54:17.760 --> 54:18.000
Yeah.
54:18.000 --> 54:20.760
And even worse than that, Lex, I keep hearing from people
54:20.760 --> 54:23.480
who say, I decided not to get into deep learning
54:23.480 --> 54:26.080
because I don't believe it's accessible to people
54:26.080 --> 54:28.560
outside of Google to do useful work.
54:28.560 --> 54:31.640
So like I see a lot of people make an explicit decision
54:31.640 --> 54:36.000
to not learn this incredibly valuable tool
54:36.000 --> 54:39.840
because they've drunk the Google Kool Aid, which is that only
54:39.840 --> 54:42.440
Google's big enough and smart enough to do it.
54:42.440 --> 54:45.400
And I just find that so disappointing and it's so wrong.
54:45.400 --> 54:49.200
And I think all of the major breakthroughs in AI
54:49.200 --> 54:53.280
in the next 20 years will be doable on a single GPU.
54:53.280 --> 54:57.120
Like I would say, my sense is all the big sort of.
54:57.120 --> 54:58.200
Well, let's put it this way.
54:58.200 --> 55:00.200
None of the big breakthroughs of the last 20 years
55:00.200 --> 55:01.720
have required multiple GPUs.
55:01.720 --> 55:05.920
So like batch norm, ReLU, dropout,
55:05.920 --> 55:08.080
to demonstrate that there's something to them.
55:08.080 --> 55:11.840
Every one of them, none of them has required multiple GPUs.
55:11.840 --> 55:15.800
GANs, the original GANs, didn't require multiple GPUs.
55:15.800 --> 55:18.040
Well, and we've actually recently shown
55:18.040 --> 55:19.680
that you don't even need GANs.
55:19.680 --> 55:23.360
So we've developed GAN level outcomes
55:23.360 --> 55:24.720
without needing GANs.
55:24.720 --> 55:26.880
And we can now do it with, again,
55:26.880 --> 55:29.680
by using transfer learning, we can do it in a couple of hours
55:29.680 --> 55:30.520
on a single GPU.
55:30.520 --> 55:31.600
So you're using a generator model
55:31.600 --> 55:32.960
without the adversarial part?
55:32.960 --> 55:33.440
Yeah.
55:33.440 --> 55:35.880
So we've found loss functions that
55:35.880 --> 55:38.680
work super well without the adversarial part.
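One family of such losses is perceptual, or feature, losses: compare images in the feature space of a frozen pretrained network instead of through an adversarial critic. A rough sketch of the idea, not their exact code:

    import torch.nn as nn
    from torchvision import models

    class FeatureLoss(nn.Module):
        def __init__(self, cut=16):
            super().__init__()
            # Frozen VGG16 trunk: images that look alike activate it alike.
            vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:cut]
            for p in vgg.parameters():
                p.requires_grad_(False)
            self.vgg, self.l1 = vgg.eval(), nn.L1Loss()

        def forward(self, pred, target):
            # Pixel term keeps colors honest; feature term keeps textures plausible.
            return self.l1(pred, target) + self.l1(self.vgg(pred), self.vgg(target))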
55:38.680 --> 55:41.840
And then one of our students, a guy called Jason Antic,
55:41.840 --> 55:44.640
has created a system called DeOldify,
55:44.640 --> 55:47.280
which uses this technique to colorize
55:47.280 --> 55:48.840
old black and white movies.
55:48.840 --> 55:51.480
You can do it on a single GPU, colorize a whole movie
55:51.480 --> 55:52.920
in a couple of hours.
55:52.920 --> 55:56.080
And one of the things that Jason and I did together
55:56.080 --> 56:00.480
was we figured out how to add a little bit of GAN
56:00.480 --> 56:03.000
at the very end, which it turns out for colorization,
56:03.000 --> 56:06.000
makes it just a bit brighter and nicer.
56:06.000 --> 56:07.920
And then Jason did masses of experiments
56:07.920 --> 56:10.000
to figure out exactly how much to do.
56:10.000 --> 56:12.840
But it's still all done on his home machine,
56:12.840 --> 56:15.400
on a single GPU in his lounge room.
56:15.400 --> 56:19.200
And if you think about colorizing Hollywood movies,
56:19.200 --> 56:21.720
that sounds like something a huge studio would have to do.
56:21.720 --> 56:25.280
But he has the world's best results on this.
56:25.280 --> 56:27.040
There's this problem of microphones.
56:27.040 --> 56:28.640
We're just talking to microphones now.
56:28.640 --> 56:29.140
Yeah.
56:29.140 --> 56:32.520
It's such a pain in the ass to have these microphones
56:32.520 --> 56:34.440
to get good quality audio.
56:34.440 --> 56:36.720
And I tried to see if it's possible to plop down
56:36.720 --> 56:39.960
a bunch of cheap sensors and reconstruct higher quality
56:39.960 --> 56:41.840
audio from multiple sources.
56:41.840 --> 56:45.440
Because right now, I haven't seen work from, OK,
56:45.440 --> 56:48.760
we take several inexpensive mics, automatically combining
56:48.760 --> 56:52.280
audio from multiple sources to improve the combined audio.
56:52.280 --> 56:53.200
People haven't done that.
56:53.200 --> 56:55.080
And that feels like a learning problem.
56:55.080 --> 56:56.800
So hopefully somebody can.
56:56.800 --> 56:58.760
Well, I mean, it's evidently doable.
56:58.760 --> 57:01.000
And it should have been done by now.
57:01.000 --> 57:03.640
I felt the same way about computational photography
57:03.640 --> 57:04.480
four years ago.
57:04.480 --> 57:05.240
That's right.
57:05.240 --> 57:08.240
Why are we investing in big lenses when
57:08.240 --> 57:13.160
three cheap lenses plus actually a little bit of intentional
57:13.160 --> 57:16.640
movement, so like take a few frames,
57:16.640 --> 57:19.840
gives you enough information to get excellent subpixel
57:19.840 --> 57:22.440
resolution, which particularly with deep learning,
57:22.440 --> 57:25.840
you would know exactly what you meant to be looking at.
57:25.840 --> 57:28.200
We can totally do the same thing with audio.
57:28.200 --> 57:30.720
I think it's madness that it hasn't been done yet.
57:30.720 --> 57:33.320
Has there been progress with photography companies?
57:33.320 --> 57:33.820
Yeah.
57:33.820 --> 57:36.720
Computational photography is basically standard now.
57:36.720 --> 57:41.120
So the Google Pixel Night Sight, I
57:41.120 --> 57:43.240
don't know if you've ever tried it, but it's astonishing.
57:43.240 --> 57:45.440
You take a picture in almost pitch black
57:45.440 --> 57:49.120
and you get back a very high quality image.
57:49.120 --> 57:51.440
And it's not because of the lens.
57:51.440 --> 57:55.280
Same stuff with like adding the bokeh to the background
57:55.280 --> 57:55.800
blurring.
57:55.800 --> 57:57.200
It's done computationally.
57:57.200 --> 57:58.520
Just the Pixel over here.
57:58.520 --> 57:59.020
Yeah.
57:59.020 --> 58:05.000
Basically, everybody now is doing most of the fanciest stuff
58:05.000 --> 58:07.120
on their phones with computational photography
58:07.120 --> 58:10.640
and also increasingly, people are putting more than one lens
58:10.640 --> 58:11.840
on the back of the camera.
58:11.840 --> 58:14.360
So the same will happen for audio, for sure.
58:14.360 --> 58:16.520
And there's applications in the audio side.
58:16.520 --> 58:19.360
If you look at an Alexa type device,
58:19.360 --> 58:21.840
most people I've seen, especially when I worked at Google
58:21.840 --> 58:26.000
before, when you look at background noise removal,
58:26.000 --> 58:29.480
you don't think of multiple sources of audio.
58:29.480 --> 58:31.920
You don't play with that as much as I would hope people would.
58:31.920 --> 58:33.640
But I mean, you can still do it even with one.
58:33.640 --> 58:36.120
Like, again, not much work's been done in this area.
58:36.120 --> 58:38.440
So we're actually going to be releasing an audio library
58:38.440 --> 58:41.040
soon, which hopefully will encourage development of this
58:41.040 --> 58:43.200
because it's so underused.
58:43.200 --> 58:46.480
The basic approach we used for our super resolution,
58:46.480 --> 58:49.960
which Jason uses for DeOldify, for generating
58:49.960 --> 58:51.920
high quality images, the exact same approach
58:51.920 --> 58:53.480
would work for audio.
58:53.480 --> 58:57.160
No one's done it yet, but it would be a couple of months work.
58:57.160 --> 59:01.600
OK, also learning rate in terms of DAWNBench.
59:01.600 --> 59:04.280
There's some magic on learning rate that you played around
59:04.280 --> 59:04.760
with.
59:04.760 --> 59:05.800
It's kind of interesting.
59:05.800 --> 59:08.120
Yeah, so this is all work that came from a guy called Leslie
59:08.120 --> 59:09.360
Smith.
59:09.360 --> 59:12.760
Leslie's a researcher who, like us,
59:12.760 --> 59:17.720
cares a lot about just the practicalities of training
59:17.720 --> 59:20.000
neural networks quickly and accurately,
59:20.000 --> 59:22.120
which you would think is what everybody should care about,
59:22.120 --> 59:25.000
but almost nobody does.
59:25.000 --> 59:28.120
And he discovered something very interesting,
59:28.120 --> 59:30.000
which he calls super convergence, which
59:30.000 --> 59:32.360
is there are certain networks that with certain settings
59:32.360 --> 59:34.320
of hyperparameters could suddenly
59:34.320 --> 59:37.440
be trained 10 times faster by using
59:37.440 --> 59:39.480
a 10 times higher learning rate.
59:39.480 --> 59:44.680
Now, no one would publish that paper
59:44.680 --> 59:49.520
because it's not an area of active research
59:49.520 --> 59:50.440
in the academic world.
59:50.440 --> 59:52.840
No academics recognize this is important.
59:52.840 --> 59:56.080
And also, deep learning in academia
59:56.080 --> 1:00:00.040
is not considered an experimental science.
1:00:00.040 --> 1:00:02.440
So unlike in physics, where you could say,
1:00:02.440 --> 1:00:05.360
I just saw a subatomic particle do something
1:00:05.360 --> 1:00:07.240
which the theory doesn't explain,
1:00:07.240 --> 1:00:10.440
you could publish that without an explanation.
1:00:10.440 --> 1:00:12.120
And then in the next 60 years, people
1:00:12.120 --> 1:00:14.120
can try to work out how to explain it.
1:00:14.120 --> 1:00:16.200
We don't allow this in the deep learning world.
1:00:16.200 --> 1:00:20.720
So it's literally impossible for Leslie to publish a paper that
1:00:20.720 --> 1:00:23.560
says, I've just seen something amazing happen.
1:00:23.560 --> 1:00:25.680
This thing trained 10 times faster than it should have.
1:00:25.680 --> 1:00:27.080
I don't know why.
1:00:27.080 --> 1:00:28.600
And so the reviewers were like, well,
1:00:28.600 --> 1:00:30.280
you can't publish that because you don't know why.
1:00:30.280 --> 1:00:31.000
So anyway.
1:00:31.000 --> 1:00:32.680
That's important to pause on because there's
1:00:32.680 --> 1:00:36.160
so many discoveries that would need to start like that.
1:00:36.160 --> 1:00:39.280
Every other scientific field I know of works that way.
1:00:39.280 --> 1:00:42.520
I don't know why ours is uniquely
1:00:42.520 --> 1:00:46.480
disinterested in publishing unexplained
1:00:46.480 --> 1:00:47.680
experimental results.
1:00:47.680 --> 1:00:48.680
But there it is.
1:00:48.680 --> 1:00:51.200
So it wasn't published.
1:00:51.200 --> 1:00:55.080
Having said that, I read a lot more
1:00:55.080 --> 1:00:56.840
unpublished papers than published papers
1:00:56.840 --> 1:01:00.080
because that's where you find the interesting insights.
1:01:00.080 --> 1:01:02.680
So I absolutely read this paper.
1:01:02.680 --> 1:01:08.120
And I was just like, this is astonishingly mind blowing
1:01:08.120 --> 1:01:09.760
and weird and awesome.
1:01:09.760 --> 1:01:12.400
And why isn't everybody only talking about this?
1:01:12.400 --> 1:01:15.520
Because if you can train these things 10 times faster,
1:01:15.520 --> 1:01:18.480
they also generalize better because you're doing less epochs,
1:01:18.480 --> 1:01:20.080
which means you look at the data less,
1:01:20.080 --> 1:01:22.400
you get better accuracy.
1:01:22.400 --> 1:01:24.640
So I've been kind of studying that ever since.
1:01:24.640 --> 1:01:28.520
And eventually Leslie kind of figured out
1:01:28.520 --> 1:01:30.160
a lot of how to get this done.
1:01:30.160 --> 1:01:32.280
And we added minor tweaks.
1:01:32.280 --> 1:01:34.840
And a big part of the trick is starting
1:01:34.840 --> 1:01:37.920
at a very low learning rate, very gradually increasing it.
1:01:37.920 --> 1:01:39.800
So as you're training your model,
1:01:39.800 --> 1:01:42.120
you take very small steps at the start.
1:01:42.120 --> 1:01:44.080
And you gradually make them bigger and bigger
1:01:44.080 --> 1:01:46.440
until eventually you're taking much bigger steps
1:01:46.440 --> 1:01:49.400
than anybody thought was possible.
1:01:49.400 --> 1:01:52.280
There's a few other little tricks to make it work.
1:01:52.280 --> 1:01:55.240
Basically, we can reliably get super convergence.
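PyTorch now ships a scheduler with exactly this warm-up-then-anneal shape; a minimal sketch (the stand-in model and step counts are placeholders, and fastai's own implementation differs in detail):

    import torch
    from torch.optim.lr_scheduler import OneCycleLR

    model = torch.nn.Linear(10, 2)  # stand-in model
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    sched = OneCycleLR(opt, max_lr=1.0, epochs=5, steps_per_epoch=100)
    for epoch in range(5):
        for step in range(100):
            # ...forward pass and loss.backward() would go here...
            opt.step()
            sched.step()  # learning rate ramps up, then anneals back down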
1:01:55.240 --> 1:01:56.640
And so for the DAWNBench thing,
1:01:56.640 --> 1:01:59.320
we were using just much higher learning rates
1:01:59.320 --> 1:02:02.200
than people expected to work.
1:02:02.200 --> 1:02:03.880
What do you think the future of,
1:02:03.880 --> 1:02:05.200
I mean, it makes so much sense for that
1:02:05.200 --> 1:02:08.640
to be a critical hyperparameter, the learning rate, that you vary.
1:02:08.640 --> 1:02:13.480
What do you think the future of learning rate magic looks like?
1:02:13.480 --> 1:02:14.960
Well, there's been a lot of great work
1:02:14.960 --> 1:02:17.400
in the last 12 months in this area.
1:02:17.400 --> 1:02:20.800
And people are increasingly realizing that we just
1:02:20.800 --> 1:02:23.120
have no idea really how optimizers work.
1:02:23.120 --> 1:02:25.840
And the combination of weight decay,
1:02:25.840 --> 1:02:27.480
which is how we regularize optimizers,
1:02:27.480 --> 1:02:30.120
and the learning rate, and then other things
1:02:30.120 --> 1:02:32.760
like the epsilon we use in the Adam optimizer,
1:02:32.760 --> 1:02:36.560
they all work together in weird ways.
1:02:36.560 --> 1:02:38.560
And different parts of the model,
1:02:38.560 --> 1:02:40.480
this is another thing we've done a lot of work on,
1:02:40.480 --> 1:02:43.480
is research into how different parts of the model
1:02:43.480 --> 1:02:46.600
should be trained at different rates in different ways.
1:02:46.600 --> 1:02:49.040
So we do something we call discriminative learning rates,
1:02:49.040 --> 1:02:51.040
which is really important, particularly for transfer
1:02:51.040 --> 1:02:53.200
learning.
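In plain PyTorch, discriminative learning rates are just optimizer parameter groups; a toy sketch with a ResNet (layer choices and values are illustrative, and the remaining layers are omitted for brevity):

    import torch
    from torchvision import models

    model = models.resnet34(weights=None)
    # Early, generic layers get tiny steps; the task-specific head gets big ones.
    opt = torch.optim.SGD([
        {"params": model.layer1.parameters(), "lr": 1e-4},
        {"params": model.layer4.parameters(), "lr": 1e-3},
        {"params": model.fc.parameters(),     "lr": 1e-2},
    ], momentum=0.9)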
1:02:53.200 --> 1:02:54.880
So really, I think in the last 12 months,
1:02:54.880 --> 1:02:57.360
a lot of people have realized that all this stuff is important.
1:02:57.360 --> 1:03:00.000
There's been a lot of great work coming out.
1:03:00.000 --> 1:03:02.880
And we're starting to see algorithms
1:03:02.880 --> 1:03:06.880
appear which have very, very few dials, if any,
1:03:06.880 --> 1:03:07.920
that you have to touch.
1:03:07.920 --> 1:03:09.240
So I think what's going to happen
1:03:09.240 --> 1:03:10.840
is the idea of a learning rate, well,
1:03:10.840 --> 1:03:14.360
it almost already has disappeared in the latest research.
1:03:14.360 --> 1:03:18.240
And instead, it's just like, we know enough
1:03:18.240 --> 1:03:22.440
about how to interpret the gradients
1:03:22.440 --> 1:03:23.840
and the change of gradients we see
1:03:23.840 --> 1:03:25.440
to know how to set every parameter the right way.
1:03:25.440 --> 1:03:26.440
So you can automate it.
1:03:26.440 --> 1:03:31.720
So you see the future of deep learning, where really,
1:03:31.720 --> 1:03:34.600
where is the input of a human expert needed?
1:03:34.600 --> 1:03:36.520
Well, hopefully, the input of a human expert
1:03:36.520 --> 1:03:39.680
will be almost entirely unneeded from the deep learning
1:03:39.680 --> 1:03:40.560
point of view.
1:03:40.560 --> 1:03:43.480
So again, Google's approach to this
1:03:43.480 --> 1:03:46.000
is to try and use thousands of times more compute
1:03:46.000 --> 1:03:49.400
to run lots and lots of models at the same time
1:03:49.400 --> 1:03:51.040
and hope that one of them is good.
1:03:51.040 --> 1:03:51.960
A lot of AutoML kind of stuff.
1:03:51.960 --> 1:03:56.800
Yeah, a lot of AutoML kind of stuff, which I think is insane.
1:03:56.800 --> 1:04:01.720
When you better understand the mechanics of how models learn,
1:04:01.720 --> 1:04:03.800
you don't have to try 1,000 different models
1:04:03.800 --> 1:04:05.680
to find which one happens to work the best.
1:04:05.680 --> 1:04:08.240
You can just jump straight to the best one, which
1:04:08.240 --> 1:04:12.720
means that it's more accessible in terms of compute, cheaper,
1:04:12.720 --> 1:04:14.920
and also with less hyperparameters to set.
1:04:14.920 --> 1:04:16.800
That means you don't need deep learning experts
1:04:16.800 --> 1:04:19.360
to train your deep learning model for you,
1:04:19.360 --> 1:04:22.480
which means that domain experts can do more of the work, which
1:04:22.480 --> 1:04:24.960
means that now you can focus the human time
1:04:24.960 --> 1:04:28.320
on the kind of interpretation, the data gathering,
1:04:28.320 --> 1:04:31.360
identifying model errors, and stuff like that.
1:04:31.360 --> 1:04:32.840
Yeah, the data side.
1:04:32.840 --> 1:04:34.720
How often do you work with data these days
1:04:34.720 --> 1:04:38.680
in terms of the cleaning? Darwin looked
1:04:38.680 --> 1:04:43.120
at different species while traveling about,
1:04:43.120 --> 1:04:45.040
do you look at data?
1:04:45.040 --> 1:04:49.400
Have you, in your roots in Kaggle, just looked at data?
1:04:49.400 --> 1:04:51.320
Yeah, I mean, it's a key part of our course.
1:04:51.320 --> 1:04:53.480
It's like before we train a model in the course,
1:04:53.480 --> 1:04:55.160
we see how to look at the data.
1:04:55.160 --> 1:04:57.920
And then the first thing we do after we train our first model,
1:04:57.920 --> 1:05:00.520
which we fine tune an ImageNet model for five minutes.
1:05:00.520 --> 1:05:02.240
And then the thing we immediately do after that
1:05:02.240 --> 1:05:05.760
is we learn how to analyze the results of the model
1:05:05.760 --> 1:05:08.920
by looking at examples of misclassified images,
1:05:08.920 --> 1:05:10.880
and looking at a confusion matrix,
1:05:10.880 --> 1:05:15.080
and then doing research on Google
1:05:15.080 --> 1:05:18.000
to learn about the kinds of things that it's misclassifying.
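The mechanics are simple; with scikit-learn, for instance (toy labels for illustration; fastai wraps the same ideas in its interpretation tools):

    import numpy as np
    from sklearn.metrics import confusion_matrix

    y_true = np.array([0, 1, 2, 2, 1, 0])   # actual classes
    y_pred = np.array([0, 2, 2, 2, 1, 1])   # model predictions
    print(confusion_matrix(y_true, y_pred)) # rows = actual, columns = predicted
    wrong = np.nonzero(y_true != y_pred)[0] # indices of misclassified examples
    print(wrong)                            # go look at exactly these images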
1:05:18.000 --> 1:05:19.520
So to me, one of the really cool things
1:05:19.520 --> 1:05:21.840
about machine learning models in general
1:05:21.840 --> 1:05:24.480
is that when you interpret them, they
1:05:24.480 --> 1:05:27.360
tell you about things like what are the most important features,
1:05:27.360 --> 1:05:29.400
which groups you're misclassifying,
1:05:29.400 --> 1:05:32.440
and they help you become a domain expert more quickly,
1:05:32.440 --> 1:05:34.880
because you can focus your time on the bits
1:05:34.880 --> 1:05:38.680
that the model is telling you is important.
1:05:38.680 --> 1:05:40.760
So it lets you deal with things like data leakage,
1:05:40.760 --> 1:05:43.080
for example, if it says, oh, the main feature I'm looking at
1:05:43.080 --> 1:05:45.240
is customer ID.
1:05:45.240 --> 1:05:47.640
And you're like, oh, customer ID shouldn't be predictive.
1:05:47.640 --> 1:05:52.280
And then you can talk to the people that manage customer IDs,
1:05:52.280 --> 1:05:56.840
and they'll tell you, oh, yes, as soon as a customer's application
1:05:56.840 --> 1:05:59.480
is accepted, we add a one on the end of their customer ID
1:05:59.480 --> 1:06:01.200
or something.
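That kind of check can be as simple as fitting a quick model and eyeballing feature importances; a toy sketch with invented data:

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    df = pd.DataFrame({
        "customer_id": [101, 202, 303, 404, 505, 606],
        "income":      [30, 60, 45, 80, 55, 40],
        "accepted":    [1, 0, 1, 0, 1, 1],
    })
    m = RandomForestClassifier(n_estimators=50, random_state=0)
    m.fit(df[["customer_id", "income"]], df["accepted"])
    # If an ID column dominates, suspect leakage rather than signal.
    print(dict(zip(["customer_id", "income"], m.feature_importances_)))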
1:06:01.200 --> 1:06:04.360
So yeah, looking at data, particularly
1:06:04.360 --> 1:06:06.600
from the lens of which parts of the data the model says
1:06:06.600 --> 1:06:09.400
is important, is super important.
1:06:09.400 --> 1:06:11.480
Yeah, and using kind of using the model
1:06:11.480 --> 1:06:14.240
to almost debug the data to learn more about the data.
1:06:14.240 --> 1:06:16.800
Exactly.
1:06:16.800 --> 1:06:18.600
What are the different cloud options
1:06:18.600 --> 1:06:20.160
for training your networks?
1:06:20.160 --> 1:06:22.000
Last question related to DAWNBench.
1:06:22.000 --> 1:06:24.240
Well, it's part of a lot of the work you do,
1:06:24.240 --> 1:06:27.280
but from a perspective of performance,
1:06:27.280 --> 1:06:29.480
I think you've written this in a blog post.
1:06:29.480 --> 1:06:32.720
There's AWS, there's a TPU from Google.
1:06:32.720 --> 1:06:33.440
What's your sense?
1:06:33.440 --> 1:06:34.520
What does the future hold?
1:06:34.520 --> 1:06:37.360
What would you recommend now in terms of training in the cloud?
1:06:37.360 --> 1:06:40.520
So from a hardware point of view,
1:06:40.520 --> 1:06:45.520
Google's TPUs and the best Nvidia GPUs are similar.
1:06:45.520 --> 1:06:47.880
And maybe the TPUs are like 30% faster,
1:06:47.880 --> 1:06:51.160
but they're also much harder to program.
1:06:51.160 --> 1:06:54.720
There isn't a clear leader in terms of hardware right now,
1:06:54.720 --> 1:06:57.840
although much more importantly, Nvidia's GPUs
1:06:57.840 --> 1:06:59.560
are much more programmable.
1:06:59.560 --> 1:07:01.280
There's much more software written for them.
1:07:01.280 --> 1:07:03.480
That's the clear leader for me and where
1:07:03.480 --> 1:07:08.640
I would spend my time as a researcher and practitioner.
1:07:08.640 --> 1:07:12.280
But then in terms of the platform,
1:07:12.280 --> 1:07:15.680
I mean, we're super lucky now with stuff like Google,
1:07:15.680 --> 1:07:21.520
GCP, Google Cloud, and AWS that you can access a GPU
1:07:21.520 --> 1:07:25.440
pretty quickly and easily.
1:07:25.440 --> 1:07:28.280
But I mean, for AWS, it's still too hard.
1:07:28.280 --> 1:07:33.760
You have to find an AMI and get the instance running
1:07:33.760 --> 1:07:37.080
and then install the software you want and blah, blah, blah.
1:07:37.080 --> 1:07:40.400
GCP is currently the best way to get
1:07:40.400 --> 1:07:42.320
started on a full server environment
1:07:42.320 --> 1:07:46.120
because they have a fantastic fast.ai and PyTorch
1:07:46.120 --> 1:07:51.120
ready to go instance, which has all the courses preinstalled.
1:07:51.120 --> 1:07:53.040
It has Jupyter Notebook already running.
1:07:53.040 --> 1:07:57.080
Jupyter Notebook is this wonderful interactive computing
1:07:57.080 --> 1:07:59.440
system, which everybody basically
1:07:59.440 --> 1:08:02.920
should be using for any kind of data driven research.
1:08:02.920 --> 1:08:05.880
But then even better than that, there
1:08:05.880 --> 1:08:09.560
are platforms like Salamander, which we own,
1:08:09.560 --> 1:08:13.600
and Paperspace, where literally you click a single button
1:08:13.600 --> 1:08:17.240
and it pops up and you get a notebook straight away
1:08:17.240 --> 1:08:22.240
without any kind of installation or anything.
1:08:22.240 --> 1:08:25.800
And all the course notebooks are all preinstalled.
1:08:25.800 --> 1:08:28.560
So for me, this is one of the things
1:08:28.560 --> 1:08:34.160
we spent a lot of time curating and working on.
1:08:34.160 --> 1:08:35.960
Because when we first started our courses,
1:08:35.960 --> 1:08:39.560
the biggest problem was people dropped out of lesson one
1:08:39.560 --> 1:08:42.680
because they couldn't get an AWS instance running.
1:08:42.680 --> 1:08:44.880
So things are so much better now.
1:08:44.880 --> 1:08:47.760
And we actually have, if you go to course.fast.ai,
1:08:47.760 --> 1:08:49.040
the first thing it says is, here's
1:08:49.040 --> 1:08:50.480
how to get started with your GPU.
1:08:50.480 --> 1:08:52.120
And it's like, you just click on the link
1:08:52.120 --> 1:08:55.120
and you click start and you're going.
1:08:55.120 --> 1:08:56.240
So you would go with GCP?
1:08:56.240 --> 1:08:58.760
I have to confess, I've never used the Google GCP.
1:08:58.760 --> 1:09:01.600
Yeah, GCP gives you $300 of compute for free,
1:09:01.600 --> 1:09:04.920
which is really nice.
1:09:04.920 --> 1:09:10.960
But as I say, Salamander and Paperspace are even easier still.
1:09:10.960 --> 1:09:15.120
So from the perspective of deep learning frameworks,
1:09:15.120 --> 1:09:18.400
you work with Fast.ai, if you think of it as a framework,
1:09:18.400 --> 1:09:22.960
and PyTorch and TensorFlow, what are the strengths
1:09:22.960 --> 1:09:25.840
of each platform in your perspective?
1:09:25.840 --> 1:09:29.240
So in terms of what we've done our research on and taught
1:09:29.240 --> 1:09:34.400
in our course, we started with Theano and Keras.
1:09:34.400 --> 1:09:38.120
And then we switched to TensorFlow and Keras.
1:09:38.120 --> 1:09:40.400
And then we switched to PyTorch.
1:09:40.400 --> 1:09:43.360
And then we switched to PyTorch and Fast.ai.
1:09:43.360 --> 1:09:47.560
And that kind of reflects a growth and development
1:09:47.560 --> 1:09:52.560
of the ecosystem of deep learning libraries.
1:09:52.560 --> 1:09:57.040
Theano and TensorFlow were great,
1:09:57.040 --> 1:10:01.360
but were much harder to teach and to do research and development
1:10:01.360 --> 1:10:04.560
on because they define what's called a computational graph
1:10:04.560 --> 1:10:06.680
up front, a static graph, where you basically
1:10:06.680 --> 1:10:08.360
have to say, here are all the things
1:10:08.360 --> 1:10:12.040
that I'm going to eventually do in my model.
1:10:12.040 --> 1:10:15.080
And then later on, you say, OK, do those things with this data.
1:10:15.080 --> 1:10:17.160
And you can't debug them.
1:10:17.160 --> 1:10:18.560
You can't do them step by step.
1:10:18.560 --> 1:10:20.160
You can't program them interactively
1:10:20.160 --> 1:10:22.280
in a Jupyter notebook and so forth.
1:10:22.280 --> 1:10:24.320
PyTorch was not the first, but PyTorch
1:10:24.320 --> 1:10:27.400
was certainly the strongest entrant to come along
1:10:27.400 --> 1:10:28.720
and say, let's not do it that way.
1:10:28.720 --> 1:10:31.320
Let's just use normal Python.
1:10:31.320 --> 1:10:32.880
And everything you know about in Python
1:10:32.880 --> 1:10:34.000
is just going to work.
1:10:34.000 --> 1:10:37.880
And we'll figure out how to make that run on the GPU
1:10:37.880 --> 1:10:40.800
as and when necessary.
1:10:40.800 --> 1:10:45.120
That turned out to be a huge leap in terms
1:10:45.120 --> 1:10:46.800
of what we could do with our research
1:10:46.800 --> 1:10:49.720
and what we could do with our teaching.
1:10:49.720 --> 1:10:51.160
Because it wasn't limiting.
1:10:51.160 --> 1:10:52.760
Yeah, I mean, it was critical for us
1:10:52.760 --> 1:10:55.960
for something like DAWNBench to be able to rapidly try things.
1:10:55.960 --> 1:10:58.560
It's just so much harder to be a researcher and practitioner
1:10:58.560 --> 1:11:00.520
when you have to do everything upfront
1:11:00.520 --> 1:11:03.400
and you can't inspect it.
1:11:03.400 --> 1:11:07.360
Problem with PyTorch is it's not at all
1:11:07.360 --> 1:11:09.360
accessible to newcomers because you
1:11:09.360 --> 1:11:11.600
have to write your own training loop
1:11:11.600 --> 1:11:15.680
and manage the gradients and all this stuff.
1:11:15.680 --> 1:11:17.920
And it's also not great for researchers
1:11:17.920 --> 1:11:20.680
because you're spending your time dealing with all this boiler
1:11:20.680 --> 1:11:23.920
plate and overhead rather than thinking about your algorithm.
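For contrast, here is roughly the boilerplate a raw PyTorch user rewrites every time (a generic sketch, not fastai's code):

    import torch
    import torch.nn as nn

    def fit(model, loader, epochs, lr=1e-3, device="cpu"):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        model.to(device).train()
        for _ in range(epochs):
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                opt.zero_grad()  # the gradient management mentioned above
                loss_fn(model(x), y).backward()
                opt.step()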
1:11:23.920 --> 1:11:27.760
So we ended up writing this very multi layered API
1:11:27.760 --> 1:11:31.040
that at the top level, you can train a state of the art neural
1:11:31.040 --> 1:11:33.640
network in three lines of code.
1:11:33.640 --> 1:11:35.920
And which talks to an API, which talks to an API,
1:11:35.920 --> 1:11:38.880
which talks to an API, which you can dive into at any level
1:11:38.880 --> 1:11:45.400
and get progressively closer to the machine levels of control.
1:11:45.400 --> 1:11:47.480
And this is the fast AI library.
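At the top level it really is a few lines; here is the flavor using current fastai names (this mirrors the pets example from the library's own quickstart):

    from fastai.vision.all import *

    path = untar_data(URLs.PETS) / "images"
    dls = ImageDataLoaders.from_name_func(
        path, get_image_files(path), valid_pct=0.2,
        label_func=lambda f: f[0].isupper(),  # cat filenames are capitalized
        item_tfms=Resize(224))
    learn = vision_learner(dls, resnet34, metrics=error_rate)
    learn.fine_tune(1)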
1:11:47.480 --> 1:11:51.920
That's been critical for us and for our students
1:11:51.920 --> 1:11:54.200
and for lots of people that have won deep learning
1:11:54.200 --> 1:11:58.560
competitions with it and written academic papers with it.
1:11:58.560 --> 1:12:00.680
It's made a big difference.
1:12:00.680 --> 1:12:03.960
We're still limited though by Python.
1:12:03.960 --> 1:12:05.920
And particularly this problem with things
1:12:05.920 --> 1:12:10.640
like recurrent neural nets, say, where you just can't change
1:12:10.640 --> 1:12:13.320
things unless you accept it going so slowly
1:12:13.320 --> 1:12:15.680
that it's impractical.
1:12:15.680 --> 1:12:18.320
So in the latest incarnation of the course
1:12:18.320 --> 1:12:20.880
and with some of the research we're now starting to do,
1:12:20.880 --> 1:12:24.480
we're starting to do some stuff in Swift.
1:12:24.480 --> 1:12:28.920
I think we're three years away from that being
1:12:28.920 --> 1:12:31.080
super practical, but I'm in no hurry.
1:12:31.080 --> 1:12:35.480
I'm very happy to invest the time to get there.
1:12:35.480 --> 1:12:38.000
But with that, we actually already
1:12:38.000 --> 1:12:41.840
have a nascent version of the fast AI library for vision
1:12:41.840 --> 1:12:44.720
running on Swift and TensorFlow.
1:12:44.720 --> 1:12:48.040
Because Python for TensorFlow is not going to cut it.
1:12:48.040 --> 1:12:49.920
It's just a disaster.
1:12:49.920 --> 1:12:54.440
What they did was they tried to replicate the bits
1:12:54.440 --> 1:12:56.640
that people were saying they like about PyTorch,
1:12:56.640 --> 1:12:59.160
this kind of interactive computation.
1:12:59.160 --> 1:13:02.760
But they didn't actually change their foundational runtime
1:13:02.760 --> 1:13:03.920
components.
1:13:03.920 --> 1:13:06.640
So they kind of added this, like, syntactic sugar
1:13:06.640 --> 1:13:08.560
they call TF Eager, TensorFlow Eager, which
1:13:08.560 --> 1:13:10.880
makes it look a lot like PyTorch.
1:13:10.880 --> 1:13:16.400
But it's 10 times slower than PyTorch to actually do a step.
1:13:16.400 --> 1:13:19.080
So because they didn't invest the time
1:13:19.080 --> 1:13:22.080
in retooling the foundations because their code base
1:13:22.080 --> 1:13:23.520
is so horribly complex.
1:13:23.520 --> 1:13:25.280
Yeah, I think it's probably very difficult
1:13:25.280 --> 1:13:26.440
to do that kind of retooling.
1:13:26.440 --> 1:13:28.680
Yeah, well, particularly the way TensorFlow was written,
1:13:28.680 --> 1:13:31.480
it was written by a lot of people very quickly
1:13:31.480 --> 1:13:33.320
in a very disorganized way.
1:13:33.320 --> 1:13:36.000
So when you actually look in the code, as I do often,
1:13:36.000 --> 1:13:38.840
I'm always just like, oh, god, what were they thinking?
1:13:38.840 --> 1:13:41.480
It's just, it's pretty awful.
1:13:41.480 --> 1:13:47.080
So I'm really extremely negative about the potential future
1:13:47.080 --> 1:13:52.120
for Python TensorFlow. But Swift for TensorFlow
1:13:52.120 --> 1:13:53.760
can be a different beast altogether.
1:13:53.760 --> 1:13:57.560
It can be like, it can basically be a layer on top of MLIR
1:13:57.560 --> 1:14:02.640
that takes advantage of all the great compiler stuff
1:14:02.640 --> 1:14:04.760
that Swift builds on with LLVM.
1:14:04.760 --> 1:14:07.040
And yeah, it could be absolutely.
1:14:07.040 --> 1:14:10.320
I think it will be absolutely fantastic.
1:14:10.320 --> 1:14:11.920
Well, you're inspiring me to try.
1:14:11.920 --> 1:14:17.640
I haven't truly felt the pain of TensorFlow 2.0 Python.
1:14:17.640 --> 1:14:19.040
It's fine by me.
1:14:19.040 --> 1:14:21.080
But of course.
1:14:21.080 --> 1:14:23.240
Yeah, I mean, it does the job if you're using
1:14:23.240 --> 1:14:27.720
predefined things that somebody's already written.
1:14:27.720 --> 1:14:29.920
But if you actually compare, like I've
1:14:29.920 --> 1:14:33.680
had to do a lot of stuff with TensorFlow recently,
1:14:33.680 --> 1:14:35.480
you actually compare like, I want
1:14:35.480 --> 1:14:37.360
to write something from scratch.
1:14:37.360 --> 1:14:39.040
And you're like, I just keep finding it's like, oh,
1:14:39.040 --> 1:14:41.560
it's running 10 times slower than PyTorch.
1:14:41.560 --> 1:14:43.800
So is the biggest cost,
1:14:43.800 --> 1:14:47.360
let's throw running time out the window,
1:14:47.360 --> 1:14:49.640
how long it takes you to program?
1:14:49.640 --> 1:14:51.000
That's not too different now.
1:14:51.000 --> 1:14:54.080
Thanks to TensorFlow Eager, that's not too different.
1:14:54.080 --> 1:14:58.640
But because so many things take so long to run,
1:14:58.640 --> 1:15:00.320
you wouldn't run it 10 times slower.
1:15:00.320 --> 1:15:03.000
Like, you just go like, oh, this is taking too long.
1:15:03.000 --> 1:15:04.240
And also, there's a lot of things
1:15:04.240 --> 1:15:05.840
which are just less programmable,
1:15:05.840 --> 1:15:09.000
like tf.data, which is the way data processing works
1:15:09.000 --> 1:15:11.400
in TensorFlow, is just this big mess.
1:15:11.400 --> 1:15:13.160
It's incredibly inefficient.
1:15:13.160 --> 1:15:14.800
And they kind of had to write it that way
1:15:14.800 --> 1:15:19.160
because of the TPU problems I described earlier.
1:15:19.160 --> 1:15:24.680
So I just feel like they've got this huge technical debt,
1:15:24.680 --> 1:15:27.960
which they're not going to solve without starting from scratch.
1:15:27.960 --> 1:15:29.440
So here's an interesting question then.
1:15:29.440 --> 1:15:34.720
If there's a new student starting today,
1:15:34.720 --> 1:15:37.480
what would you recommend they use?
1:15:37.480 --> 1:15:39.160
Well, I mean, we obviously recommend
1:15:39.160 --> 1:15:42.760
FastAI and PyTorch because we teach new students.
1:15:42.760 --> 1:15:43.960
And that's what we teach with.
1:15:43.960 --> 1:15:46.080
So we would very strongly recommend that
1:15:46.080 --> 1:15:50.280
because it will let you get on top of the concepts much
1:15:50.280 --> 1:15:51.960
more quickly.
1:15:51.960 --> 1:15:53.160
So then you'll become a practitioner.
1:15:53.160 --> 1:15:56.400
And you'll also learn the actual state of the art techniques.
1:15:56.400 --> 1:15:59.240
So you actually get world class results.
1:15:59.240 --> 1:16:03.000
Honestly, it doesn't much matter what library
1:16:03.000 --> 1:16:09.240
you learn, because switching from Chainer to MXNet to TensorFlow
1:16:09.240 --> 1:16:12.000
to PyTorch is going to be a couple of days work
1:16:12.000 --> 1:16:15.280
as long as you understand the foundations well.
1:16:15.280 --> 1:16:21.600
But do you think Swift will creep in there as a thing
1:16:21.600 --> 1:16:22.960
that people start using?
1:16:22.960 --> 1:16:26.400
Not for a few years, particularly because Swift
1:16:26.400 --> 1:16:33.440
has no data science community, libraries, tooling.
1:16:33.440 --> 1:16:39.080
And the Swift community has a total lack of appreciation
1:16:39.080 --> 1:16:41.040
and understanding of numeric computing.
1:16:41.040 --> 1:16:43.640
So they keep on making stupid decisions.
1:16:43.640 --> 1:16:47.480
For years, they've just done dumb things around performance
1:16:47.480 --> 1:16:50.280
and prioritization.
1:16:50.280 --> 1:16:56.360
That's clearly changing now because the developer of Swift, Chris
1:16:56.360 --> 1:16:59.960
Lattner is working at Google on Swift for TensorFlow.
1:16:59.960 --> 1:17:03.200
So that's a priority.
1:17:03.200 --> 1:17:05.000
It'll be interesting to see what happens with Apple
1:17:05.000 --> 1:17:10.000
because Apple hasn't shown any sign of caring
1:17:10.000 --> 1:17:12.960
about numeric programming in Swift.
1:17:12.960 --> 1:17:16.600
So hopefully they'll get off their arse
1:17:16.600 --> 1:17:18.840
and start appreciating this because currently all
1:17:18.840 --> 1:17:24.240
of their low level libraries are not written in Swift.
1:17:24.240 --> 1:17:27.640
They're not particularly Swifty at all, stuff like Core ML.
1:17:27.640 --> 1:17:30.840
They're really pretty rubbish.
1:17:30.840 --> 1:17:32.760
So yeah, so there's a long way to go.
1:17:32.760 --> 1:17:35.360
But at least one nice thing is that Swift for TensorFlow
1:17:35.360 --> 1:17:40.000
can actually directly use Python code and Python libraries.
1:17:40.000 --> 1:17:44.240
Literally, the entire lesson one notebook of fast AI
1:17:44.240 --> 1:17:47.800
runs in Swift right now in Python mode.
1:17:47.800 --> 1:17:50.800
So that's a nice intermediate thing.
1:17:50.800 --> 1:17:56.800
How long does it take if you look at the two fast AI courses,
1:17:56.800 --> 1:18:00.360
how long does it take to get from 0.0 to completing
1:18:00.360 --> 1:18:02.360
both courses?
1:18:02.360 --> 1:18:04.800
It varies a lot.
1:18:04.800 --> 1:18:12.360
Somewhere between two months and two years, generally.
1:18:12.360 --> 1:18:15.360
So for two months, how many hours a day on average?
1:18:15.360 --> 1:18:20.360
So like somebody who is a very competent coder
1:18:20.360 --> 1:18:27.360
can do 70 hours per course and pick it up.
1:18:27.360 --> 1:18:28.360
70, 70.
1:18:28.360 --> 1:18:29.360
That's it?
1:18:29.360 --> 1:18:30.360
OK.
1:18:30.360 --> 1:18:36.360
But a lot of people I know take a year off to study fast AI
1:18:36.360 --> 1:18:39.360
full time and say at the end of the year,
1:18:39.360 --> 1:18:42.360
they feel pretty competent.
1:18:42.360 --> 1:18:45.360
Because generally, there's a lot of other things you do.
1:18:45.360 --> 1:18:48.360
Generally, they'll be entering Kaggle competitions.
1:18:48.360 --> 1:18:51.360
They might be reading Ian Goodfellow's book.
1:18:51.360 --> 1:18:54.360
They might be doing a bunch of stuff.
1:18:54.360 --> 1:18:57.360
And often, particularly if they are a domain expert,
1:18:57.360 --> 1:19:01.360
their coding skills might be a little on the pedestrian side.
1:19:01.360 --> 1:19:04.360
So part of it's just like doing a lot more writing.
1:19:04.360 --> 1:19:07.360
What do you find is the bottleneck for people usually,
1:19:07.360 --> 1:19:11.360
except getting started and setting stuff up?
1:19:11.360 --> 1:19:13.360
I would say coding.
1:19:13.360 --> 1:19:17.360
The people who are strong coders pick it up the best.
1:19:17.360 --> 1:19:21.360
Although another bottleneck is people who have a lot of
1:19:21.360 --> 1:19:27.360
experience of classic statistics can really struggle
1:19:27.360 --> 1:19:30.360
because the intuition is so the opposite of what they're used to.
1:19:30.360 --> 1:19:33.360
They're very used to trying to reduce the number of parameters
1:19:33.360 --> 1:19:38.360
in their model and looking at individual coefficients
1:19:38.360 --> 1:19:39.360
and stuff like that.
1:19:39.360 --> 1:19:42.360
So I find people who have a lot of coding background
1:19:42.360 --> 1:19:45.360
and know nothing about statistics are generally
1:19:45.360 --> 1:19:48.360
going to be the best off.
1:19:48.360 --> 1:19:51.360
So you taught several courses on deep learning
1:19:51.360 --> 1:19:54.360
and as Feynman says, the best way to understand something
1:19:54.360 --> 1:19:55.360
is to teach it.
1:19:55.360 --> 1:19:58.360
What have you learned about deep learning from teaching it?
1:19:58.360 --> 1:20:00.360
A lot.
1:20:00.360 --> 1:20:03.360
It's a key reason for me to teach the courses.
1:20:03.360 --> 1:20:06.360
Obviously, it's going to be necessary to achieve our goal
1:20:06.360 --> 1:20:09.360
of getting domain experts to be familiar with deep learning,
1:20:09.360 --> 1:20:12.360
but it was also necessary for me to achieve my goal
1:20:12.360 --> 1:20:16.360
of being really familiar with deep learning.
1:20:16.360 --> 1:20:24.360
I mean, to see so many domain experts from so many different
1:20:24.360 --> 1:20:28.360
backgrounds, it's definitely, I wouldn't say taught me,
1:20:28.360 --> 1:20:31.360
but convinced me something that I liked to believe was true,
1:20:31.360 --> 1:20:34.360
which was anyone can do it.
1:20:34.360 --> 1:20:37.360
So there's a lot of kind of snobbishness out there about
1:20:37.360 --> 1:20:39.360
only certain people can learn to code,
1:20:39.360 --> 1:20:42.360
only certain people are going to be smart enough to do AI.
1:20:42.360 --> 1:20:44.360
That's definitely bullshit.
1:20:44.360 --> 1:20:48.360
I've seen so many people from so many different backgrounds
1:20:48.360 --> 1:20:52.360
get state of the art results in their domain areas now.
1:20:52.360 --> 1:20:56.360
It's definitely taught me that the key differentiator
1:20:56.360 --> 1:21:00.360
between people that succeed and people that fail is tenacity.
1:21:00.360 --> 1:21:03.360
That seems to be basically the only thing that matters.
1:21:03.360 --> 1:21:07.360
A lot of people give up.
1:21:07.360 --> 1:21:13.360
But of the ones who don't give up, pretty much everybody succeeds,
1:21:13.360 --> 1:21:17.360
even if at first I'm just kind of thinking,
1:21:17.360 --> 1:21:20.360
wow, they really aren't quite getting it yet, are they?
1:21:20.360 --> 1:21:24.360
But eventually people get it and they succeed.
1:21:24.360 --> 1:21:27.360
So I think those are both things I liked
1:21:27.360 --> 1:21:29.360
to believe were true, but I didn't feel like I really had
1:21:29.360 --> 1:21:31.360
strong evidence for them to be true,
1:21:31.360 --> 1:21:34.360
but now I can say I've seen it again and again.
1:21:34.360 --> 1:21:39.360
So what advice do you have for someone
1:21:39.360 --> 1:21:42.360
who wants to get started in deep learning?
1:21:42.360 --> 1:21:44.360
Train lots of models.
1:21:44.360 --> 1:21:47.360
That's how you learn it.
1:21:47.360 --> 1:21:51.360
So I think, it's not just me.
1:21:51.360 --> 1:21:53.360
I think our course is very good,
1:21:53.360 --> 1:21:55.360
but also lots of people independently have said it's very good.
1:21:55.360 --> 1:21:58.360
It recently won the CogX award for AI courses,
1:21:58.360 --> 1:22:00.360
as being the best in the world.
1:22:00.360 --> 1:22:02.360
I'd say come to our course, course.fast.ai.
1:22:02.360 --> 1:22:05.360
And the thing I keep on harping on in my lessons is
1:22:05.360 --> 1:22:08.360
train models, print out the inputs to the models,
1:22:08.360 --> 1:22:10.360
print out the outputs of the models,
1:22:10.360 --> 1:22:14.360
like study, you know, change the inputs a bit,
1:22:14.360 --> 1:22:16.360
look at how the outputs vary,
1:22:16.360 --> 1:22:22.360
just run lots of experiments to get an intuitive understanding
1:22:22.360 --> 1:22:24.360
of what's going on.
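Concretely, that kind of poking around can be as simple as this, assuming a trained fastai Learner called learn from the earlier examples:

    import torch

    xb, yb = learn.dls.one_batch()  # grab a batch of real inputs
    with torch.no_grad():
        probs = torch.softmax(learn.model(xb), dim=1)
    print(xb.shape)                         # what does the model actually see?
    print(probs.argmax(dim=1)[:8], yb[:8])  # predictions next to true labels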
1:22:24.360 --> 1:22:28.360
To get hooked, do you think, you mentioned training,
1:22:28.360 --> 1:22:32.360
do you think just running the models, doing inference?
1:22:32.360 --> 1:22:35.360
If we talk about getting started.
1:22:35.360 --> 1:22:37.360
No, you've got to fine tune the models.
1:22:37.360 --> 1:22:39.360
So that's the critical thing,
1:22:39.360 --> 1:22:43.360
because at that point, you now have a model that's in your domain area.
1:22:43.360 --> 1:22:46.360
So there's no point running somebody else's model,
1:22:46.360 --> 1:22:48.360
because it's not your model.
1:22:48.360 --> 1:22:50.360
So it only takes five minutes to fine tune a model
1:22:50.360 --> 1:22:52.360
for the data you care about.
1:22:52.360 --> 1:22:54.360
And in lesson two of the course,
1:22:54.360 --> 1:22:56.360
we teach you how to create your own dataset from scratch
1:22:56.360 --> 1:22:58.360
by scripting Google image search.
1:22:58.360 --> 1:23:02.360
And we show you how to actually create a web application running online.
1:23:02.360 --> 1:23:05.360
So I create one in the course that differentiates
1:23:05.360 --> 1:23:08.360
between a teddy bear, a grizzly bear, and a brown bear.
1:23:08.360 --> 1:23:10.360
And it does it with basically 100% accuracy.
1:23:10.360 --> 1:23:13.360
It took me about four minutes to scrape the images
1:23:13.360 --> 1:23:15.360
from Google search in the script.
1:23:15.360 --> 1:23:18.360
There are little graphical widgets we have in the notebook
1:23:18.360 --> 1:23:21.360
that help you clean up the dataset.
1:23:21.360 --> 1:23:24.360
There's other widgets that help you study the results
1:23:24.360 --> 1:23:26.360
and see where the errors are happening.
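A hedged sketch of that lesson-2-style workflow, assuming fastai v2; the bears/ folder of scraped images, the validation split, and the epoch count are assumptions for illustration, not the course's actual notebook:

```python
# Fine tune a pretrained model on your own scraped images, then study
# the results to see where the errors are happening.
from fastai.vision.all import *

path = Path("bears")  # assumed layout: bears/teddy, bears/grizzly, bears/brown
dls = ImageDataLoaders.from_folder(
    path, valid_pct=0.2, seed=42, item_tfms=Resize(224))

learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(4)  # transfer learning: minutes on a small scraped dataset

interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()   # which classes get confused with which
interp.plot_top_losses(9)        # the images the model got most wrong
```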
1:23:26.360 --> 1:23:29.360
And so now we've got over a thousand replies
1:23:29.360 --> 1:23:32.360
in our Share Your Work Here thread of students saying,
1:23:32.360 --> 1:23:34.360
here's the thing I built.
1:23:34.360 --> 1:23:36.360
And so there's people who, like,
1:23:36.360 --> 1:23:38.360
and a lot of them are state of the art.
1:23:38.360 --> 1:23:40.360
Like somebody said, oh, I tried looking at Devanagari characters
1:23:40.360 --> 1:23:42.360
and I couldn't believe it.
1:23:42.360 --> 1:23:44.360
The thing that came out was more accurate
1:23:44.360 --> 1:23:46.360
than the best academic paper after lesson one.
1:23:46.360 --> 1:23:48.360
And then there's others which are just more kind of fun,
1:23:48.360 --> 1:23:53.360
like somebody who's doing Trinidad and Tobago hummingbirds.
1:23:53.360 --> 1:23:55.360
So that's kind of their national bird.
1:23:55.360 --> 1:23:57.360
And so they've got something that can now classify Trinidad
1:23:57.360 --> 1:23:59.360
and Tobago hummingbirds.
1:23:59.360 --> 1:24:02.360
So yeah, train models, fine tune models with your dataset
1:24:02.360 --> 1:24:05.360
and then study their inputs and outputs.
1:24:05.360 --> 1:24:07.360
How much do the Fast AI courses cost?
1:24:07.360 --> 1:24:09.360
Free.
1:24:09.360 --> 1:24:11.360
Everything we do is free.
1:24:11.360 --> 1:24:13.360
We have no revenue sources of any kind.
1:24:13.360 --> 1:24:15.360
It's just a service to the community.
1:24:15.360 --> 1:24:17.360
You're a saint.
1:24:17.360 --> 1:24:20.360
Okay, once a person understands the basics,
1:24:20.360 --> 1:24:22.360
trains a bunch of models,
1:24:22.360 --> 1:24:25.360
if we look at the scale of years,
1:24:25.360 --> 1:24:27.360
what advice do you have for someone wanting
1:24:27.360 --> 1:24:30.360
to eventually become an expert?
1:24:30.360 --> 1:24:32.360
Train lots of models.
1:24:32.360 --> 1:24:35.360
Specifically, train lots of models in your domain area.
1:24:35.360 --> 1:24:37.360
So an expert at what, right?
1:24:37.360 --> 1:24:40.360
We don't need more experts, like,
1:24:40.360 --> 1:24:45.360
doing slightly evolutionary research in areas
1:24:45.360 --> 1:24:47.360
that everybody's studying.
1:24:47.360 --> 1:24:50.360
We need experts at using deep learning
1:24:50.360 --> 1:24:52.360
to diagnose malaria.
1:24:52.360 --> 1:24:55.360
Well, we need experts at using deep learning
1:24:55.360 --> 1:25:00.360
to analyze language to study media bias.
1:25:00.360 --> 1:25:08.360
So we need experts in analyzing fisheries
1:25:08.360 --> 1:25:11.360
to identify problem areas in the ocean.
1:25:11.360 --> 1:25:13.360
That's what we need.
1:25:13.360 --> 1:25:17.360
So become the expert in your passion area.
1:25:17.360 --> 1:25:21.360
And this is a tool which you can use for just about anything,
1:25:21.360 --> 1:25:24.360
and you'll be able to do that thing better than other people,
1:25:24.360 --> 1:25:26.360
particularly by combining it with your passion
1:25:26.360 --> 1:25:27.360
and domain expertise.
1:25:27.360 --> 1:25:28.360
So that's really interesting.
1:25:28.360 --> 1:25:30.360
Even if you do want to innovate on transfer learning
1:25:30.360 --> 1:25:32.360
or active learning,
1:25:32.360 --> 1:25:34.360
your thought is, I mean,
1:25:34.360 --> 1:25:38.360
which I certainly share, is that you also need to find
1:25:38.360 --> 1:25:41.360
a domain or data set that you actually really care about.
1:25:41.360 --> 1:25:42.360
Right.
1:25:42.360 --> 1:25:45.360
If you're not working on a real problem that you understand,
1:25:45.360 --> 1:25:47.360
how do you know if you're doing it any good?
1:25:47.360 --> 1:25:49.360
How do you know if your results are good?
1:25:49.360 --> 1:25:51.360
How do you know if you're getting bad results?
1:25:51.360 --> 1:25:52.360
Why are you getting bad results?
1:25:52.360 --> 1:25:54.360
Is it a problem with the data?
1:25:54.360 --> 1:25:57.360
How do you know you're doing anything useful?
1:25:57.360 --> 1:26:00.360
Yeah, to me, the only really interesting research is,
1:26:00.360 --> 1:26:03.360
not the only, but the vast majority of interesting research
1:26:03.360 --> 1:26:06.360
is to try and solve an actual problem and solve it really well.
1:26:06.360 --> 1:26:10.360
So both understanding sufficient tools on the deep learning side
1:26:10.360 --> 1:26:14.360
and becoming a domain expert in a particular domain
1:26:14.360 --> 1:26:18.360
are really things within reach for anybody.
1:26:18.360 --> 1:26:19.360
Yeah.
1:26:19.360 --> 1:26:23.360
To me, I would compare it to studying self driving cars,
1:26:23.360 --> 1:26:26.360
having never looked at a car or been in a car
1:26:26.360 --> 1:26:29.360
or turned a car on, which is like the way it is
1:26:29.360 --> 1:26:30.360
for a lot of people.
1:26:30.360 --> 1:26:33.360
They'll study some academic data set
1:26:33.360 --> 1:26:36.360
that they literally have no idea about.
1:26:36.360 --> 1:26:37.360
By the way, I'm not sure how familiar
1:26:37.360 --> 1:26:39.360
you are with autonomous vehicles,
1:26:39.360 --> 1:26:42.360
but that literally describes a large percentage
1:26:42.360 --> 1:26:45.360
of robotics folks working on self driving cars:
1:26:45.360 --> 1:26:48.360
they actually haven't considered driving.
1:26:48.360 --> 1:26:50.360
They haven't actually looked at what driving looks like.
1:26:50.360 --> 1:26:51.360
They haven't driven.
1:26:51.360 --> 1:26:52.360
And it applies.
1:26:52.360 --> 1:26:54.360
It's a problem because you know when you've actually driven,
1:26:54.360 --> 1:26:57.360
these are the things that happened to me when I was driving.
1:26:57.360 --> 1:26:59.360
There's nothing that beats the real world examples
1:26:59.360 --> 1:27:02.360
or just experiencing them.
1:27:02.360 --> 1:27:04.360
You've created many successful startups.
1:27:04.360 --> 1:27:08.360
What does it take to create a successful startup?
1:27:08.360 --> 1:27:12.360
Same thing as becoming a successful deep learning practitioner,
1:27:12.360 --> 1:27:14.360
which is not giving up.
1:27:14.360 --> 1:27:22.360
So you can run out of money or run out of time
1:27:22.360 --> 1:27:24.360
or run out of something, you know,
1:27:24.360 --> 1:27:27.360
but if you keep costs super low
1:27:27.360 --> 1:27:29.360
and try and save up some money beforehand
1:27:29.360 --> 1:27:34.360
so you can afford to have some time,
1:27:34.360 --> 1:27:37.360
then just sticking with it is one important thing.
1:27:37.360 --> 1:27:42.360
Doing something you understand and care about is important.
1:27:42.360 --> 1:27:44.360
By something, I don't mean...
1:27:44.360 --> 1:27:46.360
The biggest problem I see with deep learning people
1:27:46.360 --> 1:27:49.360
is they do a PhD in deep learning
1:27:49.360 --> 1:27:52.360
and then they try and commercialize their PhD.
1:27:52.360 --> 1:27:53.360
That's a waste of time
1:27:53.360 --> 1:27:55.360
because that doesn't solve an actual problem.
1:27:55.360 --> 1:27:57.360
You picked your PhD topic
1:27:57.360 --> 1:28:00.360
because it was an interesting kind of engineering
1:28:00.360 --> 1:28:02.360
or math or research exercise.
1:28:02.360 --> 1:28:06.360
But yeah, if you've actually spent time as a recruiter
1:28:06.360 --> 1:28:10.360
and you know that most of your time was spent sifting through resumes
1:28:10.360 --> 1:28:12.360
and you know that most of the time
1:28:12.360 --> 1:28:14.360
you're just looking for certain kinds of things
1:28:14.360 --> 1:28:19.360
and you can try doing that with a model for a few minutes
1:28:19.360 --> 1:28:21.360
and see whether that's something which a model
1:28:21.360 --> 1:28:23.360
seems to be able to do as well as you could,
1:28:23.360 --> 1:28:27.360
then you're on the right track to creating a startup.
1:28:27.360 --> 1:28:30.360
And then I think just being...
1:28:30.360 --> 1:28:34.360
Just be pragmatic and...
1:28:34.360 --> 1:28:36.360
try and stay away from venture capital money
1:28:36.360 --> 1:28:38.360
as long as possible, preferably forever.
1:28:38.360 --> 1:28:42.360
So yeah, on that point, do you...
1:28:42.360 --> 1:28:43.360
venture capital...
1:28:43.360 --> 1:28:46.360
So were you able to successfully run startups
1:28:46.360 --> 1:28:48.360
self funded for quite a while?
1:28:48.360 --> 1:28:50.360
Yeah, so my first two were self funded
1:28:50.360 --> 1:28:52.360
and that was the right way to do it.
1:28:52.360 --> 1:28:53.360
Is that scary?
1:28:53.360 --> 1:28:55.360
No.
1:28:55.360 --> 1:28:57.360
VC startups are much more scary
1:28:57.360 --> 1:29:00.360
because you have these people on your back
1:29:00.360 --> 1:29:01.360
who do this all the time
1:29:01.360 --> 1:29:03.360
and who have done it for years
1:29:03.360 --> 1:29:05.360
telling you grow, grow, grow, grow.
1:29:05.360 --> 1:29:07.360
And they don't care if you fail.
1:29:07.360 --> 1:29:09.360
They only care if you don't grow fast enough.
1:29:09.360 --> 1:29:10.360
So that's scary.
1:29:10.360 --> 1:29:13.360
Whereas doing the ones myself
1:29:13.360 --> 1:29:17.360
with partners who were friends.
1:29:17.360 --> 1:29:20.360
It's nice because we just went along
1:29:20.360 --> 1:29:22.360
at a pace that made sense
1:29:22.360 --> 1:29:24.360
and we were able to build it to something
1:29:24.360 --> 1:29:27.360
which was big enough that we never had to work again
1:29:27.360 --> 1:29:29.360
but was not big enough that any VC
1:29:29.360 --> 1:29:31.360
would think it was impressive
1:29:31.360 --> 1:29:35.360
and that was enough for us to be excited.
1:29:35.360 --> 1:29:38.360
So I thought that's a much better way
1:29:38.360 --> 1:29:40.360
to do things for most people.
1:29:40.360 --> 1:29:42.360
And, generally speaking but also for yourself,
1:29:42.360 --> 1:29:44.360
how do you make money during that process?
1:29:44.360 --> 1:29:47.360
Do you cut into savings?
1:29:47.360 --> 1:29:49.360
So yeah, so I started Fast Mail
1:29:49.360 --> 1:29:51.360
and Optimal Decisions at the same time
1:29:51.360 --> 1:29:54.360
in 1999 with two different friends.
1:29:54.360 --> 1:29:59.360
And for Fast Mail,
1:29:59.360 --> 1:30:03.360
I guess I spent $70 a month on the server.
1:30:03.360 --> 1:30:06.360
And when the server ran out of space
1:30:06.360 --> 1:30:09.360
I put a payments button on the front page
1:30:09.360 --> 1:30:11.360
and said if you want more than 10 meg of space
1:30:11.360 --> 1:30:15.360
you have to pay $10 a year.
1:30:15.360 --> 1:30:18.360
So run lean, like, keep your costs down.
1:30:18.360 --> 1:30:19.360
Yeah, so I kept my cost down
1:30:19.360 --> 1:30:22.360
and once I needed to spend more money
1:30:22.360 --> 1:30:25.360
I asked people to spend the money for me
1:30:25.360 --> 1:30:29.360
and that was that basically from then on.
1:30:29.360 --> 1:30:34.360
We were making money and it was profitable from then on.
1:30:34.360 --> 1:30:37.360
For Optimal Decisions it was a bit harder
1:30:37.360 --> 1:30:40.360
because we were trying to sell something
1:30:40.360 --> 1:30:42.360
that was more like a $1 million sale
1:30:42.360 --> 1:30:46.360
but what we did was we would sell scoping projects
1:30:46.360 --> 1:30:50.360
so kind of like prototypy projects
1:30:50.360 --> 1:30:51.360
but rather than doing it for free
1:30:51.360 --> 1:30:54.360
we would sell them for $50,000 to $100,000.
1:30:54.360 --> 1:30:57.360
So again we were covering our costs
1:30:57.360 --> 1:30:58.360
and also making the client feel like
1:30:58.360 --> 1:31:00.360
we were doing something valuable.
1:31:00.360 --> 1:31:06.360
So in both cases we were profitable from six months in.
1:31:06.360 --> 1:31:08.360
Nevertheless it's scary.
1:31:08.360 --> 1:31:10.360
I mean, yeah, sure.
1:31:10.360 --> 1:31:13.360
I mean it's scary before you jump in
1:31:13.360 --> 1:31:18.360
and I guess I was comparing it to the scariness of VC.
1:31:18.360 --> 1:31:20.360
I felt like with VC stuff it was more scary.
1:31:20.360 --> 1:31:24.360
Much more in somebody else's hands.
1:31:24.360 --> 1:31:26.360
Will they fund you or not?
1:31:26.360 --> 1:31:28.360
What do they think of what you're doing?
1:31:28.360 --> 1:31:30.360
I also found it very difficult with VC backed startups
1:31:30.360 --> 1:31:33.360
to actually do the thing which I thought was important
1:31:33.360 --> 1:31:35.360
for the company rather than doing the thing
1:31:35.360 --> 1:31:38.360
which I thought would make the VC happy.
1:31:38.360 --> 1:31:40.360
Now, VCs always tell you not to do the thing
1:31:40.360 --> 1:31:41.360
that makes them happy
1:31:41.360 --> 1:31:43.360
but then if you don't do the thing that makes them happy
1:31:43.360 --> 1:31:45.360
they get sad.
1:31:45.360 --> 1:31:48.360
And do you think optimizing for the, whatever they call it,
1:31:48.360 --> 1:31:52.360
the exit, is a good thing to optimize for?
1:31:52.360 --> 1:31:54.360
I mean it can be but not at the VC level
1:31:54.360 --> 1:31:59.360
because the VC exit needs to be, you know, a thousand X.
1:31:59.360 --> 1:32:02.360
Whereas the lifestyle exit,
1:32:02.360 --> 1:32:04.360
if you can sell something for $10 million
1:32:04.360 --> 1:32:06.360
then you've made it, right?
1:32:06.360 --> 1:32:08.360
So it depends.
1:32:08.360 --> 1:32:10.360
If you want to build something that
1:32:10.360 --> 1:32:13.360
you're kind of happy to do forever, then fine.
1:32:13.360 --> 1:32:16.360
If you want to build something you want to sell
1:32:16.360 --> 1:32:18.360
in three years' time, that's fine too.
1:32:18.360 --> 1:32:21.360
I mean they're both perfectly good outcomes.
1:32:21.360 --> 1:32:24.360
So you're learning Swift now?
1:32:24.360 --> 1:32:26.360
In a way, I mean you already are.
1:32:26.360 --> 1:32:31.360
And I read that you use at least in some cases
1:32:31.360 --> 1:32:34.360
spaced repetition as a mechanism for learning new things.
1:32:34.360 --> 1:32:38.360
I use Anki quite a lot myself.
1:32:38.360 --> 1:32:41.360
I actually never talk to anybody about it.
1:32:41.360 --> 1:32:44.360
Don't know how many people do it
1:32:44.360 --> 1:32:46.360
and it works incredibly well for me.
1:32:46.360 --> 1:32:48.360
Can you talk to your experience?
1:32:48.360 --> 1:32:52.360
Like how did you, what do you, first of all, okay,
1:32:52.360 --> 1:32:53.360
let's back it up.
1:32:53.360 --> 1:32:55.360
What is spaced repetition?
1:32:55.360 --> 1:33:00.360
So spaced repetition is an idea created
1:33:00.360 --> 1:33:03.360
by a psychologist named Ebbinghaus,
1:33:03.360 --> 1:33:06.360
I don't know, must be a couple hundred years ago
1:33:06.360 --> 1:33:08.360
or something 150 years ago.
1:33:08.360 --> 1:33:11.360
He did something which sounds pretty damn tedious.
1:33:11.360 --> 1:33:16.360
He wrote random sequences of letters on cards
1:33:16.360 --> 1:33:21.360
and tested how well he would remember those random sequences
1:33:21.360 --> 1:33:23.360
a day later, a week later, whatever.
1:33:23.360 --> 1:33:26.360
He discovered that there was this kind of a curve
1:33:26.360 --> 1:33:29.360
where his probability of remembering one of them
1:33:29.360 --> 1:33:31.360
would be dramatically smaller the next day
1:33:31.360 --> 1:33:32.360
and then a little bit smaller the next day
1:33:32.360 --> 1:33:34.360
and a little bit smaller the next day.
1:33:34.360 --> 1:33:37.360
What he discovered is that if he revised those cards
1:33:37.360 --> 1:33:42.360
after a day, the probabilities would decrease at a smaller rate
1:33:42.360 --> 1:33:44.360
and then if he revised them again a week later,
1:33:44.360 --> 1:33:46.360
they would decrease at a smaller rate again.
1:33:46.360 --> 1:33:51.360
And so he basically figured out a roughly optimal equation
1:33:51.360 --> 1:33:56.360
for when you should revise something you want to remember.
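A common way to write the curve he's describing, offered as a hedged model rather than anything stated in the conversation: retention decays exponentially over time, and each successful revision multiplies the memory's stability, flattening the decay.

```latex
R(t) = e^{-t/S}, \qquad S_{n+1} = \alpha\, S_n \quad (\alpha > 1, \text{ after each successful review})
```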
1:33:56.360 --> 1:34:00.360
So spaced repetition learning is using this simple algorithm,
1:34:00.360 --> 1:34:03.360
just something like revise something after a day
1:34:03.360 --> 1:34:06.360
and then three days and then a week and then three weeks
1:34:06.360 --> 1:34:07.360
and so forth.
1:34:07.360 --> 1:34:10.360
And so if you use a program like Anki, as you know,
1:34:10.360 --> 1:34:12.360
it will just do that for you.
1:34:12.360 --> 1:34:14.360
And it will say, did you remember this?
1:34:14.360 --> 1:34:18.360
And if you say no, it will reschedule it back to
1:34:18.360 --> 1:34:22.360
appear again like 10 times faster than it otherwise would have.
1:34:22.360 --> 1:34:27.360
It's a kind of a way of being guaranteed to learn something
1:34:27.360 --> 1:34:30.360
because by definition, if you're not learning it,
1:34:30.360 --> 1:34:33.360
it will be rescheduled to be revised more quickly.
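A toy scheduler in that spirit; the growth factor and the lapse penalty below are illustrative numbers, not Anki's actual algorithm:

```python
# Toy spaced-repetition scheduler: intervals stretch on success
# (a day, a few days, weeks...) and collapse on failure.
from dataclasses import dataclass

@dataclass
class Card:
    front: str
    back: str
    interval_days: float = 1.0  # first revision after a day

def review(card: Card, remembered: bool) -> None:
    if remembered:
        card.interval_days *= 2.5  # illustrative growth factor
    else:
        # A lapse brings the card back roughly 10x sooner, as described above.
        card.interval_days = max(1.0, card.interval_days / 10)

card = Card("你好", "hello")
for remembered in [True, True, False, True]:
    review(card, remembered)
    print(f"next revision in {card.interval_days:.1f} days")
```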
1:34:33.360 --> 1:34:37.360
Unfortunately though, it doesn't let you fool yourself.
1:34:37.360 --> 1:34:42.360
If you're not learning something, you know your revisions
1:34:42.360 --> 1:34:44.360
will just get more and more.
1:34:44.360 --> 1:34:48.360
So you have to find ways to learn things productively
1:34:48.360 --> 1:34:50.360
and effectively treat your brain well.
1:34:50.360 --> 1:34:57.360
So using mnemonics and stories and context and stuff like that.
1:34:57.360 --> 1:34:59.360
So yeah, it's a super great technique.
1:34:59.360 --> 1:35:01.360
It's like learning how to learn is something
1:35:01.360 --> 1:35:05.360
which everybody should learn before they actually learn anything.
1:35:05.360 --> 1:35:07.360
But almost nobody does.
1:35:07.360 --> 1:35:10.360
Yes, so what have you, so it certainly works well
1:35:10.360 --> 1:35:14.360
for learning new languages, for, I mean, for learning,
1:35:14.360 --> 1:35:16.360
like small projects almost.
1:35:16.360 --> 1:35:19.360
But do you, you know, I started using it for,
1:35:19.360 --> 1:35:22.360
I forget who wrote a blog post about this that inspired me.
1:35:22.360 --> 1:35:25.360
It might have been you, I'm not sure.
1:35:25.360 --> 1:35:28.360
I started, when I read papers,
1:35:28.360 --> 1:35:31.360
taking concepts and ideas and putting them in.
1:35:31.360 --> 1:35:32.360
Was it Michael Nielsen?
1:35:32.360 --> 1:35:33.360
It was Michael Nielsen.
1:35:33.360 --> 1:35:34.360
Yeah, it was Michael Nielsen.
1:35:34.360 --> 1:35:36.360
Michael started doing this recently
1:35:36.360 --> 1:35:39.360
and has been writing about it.
1:35:39.360 --> 1:35:44.360
So kind of today's Ebbinghaus is a guy called Piotr Wozniak
1:35:44.360 --> 1:35:47.360
who developed a system called SuperMemo.
1:35:47.360 --> 1:35:51.360
And he's been basically trying to become like
1:35:51.360 --> 1:35:55.360
the world's greatest renaissance man over the last few decades.
1:35:55.360 --> 1:36:00.360
He's basically lived his life with spaced repetition learning
1:36:00.360 --> 1:36:03.360
for everything.
1:36:03.360 --> 1:36:07.360
And sort of, like, Michael's only very recently got into this,
1:36:07.360 --> 1:36:09.360
but he started really getting excited about doing it
1:36:09.360 --> 1:36:10.360
for a lot of different things.
1:36:10.360 --> 1:36:14.360
For me personally, I actually don't use it
1:36:14.360 --> 1:36:16.360
for anything except Chinese.
1:36:16.360 --> 1:36:21.360
And the reason for that is that Chinese is specifically a thing.
1:36:21.360 --> 1:36:26.360
I made a conscious decision that I want to continue to remember
1:36:26.360 --> 1:36:29.360
even if I don't get much of a chance to exercise it
1:36:29.360 --> 1:36:33.360
because like I'm not often in China, so I don't.
1:36:33.360 --> 1:36:37.360
Whereas for something like programming languages or papers,
1:36:37.360 --> 1:36:39.360
I have a very different approach,
1:36:39.360 --> 1:36:42.360
which is I try not to learn anything from them,
1:36:42.360 --> 1:36:46.360
but instead I try to identify the important concepts
1:36:46.360 --> 1:36:48.360
and like actually ingest them.
1:36:48.360 --> 1:36:53.360
So like really understand that concept deeply
1:36:53.360 --> 1:36:54.360
and study it carefully.
1:36:54.360 --> 1:36:56.360
Well, decide if it really is important.
1:36:56.360 --> 1:37:00.360
If it is, like, incorporate it into our library,
1:37:00.360 --> 1:37:03.360
you know, incorporate it into how I do things
1:37:03.360 --> 1:37:06.360
or decide it's not worth it.
1:37:06.360 --> 1:37:12.360
So I find I then remember the things that I care about
1:37:12.360 --> 1:37:15.360
because I'm using it all the time.
1:37:15.360 --> 1:37:19.360
So for the last 25 years,
1:37:19.360 --> 1:37:23.360
I've committed to spending at least half of every day
1:37:23.360 --> 1:37:25.360
learning or practicing something new,
1:37:25.360 --> 1:37:28.360
which all my colleagues have always hated
1:37:28.360 --> 1:37:30.360
because it always looks like I'm not working on
1:37:30.360 --> 1:37:31.360
what I'm meant to be working on,
1:37:31.360 --> 1:37:34.360
but that always means I do everything faster
1:37:34.360 --> 1:37:36.360
because I've been practicing a lot of stuff.
1:37:36.360 --> 1:37:39.360
So I kind of give myself a lot of opportunity
1:37:39.360 --> 1:37:41.360
to practice new things.
1:37:41.360 --> 1:37:47.360
And so I find now I don't often kind of find myself
1:37:47.360 --> 1:37:50.360
wishing I could remember something
1:37:50.360 --> 1:37:51.360
because if it's something that's useful,
1:37:51.360 --> 1:37:53.360
then I've been using it a lot.
1:37:53.360 --> 1:37:55.360
It's easy enough to look it up on Google.
1:37:55.360 --> 1:37:59.360
But speaking Chinese, you can't look it up on Google.
1:37:59.360 --> 1:38:01.360
Do you have advice for people learning new things?
1:38:01.360 --> 1:38:04.360
What have you learned as a process?
1:38:04.360 --> 1:38:07.360
I mean, it all starts with just making the hours
1:38:07.360 --> 1:38:08.360
in the day available.
1:38:08.360 --> 1:38:10.360
Yeah, you've got to stick with it,
1:38:10.360 --> 1:38:12.360
which is, again, the number one thing
1:38:12.360 --> 1:38:14.360
that 99% of people don't do.
1:38:14.360 --> 1:38:16.360
So the people I started learning Chinese with,
1:38:16.360 --> 1:38:18.360
none of them were still doing it 12 months later.
1:38:18.360 --> 1:38:20.360
I'm still doing it 10 years later.
1:38:20.360 --> 1:38:22.360
I tried to stay in touch with them,
1:38:22.360 --> 1:38:24.360
but they just, no one did it.
1:38:24.360 --> 1:38:26.360
For something like Chinese,
1:38:26.360 --> 1:38:28.360
like study how human learning works.
1:38:28.360 --> 1:38:31.360
So every one of my Chinese flashcards
1:38:31.360 --> 1:38:33.360
is associated with a story,
1:38:33.360 --> 1:38:36.360
and that story is specifically designed to be memorable.
1:38:36.360 --> 1:38:38.360
And we find things memorable if they're
1:38:38.360 --> 1:38:41.360
funny or disgusting or sexy
1:38:41.360 --> 1:38:44.360
or related to people that we know or care about.
1:38:44.360 --> 1:38:47.360
So I try to make sure all the stories that are in my head
1:38:47.360 --> 1:38:50.360
have those characteristics.
1:38:50.360 --> 1:38:52.360
Yeah, so you have to, you know,
1:38:52.360 --> 1:38:55.360
you won't remember things well if they don't have some context.
1:38:55.360 --> 1:38:57.360
And yeah, you won't remember them well
1:38:57.360 --> 1:39:00.360
if you don't regularly practice them,
1:39:00.360 --> 1:39:02.360
whether it be just part of your day to day life
1:39:02.360 --> 1:39:05.360
or, for the Chinese in my case, flashcards.
1:39:05.360 --> 1:39:09.360
I mean, the other thing is, let yourself fail sometimes.
1:39:09.360 --> 1:39:11.360
So like, I've had various medical problems
1:39:11.360 --> 1:39:13.360
over the last few years,
1:39:13.360 --> 1:39:16.360
and basically my flashcards just stopped
1:39:16.360 --> 1:39:18.360
for about three years.
1:39:18.360 --> 1:39:21.360
And then there've been other times I've stopped
1:39:21.360 --> 1:39:24.360
for a few months, and it's so hard because you get back to it,
1:39:24.360 --> 1:39:27.360
and it's like, you have 18,000 cards due.
1:39:27.360 --> 1:39:30.360
It's like, and so you just have to go,
1:39:30.360 --> 1:39:33.360
all right, well, I can either stop and give up everything
1:39:33.360 --> 1:39:36.360
or just decide to do this every day for the next two years
1:39:36.360 --> 1:39:38.360
until I get back to it.
1:39:38.360 --> 1:39:41.360
The amazing thing has been that even after three years,
1:39:41.360 --> 1:39:45.360
I, you know, the Chinese were still in there.
1:39:45.360 --> 1:39:47.360
Like, it was so much faster to relearn
1:39:47.360 --> 1:39:49.360
than it was to learn it the first time.
1:39:49.360 --> 1:39:51.360
Yeah, absolutely.
1:39:51.360 --> 1:39:52.360
It's in there.
1:39:52.360 --> 1:39:55.360
I have the same with guitar, with music and so on.
1:39:55.360 --> 1:39:58.360
It's sad because work sometimes takes away
1:39:58.360 --> 1:40:00.360
and then you won't play for a year.
1:40:00.360 --> 1:40:03.360
But really, if you then just get back to it every day,
1:40:03.360 --> 1:40:05.360
you're right there again.
1:40:05.360 --> 1:40:08.360
What do you think is the next big breakthrough
1:40:08.360 --> 1:40:09.360
in artificial intelligence?
1:40:09.360 --> 1:40:12.360
What are your hopes in deep learning or beyond
1:40:12.360 --> 1:40:14.360
that people should be working on,
1:40:14.360 --> 1:40:16.360
or you hope there'll be breakthroughs?
1:40:16.360 --> 1:40:18.360
I don't think it's possible to predict.
1:40:18.360 --> 1:40:20.360
I think what we already have
1:40:20.360 --> 1:40:23.360
is an incredibly powerful platform
1:40:23.360 --> 1:40:26.360
to solve lots of societally important problems
1:40:26.360 --> 1:40:28.360
that are currently unsolved.
1:40:28.360 --> 1:40:30.360
I just hope that people will, lots of people
1:40:30.360 --> 1:40:33.360
will learn this toolkit and try to use it.
1:40:33.360 --> 1:40:36.360
I don't think we need a lot of new technological breakthroughs
1:40:36.360 --> 1:40:39.360
to do a lot of great work right now.
1:40:39.360 --> 1:40:42.360
And when do you think we're going to create
1:40:42.360 --> 1:40:44.360
a human level intelligence system?
1:40:44.360 --> 1:40:45.360
Do you think?
1:40:45.360 --> 1:40:46.360
I don't know.
1:40:46.360 --> 1:40:47.360
How hard is it?
1:40:47.360 --> 1:40:48.360
How far away are we?
1:40:48.360 --> 1:40:49.360
I don't know.
1:40:49.360 --> 1:40:50.360
I have no way to know.
1:40:50.360 --> 1:40:51.360
I don't know.
1:40:51.360 --> 1:40:53.360
Like, I don't know why people make predictions about this
1:40:53.360 --> 1:40:57.360
because there's no data and nothing to go on.
1:40:57.360 --> 1:40:59.360
And it's just like,
1:40:59.360 --> 1:41:03.360
there's so many societally important problems
1:41:03.360 --> 1:41:04.360
to solve right now,
1:41:04.360 --> 1:41:08.360
I just don't find it a really interesting question
1:41:08.360 --> 1:41:09.360
to even answer.
1:41:09.360 --> 1:41:12.360
So in terms of societally important problems,
1:41:12.360 --> 1:41:15.360
what's the problem that is within reach?
1:41:15.360 --> 1:41:17.360
Well, I mean, for example,
1:41:17.360 --> 1:41:19.360
there are problems that AI creates, right?
1:41:19.360 --> 1:41:21.360
So more specifically,
1:41:22.360 --> 1:41:26.360
labor force displacement is going to be huge
1:41:26.360 --> 1:41:28.360
and people keep making this
1:41:28.360 --> 1:41:31.360
frivolous econometric argument of being like,
1:41:31.360 --> 1:41:33.360
oh, there's been other things that aren't AI
1:41:33.360 --> 1:41:34.360
that have come along before
1:41:34.360 --> 1:41:37.360
and haven't created massive labor force displacement.
1:41:37.360 --> 1:41:39.360
Therefore, AI won't.
1:41:39.360 --> 1:41:41.360
So that's a serious concern for you?
1:41:41.360 --> 1:41:42.360
Oh, yeah.
1:41:42.360 --> 1:41:43.360
Andrew Yang is running on it.
1:41:43.360 --> 1:41:44.360
Yeah.
1:41:44.360 --> 1:41:46.360
I'm desperately concerned.
1:41:46.360 --> 1:41:52.360
And you see already that the changing workplace
1:41:52.360 --> 1:41:55.360
has led to a hollowing out of the middle class.
1:41:55.360 --> 1:41:58.360
You're seeing that students coming out of school today
1:41:58.360 --> 1:42:03.360
have a less rosy financial future ahead of them
1:42:03.360 --> 1:42:04.360
than their parents did,
1:42:04.360 --> 1:42:06.360
which has never happened in recent times,
1:42:06.360 --> 1:42:08.360
in the last 300 years.
1:42:08.360 --> 1:42:11.360
We've always had progress before.
1:42:11.360 --> 1:42:16.360
And you see this turning into anxiety and despair
1:42:16.360 --> 1:42:19.360
and even violence.
1:42:19.360 --> 1:42:21.360
So I very much worry about that.
1:42:21.360 --> 1:42:24.360
You've written quite a bit about ethics, too.
1:42:24.360 --> 1:42:27.360
I do think that every data scientist
1:42:27.360 --> 1:42:32.360
working with deep learning needs to recognize
1:42:32.360 --> 1:42:34.360
they have an incredibly high leverage tool
1:42:34.360 --> 1:42:36.360
that they're using that can influence society
1:42:36.360 --> 1:42:37.360
in lots of ways.
1:42:37.360 --> 1:42:38.360
And if they're doing research,
1:42:38.360 --> 1:42:41.360
that research is going to be used by people
1:42:41.360 --> 1:42:42.360
doing this kind of work
1:42:42.360 --> 1:42:44.360
and they have a responsibility
1:42:44.360 --> 1:42:46.360
to consider the consequences
1:42:46.360 --> 1:42:49.360
and to think about things like
1:42:49.360 --> 1:42:53.360
how will humans be in the loop here?
1:42:53.360 --> 1:42:55.360
How do we avoid runaway feedback loops?
1:42:55.360 --> 1:42:58.360
How do we ensure an appeals process for humans
1:42:58.360 --> 1:43:00.360
that are impacted by my algorithm?
1:43:00.360 --> 1:43:04.360
How do I ensure that the constraints of my algorithm
1:43:04.360 --> 1:43:08.360
are adequately explained to the people that end up using them?
1:43:08.360 --> 1:43:11.360
There's all kinds of human issues,
1:43:11.360 --> 1:43:13.360
which only data scientists
1:43:13.360 --> 1:43:17.360
are actually in the right place to educate people about,
1:43:17.360 --> 1:43:21.360
but data scientists tend to think of themselves as
1:43:21.360 --> 1:43:22.360
just engineers
1:43:22.360 --> 1:43:24.360
and that they don't need to be part of that process,
1:43:24.360 --> 1:43:26.360
which is wrong.
1:43:26.360 --> 1:43:29.360
Well, you're in the perfect position to educate them better,
1:43:29.360 --> 1:43:32.360
to read literature, to read history,
1:43:32.360 --> 1:43:35.360
to learn from history.
1:43:35.360 --> 1:43:38.360
Well, Jeremy, thank you so much for everything you do
1:43:38.360 --> 1:43:40.360
for inspiring a huge amount of people,
1:43:40.360 --> 1:43:42.360
getting them into deep learning
1:43:42.360 --> 1:43:44.360
and having the ripple effects,
1:43:44.360 --> 1:43:48.360
the flap of a butterfly's wings that will probably change the world.
1:43:48.360 --> 1:44:17.360
So thank you very much.