WEBVTT
00:00.000 --> 00:04.720
The following is a conversation with Gavin Miller, he's the head of Adobe Research.
00:04.720 --> 00:08.960
Adobe has empowered artists, designers, and creative minds from all professions,
00:08.960 --> 00:14.320
working in the digital medium for over 30 years with software such as Photoshop, Illustrator,
00:14.320 --> 00:20.560
Premiere, After Effects, InDesign, Audition, software that works with images, video, and audio.
00:21.200 --> 00:25.920
Adobe Research is working to define the future evolution of these products in a way
00:25.920 --> 00:31.360
that makes the life of creatives easier, automates the tedious tasks, and gives more and more time
00:31.360 --> 00:36.880
to operate in the idea space instead of pixel space. This is where the cutting edge deep
00:36.880 --> 00:41.360
learning methods of the past decade can really shine, more than perhaps in any other application.
00:42.240 --> 00:47.840
Gavin is the embodiment of combining tech and creativity. Outside of Adobe Research,
00:47.840 --> 00:53.600
he writes poetry and builds robots, both things that are near and dear to my heart as well.
00:53.600 --> 00:59.200
This conversation is part of the Artificial Intelligence Podcast. If you enjoy it, subscribe
00:59.200 --> 01:05.360
on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F R I D.
01:06.000 --> 01:09.600
And now here's my conversation with Gavin Miller.
01:11.120 --> 01:15.920
You're head of Adobe Research, leading a lot of innovative efforts and applications of AI,
01:15.920 --> 01:23.200
creating images, video, audio, language, but you're also yourself an artist, a poet,
01:23.200 --> 01:28.640
a writer, and even a roboticist. So while I promise to everyone listening,
01:28.640 --> 01:32.880
that I will not spend the entire time we have together reading your poetry, which I love.
01:33.440 --> 01:39.200
I have to sprinkle it in at least a little bit. So some of them are pretty deep and profound,
01:39.200 --> 01:43.520
and some are light and silly. Let's start with a few lines from the silly variety.
01:43.520 --> 01:56.800
You write, in a beautiful parody of both Edith Piaf's Je ne regrette rien and My Way by Frank Sinatra.
01:56.800 --> 02:06.400
So it opens with, and now dessert is near. It's time to pay the final total. I've tried to slim
02:06.400 --> 02:14.800
all year, but my diets have been anecdotal. So where does that love for poetry come from
02:14.800 --> 02:20.880
for you? And if we dissect your mind, how does it all fit together in the bigger puzzle of Dr.
02:20.880 --> 02:27.440
Gavin Miller? Well, interesting you chose that one. That was a poem I wrote when I'd been to
02:27.440 --> 02:32.400
my doctor and he said, you really need to lose some weight and go on a diet. And whilst the
02:32.400 --> 02:37.200
rational part of my brain wanted to do that, the irrational part of my brain was protesting and
02:37.200 --> 02:42.400
sort of embraced the opposite idea. Hence, I regret nothing. Yes, exactly. Taken to an extreme,
02:42.400 --> 02:49.600
I thought it would be funny. Obviously, it's a serious topic for some people. But I think,
02:49.600 --> 02:53.920
for me, I've always been interested in writing since I was in high school, as well as doing
02:53.920 --> 02:58.960
technology and invention. And sometimes there are parallel strands in your life that carry on,
02:58.960 --> 03:05.120
and one is more about your private life and one's more about your technological career.
03:05.680 --> 03:10.640
And then at sort of happy moments along the way, sometimes the two things touch, one idea informs
03:10.640 --> 03:17.040
the other. And we can talk about that as we go. Do you think your writing, the art, the poetry,
03:17.040 --> 03:23.440
contributes indirectly or directly to your research, to your work at Adobe? Well, sometimes it does if
03:23.440 --> 03:30.000
I say, imagine a future in a science fiction kind of way. And then once it exists on paper,
03:30.000 --> 03:37.520
I think, well, why shouldn't I just build that? There was an example where when realistic voice
03:37.520 --> 03:42.800
synthesis first started in the 90s at Apple, where I worked in research. It was done by a friend of mine.
03:44.000 --> 03:48.640
I sort of sat down and started writing a poem which each line I would enter into the voice
03:48.640 --> 03:54.240
synthesizer and see how it sounded and sort of wrote it for that voice. And at the time,
03:55.040 --> 04:00.160
the agents weren't very sophisticated. So they'd sort of add random intonation. And I kind of made
04:00.160 --> 04:06.560
up the poem to sort of match the tone of the voice. And it sounded slightly sad and depressed. So I
04:06.560 --> 04:12.720
pretended it was a poem written by an intelligent agent, sort of telling the user to go home and
04:12.720 --> 04:16.560
leave them alone. But at the same time, they were lonely and wanted to have company and learn from
04:16.560 --> 04:21.760
what the user was saying. And at the time, it was way beyond anything that AI could possibly do.
04:21.760 --> 04:27.520
But, you know, since then, it's becoming more within the bounds of possibility.
04:29.040 --> 04:34.800
And then at the same time, I had a project at home where I did sort of a smart home. This was
04:34.800 --> 04:40.960
probably '93, '94. And I had a talking voice that would remind me when I walked in the door of what
04:40.960 --> 04:45.600
things I had to do. I had buttons on my washing machine because I was a bachelor and I'd leave
04:45.600 --> 04:49.920
the clothes in there for three days and they'd go moldy. So as I got up in the morning, it would say,
04:49.920 --> 04:56.640
don't forget the washing, and so on. I made photo albums that used light
04:56.640 --> 05:01.040
sensors to know which page you were looking at would send that over wireless radio to the agent
05:01.040 --> 05:05.760
who would then play sounds that matched the image you were looking at in the book. So I was kind of
05:05.760 --> 05:10.480
in love with this idea of magical realism and whether it was possible to do that with technology.
05:10.480 --> 05:16.080
So that was a case where the agent sort of intrigued me from a literary point of
05:16.080 --> 05:22.880
view and became a personality. I think more recently, I've also written plays, and in
05:22.880 --> 05:27.440
plays you write dialogue and obviously you write a fixed set of dialogue that follows a linear
05:27.440 --> 05:33.360
narrative. But with modern agents, as you design a personality or a capability for conversation,
05:33.360 --> 05:37.760
you're sort of thinking of, I kind of have imaginary dialogue in my head. And then I think,
05:37.760 --> 05:43.680
what would it take not only to have that be real, but for it to really know what it's talking about.
05:44.240 --> 05:49.440
So it's easy to fall into the uncanny valley with AI where it says something it doesn't really
05:49.440 --> 05:54.560
understand, but it sounds good to the person. But you rapidly realize that it's kind of just
05:55.520 --> 06:00.000
stimulus response. It doesn't really have real world knowledge about the thing it's describing.
06:00.640 --> 06:06.320
And so when you get to that point, it really needs to have multiple ways of talking about
06:06.320 --> 06:10.560
the same concept. So it sounds as though it really understands it. Now, what really understanding
06:10.560 --> 06:16.160
means is in the eye of the beholder, right? But if it only has one way of referring to something,
06:16.160 --> 06:21.200
it feels like it's a canned response. But if it can reason about it, or you can go at it from
06:21.200 --> 06:25.600
multiple angles and give a similar kind of response that people would, then it starts to
06:26.400 --> 06:30.480
seem more like there's something there that's sentient.
06:31.040 --> 06:35.600
You can say the same thing multiple ways, from different perspectives. I mean, with the
06:35.600 --> 06:40.000
automatic image captioning that I've seen the work that you're doing, there's elements of that,
06:40.000 --> 06:46.000
right? Being able to generate different kinds of... Right. So in my team, there's a lot of work on
06:46.640 --> 06:52.000
turning a medium from one form to another, whether it's auto tagging imagery or making up full
06:52.000 --> 06:57.840
sentences about what's in the image, then changing the sentence, finding another image that matches
06:57.840 --> 07:04.720
the new sentence or vice versa. And in the modern world of GANs, you sort of give it a description
07:04.720 --> 07:11.360
and it synthesizes an asset that matches the description. So I've sort of gone on a journey.
07:11.360 --> 07:16.560
My early days in my career were about 3D computer graphics, the sort of pioneering work sort of
07:16.560 --> 07:22.400
before movies had special effects done with 3D graphics and sort of rode that revolution. And
07:22.400 --> 07:26.720
that was very much like the renaissance where people would model light and color and shape
07:26.720 --> 07:32.160
and everything. And now we're kind of in another wave where it's more impressionistic and it's
07:32.160 --> 07:38.240
sort of the idea of something can be used to generate an image directly, which is sort of the
07:38.240 --> 07:45.520
new frontier in computer image generation using AI algorithms. So the creative process is more in
07:45.520 --> 07:49.840
the space of ideas or becoming more in the space of ideas versus in the raw pixels?
07:50.720 --> 07:55.280
Well, it's interesting. It depends. I think at Adobe, we really want to span the entire range
07:55.280 --> 08:01.040
from really, really good, what you might call low-level tools. By low level, I mean as close to, say, analog
08:01.040 --> 08:07.040
workflows as possible. So what we do there is we make up systems that do really realistic oil
08:07.040 --> 08:11.600
paint and watercolor simulation. So if you want every bristle to behave as it would in the real
08:11.600 --> 08:17.760
world and leave a beautiful analog trail of water and then flow after you've made the brushstroke,
08:17.760 --> 08:21.600
you can do that. And that's really important for people who want to create something
08:22.720 --> 08:28.160
really expressive or really novel because they have complete control. And then certain other
08:28.160 --> 08:35.600
tasks become automated. That frees the artists up to focus on the inspiration and less on the perspiration.
08:35.600 --> 08:43.920
So thinking about different ideas, obviously, once you finish the design, there's a lot of work to
08:43.920 --> 08:49.840
say, do it for all the different aspect ratios of phones or websites and so on. And that used to
08:49.840 --> 08:55.040
take up an awful lot of time for artists. It still does for many, what we call content velocity.
08:55.040 --> 09:01.360
And one of the targets of AI is actually to reason, from the first example, about what the
09:01.360 --> 09:06.960
likely intent is for these other formats. Maybe if you change the language to German and the words
09:06.960 --> 09:12.160
are longer, how do you reflow everything so that it looks nicely artistic in that way?
09:12.160 --> 09:17.360
And so the person can focus on the really creative bit in the middle, which is what is the look and
09:17.360 --> 09:21.200
style and feel and what's the message and what's the story and the human element.
09:21.200 --> 09:27.920
So I think creativity is changing. So that's one way in which we're trying to just make it easier
09:27.920 --> 09:32.880
and faster and cheaper to do so that there can be more of it, more demand, because it's less
09:32.880 --> 09:39.200
expensive. So everyone wants beautiful artwork for everything from a school website to a Hollywood movie.
09:40.800 --> 09:46.480
On the other side, as some of these things have automatic versions of them, people will
09:46.480 --> 09:52.480
possibly change roles from being the hands-on artist to being either the art director or
09:52.480 --> 09:57.920
the conceptual artist. And then the computer will be a partner to help create polished examples of
09:57.920 --> 10:02.720
the idea that they're exploring. Let's talk about Adobe products versus AI and Adobe products.
10:04.000 --> 10:11.200
Just so you know where I'm coming from, I'm a huge fan of Photoshop for images, Premiere for video,
10:11.200 --> 10:17.200
Audition for audio. I'll probably use Photoshop to create the thumbnail for this video, Premiere
10:17.200 --> 10:24.800
to edit the video, Audition to do the audio. That said, everything I do is really manual. And I
10:24.800 --> 10:30.640
set up, I use this old-school Kinesis keyboard, and I have AutoHotkey; it's really about
10:30.640 --> 10:37.280
optimizing the flow of just making sure there's as few clicks as possible. So just being extremely
10:37.280 --> 10:43.920
efficient. It's something you started to speak to. So before we get into the fun, sort of awesome
10:43.920 --> 10:48.880
deep learning things, where does AI, if you could speak a little more to it, AI or just
10:48.880 --> 10:57.280
automation in general, do you see, in the coming months and years, or in general, here in 2018,
10:58.560 --> 11:03.840
fitting into making the life, the low-level pixel workflow, easier?
11:03.840 --> 11:10.160
Yeah, that's a great question. So we have a very rich array of algorithms already in Photoshop,
11:10.160 --> 11:17.600
just classical procedural algorithms as well as ones based on data. In some cases, they end up
11:17.600 --> 11:22.560
with a large number of sliders and degrees of freedom. So one way in which AI can help is just
11:22.560 --> 11:27.520
an auto button which comes up with default settings based on the content itself rather than
11:27.520 --> 11:34.080
default values for the tool. At that point, you then start tweaking. So that's a very kind of
11:34.080 --> 11:39.600
make life easier for people whilst making use of common sense from other example images.
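A minimal sketch of what such an auto button might look like, assuming a hypothetical regressor trained to map simple image statistics to good slider defaults (every name here is illustrative, not an Adobe API):

    import numpy as np

    def image_features(img):
        # Cheap global statistics standing in for learned features:
        # brightness, contrast, and per-channel color balance.
        gray = img.mean(axis=2)
        return np.array([gray.mean(), gray.std(),
                         img[..., 0].mean(), img[..., 1].mean(), img[..., 2].mean()])

    def auto_defaults(img, model):
        # 'model' is any regressor trained on (features -> slider values)
        # gathered from example edits, e.g. a scikit-learn estimator.
        exposure, contrast, saturation = model.predict([image_features(img)])[0]
        # Content-based starting values instead of fixed tool defaults;
        # the user then tweaks from here.
        return {"exposure": exposure, "contrast": contrast, "saturation": saturation}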
11:39.600 --> 11:40.960
So like smart defaults.
11:40.960 --> 11:47.760
Smart defaults, absolutely. Another one is something we've spent a lot of work over the last
11:47.760 --> 11:53.360
20 years, I've been at Adobe for 19 of them, thinking about selection, for instance, where
11:53.360 --> 11:58.720
you know, with a quick select, you would look at color boundaries and figure out how to sort of
11:58.720 --> 12:02.480
flood fill into regions that you thought were physically connected in the real world.
12:03.360 --> 12:08.080
But that algorithm had no visual common sense about what a cat looks like or a dog. It would just do
12:08.080 --> 12:14.080
it based on rules of thumb, which were applied to graph theory. And it was a big improvement over
12:14.080 --> 12:19.600
the previous work, where you had to sort of almost click everything by hand, or if it just did similar
12:19.600 --> 12:24.880
colors, it would do little tiny regions that wouldn't be connected. But in the future,
12:24.880 --> 12:31.120
using neural nets to actually do a great job with say a single click or even in the case of
12:31.120 --> 12:36.160
well known categories like people or animals, no click, where you just say select the object and
12:36.160 --> 12:41.440
it just knows the dominant object is the person in the middle of the photograph. Those kinds of things
12:41.440 --> 12:47.520
are really valuable if they can be robust enough to give you good quality results.
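For contrast, the classical color-based selection he describes amounts to something like the flood fill below: grow a region from the clicked pixel while neighboring colors stay within a tolerance. A toy sketch, not the actual quick select algorithm, which is graph-based and far more refined:

    from collections import deque
    import numpy as np

    def quick_select(img, seed, tol=30.0):
        # Grow a boolean mask from seed (row, col) over pixels whose color
        # stays within tol of the seed color. img is an HxWx3 array.
        # Note: no visual common sense here, just color boundaries.
        h, w, _ = img.shape
        mask = np.zeros((h, w), dtype=bool)
        seed_color = img[seed].astype(float)
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            if not (0 <= r < h and 0 <= c < w) or mask[r, c]:
                continue
            if np.linalg.norm(img[r, c] - seed_color) > tol:
                continue  # color boundary: stop growing here
            mask[r, c] = True
            queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
        return mask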
12:47.520 --> 12:50.720
Or they can be a great start for like tweaking it.
12:50.720 --> 12:56.080
So for example, background removal, like one thing I'll do, in a thumbnail,
12:56.080 --> 13:00.240
I'll take a picture of you right now and essentially remove the background behind you.
13:00.240 --> 13:06.480
And I want to make that as easy as possible. You don't have flowing hair, like rich at the
13:06.480 --> 13:11.040
moment. Rich? Sort of. I had it in the past, it may come again in the future, but for now.
13:12.320 --> 13:16.160
So that sometimes makes it a little more challenging to remove the background.
13:16.160 --> 13:23.680
How difficult do you think is that problem for AI for basically making the quick selection tool
13:23.680 --> 13:26.960
smarter and smarter and smarter? Well, we have a lot of research on that already.
13:28.400 --> 13:34.160
If you want a sort of quick, cheap and cheerful, look, I'm pretending I'm in Hawaii,
13:34.160 --> 13:38.400
but it's sort of a joke, then you don't need perfect boundaries. And you can do that today
13:38.400 --> 13:44.960
with a single click for the algorithms we have. We have other algorithms where with a little bit
13:44.960 --> 13:48.560
more guidance on the boundaries, like you might need to touch it up a little bit.
13:49.920 --> 13:56.480
We have other algorithms that can pull a nice mat from a crude selection. So we have combinations
13:56.480 --> 14:04.000
of tools that can do all of that. And at our recent MAX conference, Adobe MAX, we demonstrated how
14:04.880 --> 14:09.680
very quickly just by drawing a simple polygon around the object of interest, we could not
14:09.680 --> 14:17.920
only do it for a single still, but we could pull at least a selection mask from a moving target,
14:17.920 --> 14:23.360
like a person dancing in front of a brick wall or something. And so it's going from hours to
14:23.360 --> 14:29.360
a few seconds for workflows that are really nice. And then you might go in and touch up a little.
14:30.240 --> 14:33.040
So that's a really interesting question. You mentioned the word robust.
14:33.040 --> 14:40.000
You know, there's like a journey for an idea, right? And what you presented probably at MAX
14:41.360 --> 14:46.320
has elements of just sort of it inspires the concept, it can work pretty well in a majority
14:46.320 --> 14:51.520
of cases. But how do you make something that works well in a majority of cases? How do you make
14:51.520 --> 14:56.480
something that works maybe in all cases, or becomes a robust tool?
14:56.480 --> 15:01.760
There are a couple of things. So that really touches on the difference between academic research
15:01.760 --> 15:06.640
and industrial research. So in academic research, it's really about who's the person to have the
15:06.640 --> 15:12.480
great new idea that shows promise. And we certainly love to be those people too. But
15:13.120 --> 15:17.840
we have sort of two forms of publishing. One is academic peer review, which we do a lot of,
15:17.840 --> 15:24.720
and we have great success there as much as some universities. But then we also have shipping,
15:24.720 --> 15:30.160
which is a different kind of review, and then we get customer review, as well as, you know, product
15:30.160 --> 15:37.840
critics. And that might be a case where it's not about being perfect every single time, but
15:37.840 --> 15:43.280
perfect enough at the time, plus a mechanism to intervene and recover where you do have mistakes.
15:43.280 --> 15:47.120
So we have the luxury of very talented customers. We don't want them to be
15:48.720 --> 15:55.200
overly taxed doing it every time. But if they can go in and just take it from 99 to 100,
15:55.200 --> 16:01.440
with the touch of a mouse or something, then for the professional end, that's something
16:01.440 --> 16:06.320
that we definitely want to support as well. And for them, it went from having to do that
16:06.320 --> 16:13.360
tedious task all the time to much less often. So I think that gives us an out. If it had to be
16:13.360 --> 16:18.640
100% automatic all the time, then that would delay the time at which we could get to market.
16:18.640 --> 16:26.000
So on that thread, maybe you can untangle something. Again, I'm sort of just speaking to
16:26.000 --> 16:36.000
my own experience. Maybe that is the most useful idea. So I think Photoshop, as an example, or Premiere,
16:37.680 --> 16:44.240
has a lot of amazing features that I haven't touched. And so what's the, in terms of AI,
16:44.240 --> 16:54.080
helping make my life or the life of creatives easier, how does this collaboration between human
16:54.080 --> 16:58.960
and machine, how do you learn to collaborate better? How do you learn the new algorithms?
17:00.000 --> 17:04.240
Is it something that where you have to watch tutorials and you have to watch videos and so
17:04.240 --> 17:10.400
on? Or do you ever think, do you think about the experience itself through exploration being
17:10.400 --> 17:18.320
the teacher? We absolutely do. So I'm glad that you brought this up. We sort of think about
17:18.320 --> 17:22.080
two things. One is helping the person in the moment to do the task that they need to do. But
17:22.080 --> 17:26.960
the other is thinking more holistically about their journey learning a tool. And it's like,
17:26.960 --> 17:31.120
think of it as Adobe University, where you use the tool long enough, you become an expert.
17:31.120 --> 17:34.960
And not necessarily an expert in everything. It's like living in a city. You don't necessarily
17:34.960 --> 17:40.160
know every street, but you know the important ones you need to get to. So we have projects in
17:40.160 --> 17:45.360
research, which actually look at the thousands of hours of tutorials online and try to understand
17:45.360 --> 17:51.040
what's being taught in them. And then we had one publication at CHI where it was looking at,
17:52.560 --> 17:57.360
given the last three or four actions you did, what did other people in tutorials do next?
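One simple way to realize that is an n-gram model over action sequences mined from tutorials: given the user's last few actions, look up what other people most often did next. A sketch, assuming the tutorials have already been parsed into lists of action names:

    from collections import Counter, defaultdict

    def build_next_action_model(tutorials, n=3):
        # Map each n-action context to a histogram of observed next actions.
        model = defaultdict(Counter)
        for actions in tutorials:
            for i in range(len(actions) - n):
                model[tuple(actions[i:i + n])][actions[i + n]] += 1
        return model

    def suggest_next(model, recent, n=3, k=3):
        # The k most common follow-ups to the user's recent actions.
        return [a for a, _ in model[tuple(recent[-n:])].most_common(k)]

    # e.g. model = build_next_action_model([["open", "crop", "levels", "sharpen"]])
    #      suggest_next(model, ["open", "crop", "levels"])  ->  ["sharpen"]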
17:57.360 --> 18:01.680
So if you want some inspiration for what you might do next, or you just want to watch the
18:01.680 --> 18:06.800
tutorial and see, learn from people who are doing similar workflows to you, you can without having
18:06.800 --> 18:13.360
to go and search on keywords and everything. So really trying to use the context of your use of
18:13.360 --> 18:18.640
the app to make intelligent suggestions, either about choices that you might make,
18:20.800 --> 18:26.480
or in a more assistive way where it could say, if you did this next, we could show you. And that's
18:26.480 --> 18:31.360
basically the frontier that we're exploring now, which is, if we really deeply understand the
18:31.360 --> 18:38.480
domain in which designers and creative people work, can we combine that with AI and pattern
18:38.480 --> 18:47.520
matching of behavior to make intelligent suggestions, either through verbal possibilities or just
18:47.520 --> 18:53.840
showing the results of if you try this. And that's really the sort of, I was in a meeting today
18:53.840 --> 18:58.880
thinking about these things. So it's still a grand challenge. We'd all love
18:58.880 --> 19:05.920
an artist over one shoulder and a teacher over the other, right? And we hope to get there. And
19:05.920 --> 19:10.640
the right thing to do is to give enough at each stage that it's useful in itself, but it builds
19:10.640 --> 19:19.120
a foundation for the next level of expectation. Are you aware of this gigantic medium of YouTube
19:19.120 --> 19:26.240
that's creating just a bunch of creative people, both artists and teachers of different kinds?
19:26.240 --> 19:31.440
Absolutely. And the more we can understand those media types, both visually and in terms of
19:31.440 --> 19:37.200
transcripts and words, the more we can bring the wisdom that they embody into the guidance that's
19:37.200 --> 19:42.960
embedded in the tool. That would be brilliant, to remove the barrier of having to yourself type
19:42.960 --> 19:49.600
in the keywords, searching, and so on. Absolutely. And then in the longer term, an interesting
19:49.600 --> 19:54.000
discussion is, does it ultimately not just assist with learning the interface we have,
19:54.000 --> 19:59.200
but does it modify the interface to be simpler? Or do you fragment into a variety of tools,
19:59.200 --> 20:05.600
each of which has a different level of visibility of the functionality? I like to say that if you
20:05.600 --> 20:12.640
add a feature to a GUI, you have to have yet more visual complexity confronting the new user.
20:12.640 --> 20:17.120
Whereas if you have an assistant with a new skill, if you know they have it, so you know
20:17.120 --> 20:23.280
to ask for it, then it's sort of additive without being more intimidating. So we definitely think
20:23.280 --> 20:29.120
about new users and how to onboard them. Many actually value the idea of being able to master
20:29.120 --> 20:34.720
that complex interface and keyboard shortcuts, like you were talking about earlier, because
20:35.520 --> 20:39.680
with great familiarity, it becomes a musical instrument for expressing your visual ideas.
20:40.480 --> 20:45.840
And other people just want to get something done quickly in the simplest way possible,
20:45.840 --> 20:50.400
and that's where a more assistive version of the same technology might be useful,
20:50.400 --> 20:54.560
maybe on a different class of device, which is more in context for capture, say,
20:55.920 --> 21:01.680
whereas somebody who's in a deep post production workflow maybe wants to be on a laptop or a big
21:01.680 --> 21:10.560
screen desktop and have more knobs and dials to really express the subtlety of what they want to do.
21:12.160 --> 21:16.320
So there's so many exciting applications of computer vision and machine learning
21:16.320 --> 21:21.920
that Adobe is working on, like scene stitching, sky replacement, foreground,
21:21.920 --> 21:26.880
background removal, spatial object based image search, automatic image captioning,
21:26.880 --> 21:31.280
like we mentioned, Project Cloak, Project Deep Fill (filling in parts of images),
21:31.920 --> 21:38.640
Project Scribbler, style transfer video, style transfer of faces in video with Project Puppetron,
21:38.640 --> 21:49.040
best name ever. Can you talk through a favorite, or some of them, or examples that pop into mind?
21:49.040 --> 21:54.800
I'm sure I'll be able to provide links to other ones we don't talk about because there's visual
21:54.800 --> 22:00.640
elements to all of them that are exciting. Why they're interesting for different reasons might
22:00.640 --> 22:06.640
be a good way to go. So I think sky replace is interesting because we talked about selection
22:06.640 --> 22:11.440
being sort of an atomic operation. It's almost like a, if you think of an assembly language,
22:11.440 --> 22:17.840
it's like a single instruction. Whereas sky replace is a compound action where you automatically
22:17.840 --> 22:22.720
select the sky, you look for stock content that matches the geometry of the scene.
22:24.000 --> 22:27.600
You try to have variety in your choices so that you do coverage of different moods.
22:28.160 --> 22:35.600
It then mattes in the sky behind the foreground, but then importantly it uses the foreground
22:35.600 --> 22:40.560
of the other image that you just searched on to recolor the foreground of the image that
22:40.560 --> 22:47.840
you're editing. So if you say go from a midday sky to an evening sky, it will actually add
22:47.840 --> 22:53.760
sort of an orange glow to the foreground objects as well. I was a big fan in college of Magritte
22:53.760 --> 22:59.600
and he has a number of paintings where it's surrealism because he'll like do a composite,
22:59.600 --> 23:03.440
but the foreground building will be at night and the sky will be during the day. There's one
23:03.440 --> 23:09.120
called The Empire of Light, which was on my wall in college. And we're trying not to do surrealism.
23:09.120 --> 23:15.680
It can be a choice, but we'd rather have it be natural by default rather than it looking fake
23:15.680 --> 23:19.760
and then you have to do a whole bunch of post production to fix it. So that's a case where
23:19.760 --> 23:25.120
we're kind of capturing an entire workflow into a single action and doing it in about a second
23:25.120 --> 23:30.720
rather than a minute or two. And when you do that, you can not just do it once, but you can do it
23:30.720 --> 23:36.960
for say like 10 different backgrounds and then you're almost back to this inspiration idea of
23:36.960 --> 23:41.840
I don't know quite what I want, but I'll know it when I see it. And you can just explore the
23:41.840 --> 23:47.200
design space as close to final production value as possible. And then when you really pick one,
23:47.200 --> 23:51.120
you might go back and slightly tweak the selection mask just to make it perfect and
23:51.120 --> 23:54.320
do that kind of polish that professionals like to bring to their work.
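The foreground recoloring step described here, adding an evening glow when the sky changes, is essentially statistical color transfer. A minimal sketch of one classic approach, Reinhard-style mean and standard deviation matching, offered as an assumption about how such a step could work, not as Adobe's actual method:

    import numpy as np

    def recolor_foreground(img, new_sky, fg_mask, strength=0.3):
        # img, new_sky: HxWx3 float arrays; fg_mask: HxW boolean mask of the
        # foreground. Shift the foreground's per-channel color statistics
        # toward the new sky's, blended so it reads as a glow, not a repaint.
        out = img.astype(float).copy()
        fg = out[fg_mask]
        sky = new_sky.reshape(-1, 3).astype(float)
        matched = (fg - fg.mean(0)) / (fg.std(0) + 1e-6) * sky.std(0) + sky.mean(0)
        out[fg_mask] = (1 - strength) * fg + strength * matched
        return np.clip(out, 0, 255)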
23:54.320 --> 24:01.040
So then there's this idea of, as you mentioned, replacing the sky with different stock images of
24:01.040 --> 24:07.040
the sky. In general, you have this idea or it could be on your disk or whatever. But making even
24:07.040 --> 24:12.400
more intelligent choices about ways to search stock images which is really interesting. It's
24:12.400 --> 24:19.120
kind of spatial. Absolutely. Right. So that was something we called Concept Canvas. So normally
24:19.120 --> 24:26.240
when you do say an image search, I assume it's just based on text. You would give the keywords
24:26.240 --> 24:30.080
of the things you want to be in the image and it would find the nearest one that had those tags.
24:32.720 --> 24:37.200
For many tasks, you really want to be able to say, I want a big person in the middle, or a
24:37.200 --> 24:41.280
dog to the right, and an umbrella above on the left, because you want to leave space for the text or
24:41.280 --> 24:48.640
whatever. And so Concept Canvas lets you assign spatial regions to the keywords. And then we've
24:48.640 --> 24:54.560
already preindexed the images to know where the important concepts are in the picture. So we then
24:54.560 --> 25:01.200
go through that index matching to assets. And even though it's just another form of search,
25:01.200 --> 25:06.480
because you're doing spatial design or layout, it starts to feel like design. You sort of feel
25:06.480 --> 25:13.120
oddly responsible for the image that comes back as if you invented it a little bit. So it's a good
25:13.120 --> 25:18.960
example where giving enough control starts to make people have a sense of ownership over the
25:18.960 --> 25:23.280
outcome of the event. And then we also have technologies in Photoshop where you physically
25:23.280 --> 25:29.440
can move the dog in post as well. But for Concept Canvas, it was just a very fast way to sort of
25:29.440 --> 25:38.560
loop through and be able to lay things out.
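A toy version of that spatial matching: score each pre-indexed stock image by how well its tagged concept boxes overlap the regions the user drew. Boxes are (x0, y0, x1, y1) in normalized coordinates, and the per-image concept index is assumed to come from a detector:

    def iou(a, b):
        # Intersection-over-union of two boxes (x0, y0, x1, y1).
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter + 1e-9)

    def score_asset(query, asset_index):
        # query: {concept: box} drawn by the user on the canvas.
        # asset_index: {concept: box} precomputed for one stock image.
        scores = [iou(box, asset_index[c]) for c, box in query.items()
                  if c in asset_index]
        if len(scores) < len(query):
            return 0.0  # the asset is missing a requested concept entirely
        return sum(scores) / len(scores)

    # best = max(stock_index, key=lambda name: score_asset(query, stock_index[name]))

In terms of being able to remove objects from a scene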
25:38.560 --> 25:45.920
and fill in the background automatically. So that's extremely exciting. And that's
25:45.920 --> 25:51.200
so neural networks are stepping in there. I just talked this week with Ian Goodfellow.
25:51.200 --> 25:55.360
So GANs for doing that is definitely one approach.
25:55.360 --> 25:59.440
So is that a really difficult problem? Is it as difficult as it looks,
25:59.440 --> 26:06.080
again, to take it to a robust product level? Well, there are certain classes of image for
26:06.080 --> 26:10.800
which the traditional algorithms like Content Aware Fill work really well. Like if you have
26:10.800 --> 26:15.200
a naturalistic texture like a gravel path or something, because it's patch based, it will
26:15.200 --> 26:19.840
make up a very plausible looking intermediate thing and fill in the hole. And then we use some
26:20.960 --> 26:25.200
algorithms to sort of smooth out the lighting so you don't see any brightness contrasts in that
26:25.200 --> 26:29.680
region, or it's gradually ramped from dark to light if it straddles the boundary.
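The patch-based idea is easy to sketch: for each missing pixel, find a similar-looking patch elsewhere in the image and copy from it. A crude version just to show the principle; the real Content-Aware Fill is far more sophisticated, and this sketch assumes the hole sits away from the image border:

    import numpy as np

    def patch_fill(img, hole, psize=7, tries=200, seed=0):
        # img: HxWx3 float array; hole: HxW boolean mask of missing pixels.
        rng = np.random.default_rng(seed)
        out, unknown = img.astype(float).copy(), hole.copy()
        h, w = unknown.shape
        r = psize // 2
        while unknown.any():
            y, x = np.argwhere(unknown)[0]       # next unfilled pixel
            tgt = out[y - r:y + r + 1, x - r:x + r + 1]
            known = ~unknown[y - r:y + r + 1, x - r:x + r + 1]
            best, best_d = None, np.inf
            for _ in range(tries):               # random candidate source patches
                sy = int(rng.integers(r, h - r))
                sx = int(rng.integers(r, w - r))
                if unknown[sy - r:sy + r + 1, sx - r:sx + r + 1].any():
                    continue                     # only copy from known regions
                src = out[sy - r:sy + r + 1, sx - r:sx + r + 1]
                d = ((src - tgt) ** 2)[known].sum()
                if d < best_d:
                    best, best_d = (sy, sx), d
            if best is None:                     # fallback: average known neighbors
                out[y, x] = tgt[known].mean(axis=0)
            else:
                out[y, x] = out[best]
            unknown[y, x] = False
        return out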
26:29.680 --> 26:37.600
Where it gets complicated is if you have to infer invisible structure behind the person in front.
26:37.600 --> 26:41.920
And that really requires a common sense knowledge of the world to know what,
26:42.480 --> 26:47.040
if I see three quarters of a house, do I have a rough sense of what the rest of the house looks
26:47.040 --> 26:51.840
like? If you just fill it in with patches, it can end up sort of doing things that make sense
26:51.840 --> 26:56.480
locally. But you look at the global structure and it looks like it's just sort of crumpled or messed
26:56.480 --> 27:02.800
up. And so what GANs and neural nets bring to the table is this common sense learned from the
27:02.800 --> 27:10.640
training set. And the challenge right now is that the generative methods that can make up
27:10.640 --> 27:14.960
missing holes using that kind of technology are still only stable at low resolutions.
27:15.520 --> 27:19.680
And so you either need to then go from a low resolution to a high resolution using some other
27:19.680 --> 27:24.720
algorithm or we need to push the state of the art and it's still in research to get to that point.
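The workaround mentioned here, generating the fill at a resolution where the network is stable and then going back up, could be wired together like this. The generator is an assumed inpainting model passed in as a function, and the upsample is plain resizing where a super-resolution model could slot in:

    import numpy as np
    from skimage.transform import resize

    def generative_fill(img, hole, generator, lowres=256):
        # img: HxWx3 float array in [0, 1]; hole: HxW boolean mask.
        # generator(image, mask) -> image is an assumed neural inpainter
        # that is only stable at low resolution.
        h, w = hole.shape
        small = resize(img, (lowres, lowres), anti_aliasing=True)
        small_mask = resize(hole.astype(float), (lowres, lowres)) > 0.5
        filled = generator(small, small_mask)      # inpaint where it's stable
        filled_full = resize(filled, (h, w))       # naive upsample back
        out = img.astype(float).copy()
        out[hole] = filled_full[hole]              # keep original pixels elsewhere
        return out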
27:24.720 --> 27:30.720
Right. Of course, if you show it something, say it's trained on houses and then you show it an
27:30.720 --> 27:37.360
octopus, it's not going to do a very good job of showing common sense about octopuses. So
27:39.360 --> 27:45.600
again, you're asking about how you know that it's ready for prime time. You really need a very
27:45.600 --> 27:52.880
diverse training set of images. And ultimately, that may be a case where you put it out there
27:52.880 --> 28:00.400
with some guard rails where you might do a detector which looks at the image and
28:00.960 --> 28:05.280
sort of estimates its own competence, of how good a job this algorithm could do.
28:05.920 --> 28:10.400
So eventually, there may be this idea of what we call an ensemble of experts where
28:11.120 --> 28:15.440
any particular expert is specialized in certain things and then there's sort of
28:15.440 --> 28:19.360
either they vote to say how confident they are about what to do. This is sort of more future
28:19.360 --> 28:24.080
looking or there's some dispatcher which says you're good at houses, you're good at trees.
28:27.120 --> 28:31.520
So I mean, all this adds up to a lot of work because each of those models will be a whole
28:31.520 --> 28:38.320
bunch of work. But I think over time, you'd gradually fill out the set and initially focus
28:38.320 --> 28:41.520
on certain workflows and then sort of branch out as you get more capable.
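Reduced to code, that ensemble-of-experts idea is a dispatch rule: each specialized model also reports how competent it believes it is on this input, and either the most confident expert wins or a guard rail declines. A sketch with assumed models:

    def run_with_guard_rails(image, experts, threshold=0.8):
        # experts: list of (name, competence_fn, model_fn), where
        # competence_fn(image) -> confidence in [0, 1] that this expert can
        # handle the image, and model_fn(image) -> the edited result.
        confidence, name, model = max(
            (comp(image), name, model) for name, comp, model in experts)
        if confidence < threshold:
            # Guard rail: nobody is competent enough; fall back to a classical
            # algorithm or hand control back to the user instead of guessing.
            return None, "low confidence"
        return model(image), name

    # experts = [("houses", house_confidence, house_model),
    #            ("trees", tree_confidence, tree_model)]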
28:41.520 --> 28:48.640
So you mentioned workflows and have you considered maybe looking far into the future?
28:50.000 --> 28:57.680
First of all, using the fact that there is a huge number of people that use Photoshop,
28:57.680 --> 29:03.520
for example, they have certain workflows, being able to collect the information by which
29:04.880 --> 29:09.440
they basically get information about their workflows, about what they need,
29:09.440 --> 29:15.120
what the ways to help them are, whether it is houses or octopuses that people work on more.
29:16.000 --> 29:23.440
Basically getting a bead on what kind of data needs to be annotated and collected for people
29:23.440 --> 29:26.320
to build tools that actually work well for people.
29:26.320 --> 29:31.680
Absolutely. And this is a big topic in the whole world of AI: what data can you gather and why.
29:33.200 --> 29:39.120
At one level, the way to think about it is we not only want to train our customers in how to use
29:39.120 --> 29:44.160
our products, but we want them to teach us what's important and what's useful. At the same time,
29:44.160 --> 29:51.120
we want to respect their privacy and obviously we wouldn't do things without their explicit permission.
29:52.800 --> 29:57.440
And I think the modern spirit of the age around this is you have to demonstrate to somebody
29:57.440 --> 30:02.720
how they're benefiting from sharing their data with the tool. Either it's helping in the short
30:02.720 --> 30:07.120
term to understand their intent so you can make better recommendations or if they're
30:07.120 --> 30:11.840
friendly to your cause, or your tool, or they want to help you evolve quickly because
30:11.840 --> 30:16.320
they depend on you for their livelihood, they may be willing to share some of their
30:17.360 --> 30:23.360
workflows or choices as a dataset to then be trained on.
30:24.720 --> 30:30.560
There are technologies for looking at learning without necessarily storing all the information
30:30.560 --> 30:36.080
permanently so that you can learn on the fly but not keep a record of what somebody did.
30:36.080 --> 30:38.720
So, we're definitely exploring all of those possibilities.
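One concrete form of "learn on the fly but keep no record" is to maintain only running aggregates, so each observation updates the statistics and is then discarded. A toy sketch using Welford's online algorithm, illustrative only, not a description of Adobe's telemetry:

    class RunningStats:
        # Online mean/variance: learns from each value, stores no raw data.
        def __init__(self):
            self.n, self.mean, self.m2 = 0, 0.0, 0.0

        def update(self, x):
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (x - self.mean)
            # x is not retained anywhere after this point.

        def variance(self):
            return self.m2 / (self.n - 1) if self.n > 1 else 0.0

    # e.g. learn the slider values users typically settle on, without
    # logging any individual edit: stats.update(0.42); stats.update(0.57)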
30:38.720 --> 30:45.520
And I think Adobe exists in a space where Photoshop, if I look at the data I've created
30:45.520 --> 30:51.840
and own, I'm less comfortable sharing data with social networks than I am with Adobe because
30:51.840 --> 31:01.360
there's just exactly as you said, there's an obvious benefit for sharing the data that I use
31:01.360 --> 31:05.440
to create in Photoshop because it's helping improve the workflow in the future.
31:05.440 --> 31:06.080
Right.
31:06.080 --> 31:09.440
As opposed to it's not clear what the benefit is in social networks.
31:10.000 --> 31:14.000
It's nice of you to say that. I mean, I think there are some professional workflows where
31:14.000 --> 31:17.360
people might be very protective of what they're doing such as if I was preparing
31:18.240 --> 31:22.640
evidence for a legal case, I wouldn't want any of that, you know,
31:22.640 --> 31:25.440
phoning home to help train the algorithm or anything.
31:26.560 --> 31:30.720
There may be other cases where people, say, have a trial version, or they're doing some,
31:30.720 --> 31:35.280
I'm not saying we're doing this today, but there's a future scenario where somebody has a more
31:35.280 --> 31:40.400
permissive relationship with Adobe where they explicitly say, I'm fine, I'm only doing hobby
31:40.400 --> 31:48.000
projects or things which are non-confidential, and in exchange for some benefit, tangible or
31:48.000 --> 31:51.200
otherwise, I'm willing to share very fine grain data.
31:51.840 --> 31:57.920
So, another possible scenario is to capture relatively crude high level things from more
31:57.920 --> 32:02.160
people and then more detailed knowledge from people who are willing to participate.
32:02.160 --> 32:07.280
We do that today with explicit customer studies where, you know, we go and visit somebody and
32:07.280 --> 32:10.640
ask them to try the tool, and a human observes what they're doing.
32:12.000 --> 32:15.760
In the future, to be able to do that enough to be able to train an algorithm,
32:16.320 --> 32:20.240
we'd need a more systematic process, but we'd have to do it very consciously because
32:21.200 --> 32:24.560
one of the things people treasure about Adobe is a sense of trust
32:24.560 --> 32:28.880
and we don't want to endanger that through overly aggressive data collection.
32:28.880 --> 32:35.520
So, we have a Chief Privacy Officer and it's definitely front and center of thinking about AI
32:35.520 --> 32:39.920
rather than an afterthought. Well, when you start that program, sign me up.
32:39.920 --> 32:41.040
Okay, happy to.
32:42.800 --> 32:48.640
Are there other projects that you wanted to mention, that I didn't perhaps, that pop into mind?
32:48.640 --> 32:51.840
Well, you covered a number. I think you mentioned Project Puppetron.
32:51.840 --> 32:58.480
I think that one is interesting because you might think of Adobe as only thinking in 2D
32:59.760 --> 33:04.800
and that's a good example where we're actually thinking more three dimensionally about how to
33:04.800 --> 33:09.440
assign features to faces so that we can, you know, if you take, so what Puppetron does,
33:09.440 --> 33:16.320
it takes either a still or a video of a person talking and then it can take a painting of somebody
33:16.320 --> 33:20.160
else and then apply the style of the painting to the person who's talking in the video.
33:20.160 --> 33:29.600
And it's unlike the sort of screen-door post-filter effect that you sometimes see online.
33:30.320 --> 33:36.080
It really looks as though it's sort of somehow attached or reflecting the motion of the face.
33:36.080 --> 33:40.320
And so that's the case where even to do a 2D workflow like stylization,
33:40.320 --> 33:44.160
you really need to infer more about the 3D structure of the world.
33:44.160 --> 33:48.560
And I think as 3D computer vision algorithms get better,
33:48.560 --> 33:52.960
initially they'll focus on particular domains like faces where you have a lot of
33:52.960 --> 33:57.680
prior knowledge about structure and you can maybe have a parameterized template that you fit to the
33:57.680 --> 34:03.600
image. But over time, this should be possible for more general content. And it might even be
34:03.600 --> 34:09.360
invisible to the user that you're doing 3D reconstruction under the hood, but it might
34:09.360 --> 34:15.200
then let you do edits much more reliably or correctly than you would otherwise.
34:15.200 --> 34:20.800
And you know, the face is a very important application, right?
34:20.800 --> 34:22.480
So making things work.
34:22.480 --> 34:26.640
And a very sensitive one. If you do something uncanny, it's very disturbing.
34:26.640 --> 34:30.080
That's right. You have to get it right.
34:30.080 --> 34:39.040
So in the space of augmented reality and virtual reality, what do you think is the role of AR and
34:39.040 --> 34:45.360
VR and in the content we consume as people as consumers and the content we create as creators?
34:45.360 --> 34:47.920
No, that's a great question. I think about this a lot too.
34:48.720 --> 34:55.360
So I think VR and AR serve slightly different purposes. So VR can really transport you to an
34:55.360 --> 35:02.400
entire immersive world, no matter what your personal situation is. To that extent, it's a bit like
35:02.400 --> 35:06.720
a really, really widescreen television where it sort of snaps you out of your context and puts you
35:06.720 --> 35:13.360
in a new one. And I think it's still evolving in terms of the hardware. I actually worked on
35:13.360 --> 35:17.680
VR in the 90s, trying to solve the latency and sort of nausea problem, which we did,
35:17.680 --> 35:23.040
but it was very expensive and a bit early. There's a new wave of that now, I think,
35:23.600 --> 35:27.600
and increasingly those devices are becoming all in one rather than something that's tethered to a
35:27.600 --> 35:34.480
box. I think the market seems to be bifurcating into things for consumers and things for professional
35:34.480 --> 35:40.080
use cases, like for architects and people designing where your product is a building and you really
35:40.080 --> 35:45.200
want to experience it better than looking at a scale model or a drawing, I think,
35:45.920 --> 35:50.320
or even than a video. So I think for that, where you need a sense of scale and spatial
35:50.320 --> 35:59.600
relationships, it's great. I think AR holds the promise of sort of taking digital assets off the
35:59.600 --> 36:03.600
screen and putting them in context in the real world on the table in front of you on the wall
36:03.600 --> 36:10.400
behind you. And that has the corresponding need that the assets need to adapt to the physical
36:10.400 --> 36:15.680
context in which they're being placed. I mean, it's a bit like having a live theater troupe come
36:15.680 --> 36:20.880
to your house and put on Hamlet. My mother had a friend who used to do this at Stately Homes in
36:20.880 --> 36:26.640
England for the National Trust. And they would adapt the scenes and even they'd walk the audience
36:26.640 --> 36:32.880
through the rooms to see the action based on the country house they found themselves in for two
36:32.880 --> 36:38.720
days. And I think AR will have the same issue that if you have a tiny table in a big living room
36:38.720 --> 36:44.960
or something, it'll try to figure out what can you change and what's fixed. And there's a little
36:44.960 --> 36:52.560
bit of a tension between fidelity, where if you captured Senior Eye of doing a fantastic ballet,
36:52.560 --> 36:56.640
you'd want it to be sort of exactly reproduced. And maybe all you could do is scale it down.
36:56.640 --> 37:03.760
Whereas somebody telling you a story might be walking around the room doing some gestures
37:03.760 --> 37:07.040
and that could adapt to the room in which they were telling the story.
37:07.760 --> 37:12.800
And do you think fidelity is that important in that space or is it more about the storytelling?
37:12.800 --> 37:17.840
I think it may depend on the characteristic of the media. If it's a famous celebrity,
37:17.840 --> 37:22.480
then it may be that you want to catch every nuance and they don't want to be reanimated by some
37:22.480 --> 37:30.800
algorithm. It could be that if it's really a loveable frog telling you a story and it's
37:30.800 --> 37:34.320
about a princess and a frog, then it doesn't matter if the frog moves in a different way.
37:35.520 --> 37:38.640
I think a lot of the ideas that have sort of grown up in the game world will
37:39.520 --> 37:45.120
now come into the broader commercial sphere once they're needing adaptive characters in AR.
37:45.120 --> 37:51.920
Are you thinking of engineering tools that allow creators to create in the augmented world,
37:52.480 --> 37:56.000
basically making a Photoshop for the augmented world?
37:57.360 --> 38:02.560
Well, we have shown a few demos of sort of taking a Photoshop layer stack and then expanding it into
38:02.560 --> 38:08.640
3D. That's actually been shown publicly as one example in AR. Where we're particularly excited
38:08.640 --> 38:17.120
at the moment is in 3D. 3D design is still a very challenging space. We believe that it's
38:17.120 --> 38:22.800
a worthwhile experiment to try to figure out if AR or immersive makes 3D design more spontaneous.
38:23.360 --> 38:26.960
Can you give me an example of 3D design just like applications?
38:26.960 --> 38:32.080
Well, literally, a simple one would be laying out objects, right? On a conventional screen,
38:32.080 --> 38:35.680
you'd sort of have a plan view and a side view and a perspective view and you sort of be dragging
38:35.680 --> 38:39.440
it around with the mouse and if you're not careful, it would go through the wall and all that.
38:39.440 --> 38:46.400
Whereas if you were really laying out objects, say in a VR headset, you could literally move
38:46.400 --> 38:50.720
your head to see a different viewpoint. They'd be in stereo, so you'd have a sense of depth
38:50.720 --> 38:57.040
because you're already wearing the depth glasses, right? So it would be those sort of big gross
38:57.040 --> 39:01.120
motor, move things around, kind of skills seem much more spontaneous just like they are in the
39:01.120 --> 39:08.320
real world. The frontier for us, I think, is whether that same medium can be used to do fine
39:08.320 --> 39:14.880
grain design tasks, like very accurate constraints on, say, a CAD model or something. That may be
39:14.880 --> 39:19.120
better done on a desktop, but it may just be a matter of inventing the right UI.
39:20.160 --> 39:27.600
So we're hopeful that because there will be this potential explosion of demand for 3D assets
39:27.600 --> 39:32.560
that's driven by AR and more real time animation on conventional screens,
39:34.880 --> 39:40.800
those devices will also help with designing the content as well.
39:40.800 --> 39:46.480
You've mentioned quite a few interesting sort of new ideas. At the same time, there's old
39:46.480 --> 39:51.280
timers like me that are stuck in their old ways. I think I'm the old timer.
39:51.280 --> 39:58.560
Okay. All right. But they oppose all change at all costs. When you're thinking about
39:58.560 --> 40:05.440
creating new interfaces, do you feel the burden of just this giant user base that loves the
40:05.440 --> 40:13.760
current product? So anything new you do, any new idea, comes at a cost in that it'll be resisted?
40:13.760 --> 40:21.200
Well, I think if you have to trade off control for convenience, then our existing user base
40:21.200 --> 40:27.680
would definitely be offended by that. I think if there are some things where you have more convenience
40:27.680 --> 40:34.160
and just as much control, that may be more welcome. We do think about not breaking well known
40:34.160 --> 40:40.640
metaphors for things. So things should sort of make sense. Photoshop has never been a static
40:40.640 --> 40:46.640
target. It's always been evolving and growing. And to some extent, there's been a lot of brilliant
40:46.640 --> 40:50.640
thought along the way of how it works today. So we don't want to just throw all that out.
40:51.840 --> 40:55.600
If there's a fundamental breakthrough, like a single click is good enough to select an object
40:55.600 --> 41:01.760
rather than having to do lots of strokes, that actually fits in quite nicely to the existing
41:01.760 --> 41:07.840
tool set, either as an optional mode or as a starting point. I think where we're looking at
41:07.840 --> 41:13.440
radical simplicity where you could encapsulate an entire workflow with a much simpler UI,
41:14.000 --> 41:18.640
then sometimes that's easier to do in the context of either a different device like a
41:18.640 --> 41:25.120
mobile device where the affordances are naturally different or in a tool that's targeted at a
41:25.120 --> 41:31.040
different workflow where it's about spontaneity and velocity rather than precision. And we have
41:31.040 --> 41:36.720
projects like Rush, which can let you do professional quality video editing for a certain class of
41:36.720 --> 41:47.440
media output that is targeted very differently in terms of users and the experience. And ideally,
41:47.440 --> 41:54.160
people would go, if I'm feeling like doing Premiere, big project, I'm doing a four part
41:54.160 --> 41:59.200
television series. That's definitely a Premiere thing. But if I want to do something to show my
41:59.200 --> 42:05.200
recent vacation, maybe I'll just use Rush because I can do it in the half an hour I have free at
42:05.200 --> 42:12.480
home rather than the four hours I need to do it at work. And for the use cases which we can do well,
42:12.480 --> 42:16.960
it really is much faster to get the same output. But the more professional tools obviously have
42:16.960 --> 42:21.520
a much richer toolkit and more flexibility in what they can do.
42:21.520 --> 42:26.400
And then at the same time, with the flexibility and control, I like this idea of smart defaults,
42:27.040 --> 42:33.520
of using AI to coach you, like what Google has with the I'm Feeling Lucky button.
42:33.520 --> 42:37.360
Right. Or one button kind of gives you a pretty good
42:38.160 --> 42:41.440
set of settings. And then that's almost an educational tool.
42:42.000 --> 42:43.360
Absolutely. Yeah.
42:43.360 --> 42:49.920
To show, because sometimes when you have all this control, you're not sure about the
42:51.040 --> 42:55.600
correlation between the different bars that control different elements of the image and so on.
42:55.600 --> 43:00.400
And sometimes there's a degree of, you don't know what the optimal is.
43:00.400 --> 43:05.360
And then some things are sort of on demand like help, right?
43:05.360 --> 43:06.160
Help, yeah.
43:06.160 --> 43:09.600
I'm stuck. I need to know what to look for. I'm not quite sure what it's called.
43:10.400 --> 43:13.920
And something that was proactively making helpful suggestions or,
43:14.800 --> 43:20.640
you know, you could imagine a "make a suggestion" button where you'd use all of that knowledge
43:20.640 --> 43:25.120
of workflows and everything to maybe suggest something to go and learn about or just to try
43:25.120 --> 43:31.280
or show the answer. And maybe it's not one intelligent default, but it's like a variety
43:31.280 --> 43:33.760
of defaults. And then you go, oh, I like that one.
43:33.760 --> 43:34.960
Yeah. Yeah.
43:34.960 --> 43:35.680
Several options.
43:37.200 --> 43:39.520
So back to poetry.
43:39.520 --> 43:40.640
Ah, yes.
43:40.640 --> 43:42.400
We're going to interleave.
43:43.440 --> 43:48.000
So first few lines of a recent poem of yours before I ask the next question.
43:48.000 --> 43:55.840
Yeah. This is about the smartphone. Today I left my phone at home and went down to the sea.
43:57.120 --> 44:01.600
The sand was soft, the ocean glass, but I was still just me.
44:02.480 --> 44:08.240
So this is a poem about you leaving your phone behind and feeling quite liberated because of it.
44:08.960 --> 44:14.800
So this is kind of a difficult topic and let's see if we can talk about it, figure it out.
44:14.800 --> 44:21.120
But so with the help of AI, more and more, we can create versions of ourselves, versions of
44:21.120 --> 44:31.920
reality that are in some ways more beautiful than actual reality. And some of the creative effort
44:31.920 --> 44:38.880
there is part of creating this illusion. So of course, this is inevitable, but how do you think
44:38.880 --> 44:44.320
we as human beings should adjust to live in this digital world that's partly artificial,
44:44.320 --> 44:51.520
that's better than the world that we lived in a hundred years ago when you didn't have
44:51.520 --> 44:55.840
Instagram and Facebook versions of ourselves online.
44:55.840 --> 44:58.880
Oh, this is sort of showing off better versions of ourselves.
44:58.880 --> 45:04.880
We're using the tooling of modifying the images or even with artificial intelligence
45:04.880 --> 45:12.880
ideas of deep fakes and creating adjusted or fake versions of ourselves and reality.
45:14.080 --> 45:17.600
I think it's an interesting question, given your sort of historical bent on this.
45:19.360 --> 45:24.720
I actually wonder if 18th century aristocrats who commissioned famous painters to paint portraits
45:24.720 --> 45:28.960
of them had portraits that were slightly nicer than they actually looked in practice.
45:28.960 --> 45:29.680
Well played, sir.
45:29.680 --> 45:34.720
So human desire to put your best foot forward has always been true.
45:37.440 --> 45:42.240
I think it's interesting. You sort of framed it in two ways. One is if we can imagine alternate
45:42.240 --> 45:47.280
realities and visualize them, is that a good or bad thing? In the old days, you do it with
45:47.280 --> 45:52.560
storytelling and words and poetry, which still resides sometimes on websites. But
45:52.560 --> 46:00.640
we've become a very visual culture in particular. In the 19th century, we were very much a text
46:00.640 --> 46:07.280
based culture. People would read long tracts. Political speeches were very long. Nowadays,
46:07.280 --> 46:15.280
everything's very kind of quick and visual and snappy. I think it depends on how harmless your
46:15.280 --> 46:23.120
intent is. A lot of it's about intent. So if you have a somewhat flattering photo that you pick
46:23.120 --> 46:30.960
out of the photos that you have in your inbox to say, this is what I look like, it's probably fine.
46:32.160 --> 46:37.040
If someone's going to judge you by how you look, then they'll decide soon enough when they meet
46:37.040 --> 46:45.120
you what the reality is. I think where it can be harmful is if people hold themselves up to an
46:45.120 --> 46:49.520
impossible standard, which they then feel bad about themselves for not meeting. I think that
46:49.520 --> 46:58.400
can definitely be an issue. But I think the ability to imagine and visualize an alternate
46:58.400 --> 47:04.800
reality, which you sometimes then go off and build later, can be a wonderful thing too.
47:04.800 --> 47:10.720
People can imagine architectural styles, then have a startup, make a fortune, and then
47:10.720 --> 47:17.680
build a house that looks like their favorite video game. Is that a terrible thing? I think
47:18.720 --> 47:24.560
I used to worry about exploration actually, that part of the joy of going to the moon
47:24.560 --> 47:30.320
when I was a tiny child, I remember it in grainy black and white, was to discover what it would look
47:30.320 --> 47:35.120
like when you got there. And I think now we have such good graphics for knowing, for visualizing
47:35.120 --> 47:40.960
experience before it happens, that I slightly worry that it may take the edge off actually wanting
47:40.960 --> 47:46.160
to go. Because we've seen it on TV, by the time we finally get to Mars,
47:46.160 --> 47:53.200
we'll be going, oh yeah, it's Mars, that's what it looks like. But then there's the outer exploration,
47:53.200 --> 47:58.480
I mean, I think Pluto was a fantastic recent discovery where nobody had any idea what it
47:58.480 --> 48:04.800
looked like and it was just breathtakingly varied and beautiful. So I think expanding
48:04.800 --> 48:10.800
the ability of the human toolkit to imagine and communicate on balance is a good thing.
48:10.800 --> 48:15.920
I think there are abuses, we definitely take them seriously and try to discourage them.
48:17.440 --> 48:22.960
I think there's a parallel side where the public needs to know what's possible through events like
48:22.960 --> 48:30.720
this, right? So that you don't believe everything you read in print anymore, and it may over time
48:30.720 --> 48:35.440
become true of images as well. Or you need multiple sets of evidence to really believe
48:35.440 --> 48:40.240
something rather than a single media asset. So I think it's a constantly evolving thing.
48:40.240 --> 48:45.680
It's been true forever. There's a famous story about Anne of Cleves and Henry VIII where,
48:47.760 --> 48:52.720
luckily for Anne, they didn't get married, right? So, or they got married and
48:53.840 --> 48:54.560
What's the story?
48:54.560 --> 48:59.200
Oh, so Holbein went and painted a picture and then Henry VIII wasn't pleased and, you know,
48:59.200 --> 49:03.520
history doesn't record whether Anne was pleased, but I think she was pleased not
49:03.520 --> 49:07.920
to be married more than a day or something. So I mean, this has gone on for a long time,
49:07.920 --> 49:13.280
but I think it's just part of the magnification of human capability.
49:14.640 --> 49:20.560
You've kind of built up an amazing research environment here, research culture, research
49:20.560 --> 49:25.600
lab, and you've written that the secret to a thriving research lab is interns. Can you unpack
49:25.600 --> 49:29.520
that a little bit? Oh, absolutely. So a couple of reasons.
49:31.360 --> 49:36.080
As you see, looking at my personal history, there are certain ideas you bond with at a certain
49:36.080 --> 49:40.880
stage of your career and you tend to keep revisiting them through time. If you're lucky,
49:40.880 --> 49:44.880
you pick one that doesn't just get solved in the next five years, and then you're sort of out of
49:44.880 --> 49:49.840
luck. So I think a constant influx of new people brings new ideas with it.
49:49.840 --> 49:56.800
From the point of view of industrial research, because a big part of what we do is really taking
49:56.800 --> 50:03.360
those ideas to the point where they can ship as very robust features, you end up investing a lot
50:03.360 --> 50:08.640
in a particular idea. And if you're not careful, people can get too conservative in what they
50:08.640 --> 50:15.280
choose to do next, knowing that the product teams will want it. And interns let you explore the more
50:15.280 --> 50:22.160
fanciful or unproven ideas in a relatively lightweight way, ideally leading to new publications for
50:22.160 --> 50:28.080
the intern and for the researcher. And it then gives you a portfolio from which to draw: which idea
50:28.080 --> 50:32.720
am I going to try to take all the way through to being robust enough to ship in the next year or two.
50:34.000 --> 50:37.520
So it sort of becomes part of the funnel. It's also a great way for us to
50:38.160 --> 50:42.640
identify future full time researchers, many of our greatest researchers were former interns.
50:42.640 --> 50:48.800
It builds a bridge to university departments so we can get to know and build an enduring relationship
50:48.800 --> 50:53.840
with the professors, and we often give academic gift funds as well, as an acknowledgement of the
50:53.840 --> 51:01.360
value the interns add and of their own collaborations. So it's sort of a virtuous cycle. And then the
51:01.360 --> 51:06.960
long term legacy of a great research lab hopefully will be not only the people who stay but the ones
51:06.960 --> 51:11.520
who move through and then go off and carry that same model to other companies.
51:11.520 --> 51:17.440
And so we believe strongly in industrial research and how it can complement academia and
51:17.440 --> 51:21.920
we hope that this model will continue to propagate and be invested in by other companies,
51:21.920 --> 51:26.800
which makes it harder for us to recruit, of course, but you know, that's the sign of success
51:26.800 --> 51:30.320
and a rising tide lifts all ships in that sense.
51:31.040 --> 51:38.080
And where is the idea born with the interns? Is there brainstorming? Are there discussions
51:38.080 --> 51:46.240
about, you know, like where the ideas come from? Yeah, as I'm asking the question, I
51:46.240 --> 51:51.760
realize how dumb it is, but I'm hoping you have a better answer. It's a question I ask at the
51:51.760 --> 51:59.280
beginning of every summer. So what will happen is we'll send out a call for interns.
52:00.000 --> 52:04.240
We'll have a number of resumes come in, people will contact the candidates, talk to them about
52:04.240 --> 52:09.920
their interests. They'll usually try to find somebody who has a reasonably good match to what
52:09.920 --> 52:14.320
they're already doing, or just has a really interesting domain that they've been pursuing in
52:14.320 --> 52:20.640
their PhD. And we think we'd love to do one of those projects too. And then the intern stays in
52:20.640 --> 52:27.360
touch with the mentors, as we call them. And then they come, and at the end of the first two weeks,
52:27.360 --> 52:32.000
they have to decide. So they'll often have a general sense by the time they arrive.
52:32.000 --> 52:36.080
And we'll have internal discussions about what are all the general
52:36.800 --> 52:41.120
ideas that we're wanting to pursue to see whether two people have the same idea and maybe they
52:41.120 --> 52:47.040
should talk and all that. But then once the intern actually arrives, sometimes the idea goes linearly
52:47.040 --> 52:51.120
and sometimes it takes a giant left turn and we go, that sounded good. But when we thought about
52:51.120 --> 52:54.880
it, there's this other project, or it's already been done and we found this paper where we were
52:54.880 --> 53:02.240
scooped. But we have this other great idea. So it's pretty flexible at the beginning. One of the
53:02.240 --> 53:08.080
questions for research labs is who's deciding what to do, and then who's to blame if it goes
53:08.080 --> 53:15.600
wrong, who gets the credit if it goes right. And so in Adobe, we push the needle very much towards
53:15.600 --> 53:22.960
freedom of choice of projects by the researchers and the interns. But then we reward people based
53:22.960 --> 53:28.000
on impact. So if the projects ultimately end up impacting the products and having papers and so
53:28.000 --> 53:34.400
on. And so your alternative model just to be clear is that you have one lab director who thinks he's
53:34.400 --> 53:38.720
a genius and tells everybody what to do, takes all the credit if it goes well, blames everybody
53:38.720 --> 53:44.800
else if it goes badly. So we don't want that model. And this helps new ideas percolate up.
53:45.440 --> 53:49.840
The art of running such a lab is that there are strategic priorities for the company
53:49.840 --> 53:55.520
and there are areas where we do want to invest in pressing problems. And so it's a little bit of a
53:55.520 --> 54:01.360
trickle-down and filter-up that meets in the middle. And so you don't tell people you have to do X,
54:01.360 --> 54:07.360
but you say X would be particularly appreciated this year. And then people reinterpret X through
54:07.360 --> 54:12.720
the filter of things they want to do and they're interested in. And miraculously, it usually comes
54:12.720 --> 54:18.640
together very well. One thing that really helps is Adobe has a really broad portfolio of products.
54:18.640 --> 54:24.960
So if we have a good idea, there's usually a product team that is intrigued or interested.
54:26.000 --> 54:31.520
So it means we don't have to qualify things too much ahead of time. Once in a while, the product
54:31.520 --> 54:36.880
teams sponsor an extra intern because they have a particular problem that they really care about,
54:36.880 --> 54:41.440
in which case it's a little bit more, we really need one of these. And then we sort of say,
54:41.440 --> 54:45.920
great, I get an extra intern. We find an intern who thinks that's a great problem. But that's not
54:45.920 --> 54:49.520
the typical model. That's sort of the icing on the cake as far as the budget's concerned.
54:51.440 --> 54:55.920
And all of the above end up being important. It's really hard to predict at the beginning of the
54:55.920 --> 55:01.200
summer; we all have high hopes for all of the intern projects. But ultimately, some of them
55:01.200 --> 55:06.480
pay off and some of them sort of are a nice paper, but don't turn into a feature. Others turn out
55:06.480 --> 55:12.080
not to be as novel as we thought, but they'd be a great feature, but not a paper. And then others,
55:12.080 --> 55:16.400
we make a little bit of progress and we realize how much we don't know. And maybe we revisit that
55:16.400 --> 55:22.320
problem several years in a row until finally we have a breakthrough. And then it becomes more
55:22.320 --> 55:30.320
on track to impact a product. Jumping back to a big overall view of Adobe Research, what are you
55:30.320 --> 55:37.360
looking forward to in 2019 and beyond? You mentioned there's a giant suite of products,
55:37.360 --> 55:45.920
a giant suite of ideas, new interns, a large team of researchers.
55:46.960 --> 55:54.400
What do you think the future holds? In terms of the technological breakthroughs?
55:54.400 --> 56:00.960
Technological breakthroughs, especially ones that will make it into products and get to
56:00.960 --> 56:05.920
impact the world. So I think the creative or the analytics assistants that we talked about, where
56:05.920 --> 56:10.320
they're constantly trying to figure out what you're trying to do and how can they be helpful and make
56:10.320 --> 56:16.160
useful suggestions is a really hot topic. And it's very unpredictable as to when it'll be ready,
56:16.160 --> 56:20.720
but I'm really looking forward to seeing how much progress we make against that. I think
56:22.480 --> 56:28.640
some of the core technologies like generative adversarial networks are immensely promising
56:28.640 --> 56:34.720
and seeing how quickly those become practical for mainstream use cases at high resolution with
56:34.720 --> 56:39.840
really good quality is also exciting. And they also have this sort of strange quality where even the
56:39.840 --> 56:45.120
things they get wrong are odd in an interesting way. So it can look like dreaming or something.
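As a rough illustration of the adversarial setup being referred to here, below is a minimal PyTorch sketch of a GAN training step. The toy data, network sizes, and hyperparameters are illustrative assumptions for the sketch, not anything from Adobe's models.

```python
# Minimal sketch of the adversarial objective behind a GAN (PyTorch).
# Network sizes and data are toy illustrations, not a production model.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2

G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()  # D outputs raw logits

def train_step(real_batch):
    batch = real_batch.size(0)
    # 1) Train the discriminator to tell real samples from generated ones.
    z = torch.randn(batch, latent_dim)
    fake = G(z).detach()  # don't backprop into G on the D step
    loss_d = bce(D(real_batch), torch.ones(batch, 1)) + \
             bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # 2) Train the generator to fool the discriminator.
    z = torch.randn(batch, latent_dim)
    loss_g = bce(D(G(z)), torch.ones(batch, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Toy "real" data: points on a noisy ring.
theta = torch.rand(128) * 6.2831853
real = torch.stack([theta.cos(), theta.sin()], dim=1) + 0.05 * torch.randn(128, 2)
print(train_step(real))
```

The "dreaming" quality mentioned above comes from the generator interpolating in its learned latent space rather than copying training samples.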
56:46.160 --> 56:55.040
So that's fascinating. I think internally we have a Sensei platform, which is a way in which
56:55.040 --> 57:01.760
we're pooling our neural net and other intelligence models into a central platform, which can then be
57:01.760 --> 57:07.040
leveraged by multiple product teams at once. So we're in the middle of transitioning from a,
57:07.040 --> 57:11.040
you know, once you have a good idea, you pick a product team to work with and you sort of hand
57:11.040 --> 57:17.520
design it for that use case, to a more sort of Henry Ford model, standing it up in a standard way, which
57:17.520 --> 57:22.880
can be accessed in a standard way, which should mean that the time between a good idea and impacting
57:22.880 --> 57:28.960
our products will be greatly shortened. And when one product has a good idea, many of the other
57:28.960 --> 57:34.240
products can just leverage it too. So it's sort of an economy of scale. So that's more about the
57:34.240 --> 57:39.680
how than the what. But alongside this sort of renaissance in AI, there's a comparable
57:39.680 --> 57:45.280
one in graphics with real time ray tracing and other really exciting emerging technologies.
57:45.280 --> 57:49.920
And when these all come together, you'll sort of basically be dancing with light, right? Where
57:49.920 --> 57:56.080
you'll have real time shadows, reflections, and as if it's a real world in front of you, but then
57:56.080 --> 58:00.880
with all these magical properties brought by AI where it sort of anticipates or modifies itself
58:00.880 --> 58:05.040
in ways that make sense based on how it understands the creative task you're trying to do.
58:06.320 --> 58:12.320
That's a really exciting future for creatives, and for myself too, as a creator. So first of all,
58:12.320 --> 58:17.760
I work in autonomous vehicles. I'm a roboticist. I love robots. And I think you have a fascination
58:17.760 --> 58:23.920
with snakes, both natural snakes and artificial robots. I share your fascination. I mean, their movement
58:23.920 --> 58:31.760
is beautiful, adaptable. The adaptability is fascinating. There are, I looked it up, 2900
58:31.760 --> 58:37.200
species of snakes in the world. Wow. Some 175 are venomous; some are tiny, some are huge.
58:38.880 --> 58:44.560
I saw that there's one that's 25 feet long in some cases. So what's the most interesting thing
58:44.560 --> 58:52.000
that you connect with in terms of snakes, both natural and artificial? Why, what was the connection
58:52.000 --> 58:58.000
with robotics and AI in this particular form of a robot? Well, it actually came out of my work
58:58.000 --> 59:02.880
in the 80s on computer animation, where I started doing things like cloth simulation and
59:02.880 --> 59:07.360
other kinds of soft body simulation. And you'd sort of drop it and it would bounce,
59:07.360 --> 59:10.880
then it would just sort of stop moving. And I thought, well, what if you animate the spring
59:10.880 --> 59:16.160
lengths and simulate muscles? And the simplest object I could do that for was an earthworm.
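To make that idea concrete, here is a minimal NumPy sketch of a mass-spring "worm" whose muscles are simulated by animating each spring's rest length. All constants, the 1D simplification, and the friction model are my own illustrative assumptions, not values from the 1988 paper.

```python
# Sketch: a soft body as point masses and springs, where animating each
# spring's rest length acts like a contracting muscle (peristalsis).
import numpy as np

n = 12                                 # point masses along the body
x = np.linspace(0.0, 1.0, n)           # positions (1D for simplicity)
v = np.zeros(n)                        # velocities
L0 = np.full(n - 1, 1.0 / (n - 1))     # nominal rest lengths
k, damping, dt = 50.0, 2.0, 0.01       # illustrative constants

def step(t, x, v):
    # Muscle activation: a wave of contraction travels down the body,
    # shortening each spring's rest length in turn.
    phase = 2 * np.pi * (t - np.arange(n - 1) / (n - 1))
    rest = L0 * (1.0 - 0.3 * np.clip(np.sin(phase), 0.0, None))
    # Spring forces from each segment's deviation from its rest length.
    stretch = np.diff(x) - rest
    f = np.zeros(n)
    f[:-1] += k * stretch              # pull each mass toward its neighbour
    f[1:] -= k * stretch
    v_new = (v + dt * f) * np.exp(-damping * dt)
    # Crude anisotropic ground friction: backward slip is penalized,
    # so the contraction wave produces net forward motion.
    v_new[v_new < 0] *= 0.2
    return x + dt * v_new, v_new

for i in range(2000):
    x, v = step(i * dt, x, v)
print("net displacement:", x.mean() - 0.5)
```

The asymmetry between forward and backward friction is what converts the internal contraction wave into locomotion, which is the same trick real worms use with their bristles.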
59:16.160 --> 59:21.680
So I actually did a paper in 1988 called The Motion Dynamics of Snakes and Worms. And I
59:21.680 --> 59:27.760
read the physiology literature on how both snakes and worms move and then did some of the early
59:27.760 --> 59:34.640
computer animation examples of that. So your interest in robotics started with graphics?
59:34.640 --> 59:40.960
Came out of simulation and graphics. When I moved from Alias to Apple, we actually did a
59:40.960 --> 59:46.160
movie called Her Majesty's Secret Serpent, which is about a secret agent snake that parachutes in
59:46.160 --> 59:50.320
and captures a film canister from a satellite, which tells you how old fashioned we were thinking
59:50.320 --> 59:57.760
back then, sort of a classic 1950s or 60s Bond movie kind of thing. And at the same time,
59:57.760 --> 1:00:03.120
I'd always made radio-controlled ships from scratch when I was a child. And I thought, well,
1:00:03.120 --> 1:00:08.800
how hard can it be to build a real one? And so then started what turned out to be like a 15 year
1:00:08.800 --> 1:00:14.320
obsession with trying to build better snake robots. And the first one that I built just sort of
1:00:14.320 --> 1:00:19.520
slithered sideways but didn't actually go forward, so I then added wheels. And building things in real
1:00:19.520 --> 1:00:25.840
life makes you honest about the friction. The thing that appeals to me is I love creating the
1:00:25.840 --> 1:00:30.800
illusion of life, which is what drove me to animation. And if you have a robot with
1:00:30.800 --> 1:00:36.320
enough degrees of coordinated freedom that move in a kind of biological way, then it starts to
1:00:36.320 --> 1:00:42.080
cross the uncanny valley and seem like a creature rather than a thing. And I certainly got
1:00:42.080 --> 1:00:50.320
that with the early snakes. By S3, I had it able to sidewind as well as go directly forward. My
1:00:50.320 --> 1:00:54.240
wife-to-be suggested that it would be the ring bearer at our wedding. So it actually went down
1:00:54.240 --> 1:00:59.360
the aisle carrying the rings and got in the local paper for that, which was really fun.
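For readers curious how a snake robot like the S3 mentioned above can both slither and sidewind, a common textbook approach is to drive each joint with a phase-offset sine wave and to lag the vertical joints behind the horizontal ones. The sketch below uses that generic scheme; the parameters and joint layout are assumptions for illustration, not the actual S-series controller.

```python
# Sketch of a common snake-robot gait controller: each joint follows a
# phase-offset sine wave. Running the vertical joint wave 90 degrees
# behind the horizontal one lifts segments as they move, which yields
# sidewinding; zero lag keeps the body in-plane and it slithers.
import math

N_JOINTS = 10  # alternating horizontal/vertical joints

def joint_angles(t, amp_h=40.0, amp_v=20.0, spatial_freq=1.5,
                 temporal_freq=1.0, phase_lag=math.pi / 2):
    """Return target angles (degrees) for all joints at time t (seconds)."""
    angles = []
    for i in range(N_JOINTS):
        offset = spatial_freq * 2 * math.pi * i / N_JOINTS
        if i % 2 == 0:   # horizontal joint: the main travelling wave
            a = amp_h * math.sin(2 * math.pi * temporal_freq * t + offset)
        else:            # vertical joint: same wave, lagged by phase_lag
            a = amp_v * math.sin(2 * math.pi * temporal_freq * t
                                 + offset + phase_lag)
        angles.append(a)
    return angles

# phase_lag = 0      -> planar slithering
# phase_lag = pi/2   -> sidewinding
print([round(a, 1) for a in joint_angles(0.5)])
```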
1:01:00.160 --> 1:01:07.200
And this was all done as a hobby. And at the time, the onboard compute was incredibly
1:01:07.200 --> 1:01:11.840
limited. It was sort of, yes. You should explain that with these snakes, the whole idea is that
1:01:11.840 --> 1:01:18.640
you're trying to run them autonomously. Autonomously, onboard, right. And so
1:01:19.760 --> 1:01:25.280
the very first one, I actually built the controller from discrete logic, because I used to do LSI,
1:01:25.280 --> 1:01:30.640
you know, circuits and things when I was a teenager. And then the second and third one,
1:01:30.640 --> 1:01:35.120
the 8 bit microprocessors were available with like a whole 256 bytes of RAM,
1:01:36.000 --> 1:01:39.840
which you could just about squeeze in. So they were radio controlled rather than autonomous.
1:01:39.840 --> 1:01:44.480
And really, they were more about the physicality and coordinated motion.
1:01:46.560 --> 1:01:51.520
I've occasionally taken a side step into, if only I could make it cheaply enough, it would
1:01:51.520 --> 1:01:59.040
make a great toy, which has been a lesson in how clockwork is its own magical realm that
1:01:59.040 --> 1:02:03.680
you venture into and learn things about backlash and other things you don't take into account as
1:02:03.680 --> 1:02:07.600
a computer scientist, which is why what seemed like a good idea doesn't work. So it's quite
1:02:07.600 --> 1:02:14.160
humbling. And then more recently, I've been building S9, which is a much better engineered
1:02:14.160 --> 1:02:17.760
version of S3 where the motors wore out and it doesn't work anymore. And you can't buy
1:02:17.760 --> 1:02:24.640
replacements, which is sad given that it was such a meaningful one. S5 was about twice as long and
1:02:25.760 --> 1:02:32.960
looked much more biologically inspired. Unlike the typical roboticist, I taper my snakes.
1:02:33.520 --> 1:02:37.040
There are good mechanical reasons to do that, but it also makes them look more biological,
1:02:37.040 --> 1:02:43.280
although it means every segment's unique rather than a repetition, which is why most engineers
1:02:43.280 --> 1:02:49.840
don't do it. It actually saves weight and improves leverage and everything. And that one is currently on
1:02:49.840 --> 1:02:54.480
display at the International Spy Museum in Washington, DC. Not that it has done any spying.
1:02:56.080 --> 1:03:00.000
It was on YouTube and it got its own conspiracy theory where people thought that it wasn't real
1:03:00.000 --> 1:03:04.160
because I work at Adobe, it must be fake graphics. And people would write to me, telling me
1:03:04.160 --> 1:03:11.200
it isn't real. They'd say the background doesn't move, and it's like, it's on a tripod. So that one,
1:03:11.200 --> 1:03:16.960
you can go and see the real thing, so it really is true. And then the latest one is the first one
1:03:16.960 --> 1:03:21.280
where I could put a Raspberry Pi, which leads to all sorts of terrible jokes about pythons and
1:03:21.280 --> 1:03:29.920
things. But this one can have onboard compute. And then where my hobby work and my day job are
1:03:29.920 --> 1:03:36.400
converging is that you can now add vision accelerator chips, which can evaluate neural nets and do
1:03:36.400 --> 1:03:41.600
object recognition and everything. So both for the snakes and more recently for the spider that
1:03:41.600 --> 1:03:48.640
I've been working on, having desktop level compute is now opening up a whole world of
1:03:49.200 --> 1:03:54.880
true autonomy with onboard compute, onboard batteries, and still having that sort of
1:03:54.880 --> 1:04:01.680
biomimetic quality that appeals to children in particular. They are really drawn to them and
1:04:01.680 --> 1:04:08.960
adults think they look creepy, but children actually think they look charming. And I gave a
1:04:08.960 --> 1:04:14.880
series of lectures at Girls Who Code to encourage people to take an interest in technology. And
1:04:14.880 --> 1:04:19.200
at the moment, I'd say they're still more expensive than the value that they add,
1:04:19.200 --> 1:04:22.560
which is why they're a great hobby for me, but they're not really a great product.
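As a concrete sense of the onboard perception described a moment ago, here is a hedged sketch of running a small quantized classifier on a Raspberry Pi-class board with the tflite_runtime package. The model and the preprocessing assumptions (a uint8 quantized network, a pre-resized frame) are hypothetical placeholders, not details of the actual robots.

```python
# Sketch: onboard object recognition on a Pi-class board via TFLite.
# "mobilenet_v2_quant.tflite" is a hypothetical placeholder model file.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="mobilenet_v2_quant.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify(frame_rgb):
    """frame_rgb: HxWx3 uint8 array already resized to the model's input size."""
    interpreter.set_tensor(inp["index"], frame_rgb[np.newaxis, ...])
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return int(np.argmax(scores))  # index of the most likely class

# In a robot loop, a camera frame would be grabbed, resized to match
# inp["shape"], classified, and the result fed to the gait controller.
```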
1:04:22.560 --> 1:04:30.000
It makes me think about doing that very early thing I did at Alias with changing the muscle
1:04:30.000 --> 1:04:35.760
rest lengths. If I could do that with a real artificial muscle material, then the next snake
1:04:35.760 --> 1:04:40.960
ideally would use that rather than motors and gearboxes and everything. It would be lighter,
1:04:40.960 --> 1:04:49.200
much stronger, and more continuous and smooth. So I like to say that being in research is a license
1:04:49.200 --> 1:04:54.640
to be curious, and I have the same feeling with my hobby. It forced me to read biology and
1:04:54.640 --> 1:04:59.680
be curious about things that otherwise would have just been a National Geographic special.
1:04:59.680 --> 1:05:04.320
Suddenly, I'm thinking, how does that snake move? Can I copy it? I look at the trails that
1:05:04.320 --> 1:05:08.640
sidewinding snakes leave in sand and see if my snake robots would do the same thing.
1:05:10.240 --> 1:05:13.840
Out of something inanimate, I like how you try to bring life and beauty into it.
1:05:13.840 --> 1:05:18.240
Absolutely. And then ultimately, give it a personality, which is where the intelligent
1:05:18.240 --> 1:05:25.040
agent research will converge with the vision and voice synthesis to give it a sense of having
1:05:25.040 --> 1:05:29.840
not necessarily human level intelligence. I think the Turing test is such a high bar, it's
1:05:30.480 --> 1:05:36.080
a little bit self defeating, but having one that you can have a meaningful conversation with,
1:05:36.880 --> 1:05:43.040
especially if you have a reasonably good sense of what you can say. So not trying to have it so
1:05:43.040 --> 1:05:50.080
a stranger could walk up and have one, but so as a pet owner or a robot pet owner, you could
1:05:50.080 --> 1:05:53.200
know what it thinks about and what it can reason about.
1:05:53.200 --> 1:05:58.800
Or sometimes just meaningful interaction. If you have the kind of interaction you have with a dog,
1:05:58.800 --> 1:06:02.080
sometimes you might have a conversation, but it's usually one way.
1:06:02.080 --> 1:06:02.640
Absolutely.
1:06:02.640 --> 1:06:06.000
And nevertheless, it feels like a meaningful connection.
1:06:06.800 --> 1:06:12.720
And one of the things that I'm trying to do in the sample audio that I'll play you is beginning
1:06:12.720 --> 1:06:17.680
to get towards the point where the reasoning system can explain why it knows something or
1:06:17.680 --> 1:06:22.320
why it thinks something. And that again, creates the sense that it really does know what it's
1:06:22.320 --> 1:06:29.840
talking about, but also for debugging. As you get more and more elaborate behavior, it's like,
1:06:29.840 --> 1:06:37.120
why did you decide to do that? How do you know that? I think the robot's really
1:06:37.120 --> 1:06:42.560
my muse for helping me think about the future of AI and what to invent next.
1:06:42.560 --> 1:06:47.040
So even at Adobe, that's mostly operating in the digital world.
1:06:47.040 --> 1:06:47.440
Correct.
1:06:47.440 --> 1:06:54.480
Do you ever see a future where Adobe even expands into the more physical world, perhaps?
1:06:54.480 --> 1:07:02.080
So bringing life not just into animations, but bringing life into physical objects, whether it's...
1:07:02.080 --> 1:07:08.640
Well, I'd have to say at the moment it's a twinkle in my eye. I think the more likely thing is that
1:07:08.640 --> 1:07:14.240
we will bring virtual objects into the physical world through augmented reality.
1:07:14.240 --> 1:07:21.360
And many of the ideas that might take five years to build a robot to do, you can do in a few weeks
1:07:21.360 --> 1:07:29.120
with digital assets. So I think when really intelligent robots finally become commonplace,
1:07:29.120 --> 1:07:34.000
they won't be that surprising because we'll have been living with those personalities in the virtual
1:07:34.000 --> 1:07:38.640
sphere for a long time. And then they'll just say, oh, it's Siri with legs or Alexa,
1:07:39.520 --> 1:07:47.760
Alexa on hooves or something. So I can see that being welcomed. And for now, it's still an adventure
1:07:47.760 --> 1:07:53.760
and we don't know quite what the experience will be like. And it's really exciting to sort of see
1:07:53.760 --> 1:07:59.680
all of these different strands of my career converge. Yeah, in interesting ways. And it is
1:07:59.680 --> 1:08:07.920
definitely a fun adventure. So let me end with the last few lines of my favorite
1:08:07.920 --> 1:08:14.560
poem of yours that ponders mortality. And in some sense, immortality, as our ideas live through
1:08:14.560 --> 1:08:21.120
the ideas of others through the work of others, it ends with, do not weep or mourn. It was enough
1:08:21.120 --> 1:08:28.000
the little atomies permitted just a single dance. Scatter them as deep as your eyes can see. I'm
1:08:28.000 --> 1:08:34.320
content they'll have another chance, sweeping more centered parts along to join a jostling,
1:08:34.320 --> 1:08:41.440
lifting throng as others danced in me. Beautiful poem. Beautiful way to end it. Gavin, thank you
1:08:41.440 --> 1:08:45.920
so much for talking today. And thank you for inspiring and empowering millions of people
1:08:45.920 --> 1:08:50.960
like myself for creating amazing stuff. Oh, thank you. It's a great conversation.
|