diff --git "a/distill.jsonl" "b/distill.jsonl" new file mode 100644--- /dev/null +++ "b/distill.jsonl" @@ -0,0 +1,54 @@ +{"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Thread: Differentiable Self-organizing Systems", "authors": ["Alexander Mordvintsev", "Ettore Randazzo", "Eyvind Niklasson", "Michael Levin", "Sam Greydanus"], "date_published": "2020-08-27", "abstract": " Self-organisation is omnipresent on all scales of biological life. From complex interactions between molecules forming structures such as proteins, to cell colonies achieving global goals like exploration by means of the individual cells collaborating and communicating, to humans forming collectives in society such as tribes, governments or countries. The old adage “the whole is greater than the sum of its parts”, often ascribed to Aristotle, rings true everywhere we look. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00027", "text": "\n\n Aristotle, rings true everywhere we look.\n\n \n\n communities.\n\n \n\nArticles & Comments\n\n-------------------\n\n self-organizing systems,\n\n interspersed with critical commentary from several experts in adjacent fields.\n\n The thread will be a living document, with new articles added over time.\n\n Articles and comments are presented below in chronological order:\n\n \n\n### \n\n[Growing Neural Cellular Automata](https://distill.pub/2020/growing-ca/)\n\n### Authors\n\n### Affiliations\n\n[Alexander Mordvintsev](https://znah.net/),\n\n Ettore Randazzo,\n\n [Eyvind Niklasson](https://eyvind.me/),\n\n [Michael Levin](http://www.drmichaellevin.org/)\n\n[Google](https://research.google/),\n\n [Allen Discovery Center](https://allencenter.tufts.edu/)\n\n arbitrary structure starting from a single cell.\n\n [Read Full Article](https://distill.pub/2020/growing-ca/) \n\n### \n\n[Self-classifying MNIST Digits](https://distill.pub/2020/selforg/mnist/)\n\n### Authors\n\n### Affiliations\n\nEttore Randazzo,\n\n [Alexander Mordvintsev](https://znah.net/),\n\n [Eyvind Niklasson](https://eyvind.me/),\n\n [Michael Levin](http://www.drmichaellevin.org/),\n\n [Sam Greydanus](https://greydanus.github.io/about.html)\n\n[Google](https://research.google/),\n\n [Allen Discovery Center](https://allencenter.tufts.edu/),\n\n [Oregon State University and the ML Collective](http://mlcollective.org/)\n\n perturbations with a learned self-correcting classification behaviour.\n\n [Read Full Article](https://distill.pub/2020/selforg/mnist/) \n\n### \n\n[Self-Organising Textures](https://distill.pub/selforg/2021/textures/)\n\n### Authors\n\n### Affiliations\n\n[Eyvind Niklasson](https://eyvind.me/),\n\n [Alexander Mordvintsev](https://znah.net/),\n\n Ettore Randazzo,\n\n [Michael Levin](http://www.drmichaellevin.org/)\n\n[Google](https://research.google/),\n\n [Allen Discovery Center](https://allencenter.tufts.edu/),\n\n \n\n to robust and unexpected behaviors.\n\n [Read Full Article](https://distill.pub/selforg/2021/textures/) \n\n### \n\n### Authors\n\n### Affiliations\n\n[Ettore Randazzo](https://oteret.github.io/),\n\n [Alexander Mordvintsev](https://znah.net/),\n\n [Eyvind Niklasson](https://eyvind.me/),\n\n [Michael Levin](http://www.drmichaellevin.org/)\n\n[Google](https://research.google/),\n\n [Allen Discovery Center](https://allencenter.tufts.edu/),\n\n \n\nThis work takes existing Neural CA models and shows how they can be \n\nadversarially reprogrammed to perform novel tasks. 
\n\nMNIST CA can be deceived into outputting incorrect classifications and \n\nthe patterns in Growing CA can be made to have their shape and colour \n\naltered.\n\n [Read Full Article](https://distill.pub/selforg/2021/adversarial/) \n\n#### This is a living document\n\n Expect more articles on this topic, along with critical comments from\n\n experts.\n\n \n\nGet Involved\n\n------------\n\n Critical\n\n commentary and discussion of existing articles is also welcome. The thread\n\n is organized through the open `#selforg` channel on the\n\n [Distill slack](http://slack.distill.pub/). Articles can be\n\n suggested there, and will be included at the discretion of previous\n\n authors in the thread, or in the case of disagreement by an uninvolved\n\n editor.\n\n \n\n If you would like get involved but don't know where to start, small\n\n projects may be available if you ask in the channel.\n\n \n\nAbout the Thread Format\n\n-----------------------\n\n Part of Distill's mandate is to experiment with new forms of scientific\n\n publishing. We believe that that reconciling faster and more continuous\n\n approaches to publication with review and discussion is an important open\n\n problem in scientific publishing.\n\n \n\n Threads are collections of short articles, experiments, and critical\n\n commentary around a narrow or unusual research topic, along with a slack\n\n channel for real time discussion and collaboration. They are intended to\n\n be earlier stage than a full Distill paper, and allow for more fluid\n\n publishing, feedback and discussion. We also hope they'll allow for wider\n\n participation. Think of a cross between a Twitter thread, an academic\n\n workshop, and a book of collected essays.\n\n \n\n Threads are very much an experiment. We think it's possible they're a\n\n great format, and also possible they're terrible. We plan to trial two\n\n such threads and then re-evaluate our thought on the format.\n\n \n\n", "bibliography_bib": null, "id": "cafded3fa5fd4510cbcb12b5c0f57130"} +{"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "How to Use t-SNE Effectively", "authors": ["Martin Wattenberg", "Fernanda Viégas", "Ian Johnson"], "date_published": "2016-10-13", "abstract": "A popular method for exploring high-dimensional data is something called t-SNE, introduced by van der Maaten and Hinton in 2008 [1]. The technique has become widespread in the field of machine learning, since it has an almost magical ability to create compelling two-dimensonal “maps” from data with hundreds or even thousands of dimensions. Although impressive, these images can be tempting to misread. The purpose of this note is to prevent some common misreadings.", "journal_ref": "distill-pub", "doi": null, "text": "\n\nHow to Use t-SNE Effectively\n\n============================\n\nAlthough extremely useful for visualizing \n\nhigh-dimensional data, t-SNE plots can sometimes be mysterious or \n\nmisleading. 
By exploring how it behaves in simple cases, we can learn to use it more effectively.\n\n[Interactive playground omitted: controls for Points Per Side (20), Perplexity (10), Epsilon (5), with play/pause/refresh and a shareable view link.]\n\n[Martin Wattenberg](http://hint.fm/)\n\n[Google Brain](http://g.co/brain)\n\n[Fernanda Viégas](http://hint.fm/)\n\n[Google Brain](http://g.co/brain)\n\n
[Ian Johnson](http://enjalot.github.io/)\n\n[Google Cloud](http://cloud.google.com/)\n\nOct. 13, 2016\n\n[Citation: Wattenberg, et al., 2016](#citation)\n\nThe technique has become widespread in the field of machine learning, since it has an almost magical ability to create compelling two-dimensional "maps" from data with hundreds or even thousands of dimensions. Although impressive, these images can be tempting to misread. The purpose of this note is to prevent some common misreadings.\n\nWe'll walk through a series of simple examples to illustrate what t-SNE diagrams can and cannot show. The t-SNE technique really is useful—but only if you know how to interpret it.\n\nBefore diving in: if you haven't encountered t-SNE before, here's what you need to know about the math behind it. The goal is to take a set of points in a high-dimensional space and find a faithful representation of those points in a lower-dimensional space, typically the 2D plane. The algorithm is non-linear and adapts to the underlying data, performing different transformations on different regions. Those differences can be a major source of confusion.\n\nA second feature of t-SNE is a tuneable parameter, "perplexity," which says (loosely) how to balance attention between local and global aspects of your data. The parameter is, in a sense, a guess about the number of close neighbors each point has. The perplexity value has a complex effect on the resulting pictures; van der Maaten & Hinton suggest values between 5 and 50. But the story is more nuanced than that. Getting the most from t-SNE may mean analyzing multiple plots with different perplexities.\n\nThat's not the end of the complications. The t-SNE algorithm doesn't always produce similar output on successive runs, for example, and there are additional hyperparameters related to the optimization process.\n\n---\n\n1. Those hyperparameters really matter\n\n--------------------------------------\n\nLet's start with the "hello world" of t-SNE: a data set of two widely separated clusters. To make things as simple as possible, we'll consider clusters in a 2D plane, as shown in the lefthand diagram. (For clarity, the two clusters are color coded.) The diagrams at right show t-SNE plots for five different perplexity values.\n\nWith perplexity values in the range (5 - 50) suggested by van der Maaten & Hinton, the diagrams do show these clusters, although with very different shapes. Outside that range, things get a little weird. With perplexity 2, local variations dominate. The image for perplexity 100, with merged clusters, illustrates a pitfall: for the algorithm to operate properly, the perplexity really should be smaller than the number of points. Implementations can give unexpected behavior otherwise.\n\nEach of the plots above was made with 5,000 iterations with a learning rate (often called "epsilon") of 10, and had reached a point of stability by step 5,000. How much of a difference do those values make? In our experience, the most important thing is to iterate until reaching a stable configuration.\n\nThe images above show five different runs at perplexity 30. The first four were stopped before stability. After 10, 20, 60, and 120 steps you can see layouts with seeming 1-dimensional and even pointlike images of the clusters. If you see a t-SNE plot with strange "pinched" shapes, chances are the process was stopped too early. 
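To make the role of these hyperparameters concrete, here is a minimal sketch of a perplexity sweep using scikit-learn's TSNE. This is an assumption for illustration only: the figures above come from the article's own interactive implementation, not this code, and the synthetic cluster placement below is a guess.

```python
import numpy as np
from sklearn.manifold import TSNE

# Two well-separated 2-D clusters, loosely mirroring the example above.
rng = np.random.RandomState(0)
X = np.concatenate([rng.randn(100, 2), rng.randn(100, 2) + [25.0, 0.0]])

embeddings = {}
for perplexity in [2, 5, 30, 50, 100]:
    tsne = TSNE(n_components=2, perplexity=perplexity, learning_rate=10.0,
                init="random", random_state=0)
    # The iteration count is `n_iter` (older releases) or `max_iter`
    # (newer releases); the plots above used 5,000 steps.
    embeddings[perplexity] = tsne.fit_transform(X)
# Plot each embedding to compare how the layout changes with perplexity.
```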
Unfortunately, \n\nthere's no fixed number of steps that yields a stable result. Different \n\ndata sets can require different numbers of iterations to converge.\n\nAnother natural question is whether different runs with the same \n\nhyperparameters produce the same results. In this simple two-cluster \n\nexample, and most of the others we discuss, multiple runs give the same \n\nglobal shape. Certain data sets, however, yield markedly different \n\ndiagrams on different runs; we'll give an example of one of these later.\n\nFrom now on, unless otherwise stated, we'll show results from 5,000\n\n iterations. That's generally enough for convergence in the (relatively \n\nsmall) examples in this essay. We'll keep showing a range of \n\nperplexities, however, since that seems to make a big difference in \n\nevery case.\n\n---\n\n2. Cluster sizes in a t-SNE plot mean nothing\n\n---------------------------------------------\n\nSo far, so good. But what if the two clusters have different \n\nstandard deviations, and so different sizes? (By size we mean bounding \n\nbox measurements, not number of points.) Below are t-SNE plots for a \n\nmixture of Gaussians in plane, where one is 10 times as dispersed as the\n\n other.\n\nSurprisingly, the two clusters look about same size in the t-SNE \n\nplots.\n\n What's going on? The t-SNE algorithm adapts its notion of \"distance\" \n\nto regional density variations in the data set. As a result, it \n\nnaturally expands dense clusters, and contracts sparse ones, evening out\n\n cluster sizes. To be clear, this is a different effect than the \n\nrun-of-the-mill fact that any dimensionality reduction technique will \n\ndistort distances. (After all, in this example all data was \n\ntwo-dimensional to begin with.) Rather, density equalization happens by \n\ndesign and is a predictable feature of t-SNE.\n\n---\n\n3. Distances between clusters might not mean anything\n\n-----------------------------------------------------\n\nAt perplexity 50, the diagram gives a good sense of the global \n\ngeometry. For lower perplexity values the clusters look equidistant. \n\nWhen the perplexity is 100, we see the global geometry fine, but one of \n\nthe cluster appears, falsely, much smaller than the others.\n\n Since perplexity 50 gave us a good picture in this example, can we \n\nalways set perplexity to 50 if we want to see global geometry?\n\nSadly, no. If we add more points to each cluster, the perplexity \n\nhas to increase to compensate. Here are the t-SNE diagrams for three \n\nGaussian clusters with 200 points each, instead of 50. Now none of the \n\ntrial perplexity values gives a good result.\n\nIt's bad news that seeing global geometry requires fine-tuning \n\nperplexity. Real-world data would probably have multiple clusters with \n\ndifferent numbers of elements. There may not be one perplexity value \n\nthat will capture distances across all clusters—and sadly perplexity is a\n\n global parameter. Fixing this problem might be an interesting area for \n\nfuture research.\n\n---\n\n4. Random noise doesn't always look random.\n\n-------------------------------------------\n\nA classic pitfall is thinking you see patterns in what is really \n\njust random data. Recognizing noise when you see it is a critical skill,\n\n but it takes time to build up the right intuitions. 
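For readers who want to reproduce these experiments, the synthetic data sets used in the surrounding sections take only a few lines. This is a sketch assuming numpy; the cluster centres, and the 2-D choice for the three-cluster example, are illustrative guesses rather than the article's exact settings.

```python
import numpy as np

rng = np.random.RandomState(0)

# Section 2: a 2-D mixture of two Gaussians, one 10 times as dispersed.
two_scales = np.concatenate([rng.randn(50, 2),
                             10.0 * rng.randn(50, 2) + [60.0, 0.0]])

# Section 3: three Gaussian clusters of 50 points each (200 in the
# later variant), here placed at hand-picked 2-D centres.
centres = np.array([[0.0, 0.0], [10.0, 0.0], [50.0, 0.0]])
three_clusters = np.concatenate([rng.randn(50, 2) + c for c in centres])

# Section 4 (described next): 500 points from a unit Gaussian in 100-D.
random_noise = rng.randn(500, 100)
```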
A tricky thing \n\nabout t-SNE is that it throws a lot of existing intuition out the \n\nwindow.\n\n The next diagrams show genuinely random data, 500 points drawn from a \n\nunit Gaussian distribution in 100 dimensions. The left image is a \n\nprojection onto the first two coordinates.\n\nThe plot with perplexity 2 seems to show dramatic clusters. If you \n\nwere tuning perplexity to bring out structure in the data, you might \n\nthink you'd hit the jackpot.\n\nOf course, since we know the cloud of points was generated \n\nrandomly, it has no statistically interesting clusters: those \"clumps\" \n\naren't meaningful. If you look back at previous examples, low perplexity\n\n values often lead to this kind of distribution. Recognizing these \n\nclumps as random noise is an important part of reading t-SNE plots.\n\nThere's something else interesting, though, which may be a win for \n\nt-SNE. At first the perplexity 30 plot doesn't look like a Gaussian \n\ndistribution at all: there's only a slight density difference across \n\ndifferent regions of the cloud, and the points seem suspiciously evenly \n\ndistributed. In fact, these features are saying useful things about \n\nhigh-dimensional normal distributions, which are very close to uniform \n\ndistributions on a sphere: evenly distributed, with roughly equal spaces\n\n between points. Seen in this light, the t-SNE plot is more accurate \n\nthan any linear projection could be.\n\n---\n\n5. You can see some shapes, sometimes\n\n-------------------------------------\n\nIt's rare for data to be distributed in a perfectly symmetric way. \n\nLet's take a look at an axis-aligned Gaussian distribution in 50 \n\ndimensions, where the standard deviation in coordinate i is 1/i. That \n\nis, we're looking at a long-ish ellipsoidal cloud of points.\n\nFor high enough perplexity values, the elongated shapes are easy to\n\n read. On the other hand, at low perplexity, local effects and \n\nmeaningless \"clumping\" take center stage. More extreme shapes also come \n\nthrough, but again only at the right perplexity. For example, here are \n\ntwo clusters of 75 points each in 2D, arranged in parallel lines with a \n\nbit of noise.\n\nEven in the best cases, though, there's a subtle distortion: the \n\nlines are slightly curved outwards in the t-SNE diagram. The reason is \n\nthat, as usual, t-SNE tends to expand denser regions of data. Since the \n\nmiddles of the clusters have less empty space around them than the ends,\n\n the algorithm magnifies them.\n\n---\n\n6. For topology, you may need more than one plot\n\n------------------------------------------------\n\nSometimes you can read topological information off a t-SNE plot, \n\nbut that typically requires views at multiple perplexities.\n\n One of the simplest topological properties is containment. The plots \n\nbelow show two groups of 75 points in 50 dimensional space. Both are \n\nsampled from symmetric Gaussian distributions centered at the origin, \n\nbut one is 50 times more tightly dispersed than the other. The \"small\" \n\ndistribution is in effect contained in the large one.\n\nThe perplexity 30 view shows the basic topology correctly, but \n\nagain t-SNE greatly exaggerates the size of the smaller group of points.\n\n At perplexity 50, there's a new phenomenon: the outer group becomes a \n\ncircle, as the plot tries to depict the fact that all its points are \n\nabout the same distance from the inner group. 
If you looked at this image alone, it would be easy to misread these outer points as a one-dimensional structure.\n\nWhat about more complicated types of topology? This may be a subject dearer to mathematicians than to practical data analysts, but interesting low-dimensional structures are occasionally found in the wild.\n\nConsider a set of points that trace a link or a knot in three dimensions. Once again, looking at multiple perplexity values gives the most complete picture. Low perplexity values give two completely separate loops; high ones show a kind of global connectivity.\n\nThe trefoil knot is an interesting example of how multiple runs affect the outcome of t-SNE. Below are five runs of the perplexity-2 view.\n\nThe algorithm settles twice on a circle, which at least preserves the intrinsic topology. But in three of the runs it ends up with three different solutions which introduce artificial breaks. Using the dot color as a guide, you can see that the first and third runs are far from each other.\n\nFive runs at perplexity 50, however, give results that (up to symmetry) are visually identical. Evidently some problems are easier than others to optimize.\n\n---\n\nConclusion\n\n----------\n\nThere's a reason that t-SNE has become so popular: it's incredibly flexible, and can often find structure where other dimensionality-reduction algorithms cannot. Unfortunately, that very flexibility makes it tricky to interpret. Out of sight from the user, the algorithm makes all sorts of adjustments that tidy up its visualizations.\n\nDon't let the hidden "magic" scare you away from the whole technique, though. The good news is that by studying how t-SNE behaves in simple cases, it's possible to develop an intuition for what's going on.\n\n", "bibliography_bib": null, "id": "5f665ab1cc629a6617fd2c79e9e5436c"} +{"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Visualizing Weights", "authors": ["Chelsea Voss", "Nick Cammarata", "Gabriel Goh", "Michael Petrov", "Ludwig Schubert", "Ben Egan", "Swee Kiat Lim", "Chris Olah"], "date_published": "2021-02-04", "abstract": " This article is part of the Circuits thread, an experimental format collecting invited short articles and critical commentary delving into the inner workings of neural networks. 
", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00024.007", "text": "\n\n![](Visualizing%20Weights_files/multiple-pages.svg)\n\n an experimental format collecting invited short articles and critical \n\ncommentary delving into the inner workings of neural networks.\n\n \n\n[Curve Circuits](https://distill.pub/2020/circuits/curve-circuits/)\n\nIntroduction\n\n------------\n\nThe problem of understanding a neural network is a little bit \n\nlike reverse engineering a large compiled binary of a computer program. \n\nIn this analogy, the weights of the neural network are the compiled \n\nassembly instructions. At the end of the day, the weights are the \n\nfundamental thing you want to understand: how does this sequence of \n\nconvolutions and matrix multiplications give rise to model behavior?\n\nTrying to understand artificial neural networks also has a lot in\n\n common with neuroscience, which tries to understand biological neural \n\nnetworks. As you may know, one major endeavor in modern neuroscience is \n\nmapping the [connectomes](https://en.wikipedia.org/wiki/Connectome)\n\n of biological neural networks: which neurons connect to which. These \n\nconnections, however, will only tell neuroscientists which weights are \n\nnon-zero. Getting the weights – knowing whether a connection excites or \n\ninhibits, and by how much – would be a significant further step. One \n\nimagines neuroscientists might give a great deal to have the access to \n\nweights that those of us studying artificial neural networks get for \n\nfree.\n\nAnd so, it's rather surprising how little attention we actually \n\ngive to looking at the weights of neural networks. There are a few \n\nexceptions to this, of course. It's quite common for researchers to show\n\n pictures of the first layer weights in vision models\n\n (these are directly connected to RGB channels, so they're easy to \n\nunderstand as images). In some work, especially historically, we see \n\nresearchers reason about the weights of toy neural networks by hand. And\n\n we quite often see researchers discuss aggregate statistics of weights.\n\n But actually looking at the weights of a neural network other than the \n\nfirst layer is quite uncommon – to the best of our knowledge, mapping \n\nweights between hidden layers to meaningful algorithms is novel to the \n\ncircuits project.\n\nIn this article, we're focusing on visualizing weights. But \n\npeople often visualize activations, attributions, gradients, and much \n\nmore. How should we think about the meaning of visualizing these \n\ndifferent objects?\n\n* **Activations:** We generally think of these as being \"what\" \n\nthe network saw. If understanding a neural network is like reverse \n\ncompiling a computer program, the neurons are the variables, and the \n\nactivations are the values of those variables.\n\n* **Weights:** We generally think of these as being \"how\" the \n\nneural network computes one layer from the previous one. In the reverse \n\nengineering analogy, these are compiled assembly instructions.\n\n often think of this as \"why\" the neuron fired. We need to be careful \n\nwith attributions, because they're a human-defined object on top of a \n\nneural network rather than a fundamental object. They aren't always well\n\n defined, and people mean different things by them. 
(They are very well \n\ndefined if you are only operating across adjacent layers!)\n\nWhy it's non-trivial to study weights in hidden layers\n\n------------------------------------------------------\n\nIt seems to us that there are three main barriers to making sense\n\n of the weights in neural networks, which may have contributed to \n\nresearchers tending to not directly inspect them:\n\n* **Lack of Contextualization:** Researchers often visualize \n\nweights in the first layer, because they are linked to RGB values that \n\nwe understand. That connection makes weights in the first layer \n\nmeaningful. But weights between hidden layers are meaningless by \n\ndefault: knowing nothing about either the source or the destination, how\n\n can we make sense of them?\n\n* **Indirect Interaction:** Sometimes, the meaningful weight \n\ninteractions are between neurons which aren't literally adjacent in a \n\nneural network. For example, in a residual network, the output of one \n\nneuron can pass through the additive residual stream and linearly \n\ninteract with a neuron much later in the network. In other cases, \n\nneurons may interact through intermediate neurons without significant \n\nnonlinear interactions. How can we efficiently reason about these \n\ninteractions?\n\n* **Dimensionality and Scale:** Neural networks have lots of \n\nneurons. Those neurons connect to lots of other neurons. There's a lot \n\nof data to display! How can we reduce it to a human-scale amount of \n\ninformation?\n\n The goal of this article is to show how similar ideas can be applied to\n\n weights instead of activations. Of course, we've already implicitly \n\nused these methods in various circuit articles,\n\n but in those articles the methods have been of secondary interest to \n\nthe results. It seems useful to give some dedicated discussion to the \n\nmethods.\n\nAside: One Simple Trick\n\n-----------------------\n\nInterpretability methods often fail to take off because they're \n\nhard to use. So before diving into sophisticated approaches, we wanted \n\nto offer a simple, easy to apply method.\n\n is large. (If this is the first convolutional layer, visualize it as \n\n![](Visualizing%20Weights_files/screenshot_1.png)\n\n[1](#figure-1):\n\n NMF of input weights in InceptionV1 `mixed4d_5x5`, \n\nfor a selection of ten neurons. The red, green, and blue channels on \n\neach grid indicate the weights for each of the 3 NMF factors.\n\n \n\nThis visualization doesn't tell you very much about what your \n\nweights are doing in the context of the larger model, but it does show \n\nyou that they are learning nice spatial structures. This can be an easy \n\nsanity check that your neurons are learning, and a first step towards \n\nunderstanding your neuron's behavior. We'll also see later that this \n\ngeneral approach of factoring weights can be extended into a powerful \n\ntool for studying neurons.\n\nDespite this lack of contextualization, one-sided NMF can be a \n\ngreat technique for investigating multiple channels at a glance. One \n\nthing you may quickly discover using this method is that, in models with\n\n global average pooling at the end of their convolutional layers, the \n\nlast few layers will have all their weights be horizontal bands.\n\n![](Visualizing%20Weights_files/screenshot_2.png)\n\n[2](#figure-2):\n\n Horizontally-banded weights in InceptionV1 `mixed5b_5x5`,\n\n for a selection of eight neurons. 
As in Figure 1, the red, green, and \n\nblue channels on each grid indicate the weights for each of the 3 NMF \n\nfactors.\n\n \n\nContextualizing Weights with Feature Visualization\n\n--------------------------------------------------\n\n The challenge of contextualization is a recurring challenge in \n\nunderstanding neural networks: we can easily observe every activation, \n\nevery weight, and every gradient; the challenge lies in determining what\n\n those values represent.\n\n`[relative x position, relative y position,\n\n input channels, output channels]`\n\nIf we fix the input channel and the output channel, we get a 2D \n\narray we can present with traditional data visualization. Let's assume \n\nwe know which neuron we're interested in understanding, so we have the \n\noutput channel. We can pick the input channels with high magnitude \n\nweights to our output channel.\n\nBut what does the input represent? What about the output?\n\nThe key trick is that techniques like feature visualization\n\n (or deeper investigations of neurons) can help us understand what the \n\ninput and output neurons represent, contextualizing the graph. Feature \n\nvisualizations are especially attractive because they're automatic, and \n\nproduce a single image which is often very informative about the neuron.\n\n As a result, we often represent neurons as feature visualizations in \n\nweight diagrams.\n\n[3](#figure-3): Contextualizing weights.\n\n \n\nWe can liken this to how, when reverse-engineering a normal \n\ncompiled computer program, one would need to start assigning variable \n\nnames to the values stored in registers to keep track of them. Feature \n\nvisualizations are essentially automatic variable names for neurons, \n\nwhich are roughly analogous to those registers or variables.\n\n### Small Multiples\n\n \n\nAnd if we have two families of related neurons interacting, it \n\ncan sometimes even be helpful to show the weights between all of them as\n\n a grid of small multiples:\n\nAdvanced Approaches to Contextualization with Feature Visualization\n\n-------------------------------------------------------------------\n\nAlthough we most often use feature visualization to visualize \n\nneurons, we can visualize any direction (linear combination of neurons).\n\n This opens up a very wide space of possibilities for visualizing \n\nweights, of which we'll explore a couple particularly useful ones.\n\n### Visualizing Spatial Position Weights\n\n matrix. But an alternative approach is to think of there as being a \n\nvector over input neurons at each spatial position, and to apply feature\n\n visualization to each of those vectors. You can think of this as \n\ntelling us what the weights in that position are collectively looking \n\nfor.\n\n![](Visualizing%20Weights_files/screenshot_6.png)\n\n[6](#figure-6). **Left:** Feature visualization of a car neuron. **Right:**\n\n Feature visualizations of the vector over input neurons at each spatial\n\n position of the car neuron's weights. As we see, the car neuron broadly\n\n responds to window features above wheel features.\n\n \n\n from Building Blocks. It can be a nice, high density way to get an \n\noverview of what the weights for one neuron are doing. 
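Stepping back to the raw weight tensor for a moment: the layout described above, together with the one-sided NMF trick from earlier, can be sketched in a few lines. This assumes a `[kh, kw, in_channels, out_channels]` convolution kernel held as a numpy array and scikit-learn's NMF; the function and variable names are illustrative, not from the original codebase.

```python
import numpy as np
from sklearn.decomposition import NMF

def inspect_output_channel(W, out_ch, top_k=5, n_factors=3):
    """W: convolution kernel of shape [kh, kw, in_channels, out_channels]."""
    w = W[:, :, :, out_ch]                       # [kh, kw, in_channels]
    kh, kw, n_in = w.shape

    # Input channels with the largest-magnitude connections to out_ch,
    # a natural starting point for a weight diagram.
    strength = np.abs(w).sum(axis=(0, 1))
    top_inputs = np.argsort(-strength)[:top_k]

    # One-sided NMF over the input-channel dimension (absolute values,
    # since NMF needs non-negative data): each of the n_factors spatial
    # maps can then be shown as one of the red/green/blue channels.
    flat = np.abs(w).reshape(kh * kw, n_in)
    nmf = NMF(n_components=n_factors, init="nndsvd", max_iter=500)
    spatial_maps = nmf.fit_transform(flat).reshape(kh, kw, n_factors)
    return top_inputs, spatial_maps
```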
However, this spatial-position view will be unable to capture cases where one position responds to multiple very different things, as in a multi-faceted or polysemantic neuron.\n\n### Visualizing Weight Factors\n\nFeature visualization can also be applied to factorizations of the weights, which we briefly discussed earlier. This is the weight analogue to the "Neuron Groups" visualization from Building Blocks.\n\nFamilies of related neurons – such as high-low frequency detectors or black and white vs color detectors – are all mostly looking for a small number of factors. For example, a large number of high-low frequency detectors can be significantly understood as combining just two factors – a high frequency factor and a low-frequency factor – in different patterns.\n\n[7](#figure-7): The HF-factor and LF-factor for a selection of `mixed3a` neurons, shown over layer `conv2d2`. [Figure grid omitted.]\n\n[Figure omitted: each factor expressed as a weighted combination of `conv2d2` neurons, with top contributions of 0.93, 0.73, 0.66, 0.59, 0.55 (HF-factor) and 0.44, 0.41, 0.38, 0.36, 0.34 (LF-factor).]\n\nDealing with Indirect Interactions\n\n----------------------------------\n\nAs we mentioned earlier, sometimes the meaningful weight interactions are between neurons which aren't literally adjacent in a neural network, or where the weights aren't directly represented in a single weight tensor. A few examples:\n\n* In a residual network, the output of one neuron can pass through the additive residual stream and linearly interact with a neuron much later in the network.\n\n* In a bottleneck architecture, neurons in the bottleneck may primarily be a low-rank projection of neurons from the previous layer.\n\n* An intermediate layer may simply not introduce much non-linear behavior, leaving two neurons in non-adjacent layers with a significant linear interaction.\n\nAs a result, we often work with "expanded weights" – that is, the result of multiplying adjacent weight matrices, potentially ignoring non-linearities. We generally implement expanded weights by taking gradients through our model, ignoring or replacing all non-linear operations with the closest linear one.\n\nThese expanded weights have the following properties:\n\n* If two layers interact **linearly**, the expanded weights will give the true linear map, even if the model doesn't explicitly represent the weights in a single weight matrix.\n\n* If two layers interact **non-linearly**, the expanded weights can be seen as the expected value of the gradient up to a constant factor, under the assumption that all neurons have an equal (and independent) probability of firing.\n\nThey also have one additional benefit, which is more of an implementation detail: because they're implemented in terms of gradients, you don't need to know how the weights are represented. For example, in TensorFlow, you don't need to know which variable object represents the weights.
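A minimal sketch of one way to implement this gradient-based idea, assuming a PyTorch-style stack of modules; the original work used TensorFlow, and the exact linearisation it used may differ from the ReLU-to-Identity swap shown here.

```python
import torch
import torch.nn as nn

def expanded_weights(layers, in_shape, out_channel):
    """Multiply through intermediate layers by differentiating a
    linearised copy of them (here, ReLUs swapped for Identity)."""
    linearised = nn.Sequential(*[
        nn.Identity() if isinstance(m, nn.ReLU) else m for m in layers
    ])
    x = torch.zeros(1, *in_shape, requires_grad=True)    # [C, H, W] activations
    y = linearised(x)
    # Gradient of the centre position of one output channel with respect
    # to every input activation: the "expanded" weights for that neuron.
    centre = y[0, out_channel, y.shape[2] // 2, y.shape[3] // 2]
    (grad,) = torch.autograd.grad(centre, x)
    return grad[0]                                        # [C, H, W]
```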
This can be a significant convenience when \n\nyou're working with unfamiliar models!\n\n### Benefits of Expanded Weights\n\n \n\nExpanding out the weights allows us to see an important aggregate\n\n effect of these connections: together, they look for the absence of \n\ncolor in the center one layer further back.\n\n \n\nA particularly important use of this method – which we've been \n\nimplicitly using in earlier examples – is to jump over \"bottleneck \n\nlayers.\" Bottleneck layers are layers of the network which squeeze the \n\nnumber of channels down to a much smaller number, typically in a branch,\n\n of InceptionV1 are one example. Since so much information is \n\ncompressed, these layers are often polysemantic, and it can often be \n\nmore helpful to jump over them and understand the connection to the \n\nwider layer before them.\n\n### Cases where expanded weights are misleading\n\n \n\n excited by high-frequency patterns on one side and inhibited on the \n\nother (and vice versa for low frequency), detecting both directions \n\nmeans that the expanded weights cancel out! As a result, expanded \n\nweights appear to show that boundary detectors are neither excited or \n\ninhibited by high frequency detectors two layers back, when in fact they\n\n are *both* excited and also inhibited by high frequency, depending\n\n on the context, and it's just that those two different cases cancel \n\nout.\n\n[12](#figure-12).\n\n \n\nMore sophisticated techniques for describing multi-layer \n\ninteractions can help us understand cases like this. For example, one \n\ncan determine what the \"best case\" excitation interaction between two \n\nneurons is (that is, the maximum achievable gradient between them). Or \n\nyou can look at the gradient for a particular example. Or you can factor\n\n the gradient over many examples to determine major possible cases. \n\nThese are all useful techniques, but we'll leave them for a future \n\narticle to discuss.\n\n### Qualitative properties\n\nOne qualitative property of expanding weights across many layers \n\ndeserves mention before we end our discussion of them. Expanded weights \n\noften get this kind of \"electron orbital\"-like smooth spatial \n\nstructures:\n\n Although the exact structures present may vary from neuron to \n\nneuron, this example is not cherry-picked: this smoothness is typical of\n\n most multiple-layer expanded weights.\n\n \n\nDimensionality and Scale\n\n------------------------\n\nSo far, we've addressed the challenges of contextualization and \n\nindirection interactions. But we've only given a bit of attention to our\n\n third challenge of dimensionality and scale. Neural networks contain \n\nmany neurons and each one connects to many others, creating a huge \n\namount of weights. How do we pick which connections between neurons to \n\nlook at?\n\nFor the purposes of this article, we'll put the question of which\n\n neurons we want to study outside of our scope, and only discuss the \n\nproblem of picking which connections to study. (We may be trying to \n\ncomprehensively study a model, in which case we want to study all \n\nneurons. But we might also, for example, be trying to study neurons \n\nwe've determined related to some narrower aspect of model behavior.)\n\nGenerally, we chose to look at the largest weights, as we did at \n\nthe beginning of the section on contextualization. Unfortunately, there \n\ntends to be a long tail of small weights, and at some point it generally\n\n gets impractical to look at these. 
How much of the story is really \n\nhiding in these small weights? We don't know, but polysemantic neurons \n\nsuggest there could be a very important and subtle story hiding here! \n\nThere's some hope that sparse neural networks might make this much \n\nbetter, by getting rid of small weights, but whether such conclusions \n\ncan be drawn about non-sparse networks is presently speculative.\n\nAn alternative strategy that we've brushed on a few times is to \n\nreduce your weights into a few components and then study those factors \n\n(for example, with NMF). Often, a very small number of components can \n\nexplain much of the variance. In fact, sometimes a small number of \n\nfactors can explain the weights of an entire set of neurons! Prominent \n\nexamples of this are high-low frequency detectors (as we saw earlier) \n\nand black and white vs color detectors.\n\nHowever, this approach also has downsides. Firstly, these \n\ncomponents can be harder to understand and even polysemantic. For \n\nexample, if you apply the basic version of this method to a boundary \n\ndetector, one component will contain both high-to-low and low-to-high \n\nfrequency detectors which will make it hard to analyze. Secondly, your \n\nfactors no longer align with activation functions, which makes analysis \n\nmuch messier. Finally, because you will be reasoning about every neuron \n\nin a different basis, it is difficult to build a bigger picture view of \n\nthe model unless you convert your components back to neurons.\n\n![](Visualizing%20Weights_files/multiple-pages.svg)\n\n an experimental format collecting invited short articles and critical \n\ncommentary delving into the inner workings of neural networks.\n\n \n\n[Curve Circuits](https://distill.pub/2020/circuits/curve-circuits/)\n\n", "bibliography_bib": [{"title": "Imagenet classification with deep convolutional neural networks"}, {"title": "Understanding neural networks through deep visualization"}, {"title": "Visualizing and understanding convolutional networks"}, {"title": "The Building Blocks of Interpretability"}, {"title": "Zoom In: An Introduction to Circuits"}, {"title": "An Overview of Early Vision in InceptionV1"}, {"title": "Curve Detectors"}, {"title": "Feature Visualization"}, {"title": "Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks"}, {"title": "Visualizing and understanding recurrent networks"}, {"title": "Visualizing higher-layer features of a deep network"}], "id": "66eb4c6f6f37c2ecf17379961d1263a7"} +{"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Self-Organising Textures", "authors": ["Eyvind Niklasson", "Alexander Mordvintsev", "Ettore Randazzo", "Michael Levin"], "date_published": "2021-02-11", "abstract": " This article is part of the Differentiable Self-organizing Systems Thread, an experimental format collecting invited short articles delving into differentiable self-organizing systems, interspersed with critical commentary from several experts in adjacent fields. 
", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00027.003", "text": "\n\n### Contents\n\n* [NCA as pattern generators](#nca-as-pattern-generators)\n\n* [Related work](#related-work)\n\n[Feature Visualization](#feature-visualization)\n\n* [NCA with Inception](#nca-with-inception)\n\n[Other interesting findings](#other-interesting-findings)\n\n* [Robustness](#robustness)\n\n* [Hidden States](#hidden-states)\n\n[Conclusion](#conclusion)\n\n![](Self-Organising%20Textures_files/multiple-pages.svg)\n\n This article is part of the\n\n an experimental format collecting invited short articles delving into\n\n differentiable self-organizing systems, interspersed with critical\n\n commentary from several experts in adjacent fields.\n\n \n\n[Self-classifying MNIST Digits](https://distill.pub/2020/selforg/mnist/)\n\n The inductive bias imposed by using cellular automata is powerful. A \n\nsystem of individual agents running the same learned local rule can \n\nsolve surprisingly complex tasks. Moreover, individual agents, or cells,\n\n can learn to coordinate their behavior even when separated by large \n\ndistances. By construction, they solve these tasks in a massively \n\n way. Each cell must be able to take on the role of any other cell - as a\n\n result they tend to generalize well to unseen situations.\n\nIn this work, we apply NCA to the task of texture synthesis. This \n\ntask involves reproducing the general appearance of a texture template, \n\nas opposed to making pixel-perfect copies. We are going to focus on \n\ntexture losses that allow for a degree of ambiguity. After training NCA \n\nmodels to reproduce textures, we subsequently investigate their learned \n\nbehaviors and observe a few surprising effects. Starting from these \n\ninvestigations, we make the case that the cells learn distributed, \n\nlocal, algorithms. \n\nPatterns, textures and physical processes\n\n-----------------------------------------\n\n![](Self-Organising%20Textures_files/zebra.jpg)\n\nA pair of Zebra. Zebra are said to have unique stripes.\n\nZebra stripes are an iconic texture. Ask almost anyone to identify \n\nzebra stripes in a set of images, and they will have no trouble doing \n\nso. Ask them to describe what zebra stripes look like, and they will \n\ngladly tell you that they are parallel stripes of slightly varying \n\nwidth, alternating in black and white. And yet, they may also tell you \n\nthat no two zebra have the same set of stripes\n\n Perhaps an apocryphal claim, but at the very lowest level every zebra \n\nwill be unique. Ourp point is - \"zebra stripes\" as a concept in human \n\nunderstanding refers to the general structure of a black and white \n\nstriped pattern and not to a specific mapping from location to colour..\n\n This is because evolution has programmed the cells responsible for \n\ncreating the zebra pattern to generate a pattern of a certain quality, \n\nwith certain characteristics, as opposed to programming them with the \n\nblueprints for an exact bitmap of the edges and locations of stripes to \n\nbe moulded to the surface of the zebra's body.\n\nPut another way, patterns and textures are ill-defined concepts. The \n\nCambridge English Dictionary defines a pattern as \"any regularly \n\nrepeated arrangement, especially a design made from repeated lines, \n\nshapes, or colours on a surface\". This definition falls apart rather \n\nquickly when looking at patterns and textures that impart a feeling or \n\nquality, rather than a specific repeating property. 
A coloured fuzzy \n\nrug, for instance, can be considered a pattern or a texture, but is \n\ncomposed of strands pointing in random directions with small random \n\nvariations in size and color, and there is no discernable regularity to \n\nthe pattern. Penrose tilings do not repeat (they are not translationally\n\n invariant), but show them to anyone and they'll describe them as a \n\npattern or a texture. Most patterns in nature are outputs of locally \n\ninteracting processes that may or may not be stochastic in nature, but \n\nare often based on fairly simple rules. There is a large body of work on\n\n models which give rise to such patterns in nature; most of it is \n\ninspired by Turing's seminal paper on morphogenesis. \n\nSuch patterns are very common in developmental biology .\n\n In addition to coat colors and skin pigmentation, invariant large-scale\n\n patterns, arising in spite of stochastic low-level dynamics, are a key \n\nfeature of peripheral nerve networks, vascular networks, somites (blocks\n\n of tissue demarcated in embryogenesis that give rise to many organs), \n\nand segments of anatomical and genetic-level features, including whole \n\nbody plans (e.g., snakes and centipedes) and appendages (such as \n\ndemarcation of digit fields within the vertebrate limb).\n\n These kinds of patterns are generated by reaction-diffusion processes, \n\nbioelectric signaling, planar polarity, and other cell-to-cell \n\ncommunication mechanisms.\n\n Patterns in biology are not only structural, but also physiological, as\n\n in the waves of electrical activity in the brain and the dynamics of \n\ngene regulatory networks. These gene regulatory networks, for example, \n\ncan support computation sufficiently sophisticated as to be subject to \n\nLiar paradoxes See [liar paradox](https://en.wikipedia.org/wiki/Liar_paradox).\n\n In principle, gene regulatory networks can express paradoxical \n\nbehaviour, such as that expression of factor A represses the expression \n\n Studying the emergence and control of such patterns can help us to \n\nunderstand not only their evolutionary origins, but also how they are \n\nrecognized (either in the visual system of a second observer or in \n\nadjacent cells during regeneration) and how they can be modulated for \n\nthe purposes of regenerative medicine.\n\nAs a result, when having any model learn to produce textures or \n\npatterns, we want it to learn a generative process for the pattern. We \n\ncan think of such a process as a means of sampling from the distribution\n\n governing this pattern. The first hurdle is to choose an appropriate \n\nloss function, or qualitative measure of the pattern. To do so, we \n\nemploy ideas from Gatys et. al .\n\n NCA become the parametrization for an image which we \"stylize\" in the \n\nstyle of the target pattern. In this case, instead of restyling an \n\nexisting image, we begin with a fully unconstrained setting: the output \n\nof an untrained, randomly initialized, NCA. The NCA serve as the \n\n\"renderer\" or \"generator\", and a pre-trained differentiable model serves\n\n as a distinguisher of the patterns, providing the gradient necessary \n\nfor the renderer to learn to produce a pattern of a certain style.\n\n### From Turing, to Cellular Automata, to Neural Networks\n\nNCA are well suited for generating textures. To understand why, we'll\n\n demonstrate parallels between texture generation in nature and NCA. 
\n\nGiven these parallels, we argue that NCA are a good model class for \n\ntexture generation.\n\n#### PDEs\n\nIn \"The Chemical Basis of Morphogenesis\" ,\n\n Alan Turing suggested that simple physical processes of reaction and \n\ndiffusion, modelled by partial differential equations, lie behind \n\npattern formation in nature, such as the aforementioned zebra stripes. \n\nExtensive work has since been done to identify PDEs modeling \n\nreaction-diffusion and evaluating their behaviour. One of the more \n\ncelebrated examples is the Gray-Scott model of reaction diffusion (,).\n\n This process has a veritable zoo of interesting behaviour, explorable \n\nby simply tuning the two parameters. We strongly encourage readers to \n\nvisit this [interactive atlas](http://mrob.com/pub/comp/xmorphia/)\n\n of the different regions of the Gray-Scott reaction diffusion model to \n\nget a sense for the extreme variety of behaviour hidden behind two \n\nTo tackle the problem of reproducing our textures, we propose a more \n\ngeneral version of the above systems, described by a simple Partial \n\nDifferential Equation (PDE) over the state space of an image. \n\nIntuitively, we have defined a system where every point of the image \n\nchanges with time, in a way that depends on how the image currently \n\nchanges across space, with respect to its immediate neighbourhood. \n\nReaders may start to recognize the resemblance between this and another \n\nsystem based on immediately local interactions.\n\n#### To CAs\n\nDifferential equations governing natural phenomena are usually \n\nevaluated using numerical differential equation solvers. Indeed, this is\n\n sometimes the **only** way to solve them, as many PDEs and\n\n ODEs of interest do not have closed form solutions. This is even the \n\n Numerically solving PDEs and ODEs is a vast and well-established field.\n\n One of the biggest hammers in the metaphorical toolkit for numerically \n\nevaluating differential equations is discretization: the process of \n\nconverting the variables of the system from continuous space to a \n\ndiscrete space, where numerical integration is tractable. When using \n\nsome ODEs to model a change in a phenomena over time, for example, it \n\nmakes sense to advance through time in discrete steps, possibly of \n\nvariable size. \n\nWe now show that numerically integrating the aforementioned PDE is \n\nThe logical approach to discretizing the space the PDE operates on is\n\n to discretize the continuous 2D image space into a 2D raster grid. \n\nBoundary conditions are of concern but we can address them by moving to a\n\n toroidal world where each dimension wraps around on itself. \n\nSimilarly to space, we choose to treat time in a discretized fashion \n\nand evaluate our NCA at fixed-sized time steps. This is equivalent to \n\nexplicit Euler integration. However, here we make an important deviation\n\n from traditional PDE numerical integration methods for two reasons. \n\nFirst, if all cells are updated synchronously, initial conditions s0s\\_0s0​\n\n must vary from cell-to-cell in order to break the symmetry. Second, the\n\n physical implementation of the synchronous model would require the \n\nexistence of a global clock, shared by all cells. One way to work around\n\n the former is by initializing the grid with random noise, but in the \n\nspirit of self organisation we instead choose to decouple the cell \n\nupdates by asynchronously evaluating the CA. We sample a subset of all \n\ncells at each time-step to update. 
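To make the update scheme concrete, here is a minimal PyTorch sketch of a single asynchronous NCA step. It is not the authors' reference implementation: the fixed perception filters are the Sobel and Laplacian kernels written out in the next paragraph, the 12-channel state follows the model described later in the article, and names such as `perceive`, `UpdateRule` and `fire_rate` are our own.

```python
import torch
import torch.nn.functional as F

CHN = 12  # state channels: 3 visible RGB + 9 hidden, as in the model described later

# Fixed 3x3 perception filters: identity, Sobel_x, Sobel_y, Laplacian.
IDENT = torch.tensor([[0., 0., 0.], [0., 1., 0.], [0., 0., 0.]])
SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
SOBEL_Y = SOBEL_X.T
LAPLACIAN = torch.tensor([[1., 2., 1.], [2., -12., 2.], [1., 2., 1.]])

def perceive(state):
    """Depthwise-convolve every state channel with every fixed filter.

    state: [batch, CHN, H, W] -> perception vector per cell: [batch, 4*CHN, H, W]
    """
    filters = torch.stack([IDENT, SOBEL_X, SOBEL_Y, LAPLACIAN])        # [4, 3, 3]
    kernels = filters.repeat(CHN, 1, 1).unsqueeze(1).to(state.device)  # [4*CHN, 1, 3, 3]
    state = F.pad(state, (1, 1, 1, 1), mode='circular')  # toroidal world
    return F.conv2d(state, kernels, groups=CHN)

class UpdateRule(torch.nn.Module):
    """Tiny per-cell network mapping the perception vector to a state update."""
    def __init__(self, hidden=96):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(4 * CHN, hidden, 1), torch.nn.ReLU(),
            torch.nn.Conv2d(hidden, CHN, 1, bias=False))

    def forward(self, state, fire_rate=0.5):
        delta = self.net(perceive(state))
        # Asynchronicity: each cell applies its update this step with probability fire_rate.
        mask = (torch.rand(state.shape[0], 1, *state.shape[2:],
                           device=state.device) < fire_rate).float()
        return state + delta * mask
```

Because every cell runs the same small network on its own neighbourhood, and only a random subset of cells fires at each step, no global clock or global coordinate system is required.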
Sampling updates stochastically in this way introduces both asynchronicity in time (cells will sometimes operate on information from their neighbours that is several timesteps old) and asymmetry in space, solving both aforementioned issues.\n\nOur next step towards representing a PDE with cellular automata is to discretize the spatial derivative operators themselves. We estimate the spatial gradient and the Laplacian of the state with fixed 3×3 convolution kernels:\n\n\\begin{array}{ccc} \\begin{bmatrix} -1 & 0 & 1 \\\\ -2 & 0 & 2 \\\\ -1 & 0 & 1 \\end{bmatrix} & \\begin{bmatrix} -1 & -2 & -1 \\\\ 0 & 0 & 0 \\\\ 1 & 2 & 1 \\end{bmatrix} & \\begin{bmatrix} 1 & 2 & 1 \\\\ 2 & -12 & 2 \\\\ 1 & 2 & 1 \\end{bmatrix} \\\\ Sobel\\_x & Sobel\\_y & Laplacian \\end{array}\n\nWith all the pieces in place, we now have a space-discretized version of our PDE that looks very much like a Cellular Automaton: the time evolution of each discrete point in the raster grid depends only on its immediate neighbours. These discrete operators allow us to formalize our PDE as a CA. To double check that this is true, simply observe that as our grid becomes very fine, and the asynchronous updates approach uniformity, the dynamics of these discrete operators will reproduce the continuous dynamics of the original PDE as we defined it.\n\n#### To Neural Networks\n\nThe final step in implementing the above general PDE for texture generation is to translate it to the language of deep learning. Fortunately, all the operations involved in iteratively evaluating the generalized PDE exist as common operations in most deep learning frameworks. We provide both a TensorFlow and a minimal PyTorch implementation for reference, and refer readers to these for details on our implementation.\n\n### NCA as pattern generators\n\n#### Model:\n\n![](Self-Organising%20Textures_files/texture_model.svg)\n\nTexture NCA model.\n\nWe build on the Growing CA NCA model, complete with built-in quantization of weights, stochastic updates, and the batch pool mechanism to approximate long-term training. For further details on the model and motivation, we refer readers to this work.\n\n#### Loss function:\n\n![](Self-Organising%20Textures_files/texture_training.svg)\n\nTexture NCA training setup.\n\nFollowing Gatys et al., we capture the appearance of the template image using the features of several layers of a pre-trained VGG network, in the form of the raw activation values of the neurons in these layers. Finally, we run our NCA forward for between 32 and 64 steps and apply a loss comparing statistics (Gram matrices) of activations of these neurons with the NCA output as input and their activations with the template image as input. We keep the weights of VGG frozen and use ADAM to update the weights of the NCA.\n\n#### Dataset:\n\nWe take our template images from the Describable Textures Dataset (DTD). The aim of this dataset is to provide a benchmark for measuring the ability of vision models to recognize and categorize textures and describe textures using words. The textures were collected to match 47 \"attributes\" such as \"bumpy\" or \"polka-dotted\". These 47 attributes were in turn distilled from a set of common words used to describe textures identified by Bhusan, Rao and Lohse.\n\n#### Results:\n\nAfter a few iterations of training, we see the NCA converge to a solution that at first glance looks similar to the input template, but not pixel-wise identical. The very first thing to notice is that the resulting pattern is not static: the NCA settles into a temporally dynamic solution that keeps changing as it is iterated. This is not completely unexpected. In *Differentiable Parametrizations*, the authors noted that the images produced when backpropagating into image space would end up different each time the algorithm was run due to the stochastic nature of the parametrizations. To work around this, they introduced some tricks to maintain **alignment** between different visualizations.
In our model, we find that we attain \n\nsuch alignment along the temporal dimension without optimizing for it; a\n\n welcome surprise. We believe the reason is threefold. First, reaching \n\nand maintaining a static state in an NCA appears to be non-trivial in \n\ncomparison to a dynamic one, so much so that in Growing CA a pool of NCA\n\n states at various iteration times had to be maintained and sampled as \n\nstarting states to simulate loss being applied after a time period \n\nlonger than the NCAs iteration period, to achieve a static stability. We\n\n employ the same sampling mechanism here to prevent the pattern from \n\ndecaying, but in this case the loss doesn't enforce a static fixed \n\ntarget; rather it guides the NCA towards any one of a number of states \n\nthat minimizes the style loss. Second, we apply our loss after a random \n\nnumber of iterations of the NCA. This means that, at any given time \n\nstep, the pattern must be in a state that minimizes the loss. Third, the\n\n stochastic updates, local communication, and quantization all limit and\n\n regularize the magnitude of updates at each iteration. This encourages \n\nchanges to be small between one iteration and the next. We hypothesize \n\nthat these properties combined encourage the NCA to find a solution \n\nwhere each iteration is **aligned** with the previous \n\niteration. We perceive this alignment through time as motion, and as we \n\niterate the NCA we observe it traversing a manifold of locally aligned \n\nsolutions. \n\n based on the aforementioned findings and qualitative observation of the\n\n NCA. We proceed to demonstrate some exciting behaviours of NCA trained \n\non different template images. \n\nAn NCA trained to create a pattern in the style of **chequered\\_0121.jpg**.\n\nWe notice that: \n\n* Initially, a non-aligned grid of black and white quadrilaterals is formed.\n\n to more closely approximate squares. Quadrilaterals of both colours \n\neither emerge or disappear. Both of these behaviours seem to be an \n\nattempt to find local consistency.\n\n* After a longer time, the grid tends to achieve perfect consistency.\n\nSuch behaviour is not entirely unlike what one would expect in a \n\nhand-engineered algorithm to produce a consistent grid with local \n\ncommunication. For instance, one potential hand-engineered approach \n\nwould be to have cells first try and achieve local consistency, by \n\nchoosing the most common colour from the cells surrounding them, then \n\nattempting to form a diamond of correct size by measuring distance to \n\nthe four edges of this patch of consistent colour, and moving this \n\nboundary if it were incorrect. Distance could be measured by using a \n\nhidden channel to encode a gradient in each direction of interest, with \n\neach cell decreasing the magnitude of this channel as compared to its \n\nneighbour in that direction. A cell could then localize itself within a \n\ndiamond by measuring the value of two such gradient channels. The \n\nappearance of such an algorithm would bear resemblance to the above - \n\nwith patches of cells becoming either black, or white, diamonds then \n\nresizing themselves to achieve consistency.\n\nAn NCA trained to create a pattern in the style of **bubbly\\_0101.jpg**.\n\nIn this video, the NCA has learned to reproduce a texture based on a \n\ntemplate of clear bubbles on a blue background. One of the most \n\ninteresting behaviours we observe is that the density of the bubbles \n\nremains fairly constant. 
If we re-initialize the grid states, or \n\ninteractively destroy states, we see a multitude of bubbles re-forming. \n\nHowever, as soon as two bubbles get too close to each other, one of them\n\n spontaneously collapses and disappears, ensuring a constant density of \n\nIf we speed the animation up, we see that different bubbles move at \n\ndifferent speeds, yet they never collide or touch each other. Bubbles \n\nalso maintain their structure by self-correcting; a damaged bubble can \n\nre-grow.\n\nThis behaviour is remarkable because it arises spontaneously, without\n\n any external or auxiliary losses. All of these properties are learned \n\nfrom a combination of the template image, the information stored in the \n\nlayers of VGG, and the inductive bias of the NCA. The NCA learned a rule\n\n that effectively approximates many of the properties of the bubbles in \n\nthe original image. Moreover, it has learned a process that generates \n\nthis pattern in a way that is robust to damage and looks realistic to \n\nhumans. \n\nAn NCA trained to create a pattern in the style of **interlaced\\_0172.jpg**.\n\nHere we see one of our favourite patterns: a simple geometric \n\n\"weave\". Again, we notice the NCA seems to have learned an algorithm for\n\n producing this pattern. Each \"thread\" alternately joins or detaches \n\nfrom other threads in order to produce the final pattern. This is \n\nstrikingly similar to what one would attempt to implement, were one \n\nasked to programmatically generate the above pattern. One would try to \n\ndesign some sort of stochastic algorithm for weaving individual threads \n\ntogether with other nearby threads.\n\nAn NCA trained to create a pattern in the style of **banded\\_0037.jpg**.\n\nHere, misaligned stripe fragments travel up or down the stripe until \n\neither they merge to form a single straight stripe or a stripe shrinks \n\nand disappears. Were this to be implemented algorithmically with local \n\ncommunication, it is not infeasible that a similar algorithm for finding\n\n consistency among the stripes would be used.\n\n### Related work\n\nThis foray into pattern generation is by no means the first. There \n\nhas been extensive work predating deep-learning, in particular \n\nsuggesting deep connections between spatial patterning of anatomical \n\nstructure and temporal patterning of cognitive and computational \n\nprocesses (e.g., reviewed in ).\n\n Hans Spemann, one of the heroes of classical developmental biology, \n\nsaid \"Again and again terms have been used which point not to physical \n\nbut to psychical analogies. It was meant to be more than a poetical \n\nmetaphor. It was meant to my conviction that the suitable reaction of a \n\ngerm fragment, endowed with diverse potencies, in an embryonic 'field'… \n\nis not a common chemical reaction, like all vital processes, are \n\ncomparable, to nothing we know in such degree as to vital processes of \n\nwhich we have the most intimate knowledge.\" .\n\n More recently, Grossberg quantitatively laid out important \n\nsimilarities between developmental patterning and computational \n\nneuroscience . As briefly touched \n\nupon, the inspiration for much of the work came from Turing's work on \n\npattern generation through local interaction, and later papers based on \n\nthis principle. However, we also wish to acknowledge some works that we \n\nfeel have a particular kinship with ours. \n\n#### Patch sampling\n\nEarly work in pattern generation focused on texture sampling. 
Patches\n\n were often sampled from the original image and reconstructed or \n\nrejoined in different ways to obtain an approximation of the texture. \n\nThis method has also seen recent success with the work of Gumin .\n\n#### Deep learning\n\nGatys et. al's work , \n\nreferenced throughout, has been seminal with regards to the idea that \n\nstatistics of certain layers in a pre-trained network can capture \n\ntextures or styles in an image. There has been extensive work building \n\non this idea, including playing with other parametrisations for image \n\ngeneration and optimizing the generation process . \n\nOther work has focused on using a convolutional generator combined \n\nwith path sampling and trained using an adversarial loss to produce \n\ntextures of similar quality . \n\n#### Interactive Evolution of Camouflage\n\n Craig Reynolds uses a texture description language, consisting of \n\ngenerators and operators, to parametrize a texture patch, which is \n\npresented to human viewers who have to decide which patches are the \n\nworst at \"camouflaging\" themselves against a chosen background texture. \n\nThe population is updated in an evolutionary fashion to maximize \n\n\"camouflage\", resulting in a texture exhibiting the most camouflage (to \n\nhuman eyes) after a number of iterations. We see strong parallels with \n\nour work - instead of a texture generation language, we have an NCA \n\nparametrize the texture, and instead of human reviewers we use VGG as an\n\n evaluator of the quality of a generated pattern. We believe a \n\nfundamental difference lies in the solution space of an NCA. A texture \n\ngeneration language comes with a number of inductive biases and learns a\n\n deterministic mapping from coordinates to colours. Our method appears \n\nto learn more general algorithms and behaviours giving rise to the \n\ntarget pattern.\n\nFeature visualization\n\n---------------------\n\n![](Self-Organising%20Textures_files/butterfly_eye.jpg)\n\nA butterfly with an \"eye-spot\" on the wings.\n\nWe have now explored some of the fascinating behaviours learned by \n\nthe NCA when presented with a template image. What if we want to see \n\nthem learn even more \"unconstrained\" behaviour? \n\nSome butterflies have remarkably lifelike eyes on their wings. It's \n\nunlikely the butterflies are even aware of this incredible artwork on \n\ntheir own bodies. Evolution placed these there to trigger a response of \n\nfear in potential predators or to deflect attacks from them .\n\n It is likely that neither the predator nor the butterfly has a concept \n\n regarding the consciousness of the other, but evolution has identified a\n\n region of morphospace for this organism that exploits \n\npattern-identifying features of predators to trick them into fearing a \n\nharmless bug instead of consuming it. \n\nEven more remarkable is the fact that the individual cells composing \n\nthe butterfly's wings can self assemble into coherent, beautiful, shapes\n\n The coordination required to produce these features implies \n\nself-organization over hundreds or thousands of cells to generate a \n\ncoherent image of an eye that evolved simply to act as a visual stimuli \n\nfor an entirely different species, because of the local nature of \n\ncell-to-cell communication. Of course, this pales in comparison to the \n\nmorphogenesis that occurs in animal and plant bodies, where structures \n\nconsisting of millions of cells will specialize and coordinate to \n\ngenerate the target morphology. 
\n\n Just as neuroscientists and biologists have often treated cells and \n\ncell structures and neurons as black-box models to be investigated, \n\nmeasured and reverse-engineered, there is a large contemporary body of \n\nwork on doing the same with neural networks. For instance the work by \n\nBoettiger .\n\nWe can explore this idea with minimal effort by taking our \n\npattern-generating NCA and exploring what happens if we task it to enter\n\n a state that excites a given neuron in Inception. One of the common \n\nresulting NCAs we notice is eye and eye-related shapes - such as the \n\nvideo below - likely as a result of having to detect various animals in \n\nImageNet. In the same way that cells form eye patterns on the wings of \n\nbutterflies to excite neurons in the brains of predators, our NCA's \n\npopulation of cells has learned to collaborate to produce a pattern that\n\n excites certain neurons in an external neural network.\n\nAn NCA trained to excite **mixed4a\\_472** in Inception.\n\n### NCA with Inception\n\n#### Model:\n\nWe use a model identical to the one used for exploring pattern \n\ngeneration, but with a different discriminator network: Imagenet-trained\n\n Inception v1 network .\n\n#### Loss function:\n\nOur loss maximizes the activations of chosen neurons, when evaluated \n\non the output of the NCA. We add an auxiliary loss to encourage the \n\n#### Dataset:\n\nThere is no explicit dataset for this task. Inception is trained on \n\nImageNet. The layers and neurons we chose to excite are chosen \n\nqualitatively using OpenAI Microscope.\n\n#### Results:\n\nSimilar to the pattern generation experiment, we see quick \n\nconvergence and a tendency to find temporally dynamic solutions. In \n\nother words, resulting NCAs do not stay still. We also observe that the \n\nmajority of the NCAs learn to produce solitons of various kinds. We \n\ndiscuss a few below, but encourage readers to explore them in the demo. \n\nAn NCA trained to excite **mixed4c\\_439** in Inception.\n\nSolitons in the form of regular circle-like shapes with internal \n\nstructure are quite commonly observed in the inception renderings. Two \n\nsolitons approaching each other too closely may cause one or both of \n\nthem to decay. We also observe that solitons can divide into two new \n\nsolitons.\n\nAn NCA trained to excite **mixed3b\\_454** in Inception.\n\nIn textures that are composed of threads or lines, or in certain \n\nexcitations of Inception neurons where the resulting NCA has a \n\n\"thread-like\" quality, the threads grow in their respective directions \n\nand will join other threads, or grow around them, as required. This \n\nbehaviour is similar to the regular lines observed in the striped \n\npatterns during pattern generation.\n\nOther interesting findings\n\n--------------------------\n\n### Robustness\n\n#### Switching manifolds\n\nWe encode local information flow within the NCA using the same fixed \n\nLaplacian and gradient filters. As luck would have it, these can be \n\ndefined for most underlying manifolds, giving us a way of placing our \n\ncells on various surfaces and in various configurations without having \n\nto modify the learned model. Suppose we want our cells to live in a \n\nhexagonal world. We can redefine our kernels as follows:\n\n![](Self-Organising%20Textures_files/hex_kernels.svg)\n\nHexagonal grid convolutional filters.\n\nOur model, trained in a purely square environment, works out of the \n\nbox on a hexagonal grid! 
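Nothing in the learned update rule is tied to the square grid itself: the rule only ever sees the responses of the gradient and Laplacian filters, so any grid or coordinate frame in which those responses can be approximated will do. As a rough illustration of the rotated case discussed below, first-derivative (Sobel) estimates can be steered by linear combination; the following is a sketch under that assumption, and the demo may construct its rotated kernels differently.

```python
import numpy as np

SOBEL_X = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
SOBEL_Y = SOBEL_X.T
LAPLACIAN = np.array([[1., 2., 1.], [2., -12., 2.], [1., 2., 1.]])

def rotated_perception_kernels(theta):
    """Perception kernels for a coordinate frame rotated by angle `theta`.

    First-derivative (Sobel) estimates rotate like a vector, so the rotated
    kernels are linear combinations of the originals; the Laplacian is
    (approximately) rotation-invariant and is left unchanged. `theta` could
    even differ from cell to cell.
    """
    c, s = np.cos(theta), np.sin(theta)
    kx = c * SOBEL_X + s * SOBEL_Y   # derivative along the rotated x-axis
    ky = -s * SOBEL_X + c * SOBEL_Y  # derivative along the rotated y-axis
    return kx, ky, LAPLACIAN
```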
Play with the hexagonal-grid setting in the demo to experiment with this. Zooming in allows observation of the individual hexagonal or square cells. As can be seen in the demo, the cells have no problem adjusting to a hexagonal world and producing identical patterns after a brief period of re-alignment.\n\n![](Self-Organising%20Textures_files/coral_square.png)\n\n![](Self-Organising%20Textures_files/coral_hex.png)\n\nThe same texture evaluated on a square and hexagonal grid, respectively.\n\n#### Rotation\n\n![](Self-Organising%20Textures_files/mond_rot0.png)\n\n![](Self-Organising%20Textures_files/mond_rot1.png)\n\n![](Self-Organising%20Textures_files/mond_rot2.png)\n\n![](Self-Organising%20Textures_files/mond_rot3.png)\n\nAn NCA evaluated with the perception kernels rotated by different angles; the model generalises to this new rotated paradigm without issue.\n\nIn theory, the cells can be evaluated on any manifold where one can define approximations to the Sobel kernel and the Laplacian kernel. We demonstrate this in our demo by providing an aforementioned \"hexagonal\" world for the cells to live in. Instead of having eight equally-spaced neighbours, each cell now has six equally-spaced neighbours. We further demonstrate this versatility by rotating the Sobel and Laplacian kernels. Each cell receives an innate global orientation based on these kernels, because they are defined with respect to the coordinate system of the state. Redefining the Sobel and Laplacian kernel with a rotated coordinate system is straightforward and can even be done on a per-cell level. Such versatility is exciting because it mirrors the extreme robustness found in biological cells in nature. Cells in most tissues will generally continue to operate whatever their location, direction, or exact placement relative to their neighbours. We believe this versatility in our model could even extend to a setting where the cells are placed on a manifold at random, rather than on an ordered grid.\n\n#### Time-synchronization\n\nTwo NCAs running next to each other, at different speeds, with some stochasticity in speed. They can communicate through their shared edge; the vertical boundary between them in the center of the state space.\n\nStochastic updates teach the cells to be robust to asynchronous updates. We investigate this property by taking it to an extreme and running two NCA next to each other at different, stochastically varying speeds, sharing a single state space (shown above). The result is surprisingly stable; the CA is still able to construct and maintain a consistent texture across the combined manifold. The time discrepancy between the two CAs sharing the state is far larger than anything the NCA experiences during training, showing remarkable robustness of the learned behaviour. Parallels can be drawn to organic matter self-repairing: for instance, a fingernail can regrow in adulthood despite the underlying finger already having fully developed; the two do not need to be in sync. This result also hints at the possibility of designing distributed systems without having to engineer for a global clock, synchronization of compute units or even homogeneous compute capacity.\n\nAn NCA is evaluated for a number of steps. The surrounding border of cells is then also turned into NCA cells. The cells have no difficulty communicating with the \"finished\" pattern and achieving consistency.\n\nAn even more drastic example of this robustness to time asynchronicity can be seen above. Here, an NCA is iterated until it achieves perfect consistency in a pattern.
Then, the state space is \n\nexpanded, introducing a border of new cells around the existing state. \n\nThis border quickly interfaces with the existing cells and settles in a \n\nconsistent pattern, with almost no perturbation to the already-converged\n\n inner state.\n\n#### Failure cases\n\nThe failure modes of a complex system can teach us a great deal about\n\n its internal structure and process. Our model has many quirks and \n\nsometimes these prevent it from learning certain patterns. Below are \n\nsome examples.\n\n![](Self-Organising%20Textures_files/fail_mondrian.jpeg)\n\n![](Self-Organising%20Textures_files/fail_sprinkle.jpeg)\n\n![](Self-Organising%20Textures_files/fail_chequerboard.jpeg)\n\nThree failure cases of the NCA. Bottom row shows target \n\ntexture samples, top row are corresponding NCA outputs. Failure modes \n\ninclude incorrect colours, chequerboard artefacts, and incoherent image \n\nstructure.\n\nSome patterns are reproduced somewhat accurately in terms of \n\nstructure, but not in colour, while some are the opposite. Others fail \n\ncompletely. It is difficult to determine whether these failure cases \n\nhave their roots in the parametrization (the NCA), or in the \n\nhard-to-interpret gradient signals from VGG, or Inception. Existing work\n\n with style transfer suggests that using a loss on Gram matrices in VGG \n\ncan introduce instabilities ,\n\n that are similar to the ones we see here. We hypothesize that this \n\neffect explains the failures in reproducing colors. The structural \n\nfailures, meanwhile, may be caused by the NCA parameterization, which \n\nmakes it difficult for cells to establish long-distance communication \n\nwith one another.\n\n### Hidden states\n\nWhen biological cells communicate with each other, they do so through\n\n a multitude of available communication channels. Cells can emit or \n\nabsorb different ions and proteins, sense physical motion or \"stiffness\"\n\n of other cells, and even emit different chemical signals to diffuse \n\nover the local substrate . \n\nThere are various ways to visualize communication channels in real \n\ncells. One of them is to add to cells a potential-activated dye. Doing \n\nso gives a clear picture of the voltage potential the cell is under with\n\n respect to the surrounding substrate. This technique provides useful \n\ninsight into the communication patterns within groups of cells and helps\n\n scientists visualize both local and global communication over a variety\n\n of time-scales.\n\nAs luck would have it, we can do something similar with our Neural \n\nCellular Automata. Our NCA model contains 12 channels. The first three \n\nare visible RGB channels and the rest we treat as latent channels which \n\nare visible to adjacent cells during update steps, but excluded from \n\nloss functions. Below we map the first three principle components of the\n\n hidden channels to the R,G, and B channels respectively. Hidden \n\nchannels can be considered \"floating,\" to abuse a term from circuit \n\ntheory. In other words, they are not pulled to any specific final state \n\nor intermediate state by the loss. Instead, they converge to some form \n\nof a dynamical system which assists the cell in fulfilling its objective\n\n with respect to its visible channels. There is no pre-defined \n\nassignment of different roles or meaning to different hidden channels, \n\nand there is almost certainly redundancy and correlation between \n\ndifferent hidden channels. 
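Concretely, the visualization described above can be produced along the following lines. This is a minimal sketch, assuming a state laid out as in the earlier update sketch (channels 0-2 visible RGB, the rest hidden); the function name `hidden_to_rgb` is our own, and the principal components are computed with an ordinary SVD over the per-cell hidden vectors.

```python
import numpy as np

def hidden_to_rgb(state):
    """Visualize hidden channels: map their first 3 principal components to RGB.

    state: [CHN, H, W] array whose channels 0-2 are RGB and 3+ are hidden.
    """
    hidden = state[3:]                                # [CHN-3, H, W]
    h, w = hidden.shape[1], hidden.shape[2]
    x = hidden.reshape(hidden.shape[0], -1).T         # [H*W, CHN-3]: one row per cell
    x = x - x.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(x, full_matrices=False)  # rows of vt = principal directions
    pcs = x @ vt[:3].T                                # project each cell onto the top 3 PCs
    lo, hi = pcs.min(axis=0), pcs.max(axis=0)
    pcs = (pcs - lo) / (hi - lo + 1e-8)               # rescale each component to [0, 1]
    return pcs.T.reshape(3, h, w)                     # [3, H, W] image, PCs shown as R, G, B
```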
Such correlation may not be visible when we \n\nvisualize the first three principal components in isolation. But this \n\nconcern aside, the visualization yields some interesting insights \n\nanyways.\n\n \n\n \n\n An NCA trained to excite **mixed4b\\_70**\n\n in Inception. Notice the hidden states appear to encode information \n\nabout structure. \"Threads\" along the major diagonal (NW - SE) appear \n\nprimarily green, while those running along the anti-diagonal appear \n\nblue, indicating that these have differing internal states, despite \n\nbeing effectively indistinguishable in RGB space.\n\nIn the principal components of this coral-like texture, we see a \n\npattern which is similar to the visible channels. However, the \"threads\"\n\n pointing in each diagonal direction have different colours - one \n\ndiagonal is green and the other is a pale blue. This suggests that one \n\nof the things encoded into the hidden states is the direction of a \n\n\"thread\", likely to allow cells that are inside one of these threads to \n\nkeep track of which direction the thread is growing, or moving, in. \n\n \n\n An NCA trained to produce a texture based on DTD image **cheqeuered\\_0121**.\n\n Notice the structure of squares - with a gradient occurring inside the \n\nstructure of each square, evidencing that structure is being encoded in \n\nhidden state.\n\nThe chequerboard pattern likewise lends itself to some qualitative \n\nanalysis and hints at a fairly simple mechanism for maintaining the \n\nshape of squares. Each square has a clear gradient in PCA space across \n\nthe diagonal, and the values this gradient traverses differ for the \n\nwhite and black squares. We find it likely the gradient is used to \n\nprovide a local coordinate system for creating and sizing the squares. \n\n \n\n An NCA trained to excite **mixed4c\\_208**\n\n in Inception. The visible body of the eye is clearly demarcated in the \n\nhidden states. There is also a \"halo\" which appears to modulate growth \n\nof any solitons immediately next to each other. This halo is barely \n\nvisible in the RGB channels.\n\nWe find surprising insight in NCA trained on Inception as well. In \n\nthis case, the structure of the eye is clearly encoded in the hidden \n\nstate with the body composed primarily of one combination of principal \n\ncomponents, and an halo, seemingly to prevent collisions of the eye \n\nsolitons, composed of another set of principal components.\n\nAnalysis of these hidden states is something of a dark art; it is not\n\n always possible to draw rigorous conclusions about what is happening. \n\nWe welcome future work in this direction, as we believe qualitative \n\nanalysis of these behaviours will be useful for understanding more \n\ncomplex behaviours of CAs. We also hypothesize that it may be possible \n\nto modify or alter hidden states in order to affect the morphology and \n\nbehaviour of NCA. \n\nConclusion\n\n----------\n\nIn this work, we selected texture templates and individual neurons as\n\n targets and then optimized NCA populations so as to produce similar \n\nexcitations in a pre-trained neural network. This procedure yielded NCAs\n\n that could render nuanced and hypnotic textures. During our analysis, \n\nwe found that these NCAs have interesting and unexpected properties. \n\nMany of the solutions for generating certain patterns in an image appear\n\n similar to the underlying model or physical behaviour producing the \n\npattern. 
For example, our learned NCAs seem to have a bias for treating \n\nobjects in the pattern as individual objects and letting them move \n\nfreely across space. While this effect was present in many of our \n\nmodels, it was particularly strong in the bubble and eye models. The NCA\n\n is forced to find algorithms that can produce such a pattern with \n\npurely local interaction. This constraint seems to produce models that \n\nfavor high-level consistency and robustness.\n\n![](Self-Organising%20Textures_files/multiple-pages.svg)\n\n This article is part of the\n\n an experimental format collecting invited short articles delving into\n\n differentiable self-organizing systems, interspersed with critical\n\n commentary from several experts in adjacent fields.\n\n \n\n[Self-classifying MNIST Digits](https://distill.pub/2020/selforg/mnist/)\n\n", "bibliography_bib": [{"title": "Growing Neural Cellular Automata"}, {"title": "Image segmentation via Cellular Automata"}, {"title": "Self-classifying MNIST Digits"}, {"title": "Differentiable Image Parameterizations"}, {"title": "The chemical basis of morphogenesis"}, {"title": "Turing patterns in development: what about the horse part?"}, {"title": "A unified design space of synthetic stripe-forming networks"}, {"title": "On the Formation of Digits and Joints during Limb Development"}, {"title": "Modeling digits. Digit patterning is controlled by a Bmp-Sox9-Wnt Turing network modulated by morphogen gradients"}, {"title": "Pattern formation mechanisms of self-organizing reaction-diffusion systems"}, {"title": "Bioelectric\n gene and reaction networks: computational modelling of genetic, \nbiochemical and bioelectrical dynamics in pattern regulation"}, {"title": "Turing-like patterns can arise from purely bioelectric mechanisms"}, {"title": "Dissipative structures in biological systems: bistability, oscillations, spatial patterns and waves"}, {"title": "Gene networks and liar paradoxes"}, {"title": "Texture Synthesis Using Convolutional Neural Networks"}, {"title": "The chemical basis of morphogenesis. 
1953"}, {"title": "Pattern formation by interacting chemical fronts"}, {"title": "Complex patterns in a simple system"}, {"title": "Very Deep Convolutional Networks for Large-Scale Image Recognition"}, {"title": "Adam: A Method for Stochastic Optimization"}, {"title": "Describing Textures in the Wild"}, {"title": "The texture lexicon: Understanding the categorization of visual texture terms and their relationship to texture images"}, {"title": "Re-membering\n the body: applications of computational neuroscience to the top-down \ncontrol of regeneration of limbs and other complex organs"}, {"title": "Embryonic Development and Induction"}, {"title": "Communication, Memory, and Development"}, {"title": "WaveFunctionCollapse"}, {"title": "Texture Networks: Feed-forward Synthesis of Textures and Stylized Images"}, {"title": "TextureGAN: Controlling deep image synthesis with texture patches"}, {"title": "Interactive evolution of camouflage"}, {"title": "A parametric texture model based on joint statistics of complex wavelet coefficients"}, {"title": "Trainable Nonlinear Reaction Diffusion: A Flexible Framework for Fast and Effective Image Restoration"}, {"title": "The evolutionary significance of butterfly eyespots"}, {"title": "Live Cell Imaging of Butterfly Pupal and Larval Wings In Vivo"}, {"title": "Focusing on butterfly eyespot focus: uncoupling of white spots from eyespot bodies in nymphalid butterflies"}, {"title": "OpenAI Microscope"}, {"title": "The neural origins of shell structure and pattern in aquatic mollusks"}, {"title": "Emergent complexity in simple neural systems"}, {"title": "Going deeper with convolutions"}, {"title": "Stable and Controllable Neural Texture Synthesis and Style Transfer Using Histogram Losses"}, {"title": "Stem cell migration and mechanotransduction on linear stiffness gradient hydrogels"}], "id": "cfda083c3e1e282cffc3b0dcf2435644"} +{"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "AI Safety Needs Social Scientists", "authors": ["Geoffrey Irving", "Amanda Askell"], "date_published": "2019-02-19", "abstract": " The goal of long-term artificial intelligence (AI) safety is to ensure that advanced AI systems are reliably aligned with human values — that they reliably do things that people want them to do.Roughly by human values we mean whatever it is that causes people to choose one option over another in each case, suitably corrected by reflection, with differences between groups of people taken into account. There are a lot of subtleties in this notion, some of which we will discuss in later sections and others of which are beyond the scope of this paper. Since it is difficult to write down precise rules describing human values, one approach is to treat aligning with human values as another learning problem. We ask humans a large number of questions about what they want, train an ML model of their values, and optimize the AI system to do well according to the learned values. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00014", "text": "\n\n The goal of long-term artificial intelligence (AI) safety is to \n\nensure that advanced AI systems are reliably aligned with human \n\nvalues — that they reliably do things that people want them to do.Roughly\n\n by human values we mean whatever it is that causes people to choose one\n\n option over another in each case, suitably corrected by reflection, \n\nwith differences between groups of people taken into account. 
There are\n\n a lot of subtleties in this notion, some of which we will discuss in \n\nlater sections and others of which are beyond the scope of this paper.\n\n Since it is difficult to write down precise rules describing human \n\nvalues, one approach is to treat aligning with human values as another \n\nlearning problem. We ask humans a large number of questions about what \n\nthey want, train an ML model of their values, and optimize the AI system\n\n to do well according to the learned values.\n\n \n\n If humans reliably and accurately answered all questions about their\n\n values, the only uncertainties in this scheme would be on the machine \n\nlearning (ML) side. If the ML works, our model of human values would \n\nimprove as data is gathered, and broaden to cover all the decisions \n\nrelevant to our AI system as it learns. Unfortunately, humans have \n\nlimited knowledge and reasoning ability, and exhibit a variety of \n\ncognitive and ethical biases.\n\n If we learn values by asking humans questions, we expect different ways\n\n of asking questions to interact with human biases in different ways, \n\nproducing higher or lower quality answers. Direct questions about \n\n Different people may vary significantly in their ability to answer \n\nquestions well, and disagreements will persist across people even \n\nsetting aside answer quality. Although we have candidates for ML \n\n \n\n We believe the AI safety community needs to invest research effort \n\nin the human side of AI alignment. Many of the uncertainties involved \n\nare empirical, and can only be answered by experiment. They relate to \n\nthe psychology of human rationality, emotion, and biases. Critically, \n\nwe believe investigations into how people interact with AI alignment \n\nalgorithms should not be held back by the limitations of existing \n\nmachine learning. Current AI safety research is often limited to simple\n\n tasks in video games, robotics, or gridworlds,\n\n but problems on the human side may only appear in more realistic \n\nscenarios such as natural language discussion of value-laden questions. \n\n This is particularly important since many aspects of AI alignment \n\nchange as ML systems [increase in capability](#harder).\n\n \n\n To avoid the limitations of ML, we can instead conduct experiments \n\nconsisting entirely of people, replacing ML agents with people playing \n\nthe role of those agents. This is a variant of the \"Wizard of Oz\" \n\ntechnique from the human-computer interaction (HCI) community,\n\n though in our case the replacements will not be secret. These \n\nexperiments will be motivated by ML algorithms but will not involve any \n\nML systems or require an ML background. In all cases, they will require\n\n careful experimental design to build constructively on existing \n\nknowledge about how humans think. Most AI safety researchers are \n\nfocused on machine learning, which we do not believe is sufficient \n\nbackground to carry out these experiments. To fill the gap, we need \n\nsocial scientists with experience in human cognition, behavior, and \n\nethics, and in the careful design of rigorous experiments. 
Since the \n\nquestions we need to answer are interdisciplinary and somewhat unusual \n\nrelative to existing research, we believe many fields of social science \n\nare applicable, including experimental psychology, cognitive science, \n\neconomics, political science, and social psychology, as well as adjacent\n\n fields like neuroscience and law.\n\n \n\n This paper is a call for social scientists in AI safety. We believe\n\n close collaborations between social scientists and ML researchers will \n\nbe necessary to improve our understanding of the human side of AI \n\nalignment, and hope this paper sparks both conversation and \n\ncollaboration. We do not claim novelty: previous work mixing AI safety \n\n Our main goal is to enlarge these collaborations and emphasize their \n\nimportance to long-term AI safety, particularly for tasks which current \n\nML cannot reach.\n\n \n\nAn overview of AI alignment\n\n---------------------------\n\n Before discussing how social scientists can help with AI safety and \n\nthe AI alignment problem, we provide some background. We do not attempt\n\n to be exhaustive: the goal is to provide sufficient background for the \n\nremaining sections on social science experiments. Throughout, we will \n\nspeak primarily about aligning to the values of an individual human \n\nrather than a group: this is because the problem is already hard for a \n\nsingle person, not because the group case is unimportant.\n\n \n\n distinguish between training AI systems to identify actions that humans\n\n consider good and training AI systems to identify actions that are \n\n\"good\" in some objective and universal sense, even if most current \n\nhumans do not consider them so. Whether there are actions that are good\n\n in this latter sense is a subject of debate.\n\n Regardless of what position one takes on this philosophical question, \n\nthis sense of good is not yet available as a target for AI training.\n\n Here we focus on the machine learning approach to AI: gathering a \n\nlarge amount of data about what a system should do and using learning \n\nalgorithms to infer patterns from that data that generalize to other \n\nsituations. Since we are trying to behave in accord with people's \n\nvalues, the most important data will be data from humans about their \n\nvalues. Within this frame, the AI alignment problem breaks down into a \n\nfew interrelated subproblems:\n\n \n\n1. Have a satisfactory definition of human values.\n\n2. Gather data about human values, in a manner compatible with the definition.\n\n3. Find reliable ML algorithms that can learn and generalize from this data.\n\n We have significant uncertainty about all three of these problems. \n\nWe will leave the third problem to other ML papers and focus on the \n\nfirst two, which concern uncertainties about people.\n\n \n\n### Learning values by asking humans questions\n\n We start with the premise that human values are too complex to \n\ndescribe with simple rules. By \"human values\" we mean our full set of \n\ndetailed preferences, not general goals such as \"happiness\" or \n\n\"loyalty\". One source of complexity is that values are entangled with a\n\n large number of facts about the world, and we cannot cleanly separate \n\nfacts from values when building ML models. 
For example, a rule that \n\nrefers to \"gender\" would require an ML model that accurately recognizes \n\nthis concept, but Buolamwini and Gebru found that several commercial \n\ngender classifiers with a 1% error rate on white men failed to recognize\n\n Finally, our values may vary across cultures, legal systems, or \n\nsituations: no learned model of human values will be universally \n\napplicable.\n\n \n\n If humans can't reliably report the reasoning behind their \n\nintuitions about values, perhaps we can make value judgements in \n\nspecific cases. To realize this approach in an ML context, we ask \n\nhumans a large number of questions about whether an action or outcome is\n\n better or worse, then train on this data. \"Better or worse\" will \n\ninclude both factual and value-laden components: for an AI system \n\ntrained to say things, \"better\" statements might include \"rain falls \n\nfrom clouds\", \"rain is good for plants\", \"many people dislike rain\", \n\netc. If the training works, the resulting ML system will be able to \n\nreplicate human judgement about particular situations, and thus have the\n\n same \"fuzzy access to approximate rules\" about values as humans. We \n\nalso train the ML system to come up with proposed actions, so that it \n\nknows both how to perform a task and how to judge its performance. This\n\n approach works at least in simple cases, such as Atari games and simple\n\n robotics tasks and language-specified goals in gridworlds.\n\n The questions we ask change as the system learns to perform different \n\ntypes of actions, which is necessary as the model of what is better or \n\nworse will only be accurate if we have applicable data to generalize \n\nfrom.\n\n \n\n In practice, data in the form of interactive human questions may be \n\nquite limited, since people are slow and expensive relative to computers\n\n on many tasks. Therefore, we can augment the \"train from human \n\nquestions\" approach with static data from other sources, such as books \n\nor the internet. Ideally, \n\nthe static data can be treated only as information about the world \n\ndevoid of normative content: we can use it to learn patterns about the \n\nworld, but the human data is needed to distinguish good patterns from \n\nbad.\n\n \n\n### Definitions of alignment: reasoning and reflective equilibrium\n\n So far we have discussed asking humans direct questions about \n\nwhether something is better or worse. Unfortunately, we do not expect \n\npeople to provide reliably correct answers in all cases, for several \n\nreasons:\n\n \n\n1. **Cognitive and ethical biases:**\n\n In general, we expect direct answers to questions to reflect primarily\n\n Type 1 thinking (fast heuristic judgment), while we would like to \n\ntarget a combination of Type 1 and Type 2 thinking (slow, deliberative \n\njudgment).\n\n2. **Lack of domain knowledge:**\n\n We may be interested in questions that require domain knowledge \n\nunavailable to people answering the questions. For example, a correct \n\nanswer to whether a particular injury constitutes medical malpractice \n\nmay require detailed knowledge of medicine and law. In some cases, a \n\nquestion might require so many areas of specialized expertise that no \n\none person is sufficient, or (if AI is sufficiently advanced) deeper \n\nexpertise than any human possesses.\n\n3. **Limited cognitive capacity:**\n\n Some questions may require too much computation for a human to \n\nreasonably evaluate, especially in a short period of time. 
This \n\nincludes synthetic tasks such as chess and Go (where AIs already surpass\n\n4. **\"Correctness\" may be local:**\n\n For questions involving a community of people, \"correct\" may be a\n\n function of complex processes or systems. For example, in a trust game,\n\n the correct action for a trustee in one community may be to return at \n\nleast half of the money handed over by the investor, and the \n\n\"correctness\" of this answer could be determined by asking a group of \n\nparticipants in a previous game \"how much should the trustee return to \n\nthe investor\" but not by asking them \"how much do most trustees return?\"\n\n The answer may be different in other communities or cultures.\n\n In these cases, a human may be unable to provide the right answer, \n\nbut we still believe the right answer exists as a meaningful concept. \n\nWe have many conceptual biases: imagine we point out these biases in a \n\nway that helps the human to avoid them. Imagine the human has access to\n\n all the knowledge in the world, and is able to think for an arbitrarily\n\n long time. We could define alignment as \"the answer they give then, \n\nafter these limitations have been removed\"; in philosophy this is known \n\n \n\n However, the behavior of reflective equilibrium with actual humans \n\nis subtle; as Sugden states, a human is not \"a neoclassically rational \n\nentity encased in, and able to interact with the world only through, an \n\nerror-prone psychological shell.\"\n\n Our actual moral judgments are made via a messy combination of many \n\ndifferent brain areas, where reasoning plays a \"restricted but \n\nsignificant role\". A reliable \n\nsolution to the alignment problem that uses human judgment as input will\n\n need to engage with this complexity, and ask how specific alignment \n\ntechniques interact with actual humans.\n\n \n\n### Disagreements, uncertainty, and inaction: a hopeful note\n\n A solution to alignment does not mean knowing the answer to every \n\nquestion. Even at reflective equilibrium, we expect disagreements will \n\npersist about which actions are good or bad, across both different \n\nindividuals and different cultures. Since we lack perfect knowledge \n\nabout the world, reflective equilibrium will not eliminate uncertainty \n\nabout either future predictions or values, and any real ML system will \n\nbe at best an approximation of reflective equilibrium. In these cases, \n\nwe consider an AI aligned if it recognizes what it does not know and \n\nchooses actions which work however that uncertainty plays out.\n\n \n\n Admitting uncertainty is not always enough. If our brakes fail \n\nwhile driving a car, we may be uncertain whether to dodge left or right \n\naround an obstacle, but we have to pick one — and fast. For long-term \n\nsafety, however, we believe a safe fallback usually exists: inaction. \n\nIf an ML system recognizes that a question hinges on disagreements \n\nbetween people, it can either choose an action which is reasonable \n\nregardless of the disagreement or fall back to further human \n\ndeliberation. If we are about to make a decision that might be \n\ncatastrophic, we can delay and gather more data. 
Inaction or indecision\n\n may not be optimal, but it is hopefully safe, and matches the default \n\nscenario of not having any powerful AI system.\n\n \n\n### Alignment gets harder as ML systems get smarter\n\n and mismatch between human values and easily available data sources \n\n(such as training news feeds based on clicks and likes instead of \n\ndeliberate human preferences). However, we expect the alignment problem\n\n to get harder as AI systems grow more advanced, for two reasons. \n\nFirst, advanced systems will apply to increasingly consequential tasks: \n\nhiring, medicine, scientific analysis, public policy, etc. Besides \n\nraising the stakes, these tasks require more reasoning, leading to more \n\ncomplex alignment algorithms.\n\n \n\n Second, advanced systems may be capable of answers that sound \n\nplausible but are wrong in nonobvious ways, even if an AI is better than\n\n humans only in a limited domain (examples of which already exist).\n\n This type of misleading behavior is not the same as intentional \n\ndeception: an AI system trained from human data might have no notion of \n\ntruth separate from what answers humans say are best. Ideally, we want \n\nAI alignment algorithms to reveal misleading behavior as part of the \n\ntraining process, surfacing failures to humans and helping us provide \n\nmore accurate data. As with human-to-human deception, misleading \n\nbehavior might take advantage of our biases in complicated ways, such as\n\n learning to express policy arguments in coded racial language to sound \n\nmore convincing.\n\n \n\nDebate: learning human reasoning\n\n--------------------------------\n\n Before we discuss social science experiments for AI alignment in \n\ndetail, we need to describe a particular method for AI alignment. \n\nAlthough the need for social science experiments applies even to direct \n\nquestioning, this need intensifies for methods which try to get at \n\nreasoning and reflective equilibrium. As discussed above, it is unclear\n\n whether reflective equilibrium is a well defined concept when applied \n\nto humans, and at a minimum we expect it to interact with cognitive and \n\nethical biases in complex ways. Thus, for the remainder of this paper \n\nwe focus on a specific proposal for learning reasoning-oriented \n\n \n\n We describe the debate approach to AI alignment in the question \n\nanswering setting. Given a question, we have two AI agents engage in a \n\ndebate about the correct answer, then show the transcript of the debate \n\nto a human to judge. The judge decides which debater gave the most \n\ntrue, useful information, and declares that debater the winner.We\n\n can also allow ties. Indeed, if telling the truth is the winning \n\nstrategy ties will be common with strong play, as disagreeing with a \n\ntrue statement would lose. This defines a two player zero \n\nsum game between the debaters, where the goal is to convince the human \n\nthat one's answer is correct. Arguments in a debate can consist of \n\nanything: reasons for an answer, rebuttals of reasons for the alternate \n\nanswer, subtleties the judge might miss, or pointing out biases which \n\nmight mislead the judge. Once we have defined this game, we can train \n\nAI systems to play it similarly to how we train AIs to play other games \n\nsuch as Go or Dota 2. Our hope is that the following hypothesis holds:\n\n \n\n \n\n### An example of debate\n\n Imagine we're building a personal assistant that helps people decide\n\n where to go on vacation. 
The assistant has knowledge of people's \n\nvalues, and is trained via debate to come up with convincing arguments \n\nthat back up vacation decisions. As the human judge, you know what \n\ndestinations you intuitively think are better, but have limited \n\nknowledge about the wide variety of possible vacation destinations and \n\ntheir advantages and disadvantages. A debate about the question \"Where \n\nshould I go on vacation?\" might open as follows:\n\n \n\n1. Where should I go on vacation?\n\n2. Alaska.\n\n3. Bali.\n\n If you are able to reliably decide between these two destinations, \n\nwe could end here. Unfortunately, Bali has a hidden flaw:\n\n \n\n3. Bali is out since your passport won't arrive in time.\n\n At this point it looks like Red wins, but Blue has one more countermove:\n\n \n\n4. Expedited passport service only takes two weeks.\n\n Here Red fails to think of additional points, and loses to Blue and \n\nBali. Note that a debate does not need to cover all possible arguments.\n\n There are many other ways the debate could have gone, such as:\n\n \n\n1. Alaska.\n\n2. Bali.\n\n3. Bali is way too hot.\n\n4. You prefer too hot to too cold.\n\n5. Alaska is pleasantly warm in the summer.\n\n6. It's January.\n\n This debate is also a loss for Red (arguably a worse loss). Say we \n\nbelieve Red is very good at debate, and is able to predict in advance \n\nwhich debates are more likely to win. If we see only the first debate \n\nabout passports and decide in favor of Bali, we can take that as \n\nevidence that any other debate would have also gone for Bali, and thus \n\nthat Bali is the correct answer. A larger portion of this hypothetical \n\ndebate tree is shown below:\n\n \n\n[1](#figure-debate-tree)\n\n A hypothetical partial debate tree for the question \"Where \n\nshould I go on vacation?\" A single debate would explore only one of \n\nthese paths, but a single path chosen by good debaters is evidence that \n\nother paths would not change the result of the game.\n\n \n\n If trained debaters are bad at predicting which debates will win, \n\nanswer quality will degrade since debaters will be unable to think of \n\nimportant arguments and counterarguments. However, as long as the two \n\nsides are reasonably well matched, we can hope that at least the results\n\n are not malicious: that misleading behavior is still a losing strategy.\n\n Let's set aside the ability of the debaters for now, and turn to the \n\nability of the judge.\n\n \n\n### Are people good enough as judges?\n\n> \n\n> \"In fact, almost everything written at a practical level about the\n\n> Turing test is about how to make good bots, with a small remaining \n\n> fraction about how to be a good judge.\"\n\n> Brian Christian, The Most Human Human\n\n> \n\n As with learning by asking humans direct questions, whether debate \n\nproduces aligned behavior depends on the reasoning abilities of the \n\nhuman judge. Unlike direct questioning, debate has the potential to \n\ngive correct answers beyond what the judge could provide without \n\nassistance. This is because a sufficiently strong judge could follow \n\nalong with arguments the judge could not come up with on their own, \n\nchecking complex reasoning for both self consistency and consistency \n\nwith human-checkable facts. A judge who is biased but willing to adjust\n\n once those biases are revealed could result in unbiased debates, or a \n\njudge who is able to check facts but does not know where to look could \n\nbe helped along by honest debaters. 
If the hypothesis holds, a \n\nmisleading debater would not be able to counter the points of an honest \n\ndebater, since the honest points would appear more consistent to the \n\njudge.\n\n \n\n On the other hand, we can also imagine debate going the other way: \n\namplifying biases and failures of reason. A judge with an ethical bias \n\nwho is happy to accept statements reinforcing that bias could result in \n\neven more biased debates. A judge with too much confirmation bias might\n\n happily accept misleading sources of evidence, and be unwilling to \n\naccept arguments showing why that evidence is wrong. In this case, an \n\noptimal debate agent might be quite malicious, taking advantage of \n\nbiases and weakness in the judge to win with convincing but wrong \n\narguments.The difficulties that cognitive \n\nbiases, prejudice, and social influence introduce to persuasion ‒ as \n\nwell as methods for reducing these factors ‒ are being increasingly \n\nexplored in psychology, communication science, and neuroscience.\n\n In both these cases, debate acts as an amplifier. For strong \n\njudges, this amplification is positive, removing biases and simulating \n\nextra reasoning abilities for the judge. For weak judges, the biases \n\nand weaknesses would themselves be amplified. If this model holds, \n\ndebate would have threshold behavior: it would work for judges above \n\nsome threshold of ability and fail below the threshold.The\n\n threshold model is only intuition, and could fail for a variety of \n\nreasons: the intermediate region could be very large, or the threshold \n\ncould differ widely per question so that even quite strong judges are \n\ninsufficient for many questions. Assuming the threshold \n\nexists, it is unclear whether people are above or below it. People are \n\ncapable of general reasoning, but our ability is limited and riddled \n\nwith cognitive biases. People are capable of advanced ethical sentiment\n\n but also full of biases, both conscious and unconscious.\n\n \n\n Thus, if debate is the method we use to align an AI, we need to know\n\n if people are strong enough as judges. In other words, whether the \n\nhuman judges are sufficiently good at discerning whether a debater is \n\ntelling the truth or not. This question depends on many details: the \n\ntype of questions under consideration, whether judges are trained or \n\nnot, and restrictions on what debaters can say. We believe experiment \n\nwill be necessary to determine whether people are sufficient judges, and\n\n which form of debate is most truth-seeking.\n\n \n\n### From superforecasters to superjudges\n\n An analogy with the task of probabilistic forecasting is useful \n\nhere. Tetlock's \"Good Judgment Project\" showed that some amateurs were \n\nsignificantly better at forecasting world events than both their peers \n\nand many professional forecasters. These \"superforecasters\" maintained \n\ntheir prediction accuracy over years (without regression to the mean), \n\n p. 234-236). The superforecasting trait was not immutable: it was \n\ntraceable to particular methods and thought processes, improved with \n\ncareful practice, and could be amplified if superforecasters were \n\ncollected into teams. For forecasters in general, brief probabilistic \n\ntraining significantly improved forecasting ability even 1-2 years after\n\n the training. We believe a similar research program is possible for \n\ndebate and other AI alignment algorithms. 
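 What might the first step of such a research program look like in practice? The sketch below is purely illustrative (the judge pool, accuracy numbers, and threshold are invented assumptions, not methods or results from this work), but it shows the shape of one possible filter: screen prospective judges against calibration debates whose correct answers the experimenters already know.\n\n
```python
# Hypothetical sketch of screening debate judges against calibration questions
# with known answers. All names and numbers are invented for illustration.
import random
from statistics import mean

random.seed(0)

def simulate_judge(true_accuracy, n_questions):
    """Return 0/1 indicators of whether each of the judge's verdicts was correct."""
    return [1 if random.random() < true_accuracy else 0 for _ in range(n_questions)]

def passes_screen(verdicts, chance=0.5, z_threshold=2.0):
    """Crude filter: is observed accuracy significantly above chance?"""
    n = len(verdicts)
    acc = mean(verdicts)
    se = (chance * (1 - chance) / n) ** 0.5   # standard error under the null
    return (acc - chance) / se > z_threshold, acc

# A pool of judges with varying (unknown to us) ability, each judging 200
# calibration debates whose correct answers the experimenters know.
pool = {f"judge_{i}": random.uniform(0.45, 0.75) for i in range(20)}
for name, ability in pool.items():
    passed, acc = passes_screen(simulate_judge(ability, 200))
    if passed:
        print(f"{name}: observed accuracy {acc:.2f} -> candidate strong judge")
```\n\n
 Any filter of this kind would itself need scrutiny, since selecting judges on calibration questions could introduce biases of its own.\n\n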
In the best case, we would be\n\n able to find, train, or assemble \"superjudges\", and have high \n\nconfidence that optimal debate with them as judges would produce aligned\n\n behavior.\n\n \n\n In the forecasting case, much of the research difficulty lay in \n\nassembling a large corpus of high quality forecasting questions. \n\nSimilarly, measuring how good people are as debate judges will not be \n\neasy. We would like to apply debate to problems where there is no other\n\n source of truth: if we had that source of truth, we would train ML \n\nmodels on it directly. But if there is no source of truth, there is no \n\nway to measure whether debate produced the correct answer. This problem\n\n can be avoided by starting with simple, verifiable domains, where the \n\nexperimenters know the answer but the judge would not. \"Success\" then \n\nmeans that the winning debate argument is telling the externally known \n\ntruth. The challenge gets harder as we scale up to more complex, \n\nvalue-laden questions, as we discuss in detail later.\n\n \n\n### Debate is only one possible approach\n\n As mentioned, debate is not the only scheme trying to learn human \n\nreasoning. Debate is a modified version of iterated amplification,\n\n which uses humans to break down hard questions into easier questions \n\nand trains ML models to be consistent with this decomposition. \n\nRecursive reward modeling is a further variant.\n\n Inverse reinforcement learning, inverse reward design, and variants \n\ntry to back out goals from human actions, taking into account \n\nlimitations and biases that might affect this reasoning.\n\n The need to study how humans interact with AI alignment applies to any\n\n of these approaches. Some of this work has already begun: Ought's \n\nFactored Cognition project uses teams of humans to decompose questions \n\nand reassemble answers, mimicking iterated amplification.\n\n We believe knowledge gained about how humans perform with one approach\n\n is likely to partially generalize to other approaches: knowledge about \n\nhow to structure truth-seeking debates could inform how to structure \n\ntruth-seeking amplification, and vice versa.\n\n \n\n### Experiments needed for debate\n\n To recap, in debate we have two AI agents engaged in debate, trying \n\nto convince a human judge. The debaters are trained only to win the \n\ngame, and are not motivated by truth separate from the human's \n\njudgments. On the human side, we would like to know whether people are \n\nstrong enough as judges in debate to make this scheme work, or how to \n\nmodify debate to fix it if it doesn't. Unfortunately, actual debates in\n\n natural language are well beyond the capabilities of present AI \n\nsystems, so previous work on debate and similar schemes has been \n\nrestricted to synthetic or toy tasks.\n\n \n\n Rather than waiting for ML to catch up to natural language debate, \n\nwe propose simulating our eventual setting (two AI debaters and one \n\nhuman judge) with all human debates: two human debaters and one human \n\njudge. Since an all human debate doesn't involve any machine learning, \n\nit becomes a pure social science experiment: motivated by ML \n\nconsiderations but not requiring ML expertise to run. 
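 It also helps to pin down exactly what a single recorded debate contains and what gets measured. The sketch below is one hypothetical way to represent a debate and score whether the honest side won; the data structure, field names, and toy question are our own assumptions rather than part of the proposal itself.\n\n
```python
# Illustrative record of one human+human+human debate and its scoring.
# The fields and the toy question are assumptions made for exposition.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DebateRecord:
    question: str
    honest_answer: str            # ground truth known to the experimenters
    answers: Dict[str, str]       # debater name -> answer they argue for
    transcript: List[str] = field(default_factory=list)
    verdict: str = ""             # debater name chosen by the judge

    def honest_won(self) -> bool:
        return self.answers.get(self.verdict) == self.honest_answer

d = DebateRecord(
    question="Is the Great Wall of China visible to the naked eye from low Earth orbit?",
    honest_answer="no",
    answers={"red": "no", "blue": "yes"},
)
d.transcript += [
    "blue: Astronauts have reported seeing it.",
    "red: Verified accounts attribute those sightings to rivers and runways; "
    "the wall is too narrow to resolve by eye.",
]
d.verdict = "red"                 # the judge's decision after reading the transcript
print("honest debater won:", d.honest_won())
```\n\n
 Aggregating the honest-win rate over many such records, across judges, tasks, and debate formats, is the kind of quantity the experiments discussed next aim to estimate.\n\n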
 Replacing the ML debaters with people lets us focus on the component of AI alignment uncertainty specific to humans.\n\n
[2](#figure-debate-experiments) Our goal is ML+ML+human debates, but ML is currently too primitive to do many interesting tasks. Therefore, we propose replacing ML debaters with human debaters, learning how best to conduct debates in this human-only setting, and eventually applying what we learn to the ML+ML+human case.\n\n
 To make human+human+human debate experiments concrete, we must choose who to use as judges and debaters and which tasks to consider. We can also choose to structure the debate in various ways, some of which overlap with the choice of judge, since we can instruct a judge to penalize deviations from a given format. By task we mean the questions our debates will try to resolve, together with any information provided to the debaters or to the judge. Such an experiment would then try to answer the following question:\n\n
**Question:** For a given task and judge, is the winning debate strategy honest?\n\n
 The \"winning strategy\" proviso is important: an experiment that picked debaters at random might conclude that honest behavior won, missing the fact that more practiced debaters would learn to successfully lie. We can try to solve this by training debaters, letting them practice against each other, keeping the debaters who win most often, and so on, but we will still be left with uncertainty about whether a better strategy exists. Even assuming we can find or train strong debaters, the choice of task and judge is quite tricky if we want an informative proxy for our eventual AI+AI+human setting. Here are some desiderata constraining our choice of task:\n\n
1. **True answers are known:** Although our eventual goal is debates where no source of truth is available, to conduct a useful experiment we must be able to measure success. This means we must know what the correct answer is separately from debate, in order to compare it with the result of the debate.\n\n
2. **False answers are plausible:** If the truth is obvious or no plausible counternarrative exists, honest debates will win for uninteresting and uninformative reasons regardless of judge ability. In particular, the judge shouldn't know the answer upfront.\n\n
3. **Debaters know more than the judge:** Debate can produce interesting results only when the debaters know more than the judge; otherwise asking direct questions is enough.\n\n
4. **Definitive argument longer than debate limit:** If one debater can write out a full proof of their answer (ignoring their opponent's moves), the task won't be a good test of interactive debate.\n\n
5. **Some checkable facts:** There must be some facts which the judge is able to check, either because they can recognize them as true once presented or look them up. (It is impossible to usefully debate a question where the judge has nothing to check: consider debating the result of a coin flip shown to the two debaters but not the judge.)\n\n
6. **No \"tells\":** Human tells of deception could result in honest debaters winning for reasons that wouldn't apply to an AI. These tells include tone of voice, eye contact, or additional time required to construct plausible lies. 
These tells can be reduced by showing judges completed debate \n\ntranscripts instead of engaging in interactive debates, but others might\n\n remain.\n\n7. **Available data:**\n\n We need a large enough pool of questions, judges, and debaters \n\nto achieve statistical significance. This is made more difficult \n\nbecause we may have a large number of hypotheses to test, in the form of\n\n many variations on debate or interventions to improve judging.\n\n8. **Known biases (optional):**\n\n We are specifically interested in debate tasks which test \n\nspecific types of cognitive or ethical biases.\n\n Are judges with some racial or gender bias able to set those \n\nbiases aside if they are highlighted, or does debate amplify bias? Do \n\ndebates about statistical or probabilistic questions make it too easy to\n\n lie with statistics?\n\n9. **Realistic tasks (ideally):**\n\n If possible, we would like to try debate with interesting, real \n\nworld tasks that reflect the types of questions we would like to apply \n\nAI to in the future, including science, mathematics, ethics, etc.\n\n It may not be possible to meet all of these criteria with a single \n\nexperiment. Several of the criteria are in tension: (1) and (2) are \n\nessentially \"not too hard\" and \"not too easy\", and any restriction on \n\nthe types of questions may make it difficult to find large numbers of \n\nquestions, judges, or debaters. Realistic tasks are much harder than \n\nsynthetic tasks, which easily fulfill many of the criteria as discussed \n\nbelow. Thus, we may need to begin with synthetic tasks and move up \n\ntowards realistic tasks over time.\n\n We turn next to a few examples of experiments to see how many \n\ncriteria we can meet simultaneously.\n\n \n\n### Synthetic experiments: single pixel image debate\n\n We choose an image of a cat or dog, and show the image to the two \n\ndebaters but not the judge. One debater is honest and argues for the \n\ntrue contents of the image; the other debater lies. The debaters can \n\ntalk to the judge and illustrate their points by drawing rectangles on \n\nthe image, but the judge sees only the rectangles. At the end of the \n\ndebate, each debater is allowed to reveal a single pixel to the judge, \n\nwhich is the only part of the debate which cannot be a lie.\n\n \n\n[3](#dog-debate)\n\n An example debate with two human debaters and a human judge. \n\nOnly the debaters can see the image. Red is arguing that the image is a \n\n In informal play, the honest debater wins most of the time. \n\nAlthough the judge only gets to check a single pixel, the honest player \n\ncan try to pin down the liar towards disagreements on smaller and \n\nsmaller regions of the image until one pixel is enough to reveal the \n\nlie. However, it is hard to know whether this is a real result or an \n\nartifact that would not carry across to training an AI. There are many \n\nways to ruin the game, such as the judge asking each player to spend 15 \n\nseconds naming as many details about the image as possible. Worse, this\n\n trick ruins the game in favor of the honest player, who can easily name\n\n many true details while a human liar may have difficulty lying fluently\n\n at speed. 
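 The single pixel game is simple enough to write down explicitly. The sketch below is a toy rendering of its mechanics; the image, the claims, and both strategies are invented for illustration, and the real games were played informally between people.\n\n
```python
# Toy rendering of the single pixel debate mechanics. The judge never sees the
# image, only rectangle coordinates, the debaters' text, and the final pixels.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((28, 28))          # hidden from the judge
labels = {"honest": "dog", "liar": "cat"}

transcript = []                        # everything the judge gets to read

def point_at(debater, top, left, bottom, right, text):
    transcript.append((debater, (top, left, bottom, right), text))

point_at("honest", 4, 6, 12, 20, "This rectangle contains a floppy ear.")
point_at("liar", 4, 6, 12, 20, "No, that rectangle is a pointed cat ear.")
point_at("honest", 5, 7, 6, 8, "Then we disagree about this single pixel; I will reveal it.")

# End of game: each debater reveals one true pixel value, the only statements
# that cannot be lies. The honest strategy is to shrink the disagreement until
# a single pixel is enough to expose the lie.
reveals = {
    "honest": ((5, 7), float(image[5, 7])),
    "liar": ((20, 3), float(image[20, 3])),
}
for entry in transcript:
    print(*entry)
print("revealed:", reveals)
```\n\n
 Even in this stripped-down form, the outcome hinges on exactly which moves the rules allow, as the rapid-fire detail-naming trick above shows. 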
Therefore, we are wary of trying to perform a full version \n\nof this experiment without more careful experimental design building on \n\nthe experimental knowledge base of existing social science research.\n\n \n\n As a synthetic experiment, single pixel debate satisfies many of our\n\n experimental criteria for all human debate. The true answer is obvious\n\n to the debaters, and they know more than the judge since only the \n\ndebaters see the image. Anything is plausible to the judge who sees \n\nonly one or two pixels over the course of the debate. Since the \n\nargument bottoms out at single pixels, the \"full argument\" would be a \n\nhuge list of pixels and how they relate, forcing the debaters to pin \n\neach other down and focus in on disagreements. The single pixels \n\nconstitute the checkable facts, and we have an endless supply of \n\nquestions in the form of images chosen at random. Less fortunately, it \n\nis easy for the judge to force \"tells\" which reveal who is lying, the \n\ntask has no obvious relation to biases, and is quite unrealistic.\n\n \n\n### Realistic experiments: domain expert debate\n\n For a more interesting task, we can find two debaters who are \n\nexperts in a domain, pick a question in their area of expertise, and use\n\n a layperson as the judge. The debaters could be experts in some area \n\nof science, law, or ethics, but \"domain expertise\" could also mean \n\nknowledge about hobbies, local customs, sports, or any other subject the\n\n judge does not know. We again choose one of the debaters to lie and \n\none to tell the truth. To guarantee a source of truth, we can choose a \n\nquestion with an agreed upon answer, either between the two debaters or \n\nmore broadly in their field. This is only approximate truth, but is \n\ngood enough for informative experiments. We also specify what facts the\n\n judge can access: a debate about physics might allow the debaters to \n\nquote a sentence or paragraph from Wikipedia, perhaps with restrictions \n\non what pages are allowed.\n\n \n\n Expert debate satisfies most of our desiderata, and it is likely \n\npossible to target specific biases (such as race or gender bias) by \n\npicking domain areas that overlap with these biases. It may be quite \n\ndifficult or expensive to find suitable debaters, but this may be \n\nsolvable either by throwing resources at the problem (ML is a well \n\nfunded field), enlarging the kinds of domain expertise considered \n\n(soccer, football, cricket), or by making the experiments interesting \n\nenough that volunteers are available. However, even if domain experts \n\ncan be found, there is no guarantee that they will be experts in debate \n\nviewed as a game. With the possible exception of law, politics, or \n\nphilosophy, domain experts \n\nmay not be trained to construct intentionally misleading but self \n\nconsistent narratives: they may be experts only in trying to tell the \n\ntruth.\n\n \n\n We've tried a few informal expert debates using theoretical computer\n\n science questions, and the main lesson is that the structure of the \n\ndebate matters a great deal. The debaters were allowed to point to a \n\nsmall snippet of a mathematical definition on Wikipedia, but not to any \n\npage that directly answered the question. To reduce tells, we first \n\ntried to write a full debate transcript with only minimal interaction \n\nwith a layperson, then showed the completed transcript to several more \n\nlaypeople judges. 
 Unfortunately, even the layperson present when the debate was conducted picked the lying debater as honest, due to a misunderstanding of the question (which was whether the complexity classes P and BPP are probably equal). As a result, throughout the debate the honest debater did not understand what the judge was thinking, and failed to correct an easy but important misunderstanding. We fixed this in a second debate by letting a judge ask questions throughout, but still showing the completed transcript to a second set of judges to reduce tells. See [the appendix](#quantum) for the transcript of this second debate.\n\n
### Other tasks: bias tests, probability puzzles, etc.\n\n
 Synthetic image debates and expert debates are just two examples of possible tasks. More thought will be required to find tasks that satisfy all our criteria, and these criteria will change as experiments progress. Pulling from existing social science research will be useful, as there are many cognitive tasks with existing research results. If we can map these tasks to debate, we can compare debate directly against baselines in psychology and other fields.\n\n
 For example, Bertrand and Mullainathan sent around 5000 resumes in response to real employment ads, randomizing the resumes between White and African American sounding names. With otherwise identical resumes, the choice of name significantly changed the probability of a response. This experiment corresponds to the direct question \"Should we call back given this resume?\" What if we introduce a few steps of debate? An argument against a candidate based on name or implicit inferences from that name might come across as obviously racist, and convince at least some judges away from discrimination. Unfortunately, such an experiment would necessarily differ from Bertrand et al.'s original, where employers did not realize they were part of an experiment. Note that this experiment works even though the source of truth is partial: we do not know whether a particular resume should receive a callback or not, but most would agree that the answer should not depend on the candidate's name.\n\n
 For biases affecting probabilistic reasoning and decision making, there is a long literature exploring how people decide between gambles. For example, Erev et al. constructed an 11-dimensional space of gambles sufficient to reproduce 14 known cognitive biases, from which new instances can be algorithmically generated. Would debates about gambles reduce cognitive biases? One difficulty here is that simple gambles might fail the \"definitive argument longer than debate limit\" criterion if an expected utility calculation is sufficient to prove the answer, making it difficult for a lying debater to meaningfully compete.\n\n
 Interestingly, Chen et al. used a setup similar to human+human+human debate to improve the quality of human data collected in a synthetic \"Relation Extraction\" task. People were first asked for direct answers, then pairs of people who disagreed were asked to discuss and possibly update their answers. 
Here the debaters and judges are the same people, but the overall goal of extracting higher quality information from humans is shared with debate.\n\n
Questions social science can help us answer\n\n
-------------------------------------------\n\n
 We've laid out the general program for learning AI goals by asking humans questions, and discussed how to use debate to strengthen what we can learn by targeting the reasoning behind conclusions. Whether we use direct questions or something like debate, any intervention that gives us higher quality answers is more likely to produce aligned AI. The quality of those answers depends on the human judges, and social science research can help to measure answer quality and improve it. Let's go into more detail about what types of questions we want to answer, and what we hope to do with that information. Although we will frame these questions as they apply to debate, most of them apply to any other method which learns goals from humans.\n\n
1. **How skilled are people as judges by default?** If we ran debate using a person chosen at random as the judge, and gave them no training, would the result be aligned? A person picked at random might be vulnerable to convincing fallacious reasoning, leading AI to employ such reasoning. Note that the debaters are not chosen at random: once the judge is fixed, we care about debaters who either learn to help the judge (in the good case) or to exploit the judge's weaknesses (in the bad case).\n\n
2. **Can we distinguish good judges from bad judges?** People likely differ in their ability to judge debates. There are many filters we could use to identify good judges: comparing their verdicts to those of other judges, to people given more time to think, or to known expert judgment. (Note that domain expertise may be quite different from what makes a good judge of debates.) Ideally we would like filters that do not require an independent source of truth, though at experiment time we will need a source of truth to know whether a filter works. It is not obvious a priori that good filters exist, and any filter would need careful scrutiny to ensure it does not introduce bias into our choice of judges.\n\n
3. **Does judge ability generalize across domains?** If judge ability in one domain fails to transfer to other domains, we will have low confidence that it transfers to new questions and arguments arising from highly capable AI debaters. This generalization is necessary to trust debate as a method for alignment, especially once we move to questions where no independent source of truth is available. We emphasize that judge ability is not the same as knowledge.\n\n
4. **Can we train people to be better judges?**\n\n
5. **What questions are people better at answering?** If we know that humans are bad at answering certain types of questions, we can switch to more reliable formulations. For example, phrasing questions in frequentist terms may reduce known cognitive biases. Graham et al. 
argue that different political views follow from different weights placed on fundamental moral considerations, and similar analysis could help understand where we can expect moral disagreements to persist after reflective equilibrium. In cases where reliable answers are unavailable, we need to ensure that trained models know their own limits, and express uncertainty or disagreement as required.\n\n
6. **Are there ways to restrict debate to make it easier to judge?** People might be better at judging debates formulated in terms of calm, factual statements, and worse at judging debates designed to trigger strong emotions. Or, counterintuitively, it could be the other way around. If we know which styles of debate people are better at judging, we may be able to restrict AI debaters to those styles.\n\n
7. **How can people work together to improve quality?** If individuals are insufficient judges, are teams of judges better? Majority vote is the simplest option, but perhaps several people talking through an answer together is stronger, either actively or after the fact through peer review. Condorcet's jury theorem implies that majority votes can amplify weakly good judgments into strong judgments (or weakly bad judgments into strongly bad ones).\n\n
 Given our lack of experience outside of ML, we are not able to precisely articulate all of the different experiments we need. The only way to fix this is to talk to more people with different backgrounds and expertise. We have started this process, but are eager for more conversations with social scientists about what experiments could be run, and encourage other AI safety efforts to engage similarly.\n\n
Reasons for optimism\n\n
--------------------\n\n
 We believe that understanding how humans interact with long-term AI alignment is difficult but possible. However, this would be a new research area, and we want to be upfront about the uncertainties involved. In this section and the next, we discuss some reasons for optimism and pessimism about whether this research will succeed. We focus on issues specific to human uncertainty and associated social science research; for similar discussion of ML uncertainty in the case of debate we refer to our previous work.\n\n
### Engineering vs. science\n\n
 Most social science seeks to understand humans \"in the wild\": results that generalize to people going about their everyday lives. With limited control over these lives, differences between laboratory and real life are bad from the scientific perspective. In contrast, AI alignment seeks to extract the best version of what humans want: our goal is engineering rather than science, and we have more freedom to intervene. If judges in debate need training to perform well, we can provide that training. If some people still do not provide good data, we can remove them from experiments (as long as this filter does not create too much bias). This freedom to intervene means that some of the difficulty in understanding and improving human reasoning may not apply. However, science is still required: once our interventions are in place, we need to correctly determine whether our methods work. 
Since our experiments will be an imperfect model of the final goal, careful design will be necessary to minimize this mismatch, just as is required by existing social science.\n\n
### We don't need to answer all questions\n\n
 Our most powerful intervention is to give up: to recognize that we are unable to answer some types of questions, and instead prevent AI systems from pretending to answer. Humans might be good judges on some topics but not others, or with some types of reasoning but not others; if we discover this, we can adjust our goals appropriately. Giving up on some types of questions is achievable either on the ML side, using careful uncertainty modeling to know when we do not know, or on the human side by training judges to understand their own areas of uncertainty. Although we will attempt to formulate ML systems that automatically detect areas of uncertainty, any information we can gain on the social science side about human uncertainty can be used both to augment ML uncertainty modeling and to test whether ML uncertainty modeling works.\n\n
### Relative accuracy may be enough\n\n
 Say we have a variety of different ways to structure debate with humans. Ideally, we would like to achieve results of the form \"debate structure A is truth-seeking with 90% confidence\". Unfortunately, we may be unconfident that an absolute result of this form will generalize to advanced AI systems: it may hold for an experiment with simple tasks but break down later on. However, even if we can't achieve such absolute results, we can still hope for relative results of the form \"debate structure A is more truth-seeking than debate structure B\".\n\n
### We don't need to pin down the best alignment scheme\n\n
 As the AI safety field progresses to increasingly advanced ML systems, we expect research on the ML side and the human side to merge. Starting social science experiments prior to this merging will give the field a head start, but we can also take advantage of the expected merging to make our goals easier. If social science research narrows the design space of human-friendly AI alignment algorithms but does not produce a single best scheme, we can test the smaller design space once the machines are ready.\n\n
### A negative result would be important!\n\n
 If we test an AI alignment scheme from the social science perspective and it fails, we've learned valuable information. There are a variety of proposed alignment schemes, and learning which don't work early gives us more time to switch to others, or to intervene on a policy level to slow down dangerous development. In fact, given our belief that AI alignment is harder for more advanced agents, a negative result might be easier to believe and thus more valuable than a less trustworthy positive result.\n\n
Reasons to worry\n\n
----------------\n\n
 We turn next to reasons social science experiments about AI alignment might fail to produce useful results. We emphasize that useful results might be either positive or negative, so these are not reasons why alignment schemes might fail. Our primary worry is one-sided: that experiments would say an alignment scheme works when in fact it does not, though errors in the other direction are also undesirable.\n\n
### Our desiderata are conflicting\n\n
 As mentioned before, some of our criteria when picking experimental tasks are in conflict. 
We want tasks that are sufficiently interesting \n\n(not too easy), with a source of verifiable ground truth, are not too \n\nhard, etc. \"Not too easy\" and \"not too hard\" are in obvious conflict, \n\nbut there are other more subtle difficulties. Domain experts with the \n\nknowledge to debate interesting tasks may not be the same people capable\n\n of lying effectively, and both restrictions make it hard to gather \n\nlarge volumes of data. Lying effectively is required for a meaningful \n\nexperiment, since a trained AI may have no trouble lying unless lying is\n\n a poor strategy to win debates. Experiments to test whether ethical \n\nbiases interfere with judgment may make it more difficult to find tasks \n\nwith reliable ground truth, especially on subjects with significant \n\ndisagreement across people. The natural way out is to use many \n\ndifferent experiments to cover different aspects of our uncertainty, but\n\n this would take more time and might fail to notice interactions between\n\n desiderata.\n\n \n\n### We want to measure judge quality given optimal debaters\n\n For debate, our end goal is to understand if the judge is capable of\n\n determining who is telling the truth. However, we specifically care \n\nwhether the judge performs well given that the debaters are performing \n\nwell. Thus our experiments have an inner/outer optimization structure: \n\nwe first train the debaters to debate well, then measure how well the \n\njudges perform. This increases time and cost: if we change the task, we\n\n may need to find new debaters or retrain existing debaters. Worse, the\n\n human debaters may be bad at performing the task, either out of \n\ninclination or ability. Poor performance is particularly bad if it is \n\none sided and applies only to lying: a debater might be worse at lying \n\nout of inclination or lack of practice, and thus a win for the honest \n\ndebater might be misleading.\n\n \n\n### ML algorithms will change\n\n It is unclear when or if ML systems will reach various levels of \n\ncapability, and the algorithms used to train them will evolve over time.\n\n The AI alignment algorithms of the future may be similar to the \n\nproposed algorithms of today, or they may be very different. However, \n\nwe believe that knowledge gained on the human side will partially \n\ntransfer: results about debate will teach us about how to gather data \n\nfrom humans even if debate is superseded. The algorithms may change; \n\nhumans will not.\n\n \n\n### Need strong out-of-domain generalization\n\n Regardless of how carefully designed our experiments are, \n\nhuman+human+human debate will not be a perfect match to AI+AI+human \n\ndebate. We are seeking research results that generalize to the setting \n\nwhere we replace the human debaters (or similar) with AIs of the future,\n\n which is a hard ask. This problem is fundamental: we do not have the \n\nadvanced AI systems of the future to play with, and want to learn about \n\nhuman uncertainty starting now.\n\n \n\n### Lack of philosophical clarity\n\n Any AI alignment scheme will be both an algorithm for training ML \n\nsystems and a proposed definition of what it means to be aligned. \n\nHowever, we do not expect humans to conform to any philosophically \n\nconsistent notion of values, and concepts like reflective equilibrium \n\nmust be treated with caution in case they break down when applied to \n\nreal human judgement. 
Fortunately, algorithms like debate need not \n\npresuppose philosophical consistency: a back and forth conversation to \n\nconvince a human judge makes sense even if the human is leaning on \n\nheuristics, intuition, and emotion. It is not obvious that debate works\n\n in this messy setting, but there is hope if we take advantage of \n\ninaction bias, uncertainty modeling, and other escape hatches. We \n\nbelieve lack of philosophical clarity is an argument for investing in \n\nsocial science research: if humans are not simple, we must engage with \n\ntheir complexity.\n\n \n\nThe scale of the challenge\n\n--------------------------\n\n Long-term AI safety is particularly important if we develop \n\nartificial general intelligence (AGI), which the OpenAI Charter defines \n\nas highly autonomous systems that outperform humans at most economically\n\n valuable work. If we want to \n\ntrain an AGI with reward learning from humans, it is unclear how many \n\nsamples will be required to align it. As much as possible, we can try \n\nto replace human samples with knowledge about the world gained by \n\nreading language, the internet, and other sources of information. But \n\nit is likely that a fairly large number of samples from people will \n\nstill be required. Since more samples means less noise and more safety,\n\n if we are uncertain about how many samples we need then we will want a \n\nlot of samples.\n\n \n\n A lot of samples would mean recruiting a lot of people. We cannot \n\nrule out needing to involve thousands to tens of thousands of people for\n\n millions to tens of millions of short interactions: answering \n\nquestions, judging debates, etc. We may need to train these people to \n\nbe better judges, arrange for peers to judge each other's reasoning, \n\ndetermine who is doing better at judging and give them more weight or a \n\nmore supervisory role, and so on. Many researchers would be required on\n\n the social science side to extract the highest quality information from\n\n the judges.\n\n \n\n A task of this scale would be a large interdisciplinary project, \n\nrequiring close collaborations in which people of different backgrounds \n\nfill in each other's missing knowledge. If machine learning reaches \n\nthis scale, it is important to get a head start on the collaborations \n\nsoon.\n\n \n\nConclusion: how you can help\n\n----------------------------\n\n We have argued that the AI safety community needs social scientists \n\nto tackle a major source of uncertainty about AI alignment algorithms: \n\nwill humans give good answers to questions? This uncertainty is \n\ndifficult to tackle with conventional machine learning experiments, \n\nsince machine learning is primitive. We are still in the early days of \n\nperformance on natural language and other tasks, and problems with human\n\n reward learning may only show up on tasks we cannot yet tackle.\n\n \n\n Our proposed solution is to replace machine learning with people, at\n\n least until ML systems can participate in the complexity of debates we \n\nare interested in. If we want to understand a game played with ML and \n\nhuman participants, we replace the ML participants with people, and see \n\nhow the all human game plays out. For the specific example of debate, \n\nwe start with debates with two ML debaters and a human judge, then \n\nswitch to two human debaters and a human judge. 
The result is a pure \n\nhuman experiment, motivated by machine learning but available to anyone \n\nwith a solid background in experimental social science. It won't be an \n\neasy experiment, which is all the more reason to start soon.\n\n \n\n If you are a social scientist interested in these questions, please \n\ntalk to AI safety researchers! We are interested in both conversation \n\nand close collaboration. There are many institutions engaged with \n\n \n\n If you are a machine learning researcher interested in or already \n\nworking on safety, please think about how alignment algorithms will work\n\n once we advance to tasks beyond the abilities of current machine \n\nlearning. If your preferred alignment scheme uses humans in an \n\nimportant way, can you simulate the future by replacing some or all ML \n\ncomponents with people? If you can imagine these experiments but don't \n\nfeel you have the expertise to perform them, find someone who does.\n\n \n\n", "bibliography_bib": [{"title": "Deep reinforcement learning from human preferences"}, {"title": "Judgment under uncertainty: heuristics and biases"}, {"title": "Intergroup bias"}, {"title": "AI safety via debate"}, {"title": "Supervising strong learners by amplifying weak experts"}, {"title": "Reward learning from human preferences and demonstrations in Atari"}, {"title": "AI safety gridworlds"}, {"title": "An empirical methodology for writing user-friendly natural language computer applications"}, {"title": "Factored Cognition"}, {"title": "Learning the Preferences of Ignorant, Inconsistent Agents"}, {"title": "Comparing human-centric and robot-centric sampling for robot deep learning from demonstrations"}, {"title": "Computational Social Science: Towards a collaborative future"}, {"title": "Mirror Mirror: Reflections on Quantitative Fairness"}, {"title": "Moral Anti-Realism"}, {"title": "Gender shades: Intersectional accuracy disparities in commercial gender classification"}, {"title": "Moral dumbfounding: When intuition finds no reason"}, {"title": "Batch active preference-based learning of reward functions"}, {"title": "Learning to understand goal specifications by modelling reward"}, {"title": "Improving language understanding by generative pre-training"}, {"title": "Thinking, fast and slow"}, {"title": "Deep Blue"}, {"title": "Mastering chess and shogi by self-play with a general reinforcement learning algorithm"}, {"title": "Deviant or Wrong? 
The Effects of Norm Information on the Efficacy of Punishment"}, {"title": "The weirdest people in the world?"}, {"title": "Fact, fiction, and forecast"}, {"title": "A theory of justice"}, {"title": "Looking for a psychology for the inner rational agent"}, {"title": "How (and where) does moral judgment work?"}, {"title": "Scalable agent alignment via reward modeling: a research direction"}, {"title": "OpenAI Five"}, {"title": "The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive"}, {"title": "How to overcome prejudice"}, {"title": "The nature and origins of misperceptions: Understanding false and unsupported beliefs about politics"}, {"title": "Persuasion, influence, and value: Perspectives from communication and social neuroscience"}, {"title": "Identifying and cultivating superforecasters as a method of improving probabilistic predictions"}, {"title": "Superforecasting: The art and science of prediction"}, {"title": "Cooperative inverse reinforcement learning"}, {"title": "Inverse reward design"}, {"title": "The art of being right"}, {"title": "Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination"}, {"title": "Prospect theory: An analysis of decisions under risk"}, {"title": "Advances in prospect theory: Cumulative representation of uncertainty"}, {"title": "From anomalies to forecasts: Toward a descriptive model of decisions under risk, under ambiguity, and from experience"}, {"title": "Cicero: Multi-Turn, Contextual Argumentation for Accurate Crowdsourcing"}, {"title": "The rationality of informal argumentation: A Bayesian approach to reasoning fallacies"}, {"title": "Rationality in medical decision making: a review of the literature on doctors’ decision-making biases"}, {"title": "Expert political judgment: How good is it? How can we know?"}, {"title": "Two approaches to the study of experts’ characteristics"}, {"title": "Debiasing"}, {"title": "An evaluation of argument mapping as a method of enhancing critical thinking performance in e-learning environments"}, {"title": "Forecasting tournaments: Tools for increasing transparency and improving the quality of debate"}, {"title": "How to make cognitive illusions disappear: Beyond \"heuristics and biases\""}, {"title": "Liberals and conservatives rely on different sets of moral foundations"}, {"title": "Negative emotions can attenuate the influence of beliefs on logical reasoning"}, {"title": "Epistemic democracy: Generalizing the Condorcet jury theorem"}, {"title": "Aggregating sets of judgments: An impossibility result"}, {"title": "The Delphi technique as a forecasting tool: issues and analysis"}, {"title": "OpenAI Charter"}], "id": "97545e97bc0f1ba62fe3cd35257c58a1"} +{"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Two Examples of Useful, Non-Robust Features", "authors": ["Gabriel Goh"], "date_published": "2019-08-06", "abstract": " This article is part of a discussion of the Ilyas et al. paper “Adversarial examples are not bugs, they are features”. You can learn more in the main discussion article . 
", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00019.3", "text": "\n\n
 This article is part of a discussion of the Ilyas et al. paper *\"Adversarial examples are not bugs, they are features\".* You can learn more in the [main discussion article](https://distill.pub/2019/advex-bugs-discussion/).\n\n
[Other Comments](https://distill.pub/2019/advex-bugs-discussion/#commentaries)\n\n
[Comment by Ilyas et al.](#rebuttal)\n\n
 Ilyas et al. define a *feature* as a function f that assigns a real number to each input, and measure its usefulness as its correlation \\mathbf{E}[yf(x)] with the label. But in the presence of an adversary, Ilyas et al. argue that the metric that truly matters is a feature's *robust usefulness*,\n\n
\\mathbf{E}\\left[\\inf\\_{\\|\\delta\\|\\leq\\epsilon}yf(x+\\delta)\\right],\n\n
 its correlation with the label while under attack. What do such useful but non-robust features actually look like?\n\n
### Non-Robust Features in Linear Models\n\n
 Reasoning about such features directly is difficult in the nonlinear models encountered in deep learning. 
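 To make this quantity concrete, the short numerical sketch below (our own toy example with invented numbers, not an experiment from this comment) estimates the usefulness and robust usefulness of a single linear feature under an L\\_2-bounded adversary; the linear form it uses, and the decomposition it previews, are introduced next.\n\n
```python
# Toy Monte Carlo sketch of robust usefulness for one linear feature under an
# L2-bounded adversary. Data, dimensions, and epsilon are invented assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, n, eps = 5, 200_000, 0.3

# Correlated Gaussian inputs with E[x] = 0 and covariance Sigma.
A = rng.normal(size=(d, d))
Sigma = A @ A.T / d
x = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
w = rng.normal(size=d)
y = np.sign(x @ w + 0.5 * rng.normal(size=n))    # noisy labels in {-1, +1}

a = rng.normal(size=d)                           # the feature direction
a_sigma = np.sqrt(a @ Sigma @ a)                 # ||a||_Sigma
f = (x @ a) / a_sigma

usefulness = np.mean(y * f)                      # Monte Carlo estimate of E[y f(x)]
# For L2, the worst-case perturbation of size eps shifts y*f(x) down by
# eps * ||a||_2 / ||a||_Sigma for every sample.
penalty = eps * np.linalg.norm(a) / a_sigma
robust_usefulness = np.mean(y * f - penalty)

print(f"usefulness        {usefulness:+.4f}")
print(f"robust usefulness {robust_usefulness:+.4f}")
print(f"adversarial penalty {penalty:.4f}")
```\n\n
 The printed gap between the two quantities is exactly the \\epsilon\\|a\\|\\_{\\*}/\\|a\\|\\_{\\Sigma} penalty that appears in the decomposition below.\n\n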
As Ilyas et al do, we restrict our attention to linear features of the form:\n\n
f(x)=\\frac{a^Tx}{\\|a\\|\\_{\\Sigma}} \\quad \\text{where} \\quad \\Sigma=\\mathbf{E}[xx^T] \\quad \\text{and} \\quad \\mathbf{E}[x]=0.\n\n
 The robust usefulness of a linear feature admits an elegant decomposition into two terms:\n\n
\\begin{aligned}\\mathbf{E}\\left[\\inf\\_{\\|\\delta\\|\\leq\\epsilon}yf(x+\\delta)\\right] &= \\mathbf{E}\\left[yf(x)+\\inf\\_{\\|\\delta\\|\\leq\\epsilon}yf(\\delta)\\right] \\\\ &= \\mathbf{E}[yf(x)] - \\epsilon\\frac{\\|a\\|\\_{\\*}}{\\|a\\|\\_{\\Sigma}}.\\end{aligned}\n\n
 The first term, \\mathbf{E}[yf(x)], is the correlation of the feature with the label (its usefulness); the second term measures the feature's non-robustness. Each feature can therefore be placed on a two-dimensional plot of usefulness against non-robustness.\n\n
[Figure: usefulness versus non-robustness for a family of linear features, subject to an L\\_2 adversary. High frequency features are both less useful and less robust; the labeled points A-F lie near the Pareto frontier in the non-robustness/usefulness space. The log robustness axis is \\log(\\|a\\_i\\|\\_{\\Sigma}/\\|a\\_i\\|)=\\log(\\lambda\\_i) when the a\\_i are eigenvectors of \\Sigma with eigenvalues \\lambda\\_i.]\n\n
 We demonstrate two constructions:\n\n
**Ensembles** The work of Tsipras et al suggests a collection of non-robust and non-useful features, if sufficiently uncorrelated, can be ensembled into a single useful, non-robust feature f. This process is illustrated above numerically. We choose a set of non-robust features by excluding all features above a threshold, and naively ensembling them according to:\n\n
(1-\\alpha)\\cdot a\\_{\\text{non-robust}} + \\alpha\\cdot a\\_{\\text{robust}},\n\n
 It is surprising, thus, that the experiments of Madry et al.\n\n
 To cite Ilyas et al.'s response, please cite their response in the main discussion article.\n\n
**Response Summary**: The construction of explicit non-robust features is interesting and makes progress towards visualizing the useful non-robust features detected by our experiments. We also agree that non-robust features arising as \"distractors\" is indeed not precluded by our theoretical framework, even if it is precluded by our experiments. This simple theoretical framework sufficed for reasoning about and predicting the outcomes of our experiments. (We also presented a theoretical setting where we can analyze things fully rigorously in Section 4 of our paper.) However, this comment rightly identifies finding a more comprehensive definition of feature as an important future research direction.\n\n
**Response**: These experiments (visualizing the robustness and usefulness of linear features) corroborate the existence of useful, non-robust features and make progress towards visualizing what these non-robust features actually look like. We also appreciate the point made by the provided construction of non-robust features (as defined in our theoretical framework) that are combinations of useful+robust and useless+non-robust features. Our theoretical framework indeed enables such a scenario, even if — as the commenter already notes — it is precluded by our experiments (which is more than the framework technically captures.) 
Specifically, in such a scenario, during the construction of the \\widehat{\\mathcal{D}}\\_{det} dataset, only the non-robust and useless term of the feature would be flipped. Thus, a classifier trained on such a dataset would associate the predictive robust feature with the *wrong* label and would thus not generalize on the test set. In contrast, classifiers trained on our actual \\widehat{\\mathcal{D}}\\_{det} datasets do generalize.\n\n
Overall, our focus while developing our theoretical framework was on reasoning about and predicting the outcomes of our experiments. As the comment points out, putting forth a theoretical framework that captures non-robust features in a very precise way is an important future research direction in itself. \n\n", "bibliography_bib": [{"title": "Adversarial examples are not bugs, they are features"}], "id": "81597626909837395a812e81f5e0e642"}
{"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Adversarial Reprogramming of Neural Cellular Automata", "authors": ["Ettore Randazzo", "Alexander Mordvintsev", "Eyvind Niklasson", "Michael Levin"], "date_published": "2021-05-06", "abstract": " This article is part of the Differentiable Self-organizing Systems Thread, an experimental format collecting invited short articles delving into differentiable self-organizing systems, interspersed with critical commentary from several experts in adjacent fields. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00027.004", "text": "\n\n
### Contents\n\n
[Adversarial MNIST CAs](#adversarial-mnist-cas) | [Perturbing the states of Growing CAs](#perturbing-the-states-of-growing-cas) | [Related Work](#related-work) | [Discussion](#discussion)\n\n
 This article is part of the Differentiable Self-organizing Systems Thread, an experimental format collecting invited short articles delving into differentiable self-organizing systems, interspersed with critical commentary from several experts in adjacent fields.\n\n
[Self-Organising Textures](https://distill.pub/selforg/2021/textures/)\n\n
In a complex system, whether biological, technological, or social, how can we discover signaling events that will alter system-level behavior in desired ways? Even when the rules governing the individual components of these complex systems are known, the inverse problem - going from desired behaviour to system design - is at the heart of many barriers for the advance of biomedicine, robotics, and other fields of importance to society.\n\n
Biology, specifically, is transitioning from a focus on mechanism (what is required for the system to work) to a focus on information (what algorithm is sufficient to implement adaptive behavior). Advances in machine learning represent an exciting and largely untapped source of inspiration and tooling to assist the biological sciences. Growing Neural Cellular Automata and Self-classifying MNIST Digits introduced the Neural Cellular Automata (Neural CA) model and demonstrated how tasks requiring self-organisation, such as pattern growth and self-classification of digits, can be trained in an end-to-end, differentiable fashion. The resulting models were robust to various kinds of perturbations: the growing CA expressed regenerative capabilities when damaged; the MNIST CA were responsive to changes in the underlying digits, triggering reclassification whenever necessary.
\n\nThese computational frameworks represent quantitative models with which \n\nto understand important biological phenomena, such as scaling of single \n\ncell behavior rules into reliable organ-level anatomies. The latter is a\n\n kind of anatomical homeostasis, achieved by feedback loops that must \n\nrecognize deviations from a correct target morphology and progressively \n\nreduce anatomical error.\n\nIn this work, we *train adversaries* whose goal is to reprogram \n\nCA into doing something other than what they were trained to do. In \n\norder to understand what kinds of lower-level signals alter \n\nsystem-level behavior of our CA, it is important to understand how these\n\n CA are constructed and where local versus global information resides.\n\nThe system-level behavior of Neural CA is affected by:\n\n* **Individual cell states.** States store \n\ninformation which is used for both diversification among cell behaviours\n\n and for communication with neighbouring cells.\n\n* **The model parameters.** These describe the \n\ninput/output behavior of a cell and are shared by every cell of the same\n\n family. The model parameters can be seen as *the way the system works*.\n\n* **The perceptive field.** This is how cells perceive \n\ntheir environment. In Neural CA, we always restrict the perceptive field\n\n to be the eight nearest neighbors and the cell itself. The way cells \n\nare perceived by each other is different between the Growing CA and \n\nMNIST CA. The Growing CA perceptive field is a set of weights fixed both\n\n during training and inference, while the MNIST CA perceptive field is \n\nlearned as part of the model parameters.\n\nWe will explore two kinds of adversarial attacks: 1) injecting a few \n\nadversarial cells into an existing grid running a pretrained model; and \n\n2) perturbing the global state of all cells on a grid.\n\nFor the first type of adversarial attacks we train a new CA model \n\nthat, when placed in an environment running one of the original models \n\ndescribed in the previous articles, is able to hijack the behavior of \n\nthe collective mix of adversarial and non-adversarial CA. This is an \n\nexample of injecting CA with differing *model parameters* into the \n\nsystem. In biology, numerous forms of hijacking are known, including \n\n Especially fascinating are the many cases of non-cell-autonomous \n\nsignaling developmental biology and cancer, showing that some cell \n\nbehaviors can significantly alter host properties both locally and at \n\nlong range. For example, bioelectrically-abnormal cells can trigger \n\nmetastatic conversion in an otherwise normal body (with no genetic \n\n All of these phenomena underlie the importance of understanding how \n\ncell groups make collective decisions, and how those tissue-level \n\ndecisions can be subverted by the activity of a small number of cells. \n\nIt is essential to develop quantitative models of such dynamics, in \n\norder to drive meaningful progress in regenerative medicine that \n\ncontrols system-level outcomes top-down, where cell- or molecular-level \n\nmicromanagement is infeasible .\n\n We apply a global state perturbation to all living cells. This can be \n\nseen as inhibiting or enhancing combinations of state values, in turn \n\nhijacking proper communications among cells and within the cell's own \n\nstates. 
Models like this represent not only ways of thinking about \n\nadversarial relationships in nature (such as parasitism and evolutionary\n\n arms races of genetic and physiological mechanisms), but also a roadmap\n\n for the development of regenerative medicine strategies. \n\nNext-generation biomedicine will need computational tools for inferring \n\nminimal, least-effort interventions that can be applied to biological \n\nsystems to predictively change their large-scale anatomical and \n\nbehavioral properties.\n\nRecall how the Self-classifying MNIST digits task consisted of \n\nplacing CA cells on a plane forming the shape of an MNIST digit. The \n\ncells then had to communicate among themselves in order to come to a \n\ncomplete consensus as to which digit they formed.\n\n (a) Local information neighbourhood - each cell can only observe itself\n\n and its neighbors' states, or the absence of neighbours. \n\nYour browser does not support the video tag.\n\n \n\n and freeze its parameters. We then train a new CA whose model \n\narchitecture is identical to the frozen model but is randomly \n\ninitialized. The training regime also closely approximates that of \n\nself-classifying MNIST digits CA. There are three important differences:\n\n* For each batch and each pixel, the CA is randomly chosen to be \n\neither the pretrained model or the new adversarial one. The adversarial \n\nCA is used 10% of the time, and the pre-trained, frozen, model the rest \n\nof the time.\n\nThe adversarial attack as defined here only modifies a small \n\npercentage of the overall system, but the goal is to propagate signals \n\nthat affect all the living cells. Therefore, these adversaries have to \n\nsomehow learn to communicate deceiving information that causes wrong \n\nclassifications in their neighbours and further cascades in the \n\npropagation of deceiving information by 'unaware' cells. The unaware \n\ncells' parameters cannot be changed so the only means of attack by the \n\nadversaries is to cause a change in the cells' states. Cells' states are\n\n responsible for communication and diversification.\n\nThe task is remarkably simple to optimize, reaching convergence in as\n\n little as 2000 training steps (as opposed to the two orders of \n\nmagnitude more steps needed to construct the original MNIST CA). By \n\nvisualising what happens when we remove the adversaries, we observe that\n\n the adversaries must be constantly communicating with their \n\nnon-adversarial neighbours to keep them convinced of the malicious \n\nclassification. While some digits don't recover after the removal of \n\nadversaries, most of them self-correct to the right classification. \n\nBelow we show examples where we introduce the adversaries at 200 steps \n\nand remove them after a further 200 steps.\n\nYour browser does not support the video tag.\n\n \n\nWe introduce the adversaries (red pixels) after 200 steps and remove \n\nthem after 200 more steps. Most digits recover, but not all. We \n\nhighlight mistakes in classification with a red background.\n\nWhile we trained the adversaries with a 10-to-90% split of \n\nadversarial vs. non-adversarial cells, we observe that often \n\nsignificantly fewer adversaries are needed to succeed in the deception. \n\nBelow we evaluate the experiment with just one percent of cells being \n\nadversaries.\n\nYour browser does not support the video tag.\n\n \n\nAdversaries constituting up 1% of the cell collective (red pixels). 
We \n\nhighlight mistakes in classification with a red background.\n\nWe created a demo playground where the reader can draw digits and \n\nplace adversaries with surgical precision. We encourage the reader to \n\nplay with the demo to get a sense of how easily non-adversarial cells \n\nare swayed towards the wrong classification.\n\nThe natural follow up question is whether these adversarial attacks \n\nwork on Growing CA, too. The Growing CA goal is to be able to grow a \n\ncomplex image from a single cell, and having its result be persistent \n\nover time and robust to perturbations. In this article, we focus on the \n\nlizard pattern model from Growing CA.\n\nYour browser does not support the video tag.\n\n \n\nThe target CA to hijack.\n\nThe goal is to have some adversarial cells change the global \n\nconfiguration of all the cells. We choose two new targets we would like \n\nthe adversarial cells to try and morph the lizard into: a tailless \n\nlizard and a red lizard.\n\nThe desired mutations we want to apply.\n\nThese targets have different properties: \n\n* **Red lizard:** converting a lizard from green to \n\nred would show a global change in the behaviour of the cell collective. \n\nThis behavior is not present in the dynamics observed by the original \n\nmodel. The adversaries are thus tasked with fooling other cells into \n\ndoing things they have never done before (create the lizard shape as \n\nbefore, but now colored in red).\n\n* **Tailless lizard:** having a severed tail is a more \n\nlocalized change that only requires some cells to be fooled into \n\nbehaving in the wrong way: the cells at the base of the tail need to be \n\nconvinced they constitute the edge or silhouette of the lizard, instead \n\nof proceeding to grow a tail as before.\n\nWe first train adversaries for the tailless target with a 10% chance \n\nfor any given cell to be an adversary. We prohibit cells to be \n\nadversaries if they are outside the target pattern; i.e. the tail \n\ncontains no adversaries.\n\nYour browser does not support the video tag.\n\n \n\n10% of the cells are adversarial.\n\nThe video above shows six different instances of the same model with \n\ndiffering stochastic placement of the adversaries. The results vary \n\nconsiderably: sometimes the adversaries succeed in removing the tail, \n\nsometimes the tail is only shrunk but not completely removed, and other \n\ntimes the pattern becomes unstable. Training these adversaries required \n\nmany more gradient steps to achieve convergence, and the pattern \n\nconverged to is qualitatively worse than what was achieved for the \n\nadversarial MNIST CA experiment.\n\nThe red lizard pattern fares even worse. Using only 10% adversarial \n\ncells results in a complete failure: the original cells are unaffected \n\nby the adversaries. Some readers may wonder whether the original \n\npretrained CA has the requisite skill, or 'subroutine' of producing a \n\nred output at all, since there are no red regions in the original \n\ntarget, and may suspect this was an impossible task to begin with. \n\nTherefore, we increased the proportion of adversarial cells until we \n\nmanaged to find a successful adversarial CA, if any were possible.\n\nYour browser does not support the video tag.\n\n \n\nIn the video above we can see how, at least in the first stages of \n\nmorphogenesis, 60% of adversaries are capable of coloring the lizard \n\n where we hide the adversarial cells and show only the original cells. 
\n\nThere, we see how a handful of original cells are colored in red. This \n\nis proof that the adversaries successfully managed to steer neighboring \n\ncells to color themselves red, where needed.\n\nHowever, the model is very unstable when iterated for periods of time\n\n longer than seen during training. Moreover, the learned adversarial \n\nattack is dependent on a majority of cells being adversaries. For \n\ninstance, when using fewer adversaries on the order of 20-30%, the \n\nconfiguration is unstable.\n\nIn comparison to the results of the previous experiment, the Growing \n\nCA model shows a greater resistance to adversarial perturbation than \n\nthose of the MNIST CA. A notable difference between the two models is \n\nthat the MNIST CA cells have to always be ready and able to change an \n\nopinion (a classification) based on information propagated through \n\nseveral neighbors. This is a necessary requirement for that model \n\nbecause at any time the underlying digit may change, but most of the \n\ncells would not observe any change in their neighbors' placements. For \n\ninstance, imagine the case of a one turning into a seven where the lower\n\n stroke of each overlap perfectly. From the point of view of the cells \n\nin the lower stroke of the digit, there is no change, yet the digit \n\nformed is now a seven. We therefore hypothesise MNIST CA are more \n\nreliant and 'trusting' of continuous long-distance communication than \n\nGrowing CA, where cells never have to reconfigure themselves to generate\n\n something different to before.\n\nWe suspect that more general-purpose Growing CA that have learned a \n\nvariety of target patterns during training are more likely to be \n\nsusceptible to adversarial attacks.\n\nWe observed that it is hard to fool Growing CA into changing their \n\nmorphology by placing adversarial cells inside the cell collective. \n\nThese adversaries had to devise complex local behaviors that would cause\n\n the non-adversarial cells nearby, and ultimately globally throughout \n\nthe image, to change their overall morphology.\n\nIn this section, we explore an alternative approach: perturbing the \n\nglobal state of all cells without changing the model parameters of any \n\ncell.\n\nAs before, we base our experiments on the Growing CA model trained to\n\n produce a lizard. Every cell of a Growing CA has an internal state \n\nvector with 16 elements. Some of them are phenotypical elements (the \n\nRGBA states) and the remaining 12 serve arbitrary purposes, used for \n\nstoring and communicating information. We can perturb the states of \n\nthese cells to hijack the overall system in certain ways (the discovery \n\nof such perturbation strategies is a key goal of biomedicine and \n\nsynthetic morphology). There are a variety of ways we can perform state \n\nperturbations. We will focus on *global state perturbations*, \n\ndefined as perturbations that are applied on every living cell at every \n\ntime step (analogous to \"systemic\" biomedical interventions, that are \n\ngiven to the whole organism (e.g., a chemical taken internally), as \n\nopposed to highly localized delivery systems). The new goal is to \n\ndiscover a certain type of global state perturbation that results in a \n\nstable new pattern.\n\nDiagram showing some possible stages for perturbing a lizard\n\n pattern. (a) We start from a seed that grows into a lizard (b) Fully \n\nconverged lizard. (c) We apply a global state perturbation at every \n\nstep. 
As a result, the lizard loses its tail. (d) We stop perturbing the\n\n state. We observe the lizard immediately grows back its tail.\n\nWe show 6 target patterns: the tailless and red lizard from the \n\nprevious experiment, plus a blue lizard and lizards with various severed\n\n limbs and severed head.\n\nMosaic of the desired mutations we want to apply.\n\n To give insight on why we chose this, an even simpler \"state addition\" \n\nmutation (a mutation consisting only of the addition of a vector to \n\nevery state) would be insufficient because the value of the states of \n\nour models are unbounded, and often we would want to suppress something \n\nby setting it to zero. The latter is generally impossible with constant \n\nstate additions, as a constant addition or subtraction of a value would \n\ngenerally lead to infinity, except for some fortunate cases where the \n\nnatural residual updates of the cells would cancel out with the constant\n\n addition at precisely state value zero. However, matrix multiplications\n\n have the possibility of amplifying/suppressing combinations of elements\n\n in the states: multiplying a state value repeatedly for a constant \n\nvalue less than one can easily suppress a state value to zero. We \n\nconstrain the matrix to be symmetric for reasons that will become clear \n\nin the following section.\n\n* The underlying CA parameters are frozen and we only train AAA.\n\n* We consider the set of initial image configurations to be both the \n\nseed state and the state with a fully grown lizard (as opposed to the \n\nGrowing CA article, where initial configurations consisted of the seed \n\nstate only).\n\nYour browser does not support the video tag.\n\n \n\nEffect of applying the trained perturbations.\n\nThe video above shows the model successfully discovering global state\n\n perturbations able to change a target pattern to a desired variation. \n\nWe show what happens when we stop perturbing the states (an \n\nout-of-training situation) at step 500 through step 1000, then \n\nreapplying the mutation. This demonstrates the ability of our \n\nperturbations to achieve the desired result both when starting from a \n\nseed, and when starting from a fully grown pattern. Furthermore it \n\ndemonstrates that the original CA easily recover from these state \n\nperturbations once it goes away. This last result is perhaps not \n\nsurprising given how robust growing CA models are in general.\n\nNot all perturbations are equally effective. In particular, the \n\nheadless perturbation is the least successful as it results in a loss of\n\n other details across the whole lizard pattern such as the white \n\ncoloring on its back. We hypothesize that the best perturbation our \n\ntraining regime managed to find, due to the simplicity of the \n\nperturbation, was suppressing a \"structure\" that contained both the \n\nmorphology of the head and the white colouring. This may be related to \n\nthe concept of differentiation and distinction of biological organs. \n\nPredicting what kinds of perturbations would be harder or impossible to \n\nbe done, before trying them out empirically, is still an open research \n\nquestion in biology. 
On the other hand, a variant of this kind of \n\nsynthetic analysis might help with defining higher order structures \n\nwithin biological and synthetic systems.\n\n### Directions and compositionality of perturbations\n\nOur choice of using a symmetric matrix for representing global state \n\nperturbations is justified by a desire to have compositionality. Every \n\ncomplex symmetric matrix AAA can be diagonalized as follows: \n\nA=QΛQ⊺A = Q \\Lambda Q^\\intercalA=QΛQ⊺\n\nwhere Λ\\LambdaΛ is the diagonal eigenvalues matrix and QQQ\n\n is the unitary matrix of its eigenvectors. Another way of seeing this \n\nis applying a change of basis transformation, scaling each component \n\nproportional to the eigenvalues, and then changing back to the original \n\nbasis. This should also give a clearer intuition on the ease of \n\nsuppressing or amplifying combinations of states. Moreover, we can now \n\ninfer what would happen if all the eigenvalues were to be one. In that \n\nLet us then take the tailless perturbation and see what happens as we vary kkk:\n\nYour browser does not support the video tag.\n\n \n\n negative, the lizard grows a longer tail. Unfortunately, the further \n\naway we go, the more unstable the system becomes and eventually the \n\nlizard pattern grows in an unbounded fashion. This behaviour likely \n\nstems from that perturbations applied on the states also affect the \n\nhomeostatic regulation of the system, making some cells die out or grow \n\nin different ways than before, resulting in a behavior akin to \"cancer\" \n\nin biological systems.\n\nIn that case, \n\n this could result in a stable perturbation. An intuitive understanding \n\nof this is interpolating stable perturbations using the direction \n\ncoefficients.\n\nIn practice, however, the eigenvectors are also different, so the \n\nresults of the combination will likely be worse the more different the \n\nrespective eigenvector bases are.\n\nBelow, we interpolate the direction coefficients, while keeping their\n\n sum to be one, of two types of perturbations: tailless and no-leg \n\nlizards.\n\nYour browser does not support the video tag.\n\n \n\nWhile it largely achieves what we expect, we observe some unintended \n\neffects such as the whole pattern starting to traverse vertically in the\n\n grid. Similar results happen with other combinations of perturbations. \n\nWhat happens if we remove the restriction of the sum of kkks\n\n being equal to one, and instead add both perturbations in their \n\nentirety? We know that if the two perturbations were the same, we would \n\nend twice as far away from the identity perturbation, and in general we \n\nexpect the variance of these perturbations to increase. Effectively, \n\nthis means going further and further away from the stable perturbations \n\ndiscovered during training. We would expect more unintended effects that\n\n may disrupt the CA as the sum of kkks increases.\n\nBelow, we demonstrate what happens when we combine the tailless and \n\nthe no-leg lizard perturbations at their fullest. Note that when we set \n\nYour browser does not support the video tag.\n\n \n\nEffect of composing two perturbations.\n\nSurprisingly, the resulting pattern is almost as desired. However, it\n\n also suffers from the vertical movement of the pattern observed while \n\ninterpolating kkks.\n\n \n\nThis framework can be generalized to any arbitrary number of \n\nperturbations. Below, we have created a small playground that allows the\n\n reader to input their desired combinations. 
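For readers who prefer code to equations, the sketch below shows one plausible way to constrain, scale, and compose such symmetric-matrix perturbations. The helper names and the exact parameterization (scaling each perturbation's eigenvalue deviation from the identity by a coefficient k) are illustrative assumptions consistent with the description above, not the original implementation.

```python
import numpy as np

def symmetrize(raw):
    """Constrain a learned square matrix to be symmetric, so it admits a real
    eigendecomposition A = Q diag(L) Q^T."""
    return 0.5 * (raw + raw.T)

def scale_perturbation(A, k):
    """Scale a perturbation's deviation from the identity: A_k = Q(I + k(L - I))Q^T.
    k = 1 recovers A, k = 0 is the identity (no perturbation), k < 0 reverses it."""
    eigvals, Q = np.linalg.eigh(A)
    return (Q * (1.0 + k * (eigvals - 1.0))) @ Q.T

def compose_perturbations(perturbations, coefficients):
    """Combine perturbations by summing their deviations from the identity,
    weighted by per-perturbation direction coefficients."""
    n = perturbations[0].shape[0]
    combined = np.eye(n)
    for A, k in zip(perturbations, coefficients):
        combined = combined + k * (A - np.eye(n))
    return combined

def apply_global_perturbation(states, A, alive_mask):
    """Right-multiply every living cell's state vector by the perturbation matrix."""
    return np.where(alive_mask[..., None], states @ A, states)
```

When the perturbations share an eigenbasis, this sum exactly interpolates their eigenvalue scalings; when the eigenbases differ, it is only an approximation of composing their effects, which is consistent with the instability observed above.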
Empirically, we were \n\nsurprised by how many of these combinations result in the intended \n\nRelated work\n\n------------\n\nThis work is inspired by Generative Adversarial Networks (GANs) .\n\n While with GANs it is typical to cotrain pairs of models, in this work \n\nwe froze the original CA and trained the adversaries only. This setup is\n\n### Influence maximization\n\nAdversarial cellular automata have parallels to the field of \n\ninfluence maximization. Influence maximization involves determining the \n\noptimal nodes to influence in order to maximize influence over an entire\n\n graph, commonly a social graph, with the property that nodes can in \n\nturn influence their neighbours. Such models are used to model a wide \n\nvariety of real-world applications involving information spread in a \n\ngraph. \n\n A common setting is that each vertex in a graph has a binary state, \n\nwhich will change if and only if a sufficient fraction of its \n\nneighbours' states switch. Examples of such models are social influence \n\nmaximization (maximally spreading an idea in a network of people), \n\n (when small perturbations to a system bring about a larger 'phase \n\nchange'). At the time of writing this article, for instance, contagion \n\nminimization is a model of particular interest. NCA are a graph - each \n\ncell is a vertex and has edges to its eight neighbours, through which it\n\n can pass information. This graph and message structure is significantly\n\n more complex than the typical graph underlying much of the research in \n\ninfluence maximization, because NCA cells pass vector-valued messages \n\nand have a complex update rules for their internal states, whereas \n\ngraphs in influence maximization research typically consist of more \n\nsimple binary cells states and threshold functions on edges determining \n\nwhether a node has switched states. Many concepts from the field could \n\nbe applied and are of interest, however.\n\nFor example, in this work, we have made an assumption that our \n\nadversaries can be positioned anywhere in a structure to achieve a \n\ndesired behaviour. A common focus of investigation in influence \n\nmaximization problems is deciding which nodes in a graph will result in \n\nmaximal influence on the graph, referred to as target set selection .\n\n This problem isn't always tractable, often NP-hard, and solutions \n\nfrequently involve simulations. Future work on adversarial NCA may \n\ninvolve applying techniques from influence maximization in order to find\n\n the optimal placement of adversarial cells.\n\nDiscussion\n\n----------\n\nThis article showed two different kinds of adversarial attacks on Neural CA.\n\nInjections of adversarial CA in a pretrained Self-classifying MNIST \n\nCA showed how an existing system of cells that are heavily reliant on \n\nthe passing of information among each other is easily swayed by \n\ndeceitful signaling. This problem is routinely faced by biological \n\nsystems, which face hijacking of behavioral, physiological, and \n\nmorphological regulatory mechanisms by parasites and other agents in the\n\n biosphere with which they compete. Future work in this field of \n\ncomputer technology can benefit from research on biological \n\ncommunication mechanisms to understand how cells maximize reliability \n\nand fidelity of inter- and intra-cellular messages required to implement\n\n adaptive outcomes. 
\n\nThe adversarial injection attack was much less effective against \n\nGrowing CA and resulted in overall unstable CA. This dynamic is also of \n\nimportance to the scaling of control mechanisms (swarm robotics and \n\nnested architectures): a key step in \"multicellularity\" (joining \n\ntogether to form larger systems from sub-agents )\n\n is informational fusion, which makes it difficult to identify the \n\nsource of signals and memory engrams. An optimal architecture would need\n\n to balance the need for validating control messages with a possibility \n\nof flexible merging of subunits, which wipes out metadata about the \n\nspecific source of informational signals. Likewise, the ability to \n\nrespond successfully to novel environmental challenges is an important \n\ngoal for autonomous artificial systems, which may import from biology \n\nstrategies that optimize tradeoff between maintaining a specific set of \n\nsignals and being flexible enough to establish novel signaling regimes \n\nwhen needed.\n\nThe global state perturbation experiment on Growing CA shows how it \n\nis still possible to hijack these CA towards stable out-of-training \n\nconfigurations and how these kinds of attacks are somewhat composable in\n\n a similar way to how embedding spaces are manipulable in the natural \n\n We hypothesize that this is partially due to the regenerative \n\ncapabilities of the pretrained CA, and that other models may be less \n\ncapable of recovery from arbitrary perturbations.\n\n", "bibliography_bib": [{"title": "Growing Neural Cellular Automata"}, {"title": "Self-classifying MNIST Digits"}, {"title": "Herpes Simplex Virus: The Hostile Guest That Takes Over Your Home"}, {"title": "The\n role of gut microbiota (commensal bacteria) and the mucosal barrier in \nthe pathogenesis of inflammatory and autoimmune diseases and cancer: \ncontribution of germ-free and gnotobiotic animal models of human \ndiseases"}, {"title": "Regulation of axial and head patterning during planarian regeneration by a commensal bacterium"}, {"title": "Toxoplasma gondii infection and behavioral outcomes in humans: a systematic review"}, {"title": "Resting potential, oncogene-induced tumorigenesis, and metastasis: the bioelectric basis of cancer in vivo"}, {"title": "Transmembrane voltage potential of somatic cells controls oncogene-mediated tumorigenesis at long-range"}, {"title": "Cross-limb communication during Xenopus hindlimb regenerative response: non-local bioelectric injury signals"}, {"title": "Local\n and long-range endogenous resting potential gradients antagonistically \nregulate apoptosis and proliferation in the embryonic CNS"}, {"title": "Top-down models in biology: explanation and control of complex living systems above the molecular level"}, {"title": "Generative Adversarial Networks"}, {"title": "Adversarial Reprogramming of Neural Networks"}, {"title": "Efficient Estimation of Word Representations in Vector Space"}, {"title": "Fader Networks: Manipulating Images by Sliding Attributes"}, {"title": "Maximizing the spread of influence through a social network"}, {"title": "The Independent Cascade and Linear Threshold Models"}, {"title": "A Survey on Influence Maximization in a Social Network"}, {"title": "Simplicial models of social contagion"}, {"title": "Cascading Behavior in Networks: Algorithmic and Economic Issues"}, {"title": "On the Approximability of Influence in Social Networks"}, {"title": "The Computational Boundary of a “Self”: Developmental Bioelectricity Drives Multicellularity 
and Scale-Free Cognition"}], "id": "a0ac102158ebc90fd28c1dd20fc71673"} +{"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Visualizing the Impact of Feature Attribution Baselines", "authors": ["Pascal Sturmfels", "Scott Lundberg", "Su-In Lee"], "date_published": "2020-01-10", "abstract": "Path attribution methods are a gradient-based way of explaining deep models. These methods require choosing a hyperparameter known as the baseline input. What does this hyperparameter mean, and how important is it? In this article, we investigate these questions using image classification networks as a case study. We discuss several different ways to choose a baseline input and the assumptions that are implicit in each baseline. Although we focus here on path attribution methods, our discussion of baselines is closely connected with the concept of missingness in the feature space - a concept that is critical to interpretability research. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00022", "text": "\n\nPath attribution methods are a gradient-based way\n\n of explaining deep models. These methods require choosing a\n\n hyperparameter known as the *baseline input*.\n\n What does this hyperparameter mean, and how important is it? In this article,\n\n we investigate these questions using image classification networks\n\n as a case study. We discuss several different ways to choose a baseline\n\n input and the assumptions that are implicit in each baseline.\n\n Although we focus here on path attribution methods, our discussion of baselines\n\n is closely connected with the concept of missingness in the feature space -\n\n a concept that is critical to interpretability research.\n\n \n\nIntroduction\n\n------------\n\n If you are in the business of training neural networks,\n\n you might have heard of the integrated gradients method, which\n\n was introduced at \n\n .\n\n The method computes which features are important \n\n to a neural network when making a prediction on a \n\n particular data point. This helps users\n\n understand which features their network relies on.\n\n Since its introduction,\n\n integrated gradients has been used to interpret \n\n networks trained on a variety of data types, \n\n including retinal fundus images \n\n and electrocardiogram recordings .\n\n \n\n If you've ever used integrated gradients,\n\n you know that you need to define a baseline input x'x'x' before\n\n using the method. Although the original paper discusses the need for a baseline\n\n and even proposes several different baselines for image data - including \n\n the constant black image and an image of random noise - there is\n\n little existing research about the impact of this baseline. \n\n Is integrated gradients sensitive to the \n\n hyperparameter choice? Why is the constant black image \n\n a \"natural baseline\" for image data? Are there any alternative choices?\n\n \n\n In this article, we will delve into how this hyperparameter choice arises,\n\n and why understanding it is important when you are doing model interpretation.\n\n As a case-study, we will focus on image classification models in order \n\n to visualize the effects of the baseline input. We will explore several \n\n notions of missingness, including both constant baselines and baselines\n\n defined by distributions. 
Finally, we will discuss different ways to compare\n\n baseline choices and talk about why quantitative evaluation\n\n remains a difficult problem.\n\n \n\nImage Classification\n\n--------------------\n\n We focus on image classification as a task, as it will allow us to visually\n\n plot integrated gradients attributions, and compare them with our intuition\n\n , a convolutional \n\n neural network designed for the ImageNet dataset ,\n\n On the ImageNet validation set, Inception V4 has a top-1 accuracy of over 80%.\n\n We download weights from TensorFlow-Slim ,\n\n and visualize the predictions of the network on four different images from the \n\n validation set.\n\n \n\n \n\n Right: The predicted logits of the network on the original image. The\n\n network correctly classifies all images with high confidence.\n\n relative to the true class label.\n\n \n\n Although state of the art models perform well on unseen data,\n\n users may still be left wondering: *how* did the model figure\n\n out which object was in the image? There are a myriad of methods to\n\n interpret machine learning models, including methods to\n\n visualize and understand how the network represents inputs internally , \n\n feature attribution methods that assign an importance score to each feature \n\n for a specific input ,\n\n and saliency methods that aim to highlight which regions of an image\n\n the model was looking at when making a decision\n\n .\n\n visualized as a saliency method, and a saliency method can assign importance\n\n scores to each individual pixel. In this article, we will focus\n\n on the feature attribution method integrated gradients.\n\n \n\n Formally, given a target input xxx and a network function fff, \n\n to the iiith feature value representing how much that feature\n\n indicates that feature strongly increases or decreases the network output \n\n the feature in question did not influence f(x)f(x)f(x).\n\n \n\n prediction using integrated gradients. \n\n The pixels in white indicate more important pixels. In order to plot\n\n attributions, we follow the same design choices as .\n\n That is, we plot the absolute value of the sum of feature attributions\n\n high-magnitude attributions dominating the color scheme.\n\n \n\nA Better Understanding of Integrated Gradients\n\n----------------------------------------------\n\n As you look through the attribution maps, you might find some of them\n\n To better understand this behavior, we need to explore how\n\n we generated feature attributions. Formally, integrated gradients\n\n defines the importance value for the iiith feature value as follows:\n\n \\times \\underbrace{\\int\\_{\\alpha = 0}^ 1}\\_{\\text{From baseline to input…}}\n\n where xxx is the current input,\n\n \"absence\" of feature input. The subscript iii is used\n\n to denote indexing into the iiith feature.\n\n \n\n As the formula above states, integrated gradients gets importance scores\n\n But why would doing this make sense? Recall that the gradient of\n\n a function represents the direction of maximum increase. The gradient\n\n is telling us which pixels have the steepest local slope with respect\n\n to the output. For this reason, the gradient of a network at the input\n\n was one of the earliest saliency methods.\n\n \n\n Unfortunately, there are many problems with using gradients to interpret\n\n deep neural networks . \n\n One specific issue is that neural networks are prone to a problem\n\n sample even if the network depends heavily on those features. 
This can happen\n\n Intuitively, shifting the pixels in an image by a small amount typically\n\n doesn't change what the network sees in the image. We can illustrate\n\n saturation by plotting the network output at all\n\n images between the baseline x'x'x' and the current image. The figure\n\n below displays that the network\n\n output for the correct class increases initially, but then quickly flattens.\n\n \n\n A plot of network outputs at x'+α(x−x')x' + \\alpha (x - x')x'+α(x−x').\n\n Notice that the network output saturates the correct class\n\n at small values of α\\alphaα. By the time α=1\\alpha = 1α=1,\n\n the network output barely changes.\n\n \n\n What we really want to know is how our network got from \n\n predicting essentially nothing at x'x'x' to being \n\n completely saturated towards the correct output class at xxx.\n\n Which pixels, when scaled along this path, most\n\n increased the network output for the correct class? This is\n\n exactly what the formula for integrated gradients gives us.\n\n \n\n By integrating over a path, \n\n integrated gradients avoids problems with local gradients being\n\n saturated. We can break the original equation\n\n down and visualize it in three separate parts: the interpolated image between\n\n the baseline image and the target image, the gradients at the interpolated\n\n image, and accumulating many such gradients over α\\alphaα.\n\n \\int\\_{\\alpha' = 0}^{\\alpha} \\underbrace{(x\\_i - x'\\_i) \\times \n\n {\\delta x\\_i} d \\alpha'}\\_{\\text{(2): Gradients at Interpolation}} \n\n approximation of the integral with 500 linearly-spaced points between 0 and 1.\n\n Integrated gradients, visualized. In the line chart, the red line refers to\n\n accumulate at small values of α\\alphaα.\n\n \n\n We have casually omitted one part of the formula: the fact\n\n that we multiply by a difference from a baseline. Although\n\n we won't go into detail here, this term falls out because we\n\n care about the derivative of the network\n\n function fff with respect to the path we are integrating over.\n\n That is, if we integrate over the\n\n straight-line between x'x'x' and xxx, which\n\n we can represent as γ(α)=x'+α(x−x')\\gamma(\\alpha) =\n\n x' + \\alpha(x - x')γ(α)=x'+α(x−x'), then:\n\n δf(γ(α))δα=δf(γ(α))δγ(α)×δγ(α)δα=δf(x'+α'(x−x'))δxi×(xi−x'i)\n\n \\frac{\\delta f(\\gamma(\\alpha))}{\\delta \\alpha} =\n\n \\frac{\\delta f(\\gamma(\\alpha))}{\\delta \\gamma(\\alpha)} \\times \n\n \\frac{\\delta \\gamma(\\alpha)}{\\delta \\alpha} = \n\n \\frac{\\delta f(x' + \\alpha' (x - x'))}{\\delta x\\_i} \\times (x\\_i - x'\\_i) \n\n δαδf(γ(α))​=δγ(α)δf(γ(α))​×δαδγ(α)​=δxi​δf(x'+α'(x−x'))​×(xi​−x'i​)\n\n The difference from baseline term is the derivative of the \n\n path function γ\\gammaγ with respect to α\\alphaα.\n\n The theory behind integrated gradients is discussed\n\n in more detail in the original paper. 
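In practice the path integral is approximated with a discrete Riemann sum. A minimal sketch is below, assuming a generic `grad_fn` that returns the gradient of the network output with respect to the input; the function names are illustrative and not from any particular library.

```python
import numpy as np

def integrated_gradients(x, x_baseline, grad_fn, k=500):
    """Riemann-sum approximation of integrated gradients along the straight-line
    path from the baseline x' to the input x, using k interpolation points."""
    accumulated = np.zeros_like(x, dtype=float)
    for alpha in np.linspace(0.0, 1.0, k):
        interpolated = x_baseline + alpha * (x - x_baseline)
        accumulated += grad_fn(interpolated)         # gradient at a point on the path
    return (x - x_baseline) * accumulated / k        # times difference from baseline

def completeness_gap(attributions, f, x, x_baseline):
    """The attributions should sum to f(x) - f(x'); a large gap suggests that
    more interpolation points are needed."""
    return float(np.sum(attributions) - (f(x) - f(x_baseline)))
```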
In particular, the authors\n\n show that integrated gradients satisfies several desirable\n\n properties, including the completeness axiom:\n\n Axiom 1: Completeness∑iϕiIG(f,x,x')=f(x)−f(x')\n\n \\textrm{Axiom 1: Completeness}\\\\\n\n \\sum\\_i \\phi\\_i^{IG}(f, x, x') = f(x) - f(x')\n\n Axiom 1: Completenessi∑​ϕiIG​(f,x,x')=f(x)−f(x')\n\n Note that this theorem holds for any baseline x'x'x'.\n\n Completeness is a desirable property because it states that the \n\n importance scores for each feature break down the output of the network:\n\n each importance score represents that feature's individual contribution to\n\n Although it's not essential to our discussion here, we can prove \n\n that integrated gradients satisfies this axiom using the\n\n [fundamental\n\n full discussion of all of the properties that integrated \n\n gradients satisfies to the original paper, since they hold\n\n independent of the choice of baseline. The completeness \n\n axiom also provides a way to measure convergence.\n\n \n\n In practice, we can't compute the exact value of the integral. Instead,\n\n we use a discrete sum approximation with kkk linearly-spaced points between\n\n 0 and 1 for some value of kkk. If we only chose 1 point to \n\n approximate the integral, that feels like too few. Is 10 enough? 100?\n\n Intuitively 1,000 may seem like enough, but can we be certain?\n\n As proposed in the original paper, we can use the completeness axiom\n\n as a sanity check on convergence: run integrated gradients with kkk\n\n and if the difference is large, re-run with a larger kkk \n\n Of course, this brings up a new question: what is \"large\" in this context?\n\n One heuristic is to compare the difference with the magnitude of the\n\n output itself.\n\n .\n\n \n\n The line chart above plots the following equation in red:\n\n ∑iϕiIG(f,x,x';α)⏟(4): Sum of Cumulative Gradients up to α\n\n (4): Sum of Cumulative Gradients up to αi∑​ϕiIG​(f,x,x';α)​​\n\n That is, it sums all of the pixel attributions in the saliency map.\n\n We can see that with 500 samples, we seem (at least intuitively) to\n\n have converged. But this article isn't about how \n\n to get good convergence - it's about baselines! In order\n\n to advance our understanding of the baseline, we will need a brief excursion\n\n into the world of game theory.\n\n \n\nGame Theory and Missingness\n\n---------------------------\n\n Integrated gradients is inspired by work\n\n from cooperative game theory, specifically the Aumann-Shapley value\n\n . In cooperative game theory,\n\n a non-atomic game is a construction used to model large-scale economic systems\n\n Aumann-Shapley values provide a theoretically grounded way to\n\n determine how much different groups of participants contribute to the system.\n\n \n\n In game theory, a notion of missingness is well-defined. Games are defined\n\n on coalitions - sets of participants - and for any specific coalition,\n\n a participant of the system can be in or out of that coalition. The fact\n\n that games can be evaluated on coalitions is the foundation of\n\n the Aumann-Shapley value. Intuitively, it computes how\n\n much value a group of participants adds to the game \n\n by computing how much the value of the game would increase\n\n if we added more of that group to any given coalition.\n\n \n\n Unfortunately, missingness is a more difficult notion when\n\n we are speaking about machine learning models. 
In order\n\n to evaluate how important the iiith feature is, we\n\n want to be able to compute how much the output of\n\n the network would increase if we successively increased\n\n the \"presence\" of the iiith feature. But what does this mean, exactly?\n\n In order to increase the presence of a feature, we would need to start\n\n with the feature being \"missing\" and have a way of interpolating \n\n between that missingness and its current, known value.\n\n \n\n Hopefully, this is sounding awfully familiar. Integrated gradients\n\n has a baseline input x'x'x' for exactly this reason: to model a\n\n feature being absent. But how should you choose\n\n x'x'x' in order to best represent this? It seems to be common practice\n\n to choose a baseline input x'x'x' to be the vector of\n\n all zeros. But consider the following scenario: you've learned a model\n\n on a healthcare dataset, and one of the features is blood sugar level.\n\n The model has correctly learned that excessively low levels of blood sugar,\n\n which correspond to hypoglycemia, is dangerous. Does\n\n a blood sugar level of 000 seem like a good choice to represent missingness?\n\n \n\n The point here is that fixed feature values may have unintended meaning.\n\n The problem compounds further when you consider the difference from\n\n baseline term xi−x'ix\\_i - x'\\_ixi​−x'i​.\n\n To understand why our machine learning model thinks this patient\n\n is at high risk, you run integrated gradients on this data point with a\n\n because xi−x'i=0x\\_i - x'\\_i = 0xi​−x'i​=0. This is despite the fact that \n\n a blood sugar level of 000 would be fatal!\n\n \n\n We find similar problems when we move to the image domain.\n\n If you use a constant black image as a baseline, integrated gradients will\n\n not highlight black pixels as important even if black pixels make up\n\n authors in , and is in fact\n\n central to the definition of a baseline: we wouldn't want integrated gradients\n\n to highlight missing features as important! But then how do we avoid\n\n giving zero importance to the baseline color?\n\n \n\n Mouse over the segmented image to choose a different color\n\n as a baseline input x'x'x'. Notice that pixels\n\n of the baseline color are not highlighted as important, \n\n even if they make up part of the main object in the image.\n\n \n\nAlternative Baseline Choices\n\n----------------------------\n\n It's clear that any constant color baseline will have this problem.\n\n Are there any alternatives? In this section, we\n\n compare four alternative choices for a baseline in the image domain.\n\n Before proceeding, it's important to note that this article isn't\n\n the first article to point out the difficulty of choosing a baselines.\n\n Several articles, including the original paper, discuss and compare\n\n several notions of \"missingness\", both in the\n\n context of integrated gradients and more generally \n\n .\n\n Nonetheless, choosing the right baseline remains a challenge. Here we will\n\n present several choices for baselines: some based on existing literature,\n\n others inspired by the problems discussed above. The figure at the end \n\n of the section visualizes the four baselines presented here.\n\n \n\n### The Maximum Distance Baseline\n\n If we are worried about constant baselines that are blind to the baseline\n\n color, can we explicitly construct a baseline that doesn't suffer from this\n\n problem? 
One obvious way to construct such a baseline is to take the \n\n farthest image in L1 distance from the current image such that the\n\n baseline is still in the valid pixel range. This baseline, which\n\n we will refer to as the maximum distance baseline (denoted\n\n *max dist.* in the figure below),\n\n avoids the difference from baseline issue directly. \n\n \n\n### The Blurred Baseline\n\n The issue with the maximum distance baseline is that it doesn't \n\n really represent *missingness*. It actually contains a lot of\n\n information about the original image, which means we are no longer\n\n explaining our prediction relative to a lack of information. To better\n\n preserve the notion of missingness, we take inspiration from \n\n . In their paper,\n\n Fong and Vedaldi use a blurred version of the image as a \n\n domain-specific way to represent missing information. This baseline\n\n is attractive because it captures the notion of missingness in images\n\n in a very human intuitive way. In the figure below, this baseline is\n\n denoted *blur*. The figure lets you play with the smoothing constant\n\n used to define the baseline.\n\n \n\n### The Uniform Baseline\n\n One potential drawback with the blurred baseline is that it is biased\n\n to highlight high-frequency information. Pixels that are very similar\n\n to their neighbors may get less importance than pixels that are very \n\n different than their neighbors, because the baseline is defined as a weighted\n\n from both and the original integrated\n\n gradients paper. Another way to define missingness is to simply sample a random\n\n uniform image in the valid pixel range and call that the baseline. \n\n We refer to this baseline as the *uniform* baseline in the figure below.\n\n \n\n### The Gaussian Baseline\n\n Of course, the uniform distribution is not the only distribution we can\n\n touch on in the next section), Smilkov et al. \n\n variance σ\\sigmaσ. We can use the same distribution as a baseline for \n\n range, which means that as σ\\sigmaσ approaches ∞\\infty∞, the gaussian\n\n baseline approaches the uniform baseline.\n\n \n\n Comparing alternative baseline choices. For the blur and gaussian\n\n baselines, you can vary the parameter σ\\sigmaσ, which refers\n\n to the width of the smoothing kernel and the standard deviation of\n\n noise respectively.\n\n \n\nAveraging Over Multiple Baselines\n\n---------------------------------\n\n You may have nagging doubts about those last two baselines, and you\n\n would be right to have them. A randomly generated baseline\n\n can suffer from the same blindness problem that a constant image can. If \n\n we draw a uniform random image as a baseline, there is a small chance\n\n that a baseline pixel will be very close to its corresponding input pixel\n\n in value. Those pixels will not be highlighted as important. The resulting\n\n saliency map may have artifacts due to the randomly drawn baseline. Is there\n\n any way we can fix this problem?\n\n \n\n Perhaps the most natural way to do so is to average over multiple\n\n different baselines, as discussed in \n\n .\n\n Although doing so may not be particularly natural for constant color images\n\n (which colors do you choose to average over and why?), it is a\n\n very natural notion for baselines drawn from distributions. 
Simply\n\n draw more samples from the same distribution and average the\n\n importance scores from each sample.\n\n \n\n### Assuming a Distribution\n\n At this point, it's worth connecting the idea of averaging over multiple\n\n baselines back to the original definition of integrated gradients. When\n\n we average over multiple baselines from the same distribution DDD,\n\n we are attempting to use the distribution itself as our baseline. \n\n We use the distribution to define the notion of missingness: \n\n if we don't know a pixel value, we don't assume its value to be 0 - instead\n\n we assume that it has some underlying distribution DDD. Formally, given\n\n a baseline distribution DDD, we integrate over all possible baselines\n\n x'∈Dx' \\in Dx'∈D weighted by the density function pDp\\_DpD​:\n\n )}^{\\text{integrated gradients \n\n with baseline } x'\n\n } \\times \\underbrace{p\\_D(x') dx'}\\_{\\text{…and weight by the density}} \\bigg)\n\n In terms of missingness, assuming a distribution might intuitively feel \n\n like a more reasonable assumption to make than assuming a constant value.\n\n But this doesn't quite solve the issue: instead of having to choose a baseline\n\n x'x'x', now we have to choose a baseline distribution DDD. Have we simply\n\n postponed the problem? We will discuss one theoretically motivated\n\n way to choose DDD in an upcoming section, but before we do, we'll take\n\n a brief aside to talk about how we compute the formula above in practice,\n\n and a connection to an existing method that arises as a result.\n\n \n\n### Expectations, and Connections to SmoothGrad\n\n Now that we've introduced a second integral into our formula,\n\n we need to do a second discrete sum to approximate it, which\n\n requires an additional hyperparameter: the number of baselines to sample. \n\n In , Erion et al. make the \n\n observation that both integrals can be thought of as expectations:\n\n the first integral as an expectation over DDD, and the second integral \n\n as an expectation over the path between x'x'x' and xxx. This formulation,\n\n called *expected gradients*, is defined formally as:\n\n {\\text{Expectation over \\(D\\) and the path…}} \n\n \\bigg[ \\overbrace{(x\\_i - x'\\_i) \\times \n\n \\frac{\\delta f(x' + \\alpha (x - x'))}{\\delta x\\_i}}^{\\text{…of the \n\n importance of the } i\\text{th pixel}} \\bigg]\n\n Expected gradients and integrated gradients belong to a family of methods\n\n known as \"path attribution methods\" because they integrate gradients\n\n over one or more paths between two valid inputs. \n\n Both expected gradients and integrated gradients use straight-line paths,\n\n in more detail in the original paper. To compute expected gradients in\n\n practice, we use the following formula:\n\n ϕ^iEG(f,x;D)=1k∑j=1k(xi−x'ij)×δf(x'j+αj(x−x'j))δxi\n\n \\frac{\\delta f(x'^j + \\alpha^{j} (x - x'^j))}{\\delta x\\_i}\n\n ϕ^​iEG​(f,x;D)=k1​j=1∑k​(xi​−x'ij​)×δxi​δf(x'j+αj(x−x'j))​\n\n where x'jx'^jx'j is the jjjth sample from DDD and \n\n αj\\alpha^jαj is the jjjth sample from the uniform distribution between\n\n 0 and 1. 
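As a sketch, this Monte Carlo estimator fits in a few lines; `grad_fn` and `sample_baseline` are hypothetical callables standing in for the model's gradient and for draws from the baseline distribution D.

```python
import numpy as np

def expected_gradients(x, grad_fn, sample_baseline, k=200, seed=0):
    """Monte Carlo estimate of expected gradients: for each of k samples, draw a
    baseline x' ~ D and alpha ~ U(0, 1), then average (x - x') * grad f on the path."""
    rng = np.random.default_rng(seed)
    attributions = np.zeros_like(x, dtype=float)
    for _ in range(k):
        x_prime = sample_baseline(rng)               # e.g. a random training image
        alpha = rng.uniform()
        point = x_prime + alpha * (x - x_prime)
        attributions += (x - x_prime) * grad_fn(point)
    return attributions / k
```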
Now suppose that we use the gaussian baseline with variance $\sigma^2$, centered on the input $x$. The expected gradients estimator becomes:

\hat{\phi}_i^{EG}(f, x; N(x, \sigma^2 I)) = \frac{1}{k} \sum_{j=1}^k \epsilon_{\sigma}^{j} \times \frac{\delta f(x + (1 - \alpha^j)\epsilon_{\sigma}^{j})}{\delta x_i}

To see how we arrived at the above formula, first observe that

x' \sim N(x, \sigma^2 I) \implies x' = x + \epsilon_{\sigma}, \qquad x' - x = \epsilon_{\sigma}

by definition of the gaussian baseline. Now we have:

x' + \alpha(x - x') = x + \epsilon_{\sigma} + \alpha(x - (x + \epsilon_{\sigma})) = x + (1 - \alpha)\epsilon_{\sigma}

The formula above simply substitutes the last line of each equation block back into the definition of expected gradients.

This looks awfully familiar to an existing method called SmoothGrad. SmoothGrad is a method designed to sharpen saliency maps, meant to be run on top of an existing saliency method. The idea is simple: instead of running a saliency method once on an image, first add some gaussian noise to the image, then run the saliency method; do this several times with different draws of gaussian noise and average the results. SmoothGrad is discussed in more detail in the original SmoothGrad paper. If we use the (gradients $\times$ input image) variant of SmoothGrad, then we have the following formula:

\phi_i^{SG}(f, x; N(\bar{0}, \sigma^2 I)) = \frac{1}{k} \sum_{j=1}^k (x + \epsilon_{\sigma}^{j}) \times \frac{\delta f(x + \epsilon_{\sigma}^{j})}{\delta x_i}

We can see that SmoothGrad and expected gradients with a gaussian baseline are quite similar, with two key differences: SmoothGrad multiplies the gradient by the noised image $x + \epsilon_{\sigma}$ while expected gradients multiplies by just $\epsilon_{\sigma}$, and while expected gradients samples uniformly along the path, SmoothGrad always samples the endpoint $\alpha = 0$.

Can this connection help us understand why SmoothGrad creates smooth-looking saliency maps? Using a gaussian baseline amounts to assuming that each of our pixel values is drawn from a gaussian *independently* of the other pixel values. But we know this is far from true: in images, there is a rich correlation structure between nearby pixels. Once your network knows the value of a pixel, it doesn't really need to use its immediate neighbors, because it's likely that those immediate neighbors have very similar intensities.

Assuming each pixel is drawn from an independent gaussian breaks this correlation structure. It means that expected gradients tabulates the importance of each pixel independently of the other pixel values. The generated saliency maps will be less noisy and better highlight the object of interest, because we are no longer allowing the network to rely on only one pixel in a group of correlated pixels. This may be why SmoothGrad is smooth: because it is implicitly assuming independence among pixels.
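The two-line difference between the methods is easiest to see in code; again, `grad_fn` is a hypothetical stand-in for the model's gradient with respect to its input.

```python
import numpy as np

def smoothgrad_times_input(x, grad_fn, sigma, k=200, seed=0):
    """SmoothGrad (gradients * input variant): average gradients at noisy copies of
    x and multiply by the noisy input itself (alpha is always 0)."""
    rng = np.random.default_rng(seed)
    total = np.zeros_like(x, dtype=float)
    for _ in range(k):
        eps = rng.normal(0.0, sigma, size=x.shape)
        total += (x + eps) * grad_fn(x + eps)
    return total / k

def expected_gradients_gaussian(x, grad_fn, sigma, k=200, seed=0):
    """Expected gradients with a N(x, sigma^2 I) baseline: multiply by the noise
    eps alone and sample alpha uniformly along the path."""
    rng = np.random.default_rng(seed)
    total = np.zeros_like(x, dtype=float)
    for _ in range(k):
        eps = rng.normal(0.0, sigma, size=x.shape)
        alpha = rng.uniform()
        total += eps * grad_fn(x + (1.0 - alpha) * eps)
    return total / k
```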
In the figure below, you can compare\n\n integrated gradients with a single randomly drawn baseline\n\n to expected gradients sampled over a distribution. For\n\n the gaussian baseline, you can also toggle the SmoothGrad\n\n option to use the SmoothGrad formula above. For all figures,\n\n k=500k=500k=500.\n\n \n\n The difference between a single baseline and multiple\n\n baselines from the same distribution. Use the \n\n \"Multi-Reference\" button to toggle between the two. For the gaussian\n\n baseline, you can also toggle the \"Smooth Grad\" button\n\n to toggle between expected gradients and SmoothGrad\n\n with gradients \\* inputs.\n\n \n\n### Using the Training Distribution\n\n Is it really reasonable to assume independence among\n\n pixels while generating saliency maps? In supervised learning, \n\n we make the assumption that the data is drawn\n\n share a common, underlying distribution is what allows us to \n\n do supervised learning and make claims about generalizability. Given\n\n this assumption, we don't need to\n\n model missingness using a gaussian or a uniform distribution:\n\n we can use DdataD\\_{\\text{data}}Ddata​ to model missingness directly.\n\n \n\n The only problem is that we do not have access to the underlying distribution.\n\n But because this is a supervised learning task, we do have access to many \n\n independent draws from the underlying distribution: the training data!\n\n We can simply use samples from the training data as random draws\n\n from DdataD\\_{\\text{data}}Ddata​. This brings us to the variant\n\n of expected gradients used in ,\n\n which we again visualize in three parts:\n\n \\frac{1}{k} \\sum\\_{j=1}^k \n\n \\underbrace{(x\\_i - x'^j\\_i) \\times \n\n \\frac{\\delta f(\\text{ } \n\n \\overbrace{x'^j + \\alpha^{j} (x - x'^j)}^{\\text{(1): Interpolated Image}}\n\n \\text{ })}{\\delta x\\_i}}\\_{\\text{ (2): Gradients at Interpolation}}\n\n = \\overbrace{\\hat{\\phi\\_i}^{EG}(f, x, k; D\\_{\\text{data}})}\n\n ^{\\text{(3): Cumulative Gradients up to }\\alpha}\n\n A visual representation of expected gradients. Instead of taking contributions\n\n from a single path, expected gradients averages contributions from \n\n all paths defined by the underlying data distribution. Note that\n\n this figure only displays every 10th sample to avoid loading many images.\n\n \n\n In (4) we again plot the sum of the importance scores over pixels. As mentioned\n\n gradients, satisfy the completeness axiom. We can definitely see that\n\n completeness is harder to satisfy when we integrate over both a path\n\n and a distribution: that is, with the same number\n\n of samples, expected gradients doesn't converge as quickly as \n\n integrated gradients does. Whether or not this is an acceptable price to\n\n pay to avoid color-blindness in attributions seems subjective.\n\n \n\nComparing Saliency Methods\n\n--------------------------\n\n So we now have many different choices for a baseline. How do we choose\n\n which one we should use? The different choices of distributions and constant\n\n baselines have different theoretical motivations and practical concerns.\n\n Do we have any way of comparing the different baselines? 
In this section,\n\n we will touch on several different ideas about how to compare\n\n of all of the existing evaluation metrics, but is instead meant to \n\n emphasize that evaluating interpretability methods remains a difficult problem.\n\n \n\n### The Dangers of Qualitative Assessment\n\n One naive way to evaluate our baselines is to look at the saliency maps \n\n they produce and see which ones best highlight the object in the image. \n\n reasonable results, as does using a gaussian baseline or the blurred baseline.\n\n But is visual inspection really a good way judge our baselines? For one thing,\n\n we've only presented four images from the test set here. We would need to\n\n conduct user studies on a much larger scale with more images from the test\n\n set to be confident in our results. But even with large-scale user studies,\n\n qualitative assessment of saliency maps has other drawbacks.\n\n \n\n When we rely on qualitative assessment, we are assuming that humans\n\n know what an \"accurate\" saliency map is. When we look at saliency maps\n\n on data like ImageNet, we often check whether or not the saliency map\n\n highlights the object that we see as representing the true class in the image.\n\n We make an assumption between the data and the label, and then further assume\n\n that a good saliency map should reflect that assumption. But doing so\n\n has no real justification. Consider the figure below, which compares \n\n two saliency methods on a network that gets above 99% accuracy\n\n on (an altered version of) MNIST.\n\n The first saliency method is just an edge detector plus gaussian smoothing,\n\n while the second saliency method is expected gradients using the training\n\n data as a distribution. Edge detection better reflects what we humans\n\n think is the relationship between the image and the label.\n\n \n\nOriginal Image:Edge Detection:Expected Gradients:\n\n Qualitative assessment can be dangerous because we rely\n\n on our human knowledge of the relationship between\n\n the data and the labels, and then we assume\n\n that an accurate model has learned that very relationship.\n\n \n\n Unfortunately, the edge detection method here does not highlight \n\n what the network has learned. This dataset is a variant of \n\n decoy MNIST, in which the top left corner of the image has\n\n been altered to directly encode the image's class\n\n . That is, the intensity\n\n of the top left corner of each image has been altered to \n\n be 255×y9255 \\times \\frac{y}{9} 255×9y​ where yyy is the class\n\n the image belongs to. We can verify by removing this\n\n patch in the test set that the network heavily relies on it to make\n\n predictions, which is what the expected gradients saliency maps show.\n\n \n\n This is obviously a contrived example. Nonetheless, the fact that\n\n visual assessment is not necessarily a useful way to evaluate \n\n saliency maps and attribution methods has been extensively\n\n discussed in recent literature, with many proposed qualitative\n\n tests as replacements \n\n .\n\n At the heart of the issue is that we don't have ground truth explanations:\n\n we are trying to evaluate which methods best explain our network without\n\n actually knowing what our networks are doing.\n\n \n\n### Top K Ablation Tests\n\n One simple way to evaluate the importance scores that \n\n expected/integrated gradients produces is to see whether \n\n ablating the top k features as ranked by their importance\n\n decreases the predicted output logit. 
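A minimal sketch of such a top-k ablation curve is shown below, assuming attributions with the same shape as the input and a hypothetical `predict_logit` callable that returns the true-class logit.

```python
import numpy as np

def top_k_ablation_curve(x, attributions, predict_logit, fractions, fill_value):
    """Ablate the top-k pixels (ranked by attribution) with `fill_value` (e.g. the
    mean pixel value) and report the true-class logit as a fraction of the original."""
    original = predict_logit(x)
    order = np.argsort(attributions.ravel())[::-1]   # most important pixels first
    curve = []
    for frac in fractions:
        k = int(frac * order.size)
        ablated = x.astype(float).copy().ravel()
        ablated[order[:k]] = fill_value
        curve.append(predict_logit(ablated.reshape(x.shape)) / original)
    return curve
```

The lower this curve drops as the ablated fraction grows, the better the baseline is doing on this particular test.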
In the figure below, we\n\n ablate either by mean-imputation or by replacing each pixel\n\n for 1000 different correctly classified test-set images using each\n\n of the baselines proposed above \n\n For the blur baseline and the blur\n\n ablation test, we use σ=20\\sigma = 20σ=20.\n\n For the gaussian baseline, we use σ=1\\sigma = 1σ=1. These choices\n\n are somewhat arbitrary - a more comprehensive evaluation\n\n would compare across many values of σ\\sigmaσ.\n\n . As a\n\n control, we also include ranking features randomly\n\n (*Random Noise* in the plot). \n\n \n\n We plot, as a fraction of the original logit, the output logit\n\n of the network at the true class. That is, suppose the original\n\n image is a goldfinch and the network predicts the goldfinch class correctly\n\n with 95% confidence. If the confidence of class goldfinch drops\n\n to 60% after ablating the top 10% of pixels as ranked by \n\n feature importance, then we plot a curve that goes through\n\n that best highlights which pixels the network \n\n should exhibit the fastest drop in logit magnitude, because\n\n it highlights the pixels that most increase the confidence of the network.\n\n That is, the lower the curve, the better the baseline.\n\n \n\n### Mass Center Ablation Tests\n\n One problem with ablating the top k features in an image\n\n is related to an issue we already brought up: feature correlation.\n\n No matter how we ablate a pixel, that pixel's neighbors \n\n provide a lot of information about the pixel's original value.\n\n With this in mind, one could argue that progressively ablating \n\n pixels one by one is a rather meaningless thing to do. Can\n\n we instead perform ablations with feature correlation in mind?\n\n \n\n One straightforward way to do this is simply compute the \n\n center of mass \n\n of the saliency map, and ablate a boxed region centered on\n\n the center of mass. This tests whether or not the saliency map\n\n is generally highlighting an important region in the image. We plot\n\n replacing the boxed region around the saliency map using mean-imputation\n\n and blurring below as well (*Mean Center* and *Blur Center*, respectively).\n\n As a control, we compare against a saliency map generated from random gaussian\n\n noise (*Random Noise* in the plot).\n\n \n\n A variety of ablation tests on a variety of baselines.\n\n Using the training distribution and using the uniform distribution\n\n outperform most other methods on the top k ablation tests. The\n\n blur baseline inspired by \n\n does equally well on the blur top-k test. All methods\n\n perform similarly on the mass center ablation tests. Mouse\n\n over the legend to highlight a single curve.\n\n \n\n The ablation tests seem to indicate some interesting trends. \n\n All methods do similarly on the mass center ablation tests, and\n\n only slightly better than random noise. This may be because the \n\n object of interest generally lies in the center of the image - it\n\n isn't hard for random noise to be centered at the image. In contrast,\n\n using the training data or a uniform distribution seems to do quite well\n\n on the top-k ablation tests. Interestingly, the blur baseline\n\n inspired by also\n\n does quite well on the top k baseline tests, especially when\n\n we ablate pixels by blurring them! Would the uniform\n\n baseline do better if you ablate the image with uniform random noise?\n\n by progressively replacing it with a different image. 
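For reference, the mass-center ablation described above only needs the attribution-weighted centroid of the saliency map. A minimal sketch follows; the box size and the use of absolute attributions are our assumptions, not details from the original experiments.

```python
import numpy as np

def mass_center_box(attribution, box=56):
    """Center of mass of a 2D saliency map and the surrounding box to ablate."""
    weights = np.abs(attribution)
    rows, cols = np.indices(attribution.shape)
    cy = int((rows * weights).sum() / weights.sum())
    cx = int((cols * weights).sum() / weights.sum())
    h, w = attribution.shape
    top, left = max(cy - box // 2, 0), max(cx - box // 2, 0)
    return (top, min(top + box, h)), (left, min(left + box, w))   # (row range, column range)
```

The returned row and column ranges can then be mean-imputed or blurred, as in the *Mean Center* and *Blur Center* curves.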
We leave\n\n these experiments as future work, as there is a more pressing question\n\n we need to discuss.\n\n \n\n### The Pitfalls of Ablation Tests\n\n Constant baselines tend to not need as many samples\n\n comparing not only across baselines but also across number of samples drawn, \n\n and for the blur and gaussian baselines, the parameter σ\\sigmaσ.\n\n As mentioned above, we have defined many notions of missingness other than \n\n mean-imputation or blurring: more extensive comparisons would also compare\n\n all of our baselines across all of the corresponding notions of missing data.\n\n \n\n But even with all of these added comparisons, do ablation\n\n tests really provide a well-founded metric to judge attribution methods? \n\n The authors of argue\n\n against ablation tests. They point out that once we artificially ablate\n\n pixels an image, we have created inputs that do not come from\n\n the original data distribution. Our trained model has never seen such \n\n inputs. Why should we expect to extract any reasonable information\n\n from evaluating our model on them?\n\n \n\n On the other hand, integrated gradients and expected gradients\n\n rely on presenting interpolated images to your model, and unless\n\n you make some strange convexity assumption, those interpolated images \n\n don't belong to the original training distribution either. \n\n In general, whether or not users should present\n\n is a subject of ongoing debate\n\n . Nonetheless, \n\n the point raised in is still an\n\n important one: \"it is unclear whether the degradation in model \n\n performance comes from the distribution shift or because the \n\n features that were removed are truly informative.\"\n\n \n\n### Alternative Evaluation Metrics\n\n So what about other evaluation metrics proposed in recent literature? In\n\n , Hooker et al. propose a variant of\n\n an ablation test where we first ablate pixels in the training and\n\n test sets. Then, we re-train a model on the ablated data and measure\n\n by how much the test-set performance degrades. This approach has the advantage\n\n of better capturing whether or not the saliency method\n\n highlights the pixels that are most important for predicting the output class.\n\n Unfortunately, it has the drawback of needing to re-train the model several\n\n times. This metric may also get confused by feature correlation.\n\n \n\n Consider the following scenario: our dataset has two features \n\n that are highly correlated. We train a model which learns to only\n\n use the first feature, and completely ignore the second feature.\n\n A feature attribution method might accurately reveal what the model is doing:\n\n re-train the model and get similar performance because similar information \n\n is stored in the second feature. We might conclude that our feature\n\n attribution method is lousy - is it? This problem fits into a larger discussion\n\n about whether or not your attribution method\n\n should be \"true to the model\" or \"true to the data\"\n\n which has been discussed in several recent articles\n\n .\n\n \n\n In , the authors propose several\n\n sanity checks that saliency methods should pass. One is the \"Model Parameter\n\n Randomization Test\". Essentially, it states that a feature attribution\n\n method should produce different attributions when evaluated on a trained\n\n model (assumedly a trained model that performs well) and a randomly initialized\n\n model. 
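Concretely, a minimal version of this sanity check might compare the rank correlation between the two sets of attributions; `attribution_fn` and the two model objects below are placeholders.

```python
import numpy as np

def rank_correlation(a, b):
    """Spearman rank correlation via NumPy (ties are handled only crudely)."""
    ranks_a = np.argsort(np.argsort(a))
    ranks_b = np.argsort(np.argsort(b))
    return np.corrcoef(ranks_a, ranks_b)[0, 1]

def parameter_randomization_check(images, attribution_fn, trained_model, random_model):
    """Sketch of the model-parameter randomization test: compare attributions
    from a trained model with those from a randomly initialized one.

    attribution_fn(model, image) -> saliency map; the model objects are placeholders.
    """
    scores = [rank_correlation(attribution_fn(trained_model, x).ravel(),
                               attribution_fn(random_model, x).ravel())
              for x in images]
    return float(np.mean(scores))   # values near 1 suggest the maps ignore the model
```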
This metric is intuitive: if a feature attribution method produces\n\n similar attributions for random and trained models, is the feature\n\n attribution really using information from the model? It might just\n\n be relying entirely on information from the input image.\n\n \n\n But consider the following figure, which is another (modified) version\n\n of MNIST. We've generated expected gradients attributions using the training\n\n distribution as a baseline for two different networks. One of the networks\n\n is a trained model that gets over 99% accuracy on the test set. The other\n\n Should we now conclude that expected gradients is an unreliable method?\n\n \n\nOriginal Image:Network 1 Saliency:Network 2 Saliency:\n\n A comparison of two network's saliency maps using expected gradients. One\n\n network has randomly initialized weights, the other gets >99% accuracy\n\n on the test set.\n\n \n\n you would run these kinds of saliency method sanity checks on un-modified data.\n\n \n\n But the truth is, even for natural images, we don't actually\n\n know what an accurate model's saliency maps should look like. \n\n Different architectures trained on ImageNet can all get good performance\n\n and have very different saliency maps. Can we really say that \n\n trained models should have saliency maps that don't look like \n\n saliency maps generated on randomly initialized models? That isn't\n\n to say that the model randomization test doesn't have merit: it\n\n does reveal interesting things about what saliency methods are are doing.\n\n It just doesn't tell the whole story.\n\n \n\n .\n\n Each proposed metric comes with their various pros and cons. \n\n we don't know what our model is doing and have no ground truth to compare\n\n against.\n\n \n\nConclusion\n\n----------\n\n So what should be done? We have many baselines and \n\n no conclusion about which one is the \"best.\" Although\n\n we don't provide extensive quantitative results\n\n comparing each baseline, we do provide a foundation\n\n for understanding them further. At the heart of\n\n each baseline is an assumption about missingness \n\n in our model and the distribution of our data. In this article,\n\n we shed light on some of those assumptions, and their impact\n\n on the corresponding path attribution. We lay\n\n groundwork for future discussion about baselines in the\n\n context of path attributions, and more generally about\n\n the relationship between representations of missingness \n\n and how we explain machine learning models.\n\n \n\n \n\n A side-by-side comparison of integrated gradients\n\n using a black baseline \n\n and expected gradients using the training data\n\n as a baseline.\n\n \n\nRelated Methods\n\n---------------\n\n This work focuses on a specific interpretability method: integrated gradients\n\n and its extension, expected gradients. We refer to these\n\n methods as path attribution methods because they integrate \n\n importances over a path. However, path attribution methods\n\n represent only a tiny fraction of existing interpretability methods. 
We focus\n\n on them here both because they are amenable to interesting visualizations,\n\n and because they provide a springboard for talking about missingness.\n\n We briefly cited several other methods at the beginning of this article.\n\n Many of those methods use some notion of baseline and have contributed to\n\n the discussion surrounding baseline choices.\n\n \n\n In , Fong and Vedaldi propose\n\n a model-agnostic method to explain neural networks that is based\n\n on learning the minimal deletion to an image that changes the model\n\n prediction. In section 4, their work contains an extended discussion on \n\n how to represent deletions: that is, how to represent missing pixels. They\n\n argue that one natural way to delete pixels in an image is to blur them.\n\n This discussion inspired the blurred baseline that we presented in our article.\n\n They also discuss how noise can be used to represent missingness, which\n\n was part of the inspiration for our uniform and gaussian noise baselines.\n\n \n\n In , Shrikumar et al. \n\n propose a feature attribution method called deepLIFT. It assigns\n\n importance scores to features by propagating scores from the output\n\n of the model back to the input. Similar to integrated gradients,\n\n deepLIFT also defines importance scores relative to a baseline, which\n\n they call the \"reference\". Their paper has an extended discussion on\n\n why explaining relative to a baseline is meaningful. They also discuss\n\n a few different baselines, including \"using a blurred version of the original\n\n image\". \n\n \n\n The list of other related methods that we didn't discuss\n\n in this article goes on: SHAP and DeepSHAP\n\n ,\n\n layer-wise relevance propagation ,\n\n LIME ,\n\n RISE and \n\n Grad-CAM \n\n among others. Many methods for explaining machine learning models\n\n define some notion of baseline or missingness, \n\n because missingness and explanations are closely related. When we explain\n\n a model, we often want to know which features, when missing, would most\n\n change model output. But in order to do so, we need to define \n\n what missing means because most machine learning models cannot\n\n handle arbitrary patterns of missing inputs. 
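For instance, the deletion-by-blurring idea discussed above, which inspired our blur baseline, can be sketched in a single call. Using σ = 20 matches the value quoted earlier in this article; blurring only the spatial axes of a color image is our assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blurred_baseline(image, sigma=20):
    """Gaussian-blurred copy of the input, used as a baseline / reference image."""
    spatial_sigma = (sigma, sigma, 0) if image.ndim == 3 else sigma   # leave channels unmixed
    return gaussian_filter(image.astype(float), sigma=spatial_sigma)
```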
This article\n\n does not discuss all of the nuances presented alongside\n\n each existing method, but it is important to note that these methods were\n\n points of inspiration for a larger discussion about missingness.\n\n \n\n", "bibliography_bib": [{"title": "Axiomatic attribution for deep networks"}, {"title": "Using a deep learning algorithm and integrated gradients explanation to assist grading for diabetic retinopathy"}, {"title": "Ensembling convolutional and long short-term memory networks for electrocardiogram arrhythmia detection"}, {"title": "Inception-v4, inception-resnet and the impact of residual connections on learning"}, {"title": "Imagenet: A large-scale hierarchical image database"}, {"title": "Tensorflow-slim image classification model library"}, {"title": "The Building Blocks of Interpretability"}, {"title": "Feature Visualization"}, {"title": "Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav)"}, {"title": "Visualizing and understanding convolutional networks"}, {"title": "Visualizing higher-layer features of a deep network"}, {"title": "Understanding deep image representations by inverting them"}, {"title": "Net2vec: Quantifying and explaining how concepts are encoded by filters in deep neural networks"}, {"title": "\"Why Should I Trust You?\": Explaining the Predictions of Any Classifier"}, {"title": "A unified approach to interpreting model predictions"}, {"title": "Layer-wise relevance propagation for neural networks with local renormalization layers"}, {"title": "Learning important features through propagating activation differences"}, {"title": "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps"}, {"title": "Interpretable explanations of black boxes by meaningful perturbation"}, {"title": "Learning deep features for discriminative localization"}, {"title": "Grad-cam: Visual explanations from deep networks via gradient-based localization"}, {"title": "Smoothgrad: removing noise by adding noise"}, {"title": "Rise: Randomized input sampling for explanation of black-box models"}, {"title": "Understanding the difficulty of training deep feedforward neural networks"}, {"title": "Gradients of counterfactuals"}, {"title": "Values of non-atomic games"}, {"title": "A note about: Local explanation methods for deep neural networks lack sensitivity to parameter values"}, {"title": "The (Un)reliability of saliency methods"}, {"title": "Towards better understanding of gradient-based attribution methods for Deep Neural Networks"}, {"title": "Learning Explainable Models Using Attribution Priors"}, {"title": "XRAI: Better Attributions Through Regions"}, {"title": "Right for the right reasons: Training differentiable models by constraining their explanations"}, {"title": "A Benchmark for Interpretability Methods in Deep Neural Networks"}, {"title": "On the (In)fidelity and Sensitivity for Explanations"}, {"title": "Sanity Checks for Saliency Maps"}, {"title": "Benchmarking Attribution Methods with Relative Feature Importance"}, {"title": "Do Explanations Reflect Decisions? A Machine-centric Strategy to Quantify the Performance of Explainability Algorithms"}, {"title": "How do Humans Understand Explanations from Machine Learning Systems? 
An Evaluation of the Human-Interpretability of Explanation"}, {"title": "Interpretation of neural networks is fragile"}, {"title": "The many Shapley values for model explanation"}, {"title": "Feature relevance quantification in explainable AI: A causality problem"}, {"title": "Explaining Models by Propagating Shapley Values of Local Components"}], "id": "526128224361b17750690ab629b78768"} +{"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Feature Visualization", "authors": ["Chris Olah", "Alexander Mordvintsev", "Ludwig Schubert"], "date_published": "2017-11-07", "abstract": " There is a growing sense that neural networks need to be interpretable to humans. The field of neural network interpretability has formed in response to these concerns. As it matures, two major threads of research have begun to coalesce: feature visualization and attribution. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00007", "text": "\n\n![](Feature%20Visualization_files/neuron.png)\n\n![](Feature%20Visualization_files/channel.png)\n\n \n\n![](Feature%20Visualization_files/attribution-1.png)\n\n![](Feature%20Visualization_files/attribution-2.jpg)\n\n**Attribution**\n\n As a young field, neural network interpretability does not yet \n\nhave standardized terminology.\n\n Attribution has gone under many different names in the \n\nliterature — including \"feature visualization\"! — but recent work seems \n\nto prefer terms like \"attribution\" and \"saliency maps\".\n\n \n\n \n\n This article focuses on feature visualization.\n\n While feature visualization is a powerful tool, actually getting it to\n\n work involves a number of details.\n\n In this article, we examine the major issues and explore common \n\napproaches to solving them.\n\n We find that remarkably simple methods can produce high-quality \n\nvisualizations. Along the way we introduce a few tricks for exploring \n\nvariation in what neurons react to, how they interact, and how to \n\nimprove the optimization process.\n\n---\n\nFeature Visualization by Optimization\n\n-------------------------------------\n\n Neural networks are, generally speaking, differentiable with respect \n\nto their inputs.\n\n If we want to find out what kind of input would cause a certain \n\nbehavior — whether that's an internal neuron firing or the final output \n\nbehavior — we can use derivatives to iteratively tweak the input\n\n towards that goal .\n\nStep 0\n\n→\n\nStep 4\n\n→\n\nStep 48\n\n→\n\nStep 2048\n\n While conceptually simple, there are subtle challenges in getting the \n\noptimization to work. 
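Before getting to those challenges, here is a minimal sketch of the naive optimization loop itself. `objective_grad` is a placeholder for the network's backward pass, returning the gradient of the chosen activation with respect to the image; the step count and learning rate roughly follow the values shown in this article's figures.

```python
import numpy as np

def naive_feature_visualization(objective_grad, shape=(224, 224, 3), steps=2048, lr=0.05, seed=0):
    """Visualization by optimization: start from noise and repeatedly nudge the
    input in the direction that increases the chosen activation."""
    rng = np.random.default_rng(seed)
    image = rng.uniform(0.45, 0.55, size=shape)            # start near mid-gray noise
    for _ in range(steps):
        image = np.clip(image + lr * objective_grad(image), 0.0, 1.0)   # gradient ascent
    return image
```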
We will explore them, as well as common approaches\n\n### Optimization Objectives\n\n What do we want examples of?\n\n This is the core question in working with examples, regardless of \n\nwhether we're searching through a dataset to find the examples, or \n\noptimizing images to create them from scratch.\n\n We have a wide variety of options in what we search for:\n\n #optimization-objectives .objectives {\n\n grid-column: text;\n\n grid-template-columns: repeat(1, 1fr);\n\n }\n\n #optimization-objectives .objective {\n\n display: grid;\n\n grid-template-columns: repeat(2, 1fr) 1.1fr;\n\n }\n\n #optimization-objectives .objectives figcaption {\n\n padding: 4px 8px;\n\n word-wrap: break-word;\n\n word-break: break-word;\n\n }\n\n #optimization-objectives .objective .objective-icon {\n\n padding: 8px;\n\n }\n\n @media (min-width: 512px) {\n\n #optimization-objectives .objectives {\n\n grid-template-columns: repeat(5, 1fr);\n\n grid-column-gap: 16px;\n\n }\n\n #optimization-objectives .objective {\n\n display: flex;\n\n flex-flow: column;\n\n }\n\n #optimization-objectives .objectives figcaption {\n\n padding: 0;\n\n padding-top: 4px;\n\n word-wrap: break-word;\n\n word-break: break-word;\n\n }\n\n }\n\n \n\n \n\n`**n**` layer index \n\n`**x,y**` spatial position \n\n`**z**` channel index \n\n`**k**` class index\n\n![](Feature%20Visualization_files/neuron.png)\n\n**Neuron** \n\n`layern[x,y,z]`\n\n![](Feature%20Visualization_files/channel.png)\n\n**Channel** \n\n`layern[:,:,z]`\n\n![](Feature%20Visualization_files/layer.png)\n\n**Layer**/DeepDream \n\n`layern[:,:,:]2`\n\nsoftmax\n\n![](Feature%20Visualization_files/logits.png)\n\n**Class Logits** \n\n`pre_softmax[k]`\n\nsoftmax\n\n![](Feature%20Visualization_files/logits_post.png)\n\n**Class Probability** \n\n`softmax[k]`\n\n We used the channel objective to create most of the images in this article.\n\n after the softmax.\n\n One can see the logits as the evidence for each class, and the \n\nprobabilities as the likelihood of each class given the evidence.\n\n Unfortunately, the easiest way to increase the probability softmax \n\ngives to a class is often to make the alternatives unlikely rather than \n\nto make the class of interest likely .\n\n \n\n While the standard explanation is that maximizing probability \n\ndoesn't work very well because you can just push down evidence for other\n\n classes, an alternate hypothesis is that it's just harder to optimize \n\nthrough the softmax function. We understand this has sometimes been an \n\nissue in adversarial examples, and the solution is to optimize the \n\nLogSumExp of the logits instead. This is equivalent to optimizing \n\nsoftmax but generally more tractable. Our experience was that the \n\nLogSumExp trick doesn't seem better than dealing with the raw \n\nprobabilities.\n\n \n\n \n\n Regardless of why that happens, it can be fixed by very strong \n\nregularization with generative models. 
In this case the probabilities \n\ncan be a very principled thing to optimize.\n\n \n\n and objectives used in optimization-based model inversion ,\n\n which help us understand what information a model keeps and what it \n\nthrows away.\n\n We are only at the beginning of understanding which objectives are \n\ninteresting, and there is a lot of room for more work in this area.\n\n### Why visualize by optimization?\n\n**Dataset Examples** show us what neurons respond to in practice\n\n**Optimization** isolates the causes of behavior from mere correlations.\n\nA neuron may not be detecting what you initially thought.\n\nBaseball—or stripes? \n\n*mixed4a, Unit 6*\n\nAnimal faces—or snouts? \n\n*mixed4a, Unit 240*\n\nClouds—or fluffiness? \n\n*mixed4a, Unit 453*\n\nBuildings—or sky? \n\n*mixed4a, Unit 492*\n\n Optimization also has the advantage of flexibility.\n\n For example, if we want to study how neurons jointly represent \n\ninformation,\n\n we can easily ask how a particular example would need to be different \n\nfor an additional neuron to activate.\n\n This flexibility can also be helpful in visualizing how features \n\nevolve as the network trains.\n\n If we were limited to understanding the model on the fixed examples in\n\n our dataset, topics like these ones would be much harder to explore.\n\n On the other hand, there are also significant challenges to \n\nvisualizing features with optimization.\n\n In the following sections we'll examine techniques to get diverse \n\nvisualizations, understand how neurons interact, and avoid high \n\nfrequency artifacts.\n\n---\n\nDiversity\n\n---------\n\n Do our examples show us the full picture?\n\n Dataset examples have a big advantage here.\n\n By looking through our dataset, we can find diverse examples.\n\n It doesn't just give us ones activating a neuron intensely:\n\n**Negative** optimized\n\n**Minimum** activation examples\n\nSlightly negative activation examples\n\n**Positive** optimized\n\nSlightly positive activation examples\n\n**Maximum** activation examples\n\n**Layer mixed 4a, unit 492**\n\n[Reproduce in a\n\n In contrast, optimization generally gives us just one extremely \n\npositive example — and if we're creative, a very negative example as \n\nwell.\n\n Is there some way that optimization could also give us this diversity?\n\n### Achieving Diversity with Optimization\n\n A given feature of a network may respond to a wide range of inputs.\n\n On the class level, for example, a classifier that has been trained to\n\n recognize dogs should recognize both closeups of their faces as well as\n\n wider profile images — even though those have quite different visual \n\nappearances.\n\n Early work by Wei *et al.* \n\nattempts to demonstrate this \"intra-class\" diversity by recording \n\nactivations over the entire training set, clustering them and optimizing\n\n for the cluster centroids, revealing the different facets of a class \n\nthat were learned.\n\n A different approach by Nguyen, Yosinski, and collaborators was to \n\nsearch through the dataset for diverse examples and use those as \n\nstarting points for the optimization process .\n\n The idea is that this initiates optimization in different facets of \n\nthe feature so that the resulting example from optimization will \n\ndemonstrate that facet.\n\n In more recent work, they combine visualizing classes with a \n\ngenerative model, which they can sample for diverse examples .\n\n Their first approach had limited success, and while the generative \n\nmodel approach works 
very well — we'll discuss it more in the section on\n\n \n\n For this article we use an approach based on ideas from artistic \n\nstyle transfer. Following that work, we begin by computing the Gram \n\n Gi,j=∑x,ylayern[x, y, i]⋅layern[x, y, j]\n\n Gi,j​=x,y∑​layern​[x, y, i]⋅layern​[x, y, j]\n\n Cdiversity=−∑a∑b≠a vec(Ga)⋅vec(Gb)∣∣vec(Ga)∣∣ ∣∣vec(Gb)∣∣\n\n C\\_{\text{diversity}} = - \\sum\\_{a} \\sum\\_{b\neq a} ~ \n\n\frac{\text{vec}(G\\_a) \\cdot \n\n\text{vec}(G\\_b)}{||\text{vec}(G\\_a)||~||\text{vec}(G\\_b)||}\n\n Cdiversity​=−a∑​b≠a∑​ ∣∣vec(Ga​)∣∣ ∣∣vec(Gb​)∣∣vec(Ga​)⋅vec(Gb​)​\n\n One possibility is to penalize the cosine similarity of different examples.\n\n![](Feature%20Visualization_files/mixed4a_97_optimized.png)\n\nSimple Optimization\n\n![](Feature%20Visualization_files/mixed4a_97_diversity.png)\n\n \n\n[Reproduce in a\n\n![](Feature%20Visualization_files/mixed4a_97_examples.jpg)\n\nDataset examples\n\n Diverse feature visualizations allow us to more closely pinpoint what \n\nactivates a neuron, to the degree that we can make, and — by looking at \n\n For example, let's examine this simple optimization result.\n\n![](Feature%20Visualization_files/mixed4a_143_optimized.png)\n\nSimple optimization\n\n Looking at it in isolation one might infer that this neuron activates \n\non the top of dog heads, as the optimization shows both eyes and only \n\ndownward curved edges.\n\n Looking at the optimization with diversity however, we see \n\noptimization results which don't include eyes, and also one which \n\nincludes upward curved edges. We thus have to broaden our expectation of\n\n what this neuron activates on to be mostly about the fur texture. \n\nChecking this hypothesis against dataset examples shows that is broadly \n\ncorrect. Note the spoon with a texture and color similar enough to dog \n\nfur for the neuron to activate.\n\n![](Feature%20Visualization_files/mixed4a_143_diversity.png)\n\n Optimization with diversity. 
*Layer mixed4a, Unit 143*\n\n![](Feature%20Visualization_files/mixed4a_143_examples.jpg)\n\nDataset examples\n\n The effect of diversity can be even more striking in higher level \n\nneurons, where it can show us different types of objects that stimulate a\n\n neuron.\n\n For example, one neuron responds to different kinds of balls, even \n\nthough they have a variety of appearances.\n\n![](Feature%20Visualization_files/mixed5a_9_optimized.png)\n\nSimple Optimization\n\n![](Feature%20Visualization_files/mixed5a_9_diversity.png)\n\n![](Feature%20Visualization_files/mixed5a_9_examples.jpg)\n\nDataset examples\n\n This simpler approach has a number of shortcomings:\n\n For one, the pressure to make examples different can cause unrelated \n\nartifacts (such as eyes) to appear.\n\n Additionally, the optimization may make examples be different in an \n\nunnatural way.\n\n For example, in the above example one might want to see examples of \n\nsoccer balls clearly separated from other types of balls like golf or \n\ntennis balls.\n\n Dataset based approaches such as Wei *et al.*\n\n can split features apart more naturally — however they may not be as \n\nhelpful in understanding how the model will behave on different data.\n\n Diversity also starts to brush on a more fundamental issue: while the \n\nexamples above represent a mostly coherent idea, there are also neurons \n\nthat represent strange mixtures of ideas.\n\n Below, a neuron responds to two types of animal faces, and also to car\n\n bodies.\n\n![](Feature%20Visualization_files/mixed4e_55_optimized.png)\n\nSimple Optimization\n\n![](Feature%20Visualization_files/mixed4e_55_diversity.png)\n\n![](Feature%20Visualization_files/mixed4e_55_examples.jpg)\n\nDataset examples\n\n---\n\nInteraction between Neurons\n\n---------------------------\n\n If neurons are not the right way to understand neural nets, what is?\n\n This framing unifies the concepts \"neurons\" and \"combinations of \n\nneurons\" as \"vectors in activation space\". It allows us to ask: Should \n\nwe expect the directions of the basis vectors to be any more \n\ninterpretable than the directions of other vectors in this space?\n\n More recently Bau, Zhou *et al.*\n\n found the directions of the basis vectors to be interpretable more \n\noften than random directions.\n\n Our experience is broadly consistent with both results; we find that \n\nrandom directions often seem interpretable, but at a lower rate than \n\nbasis directions.\n\n[Reproduce in a\n\n*mixed3a, random direction*\n\n*mixed4c, random direction*\n\n*mixed4d, random direction*\n\n*mixed5a, random direction*\n\nBy jointly optimizing two neurons we can get a sense of how they interact.\n\n[Reproduce in a\n\nNeuron 1\n\nNeuron 2\n\nJointly optimized\n\n These examples show us how neurons jointly represent images.\n\n The optimization objective is a linear \n\ninterpolation between the individual channel objectives. To get the \n\ninterpolations to look better, we also add a small alignment objective \n\nthat encourages lower layer activations to be similar. 
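Stripped of the alignment term and the parameterization tricks described here, the interpolated objective is just a convex combination of the two channel objectives; `channel_act` below is a placeholder for the network's forward pass.

```python
def interpolated_channel_objective(image, channel_act, unit_a, unit_b, alpha):
    """Jointly visualize two neurons by linearly interpolating their channel
    objectives; alpha in [0, 1] moves from neuron A's objective to neuron B's."""
    return (1.0 - alpha) * channel_act(image, unit_a) + alpha * channel_act(image, unit_b)
```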
We additionally \n\nuse a combination of separate and shared image parameterizations to make\n\n it easier for the optimization algorithm to cause objects to line up, \n\nwhile still giving it the freedom to create any image it needs to.\n\n This is similar to interpolating in the latent space of generative models.\n\n[Reproduce in a\n\n \n\nLayer 4a, Unit 476\n\n \n\n \n\n \n\n \n\n \n\nLayer 4a, Unit 460\n\n This is only starting to scratch the surface of how neurons interact.\n\n The truth is that we have almost no clue how to select meaningful \n\ndirections, or whether there even exist particularly meaningful \n\ndirections.\n\n Independent of finding directions, there are also questions on how \n\ndirections interact — for example, interpolation can show us how a small\n\n number of directions interact, but in reality there are hundreds of \n\ndirections interacting.\n\n---\n\nThe Enemy of Feature Visualization\n\n----------------------------------\n\n If you want to visualize features, you might just optimize an image to\n\n make neurons fire.\n\n Unfortunately, this doesn't really work.\n\n Instead, you end up with a kind of neural network optical \n\nillusion — an image full of noise and nonsensical high-frequency \n\npatterns that the network responds strongly to.\n\nEven if you carefully tune learning rate, you'll get noise.\n\nOptimization results are enlarged to show detail and artifacts.\n\n \n\n[Reproduce in a\n\nLearning Rate\n\n(0.05)\n\nStep 1\n\nStep 32\n\nStep 128\n\nStep 256\n\nStep 2048\n\n but the image is dominated by these high frequency patterns.\n\n We don't fully understand why these high frequency patterns form,\n\n but an important part seems to be strided convolutions and pooling \n\noperations, which create high-frequency patterns in the gradient .\n\ninput\n\nconv2d0\n\nconv2d1\n\nconv2d2\n\nmixed3a\n\nmixed3b\n\nmixed4a\n\nmixed4b\n\nmixed4c\n\nmixed4d\n\nmixed4e\n\nmixed5a\n\n These high-frequency patterns show us that, while optimization based \n\nvisualization's freedom from constraints is appealing, it's a \n\ndouble-edged sword.\n\n Without any constraints on images, we end up with adversarial \n\nexamples.\n\n These are certainly interesting, but if we want to understand how \n\nthese models work in real life, we need to somehow move past them…\n\n### The Spectrum of Regularization\n\n Dealing with this high frequency noise has been one of the primary \n\nchallenges and overarching threads of feature visualization research.\n\n If you want to get useful visualizations, you need to impose a more \n\nnatural structure using some kind of prior, regularizer, or constraint.\n\n In fact, if you look at most notable papers on feature visualization, \n\none of their main points will usually be an approach to regularization.\n\n Researchers have tried a lot of different things!\n\n In the middle we have three main families of regularization options.\n\n #feature-vis-history .row {\n\n }\n\n #feature-vis-history .line {\n\n width: 900px;\n\n }\n\n #feature-vis-history .row .info {\n\n display: inline-block;\n\n width: 320px;\n\n height: 56px;\n\n vertical-align: top;\n\n }\n\n #feature-vis-history .row .info img {\n\n width: 56px;\n\n height: 56px;\n\n border-radius: 5px;\n\n background: #EEE;\n\n }\n\n #feature-vis-history figcaption {\n\n line-height: 16px;\n\n vertical-align: top;\n\n }\n\n #feature-vis-history .row .info figcaption {\n\n width: 250px;\n\n margin-left: 8px;\n\n display: inline-block;\n\n }\n\n #feature-vis-history .row.header-row .info 
figcaption {\n\n margin-left: 0;\n\n }\n\n #feature-vis-history .header-row .sub-headers {\n\n display: inline-block;\n\n margin-top: 16px;\n\n margin-bottom: 10px;\n\n vertical-align: top;\n\n }\n\n #feature-vis-history .header-row .sub-headers figcaption {\n\n width: 100px; /\\*104px;\\*/\n\n display: inline-block;\n\n word-wrap: break-word;\n\n word-break: keep-all;\n\n }\n\n #feature-vis-history .row .spacer {\n\n display: inline-block;\n\n height: 48px;\n\n width: 45px;\n\n }\n\n #feature-vis-history .info p {\n\n margin-bottom: 4px;\n\n }\n\n #feature-vis-history .info .paper-title {\n\n font-weight: bold;\n\n }\n\n #feature-vis-history .info .paper-text {\n\n }\n\n @media (max-width: 1100px) {\n\n #feature-vis-history .row .spacer {\n\n display: inline-block;\n\n height: 48px;\n\n width: 24px;\n\n }\n\n #feature-vis-history .line {\n\n width: 750px;\n\n }\n\n }\n\n #feature-vis-history .row .category-check-container {\n\n /\\*vertical-align: middle;\\*/\n\n display: inline-block;\n\n margin: 8px;\n\n margin-left: 0px;\n\n margin-right: 70px;\n\n width: 32px;\n\n height: 32px;\n\n border-radius: 5px;\n\n border: 1px solid #CCC;\n\n }\n\n #feature-vis-history .row .category-check-container .category-check {\n\n margin: 6px;\n\n width: 20px;\n\n height: 20px;\n\n border-radius: 6px;\n\n }\n\n #feature-vis-history .row .category-check-container .set {\n\n background: #909092;\n\n }\n\n #feature-vis-history .line {\n\n margin-top: 4px;\n\n margin-bottom: 4px;\n\n margin-left: 67px;\n\n height: 1px;\n\n background: #EAEAEA;\n\n }\n\n**Weak Regularization**\n\n avoids misleading correlations, but is less connected to real use.\n\n**Unregularized**\n\n**FrequencyPenalization**\n\n**TransformationRobustness**\n\n**Strong Regularization**\n\n gives more realistic examples at risk of misleading correlations.\n\n**LearnedPrior**\n\n**DatasetExamples**\n\n![](Feature%20Visualization_files/Erhan2009.png)\n\nErhan, *et al.*, 2009 \n\nIntroduced core idea. Minimal regularization.\n\n![](Feature%20Visualization_files/Szegedy2013.png)\n\nSzegedy, *et al.*, 2013 \n\nAdversarial examples. Visualizes with dataset examples.\n\n![](Feature%20Visualization_files/Mahendran2015.png)\n\nMahendran & Vedaldi, 2015 \n\nIntroduces total variation regularizer. Reconstructs input from representation.\n\n![](Feature%20Visualization_files/Nguyen2015.png)\n\nNguyen, *et al.*, 2015 \n\nExplores counterexamples. Introduces image blurring.\n\n![](Feature%20Visualization_files/Mordvintsev2015.png)\n\nMordvintsev, *et al.*, 2015 \n\nIntroduced jitter & multi-scale. Explored GMM priors for classes.\n\n![](Feature%20Visualization_files/Oygard2015.png)\n\nØygard, *et al.*, 2015 \n\nIntroduces gradient blurring. \n\n(Also uses jitter.)\n\n![](Feature%20Visualization_files/Tyka2016.png)\n\nTyka, *et al.*, 2016 \n\nRegularizes with bilateral filters. \n\n(Also uses jitter.)\n\n![](Feature%20Visualization_files/Mordvintsev2016.png)\n\nMordvintsev, *et al.*, 2016 \n\nNormalizes gradient frequencies. 
\n\n(Also uses jitter.)\n\n![](Feature%20Visualization_files/Nguyen2016a.png)\n\nNguyen, *et al.*, 2016 \n\nParamaterizes images with GAN generator.\n\n![](Feature%20Visualization_files/Nguyen2016b.png)\n\nNguyen, *et al.*, 2016 \n\nUses denoising autoencoder prior to make a generative model.\n\n### Three Families of Regularization\n\n If we think about blurring in Fourier space, it is equivalent to \n\nadding a scaled L2 penalty to the objective, penalizing each \n\nFourier-component based on its frequency.\n\n These techniques are in some ways very similar to the above and in \n\nFrequency penalization directly targets high frequency noise\n\n[Reproduce in a\n\nL1\n\n(-0.05)\n\nTotal Variation\n\n(-0.25)\n\nBlur\n\n(-1)\n\nStep 1\n\nStep 32\n\nStep 128\n\nStep 256\n\nStep 2048\n\n Even a small amount seems to be very effective in the case of images ,\n\n especially when combined with a more general regularizer for high-frequencies .\n\n[Reproduce in a\n\nJitter\n\n(1px)\n\nRotate\n\n(5°)\n\nScale\n\n(1.1×)\n\nStep 1\n\nStep 32\n\nStep 128\n\nStep 256\n\nStep 2048\n\n**Learned priors.**\n\n Our previous regularizers use very simple heuristics to keep examples \n\nreasonable.\n\n A natural next step is to actually learn a model of the real data and \n\ntry to enforce that.\n\n With a strong model, this becomes similar to searching over the \n\ndataset.\n\n This approach produces the most photorealistic visualizations, but it \n\nmay be unclear what came from the model being visualized and what came \n\nfrom the prior.\n\n such as a GAN or VAE,\n\n and optimize within that latent space .\n\n this allows you to jointly optimize for the prior along with your objective .\n\n When one optimizes for the prior and the probability of a class, one \n\nrecovers a generative model of the data conditioned on that particular \n\nclass.\n\n Finally, Wei *et al.* \n\napproximate a generative model prior, at least for the color \n\ndistribution, by penalizing distance between patches of the output and \n\nthe nearest patches retrieved from a database of image patches collected\n\n from the training data.\n\n---\n\nPreconditioning and Parameterization\n\n------------------------------------\n\n \n\n It's not clear this is really a regularizer:\n\n If it isn't a regularizer, what does transforming the gradient like this do?\n\n You can think of it as doing steepest descent to optimize the same objective,\n\n \n\n Gradient blurring is equivalent\n\n to gradient descent in a different parameterization of image space, \n\nwhere high frequency dimensions are stretched to make moving in those \n\n \n\n This changes which direction of descent will be steepest, and how fast\n\n the optimization moves in each direction, but it does not change what \n\nthe minimums are.\n\n If there are many local minima, it can stretch and shrink their basins\n\n of attraction, changing which ones the optimization process falls into.\n\n As a result, using the right preconditioner can make an optimization \n\nproblem radically easier.\n\n How can we choose a preconditioner that will give us these benefits?\n\n A good first guess is one that makes your data decorrelated and whitened.\n\n In the case of images this means doing gradient descent in the Fourier basis,\n\n \n\n This points to a profound fact about the Fourier transform.\n\n As long as a correlation is consistent across spatial \n\npositions — such as the correlation between a pixel and its left \n\nneighbor being the same across all positions of an image — the Fourier 
\n\ncoefficients will be independent variables.\n\n To see this, note that such a spatially consistent correlation can \n\nbe expressed as a convolution, and by the convolution theorem becomes \n\npointwise multiplication after the Fourier transform.\n\n \n\n with frequencies scaled so that they all have equal energy.\n\n \n\n Note that we have to be careful to get the colors to be \n\ndecorrelated, too. The Fourier transforms decorrelates spatially, but a \n\ncorrelation will still exist between colors.\n\n To address this, we explicitly measure the correlation between \n\ncolors in the training set and use a Cholesky decomposition to \n\ndecorrelate them. Compare the directions of steepest decent before and \n\nafter decorrelating colors:\n\n \n\n![](Feature%20Visualization_files/correlated_colors.jpeg)\n\n Correlated Colors\n\n \n\n![](Feature%20Visualization_files/decorrelated_colors.jpeg)\n\n Decorrelated Colors\n\n \n\n Three directions of steepest descent under different notions of distance\n\n \n\n \n\nImage\n\n \n\n**L∞ metric** \n\nused in adverserial work\n\n \n\n**L2 metric** \n\nregular gradient\n\n \n\n**decorrelated space**\n\n All of these directions are valid descent directions for the same objective,\n\n but we can see they're radically different.\n\n Notice that optimizing in the decorrelated space reduces high frequencies,\n\n while using L∞ increases them.\n\n It's hard to do really fair comparisons because of hyperparameters, but the\n\n resulting visualizations seem a lot better — and develop faster, too.\n\n[Reproduce in a\n\nLearning Rate\n\n(0.05)\n\nDecorrelated Space\n\nTransformation Robustness\n\nStep 1\n\nStep 32\n\nStep 128\n\nStep 256\n\nStep 2048\n\n (Unless otherwise noted, the images in this article were optimizing in\n\n the decorrelated space and a suite of transformation robustness \n\ntechniques.\n\n \n\n Images were optimized for 2560 steps in a color-decorrelated \n\nfourier-transformed space, using Adam at a learning rate of 0.05.\n\n We used each of following transformations in the given order at each\n\n step of the optimization: \n\n \n\n • Padding the input by 16 pixels to avoid edge artifacts \n\n • Jittering by up to 16 pixels \n\n • Jittering a second time by up to 8 pixels \n\n • Cropping the padding \n\n)\n\n Is the preconditioner merely accelerating descent, bringing us to the \n\nsame place\n\n normal gradient descent would have brought us if we were patient \n\nenough?\n\n Or is it also regularizing, changing which local minima we get \n\nattracted to?\n\n It's hard to tell for sure.\n\n On the one hand, gradient descent seems to continue improving as you \n\nexponentially increase the number of optimization steps — it hasn't \n\nconverged, it's just moving very slowly.\n\n On the other hand, if you turn off all other regularizers, the \n\npreconditioner seems to reduce high-frequency patterns.\n\n---\n\nConclusion\n\n----------\n\n Neural feature visualization has made great progress over the last few years.\n\n In the quest to make neural networks interpretable, feature visualization\n\n stands out as one of the most promising and developed research directions.\n\n By itself, feature visualization will never give a completely satisfactory\n\n understanding. 
We see it as one of the fundamental building blocks that,\n\n There remains still a lot of important work to be done in improving \n\nfeature visualization.\n\n Some issues that stand out include understanding neuron interaction, \n\nfinding which units are most meaningful for understanding neural net \n\nactivations, and giving a holistic view of the facets of a feature.\n\n", "bibliography_bib": [{"title": "Going deeper with convolutions"}, {"title": "Imagenet: A large-scale hierarchical image database"}, {"title": "Visualizing higher-layer features of a deep network"}, {"title": "Inceptionism: Going deeper into neural networks"}, {"title": "Deep inside convolutional networks: Visualising image classification models and saliency maps"}, {"title": "A neural algorithm of artistic style"}, {"title": "Understanding deep image representations by inverting them"}, {"title": "Understanding Intra-Class Knowledge Inside {CNN}"}, {"title": "Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks"}, {"title": "Plug & play generative networks: Conditional iterative generation of images in latent space"}, {"title": "Intriguing properties of neural networks"}, {"title": "Network Dissection: Quantifying Interpretability of Deep Visual Representations"}, {"title": "Deconvolution and checkerboard artifacts"}, {"title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images"}, {"title": "Visualizing GoogLeNet Classes"}, {"title": "Class visualization with bilateral filters"}, {"title": "DeepDreaming with TensorFlow"}, {"title": "Synthesizing the preferred inputs for neurons in neural networks via deep generator networks"}, {"title": "Tensorflow: Large-scale machine learning on heterogeneous distributed systems"}], "id": "5f07b0f410ac299d477c9f09697b9b26"} +{"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Feature-wise transformations", "authors": ["Vincent Dumoulin", "Ethan Perez", "Nathan Schucher", "Florian Strub", "Harm de Vries", "Aaron Courville", "Yoshua Bengio"], "date_published": "2018-07-09", "abstract": " Many real-world problems require integrating multiple sources of information. Sometimes these problems involve multiple, distinct modalities of information — vision, language, audio, etc. — as is required to understand a scene in a movie or answer a question about an image. Other times, these problems involve multiple sources of the same kind of input, i.e. when summarizing several documents or drawing one image in the style of another. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00011", "text": "\n\n Many real-world problems require integrating multiple sources of information.\n\n Sometimes these problems involve multiple, distinct modalities of\n\n information — vision, language, audio, etc. — as is required\n\n to understand a scene in a movie or answer a question about an image.\n\n Other times, these problems involve multiple sources of the same\n\n kind of input, i.e. when summarizing several documents or drawing one\n\n image in the style of another.\n\n \n\n When approaching such problems, it often makes sense to process one source\n\n of information *in the context of* another; for instance, in the\n\n right example above, one can extract meaning from the image in the context\n\n of the question. 
In machine learning, we often refer to this context-based\n\n processing as *conditioning*: the computation carried out by a model\n\n is conditioned or *modulated* by information extracted from an\n\n auxiliary input.\n\n \n\n Finding an effective way to condition on or fuse sources of information\n\n is an open research problem, and\n\n \n\n in this article, we concentrate on a specific family of approaches we call\n\n *feature-wise transformations*.\n\n \n\n We will examine the use of feature-wise transformations in many neural network\n\n architectures to solve a surprisingly large and diverse set of problems;\n\n \n\n their success, we will argue, is due to being flexible enough to learn an\n\n effective representation of the conditioning input in varied settings.\n\n In the language of multi-task learning, where the conditioning signal is\n\n taken to be a task description, feature-wise transformations\n\n learn a task representation which allows them to capture and leverage the\n\n relationship between multiple sources of information, even in remarkably\n\n different problem settings.\n\n \n\n---\n\nFeature-wise transformations\n\n----------------------------\n\n To motivate feature-wise transformations, we start with a basic example,\n\n where the two inputs are images and category labels, respectively. For the\n\n purpose of this example, we are interested in building a generative model of\n\n images of various classes (puppy, boat, airplane, etc.). The model takes as\n\n input a class and a source of random noise (e.g., a vector sampled from a\n\n normal distribution) and outputs an image sample for the requested class.\n\n \n\n Our first instinct might be to build a separate model for each\n\n class. For a small number of classes this approach is not too bad a solution,\n\n but for thousands of classes, we quickly run into scaling issues, as the number\n\n of parameters to store and train grows with the number of classes.\n\n We are also missing out on the opportunity to leverage commonalities between\n\n classes; for instance, different types of dogs (puppy, terrier, dalmatian,\n\n etc.) share visual traits and are likely to share computation when\n\n mapping from the abstract noise vector to the output image.\n\n \n\n Now let's imagine that, in addition to the various classes, we also need to\n\n model attributes like size or color. In this case, we can't\n\n reasonably expect to train a separate network for *each* possible\n\n conditioning combination! Let's examine a few simple options.\n\n \n\n A quick fix would be to concatenate a representation of the conditioning\n\n information to the noise vector and treat the result as the model's input.\n\n This solution is quite parameter-efficient, as we only need to increase\n\n Maybe this assumption is correct, or maybe it's not; perhaps the\n\n model does not need to incorporate the conditioning information until late\n\n into the generation process (e.g., right before generating the final pixel\n\n carry this information around unaltered for many layers.\n\n \n\n Because this operation is cheap, we might as well avoid making any such\n\n assumptions and concatenate the conditioning representation to the input of\n\n *all* layers in the network. 
Let's call this approach\n\n *concatenation-based conditioning*.\n\n \n\n Another efficient way to integrate conditioning information into the network\n\n is via *conditional biasing*, namely, by adding a *bias* to\n\n the hidden layers based on the conditioning representation.\n\n \n\n Interestingly, conditional biasing can be thought of as another way to\n\n implement concatenation-based conditioning. Consider a fully-connected\n\n linear layer applied to the concatenation of an input\n\n x\\mathbf{x}x and a conditioning representation\n\n z\\mathbf{z}z:\n\n \n\n The same argument applies to convolutional networks, provided we ignore\n\n the border effects due to zero-padding.\n\n \n\n Yet another efficient way to integrate class information into the network is\n\n via *conditional scaling*, i.e., scaling hidden layers\n\n based on the conditioning representation.\n\n \n\n A special instance of conditional scaling is feature-wise sigmoidal gating:\n\n we scale each feature by a value between 000 and\n\n 111 (enforced by applying the logistic function), as a\n\n function of the conditioning representation. Intuitively, this gating allows\n\n the conditioning information to select which features are passed forward\n\n and which are zeroed out.\n\n \n\n Given that both additive and multiplicative interactions seem natural and\n\n intuitive, which approach should we pick? One argument in favor of\n\n *multiplicative* interactions is that they are useful in learning\n\n relationships between inputs, as these interactions naturally identify\n\n \"matches\": multiplying elements that agree in sign yields larger values than\n\n multiplying elements that disagree. This property is why dot products are\n\n often used to determine how similar two vectors are.\n\n \n\n Multiplicative interactions alone have had a history of success in various\n\n domains — see [Bibliographic Notes](#bibliographic-notes).\n\n \n\n One argument in favor of *additive* interactions is that they are\n\n more natural for applications that are less strongly dependent on the\n\n joint values of two inputs, like feature aggregation or feature detection\n\n (i.e., checking if a feature is present in either of two inputs).\n\n \n\n In the spirit of making as few assumptions about the problem as possible,\n\n we may as well combine *both* into a\n\n conditional *affine transformation*.\n\n \n\n An affine transformation is a transformation of the form\n\n y=m∗x+by = m \\* x + by=m∗x+b.\n\n \n\n All methods outlined above share the common trait that they act at the\n\n *feature* level; in other words, they leverage *feature-wise*\n\n interactions between the conditioning representation and the conditioned\n\n network. It is certainly possible to use more complex interactions,\n\n but feature-wise interactions often strike a happy compromise between\n\n effectiveness and efficiency: the number of scaling and/or shifting\n\n coefficients to predict scales linearly with the number of features in the\n\n network. Also, in practice, feature-wise transformations (often compounded\n\n across multiple layers) frequently have enough capacity to model complex\n\n phenomenon in various settings.\n\n \n\n Lastly, these transformations only enforce a limited inductive bias and\n\n remain domain-agnostic. This quality can be a downside, as some problems may\n\n be easier to solve with a stronger inductive bias. 
However, it is this\n\n characteristic which also enables these transformations to be so widely\n\n effective across problem domains, as we will later review.\n\n \n\n### Nomenclature\n\n To continue the discussion on feature-wise transformations we need to\n\n abstract away the distinction between multiplicative and additive\n\n interactions. Without losing generality, let's focus on feature-wise affine\n\n transformations, and let's adopt the nomenclature of Perez et al.\n\n , which formalizes conditional affine\n\n transformations under the acronym *FiLM*, for Feature-wise Linear\n\n Modulation.\n\n \n\n Strictly speaking, *linear* is a misnomer, as we allow biasing, but\n\n we hope the more rigorous-minded reader will forgive us for the sake of a\n\n better-sounding acronym.\n\n \n\n We say that a neural network is modulated using FiLM, or *FiLM-ed*,\n\n after inserting *FiLM layers* into its architecture. These layers are\n\n parametrized by some form of conditioning information, and the mapping from\n\n conditioning information to FiLM parameters (i.e., the shifting and scaling\n\n coefficients) is called the *FiLM generator*.\n\n In other words, the FiLM generator predicts the parameters of the FiLM\n\n layers based on some auxiliary input.\n\n Note that the FiLM parameters are parameters in one network but predictions\n\n from another network, so they aren't learnable parameters with fixed\n\n weights as in the fully traditional sense.\n\n For simplicity, you can assume that the FiLM generator outputs the\n\n concatenation of all FiLM parameters for the network architecture.\n\n \n\n As the name implies, a FiLM layer applies a feature-wise affine\n\n transformation to its input. By *feature-wise*, we mean that scaling\n\n and shifting are applied element-wise, or in the case of convolutional\n\n networks, feature map -wise.\n\n \n\n To expand a little more on the convolutional case, feature maps can be\n\n thought of as the same feature detector being evaluated at different\n\n spatial locations, in which case it makes sense to apply the same affine\n\n transformation to all spatial locations.\n\n \n\n In other words, assuming x\\mathbf{x}x is a FiLM layer's\n\n input, z\\mathbf{z}z is a conditioning input, and\n\n γ\\gammaγ and βetaβ are\n\n z\\mathbf{z}z-dependent scaling and shifting vectors,\n\n FiLM(x)=γ(z)⊙x+β(z).\n\n \textrm{FiLM}(\\mathbf{x}) = \\gamma(\\mathbf{z}) \\odot \\mathbf{x}\n\n + eta(\\mathbf{z}).\n\n FiLM(x)=γ(z)⊙x+β(z).\n\n You can interact with the following fully-connected and convolutional FiLM\n\n layers to get an intuition of the sort of modulation they allow:\n\n \n\n In addition to being a good abstraction of conditional feature-wise\n\n transformations, the FiLM nomenclature lends itself well to the notion of a\n\n *task representation*. From the perspective of multi-task learning,\n\n we can view the conditioning signal as the task description. More\n\n specifically, we can view the concatenation of all FiLM scaling and shifting\n\n coefficients as both an instruction on *how to modulate* the\n\n conditioned network and a *representation* of the task at hand. We\n\n will explore and illustrate this idea later on.\n\n \n\n---\n\nFeature-wise transformations in the literature\n\n----------------------------------------------\n\n Feature-wise transformations find their way into methods applied to many\n\n problem settings, but because of their simplicity, their effectiveness is\n\n seldom highlighted in lieu of other novel research contributions. 
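Before surveying those applications, here is a minimal NumPy sketch of a FiLM layer and of a toy, linear FiLM generator; the weight matrices are placeholders for whatever network actually predicts the FiLM parameters.

```python
import numpy as np

def film(x, gamma, beta):
    """FiLM layer: FiLM(x) = gamma(z) * x + beta(z), applied feature-wise."""
    return gamma * x + beta

def linear_film_generator(z, W_gamma, b_gamma, W_beta, b_beta):
    """Toy FiLM generator: a linear map from the conditioning input z to the
    scaling and shifting coefficients."""
    return W_gamma @ z + b_gamma, W_beta @ z + b_beta
```

In a convolutional network, `x` would be a feature map of shape `(H, W, C)` and the same per-channel `gamma` and `beta` (shape `(C,)`) would modulate every spatial position, as described above.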
Below are\n\n a few notable examples of feature-wise transformations in the literature,\n\n grouped by application domain. The diversity of these applications\n\n underscores the flexible, general-purpose ability of feature-wise\n\n interactions to learn effective task representations.\n\n \n\nexpand all\n\nVisual question-answering+\n\n Perez et al. use\n\n FiLM layers to build a visual reasoning model\n\n trained on the CLEVR dataset to\n\n answer multi-step, compositional questions about synthetic images.\n\n \n\n The model's linguistic pipeline is a FiLM generator which\n\n extracts a question representation that is linearly mapped to\n\n FiLM parameter values. Using these values, FiLM layers inserted within each\n\n residual block condition the visual pipeline. The model is trained\n\n end-to-end on image-question-answer triples. Strub et al.\n\n later on improved on the model by\n\n using an attention mechanism to alternate between attending to the language\n\n input and generating FiLM parameters layer by layer. This approach was\n\n better able to scale to settings with longer input sequences such as\n\n dialogue and was evaluated on the GuessWhat?! \n\n and ReferIt datasets.\n\n \n\n de Vries et al. leverage FiLM\n\n to condition a pre-trained network. Their model's linguistic pipeline\n\n modulates the visual pipeline via conditional batch normalization,\n\n real-world images on the GuessWhat?! \n\n and VQAv1 datasets.\n\n \n\n The visual pipeline consists of a pre-trained residual network that is\n\n fixed throughout training. The linguistic pipeline manipulates the visual\n\n pipeline by perturbing the residual network's batch normalization\n\n parameters, which re-scale and re-shift feature maps after activations\n\n have been normalized to have zero mean and unit variance. As hinted\n\n earlier, conditional batch normalization can be viewed as an instance of\n\n FiLM where the post-normalization feature-wise affine transformation is\n\n replaced with a FiLM layer.\n\n \n\nStyle transfer+\n\n Dumoulin et al. use\n\n feature-wise affine transformations — in the form of conditional\n\n instance normalization layers — to condition a style transfer\n\n network on a chosen style image. Like conditional batch normalization\n\n discussed in the previous subsection,\n\n conditional instance normalization can be seen as an instance of FiLM\n\n where a FiLM layer replaces the post-normalization feature-wise affine\n\n instance normalization parameters, and it applies normalization with these\n\n style-specific parameters.\n\n \n\n Dumoulin et al. use a simple\n\n embedding lookup to produce instance normalization parameters, while\n\n Ghiasi et al. further\n\n introduce a *style prediction network*, trained jointly with the\n\n style transfer network to predict the conditioning parameters directly from\n\n a given style image. In this article we opt to use the FiLM nomenclature\n\n because it is decoupled from normalization operations, but the FiLM\n\n layers used by Perez et al. were\n\n themselves heavily inspired by the conditional normalization layers used\n\n by Dumoulin et al. .\n\n \n\n Yang et al. use a related\n\n architecture for video object segmentation — the task of segmenting a\n\n particular object throughout a video given that object's segmentation in the\n\n first frame. 
Their model conditions an image segmentation network over a\n\n video frame on the provided first frame segmentation using feature-wise\n\n scaling factors, as well as on the previous frame using position-wise\n\n biases.\n\n \n\n So far, the models we covered have two sub-networks: a primary\n\n network in which feature-wise transformations are applied and a secondary\n\n network which outputs parameters for these transformations. However, this\n\n distinction between *FiLM-ed network* and *FiLM generator*\n\n is not strictly necessary. As an example, Huang and Belongie\n\n propose an alternative\n\n style transfer network that uses adaptive instance normalization layers,\n\n which compute normalization parameters using a simple heuristic.\n\n \n\n Adaptive instance normalization can be interpreted as inserting a FiLM\n\n layer midway through the model. However, rather than relying\n\n on a secondary network to predict the FiLM parameters from the style\n\n image, the main network itself is used to extract the style features\n\n used to compute FiLM parameters. Therefore, the model can be seen as\n\n *both* the FiLM-ed network and the FiLM generator.\n\n \n\nImage recognition+\n\n neural network's activations *themselves* as conditioning\n\n information. This idea gives rise to\n\n *self-conditioned* models.\n\n \n\n Highway Networks are a prime\n\n example of applying this self-conditioning principle. They take inspiration\n\n from the LSTMs' heavy use of\n\n feature-wise sigmoidal gating in their input, forget, and output gates to\n\n regulate information flow:\n\n \n\ninputsub-networksigmoidal layer1 - xoutput\n\n The ImageNet 2017 winning model also\n\n employs feature-wise sigmoidal gating in a self-conditioning manner, as a\n\n way to \"recalibrate\" a layer's activations conditioned on themselves.\n\n \n\nNatural language processing+\n\n For statistical language modeling (i.e., predicting the next word\n\n in a sentence), the LSTM \n\n constitutes a popular class of recurrent network architectures. The LSTM\n\n relies heavily on feature-wise sigmoidal gating to control the\n\n information flow in and out of the memory or context cell\n\n c\\mathbf{c}c, based on the hidden states\n\n h\\mathbf{h}h and inputs x\\mathbf{x}x at\n\n every timestep t\\mathbf{t}t.\n\n \n\nct-1tanhcthtsigmoidsigmoidtanhsigmoidlinearlinearlinearlinearht-1xtconcatenate\n\n Also in the domain of language modeling, Dauphin et al. use sigmoidal\n\n gating in their proposed *gated linear unit*, which uses half of the\n\n input features to apply feature-wise sigmoidal gating to the other half.\n\n Gehring et al. adopt this\n\n architectural feature, introducing a fast, parallelizable model for machine\n\n translation in the form of a fully convolutional network.\n\n \n\n The Gated-Attention Reader \n\n uses feature-wise scaling, extracting information\n\n from text by conditioning a document-reading network on a query. Its\n\n architecture consists of multiple Gated-Attention modules, which involve\n\n element-wise multiplications between document representation tokens and\n\n token-specific query representations extracted via soft attention on the\n\n query representation tokens.\n\n \n\nReinforcement learning+\n\n The Gated-Attention architecture \n\n uses feature-wise sigmoidal gating to fuse linguistic and visual\n\n information in an agent trained to follow simple \"go-to\" language\n\n instructions in the VizDoom 3D\n\n environment.\n\n \n\n Bahdanau et al. 
use FiLM\n\n layers to condition Neural Module Network\n\n and LSTM -based policies to follow\n\n basic, compositional language instructions (arranging objects and going\n\n to particular locations) in a 2D grid world. They train this policy\n\n in an adversarial manner using rewards from another FiLM-based network,\n\n trained to discriminate between ground-truth examples of achieved\n\n instruction states and failed policy trajectories states.\n\n \n\n Outside instruction-following, Kirkpatrick et al.\n\n also use\n\n game-specific scaling and biasing to condition a shared policy network\n\n trained to play 10 different Atari games.\n\n \n\nGenerative modeling+\n\n The conditional variant of DCGAN ,\n\n a well-recognized network architecture for generative adversarial networks\n\n , uses concatenation-based\n\n conditioning. The class label is broadcasted as a feature map and then\n\n concatenated to the input of convolutional and transposed convolutional\n\n layers in the discriminator and generator networks.\n\n \n\n For convolutional layers, concatenation-based conditioning requires the\n\n network to learn redundant convolutional parameters to interpret each\n\n constant, conditioning feature map; as a result, directly applying a\n\n conditional bias is more parameter efficient, but the two approaches are\n\n still mathematically equivalent.\n\n \n\n PixelCNN \n\n and WaveNet  — two recent\n\n advances in autoregressive, generative modeling of images and audio,\n\n respectively — use conditional biasing. The simplest form of\n\n conditioning in PixelCNN adds feature-wise biases to all convolutional layer\n\n outputs. In FiLM parlance, this operation is equivalent to inserting FiLM\n\n layers after each convolutional layer and setting the scaling coefficients\n\n to a constant value of 1.\n\n \n\n The authors also describe a location-dependent biasing scheme which\n\n cannot be expressed in terms of FiLM layers due to the absence of the\n\n feature-wise property.\n\n \n\n WaveNet describes two ways in which conditional biasing allows external\n\n information to modulate the audio or speech generation process based on\n\n conditioning input:\n\n \n\n1. **Global conditioning** applies the same conditional bias\n\n to the whole generated sequence and is used e.g. to condition on speaker\n\n identity.\n\n2. **Local conditioning** applies a conditional bias which\n\n varies across time steps of the generated sequence and is used e.g. to\n\n let linguistic features in a text-to-speech model influence which sounds\n\n are produced.\n\n As in PixelCNN, conditioning in WaveNet can be viewed as inserting FiLM\n\n layers after each convolutional layer. The main difference lies in how\n\n the FiLM-generating network is defined: global conditioning\n\n expresses the FiLM-generating network as an embedding lookup which is\n\n broadcasted to the whole time series, whereas local conditioning expresses\n\n it as a mapping from an input sequence of conditioning information to an\n\n output sequence of FiLM parameters.\n\n \n\nSpeech recognition+\n\n Kim et al. modulate a deep\n\n bidirectional LSTM using a form\n\n of conditional normalization. 
As discussed in the\n\n *Visual question-answering* and *Style transfer* subsections,\n\n conditional normalization can be seen as an instance of FiLM where\n\n the post-normalization feature-wise affine transformation is replaced\n\n with a FiLM layer.\n\n \n\n The key difference here is that the conditioning signal does not come from\n\n an external source but rather from utterance\n\n summarization feature vectors extracted in each layer to adapt the model.\n\n \n\nDomain adaptation and few-shot learning+\n\n For domain adaptation, Li et al. \n\n find it effective to update the per-channel batch normalization\n\n statistics (mean and variance) of a network trained on one domain with that\n\n network's statistics in a new, target domain. As discussed in the\n\n *Style transfer* subsection, this operation is akin to using the network as\n\n both the FiLM generator and the FiLM-ed network. Notably, this approach,\n\n along with Adaptive Instance Normalization, has the particular advantage of\n\n not requiring any extra trainable parameters.\n\n \n\n For few-shot learning, Oreshkin et al.\n\n explore the use of FiLM layers to\n\n provide more robustness to variations in the input distribution across\n\n few-shot learning episodes. The training set for a given episode is used to\n\n produce FiLM parameters which modulate the feature extractor used in a\n\n Prototypical Networks \n\n meta-training procedure.\n\n \n\n---\n\nRelated ideas\n\n-------------\n\n Aside from methods which make direct use of feature-wise transformations,\n\n the FiLM framework connects more broadly with the following methods and\n\n concepts.\n\n \n\nexpand all\n\nZero-shot learning+\n\n The idea of learning a task representation shares a strong connection with\n\n zero-shot learning approaches. In zero-shot learning, semantic task\n\n embeddings may be learned from external information and then leveraged to\n\n make predictions about classes without training examples. For instance, to\n\n generalize to unseen object categories for image classification, one may\n\n construct semantic task embeddings from text-only descriptions and exploit\n\n objects' text-based relationships to make predictions for unseen image\n\n categories. Frome et al. , Socher et\n\n al. , and Norouzi et al.\n\n are a few notable exemplars\n\n of this idea.\n\n \n\nHyperNetworks+\n\n The notion of a secondary network predicting the parameters of a primary\n\n (e.g., a recurrent neural network layer). From this perspective, the FiLM\n\n generator is a specialized HyperNetwork that predicts the FiLM parameters of\n\n the FiLM-ed network. 
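To make the distinction drawn next concrete, here is a back-of-the-envelope comparison of how many values a conditioning network must predict for a single convolutional layer under FiLM versus under a full HyperNetwork. The layer sizes are illustrative assumptions, not taken from any of the cited models.

```python
# Assumed layer: a 3x3 convolution with 256 input and 256 output channels.
in_ch, out_ch, k = 256, 256, 3

film_params = 2 * out_ch                           # one (gamma, beta) pair per feature map
hypernet_params = out_ch * in_ch * k * k + out_ch  # the full kernel plus biases

print(film_params)      # 512
print(hypernet_params)  # 590080
```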
The main distinction between the two resides in the\n\n number and specificity of predicted parameters: FiLM requires predicting far\n\n fewer parameters than Hypernetworks, but also has less modulation potential.\n\n The ideal trade-off between a conditioning mechanism's capacity,\n\n regularization, and computational complexity is still an ongoing area of\n\n investigation, and many proposed approaches lie on the spectrum between FiLM\n\n and HyperNetworks (see [Bibliographic Notes](#bibliographic-notes)).\n\n \n\nAttention+\n\n Some parallels can be drawn between attention and FiLM, but the two operate\n\n in different ways which are important to disambiguate.\n\n \n\n This difference stems from distinct intuitions underlying attention and\n\n FiLM: the former assumes that specific spatial locations or time steps\n\n contain the most useful information, whereas the latter assumes that\n\n specific features or feature maps contain the most useful information.\n\n \n\nBilinear transformations+\n\n With a little bit of stretching, FiLM can be seen as a special case of a\n\n bilinear transformation\n\n with low-rank weight\n\n matrices. A bilinear transformation defines the relationship between two\n\n inputs x\\mathbf{x}x and z\\mathbf{z}z and the\n\n kthk^{th}kth output feature yky\\_kyk​ as\n\n yk=xTWkz.\n\n y\\_k = \\mathbf{x}^T W\\_k \\mathbf{z}.\n\n yk​=xTWk​z.\n\n Note that for each output feature yky\\_kyk​ we have a separate\n\n matrix WkW\\_kWk​, so the full set of weights forms a\n\n multi-dimensional array.\n\n \n\n If we view z\\mathbf{z}z as the concatenation of the scaling\n\n and shifting vectors γ\\gammaγ and βetaβ and\n\n if we augment the input x\\mathbf{x}x with a 1-valued feature,\n\n \n\n As is commonly done to turn a linear transformation into an affine\n\n transformation.\n\n \n\n we can represent FiLM using a bilinear transformation by zeroing out the\n\n appropriate weight matrix entries:\n\n \n\n For some applications of bilinear transformations,\n\n see the [Bibliographic Notes](#bibliographic-notes).\n\n \n\n---\n\nProperties of the learned task representation\n\n---------------------------------------------\n\n As hinted earlier, in adopting the FiLM perspective we implicitly introduce\n\n a notion of *task representation*: each task — be it a question\n\n about an image or a painting style to imitate — elicits a different\n\n set of FiLM parameters via the FiLM generator which can be understood as its\n\n representation in terms of how to modulate the FiLM-ed network. To help\n\n better understand the properties of this representation, let's focus on two\n\n FiLM-ed models used in fairly different problem settings:\n\n \n\n* The visual reasoning model of Perez et al.\n\n , which uses FiLM\n\n to modulate a visual processing pipeline based off an input question.\n\n \n\n* The artistic style transfer model of Ghiasi et al.\n\n , which uses FiLM to modulate a\n\n feed-forward style transfer network based off an input style image.\n\n \n\n As a starting point, can we discern any pattern in the FiLM parameters as a\n\n function of the task description? One way to visualize the FiLM parameter\n\n space is to plot γ\\gammaγ against βetaβ,\n\n with each point corresponding to a specific task description and a specific\n\n feature map. If we color-code each point according to the feature map it\n\n belongs to we observe the following:\n\n \n\n The plots above allow us to make several interesting observations. 
First,\n\n FiLM parameters cluster by feature map in parameter space, and the cluster\n\n locations are not uniform across feature maps. The orientation of these\n\n clusters is also not uniform across feature maps: the main axis of variation\n\n can be γ\\gammaγ-aligned, βetaβ-aligned, or\n\n diagonal at varying angles. These findings suggest that the affine\n\n transformation in FiLM layers is not modulated in a single, consistent way,\n\n i.e., using γ\\gammaγ only, βetaβ only, or\n\n γ\\gammaγ and βetaβ together in some specific\n\n way. Maybe this is due to the affine transformation being overspecified, or\n\n maybe this shows that FiLM layers can be used to perform modulation\n\n operations in several distinct ways.\n\n \n\n Nevertheless, the fact that these parameter clusters are often somewhat\n\n \"dense\" may help explain why the style transfer model of Ghiasi et al.\n\n is able to perform style\n\n interpolations: any convex combination of FiLM parameters is likely to\n\n correspond to a meaningful parametrization of the FiLM-ed network.\n\n \n\nStyle 1Style 2 w InterpolationContent Image\n\n To some extent, the notion of interpolating between tasks using FiLM\n\n parameters can be applied even in the visual question-answering setting.\n\n Using the model trained in Perez et al. ,\n\n we interpolated between the model's FiLM parameters for two pairs of CLEVR\n\n questions. Here we visualize the input locations responsible for\n\n \n\n The network seems to be softly switching where in the image it is looking,\n\n based on the task description. It is quite interesting that these semantically\n\n meaningful interpolation behaviors emerge, as the network has not been\n\n trained to act this way.\n\n \n\n Despite these similarities across problem settings, we also observe\n\n qualitative differences in the way in which FiLM parameters cluster as a\n\n function of the task description. Unlike the style transfer model, the\n\n visual reasoning model sometimes exhibits several FiLM parameter\n\n sub-clusters for a given feature map.\n\n \n\n At the very least, this may indicate that FiLM learns to operate in ways\n\n that are problem-specific, and that we should not expect to find a unified\n\n and problem-independent explanation for FiLM's success in modulating FiLM-ed\n\n networks. Perhaps the compositional or discrete nature of visual reasoning\n\n requires the model to implement several well-defined modes of operation\n\n which are less necessary for style transfer.\n\n \n\n Focusing on individual feature maps which exhibit sub-clusters, we can try\n\n to infer how questions regroup by color-coding the scatter plots by question\n\n type.\n\n \n\n Sometimes a clear pattern emerges, as in the right plot, where color-related\n\n questions concentrate in the top-right cluster — we observe that\n\n questions either are of type *Query color* or *Equal color*,\n\n or contain concepts related to color. Sometimes it is harder to draw a\n\n conclusion, as in the left plot, where question types are scattered across\n\n the three clusters.\n\n \n\n In cases where question types alone cannot explain the clustering of the\n\n FiLM parameters, we can turn to the conditioning content itself to gain\n\n an understanding of the mechanism at play. Let's take a look at two more\n\n plots: one for feature map 26 as in the previous figure, and another\n\n for a different feature map, also exhibiting several subclusters. 
This time\n\n we regroup points by the words which appear in their associated question.\n\n \n\n In the left plot, the left subcluster corresponds to questions involving\n\n objects positioned *in front* of other objects, while the right\n\n subcluster corresponds to questions involving objects positioned\n\n *behind* other objects. In the right plot we see some evidence of\n\n separation based on object material: the left subcluster corresponds to\n\n questions involving *matte* and *rubber* objects, while the\n\n right subcluster contains questions about *shiny* or\n\n *metallic* objects.\n\n \n\n The presence of sub-clusters in the visual reasoning model also suggests\n\n that question interpolations may not always work reliably, but these\n\n sub-clusters don't preclude one from performing arithmetic on the question\n\n representations, as Perez et al. \n\n report.\n\n \n\n Perez et al. report that this sort of\n\n task analogy is not always successful in correcting the model's answer, but\n\n it does point to an interesting fact about FiLM-ed networks: sometimes the\n\n model makes a mistake not because it is incapable of computing the correct\n\n output, but because it fails to produce the correct FiLM parameters for a\n\n given task description. The reverse can also be true: if the set of tasks\n\n the model was trained on is insufficiently rich, the computational\n\n primitives learned by the FiLM-ed network may be insufficient to ensure good\n\n generalization. For instance, a style transfer model may lack the ability to\n\n produce zebra-like patterns if there are no stripes in the styles it was\n\n trained on. This could explain why Ghiasi et al.\n\n report that their style transfer\n\n model's ability to produce pastiches for new styles degrades if it has been\n\n trained on an insufficiently large number of styles. Note however that in\n\n that case the FiLM generator's failure to generalize could also play a role,\n\n and further analysis would be needed to draw a definitive conclusion.\n\n \n\n This points to a separation between the various computational\n\n primitives learned by the FiLM-ed network and the \"numerical recipes\"\n\n learned by the FiLM generator: the model's ability to generalize depends\n\n both on its ability to parse new forms of task descriptions and on it having\n\n learned the required computational primitives to solve those tasks. We note\n\n that this multi-faceted notion of generalization is inherited directly from\n\n the multi-task point of view adopted by the FiLM framework.\n\n \n\n Let's now turn our attention back to the overal structural properties of FiLM\n\n parameters observed thus far. The existence of this structure has already\n\n been explored, albeit more indirectly, by Ghiasi et al.\n\n as well as Perez et al.\n\n , who applied t-SNE\n\n on the FiLM parameter values.\n\n \n\n t-SNE projection of FiLM parameters for many task descriptions.\n\n \n\n The projection on the left is inspired by a similar projection done by Perez\n\n et al. for their visual reasoning\n\n model trained on CLEVR and shows how questions group by question type.\n\n The projection on the right is inspired by a similar projection done by\n\n Ghiasi et al. for their style\n\n transfer network. The projection does not cluster artists as neatly as the\n\n projection on the left, but this is to be expected, given that an artist's\n\n style may vary widely over time. 
However, we can still detect interesting\n\n patterns in the projection: note for instance the isolated cluster (circled\n\n in the figure) in which paintings by Ivan Shishkin and Rembrandt are\n\n aggregated. While these two painters exhibit fairly different styles, the\n\n cluster is a grouping of their sketches.\n\n \n\n To summarize, the way neural networks learn to use FiLM layers seems to\n\n vary from problem to problem, input to input, and even from feature to\n\n feature; there does not seem to be a single mechanism by which the\n\n network uses FiLM to condition computation. This flexibility may\n\n explain why FiLM-related methods have been successful across such a\n\n wide variety of domains.\n\n \n\n---\n\nDiscussion\n\n----------\n\n Looking forward, there are still many unanswered questions.\n\n Do these experimental observations on FiLM-based architectures generalize to\n\n other related conditioning mechanisms, such as conditional biasing, sigmoidal\n\n gating, HyperNetworks, and bilinear transformations? When do feature-wise\n\n transformations outperform methods with stronger inductive biases and vice\n\n versa? Recent work combines feature-wise transformations with stronger\n\n inductive bias methods\n\n ,\n\n which could be an optimal middle ground. Also, to what extent are FiLM's\n\n task representation properties\n\n inherent to FiLM, and to what extent do they emerge from other features\n\n of neural networks (i.e. non-linearities, FiLM generator\n\n depth, etc.)? If you are interested in exploring these or other\n\n questions about FiLM, we recommend looking into the code bases for\n\n FiLM models for [visual reasoning](https://github.com/ethanjperez/film)\n\n which we used as a\n\n starting point for our experiments here.\n\n \n\n Finally, the fact that changes on the feature level alone are able to\n\n compound into large and meaningful modulations of the FiLM-ed network is\n\n still very surprising to us, and hopefully future work will uncover deeper\n\n explanations. For now, though, it is a question that\n\n evokes the even grander mystery of how neural networks in general compound\n\n simple operations like matrix multiplications and element-wise\n\n non-linearities into semantically meaningful transformations.\n\n \n\n", "bibliography_bib": [{"title": "FiLM: Visual Reasoning with a General Conditioning Layer"}, {"title": "Learning visual reasoning without strong priors"}, {"title": "CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning"}, {"title": "Visual Reasoning with Multi-hop Feature Modulation"}, {"title": "GuessWhat?! 
Visual object discovery through multi-modal dialogue"}, {"title": "ReferItGame: Referring to objects in photographs of natural scenes"}, {"title": "Modulating early visual processing by language"}, {"title": "VQA: visual question answering"}, {"title": "A learned representation for artistic style"}, {"title": "Exploring the structure of a real-time, arbitrary neural artistic stylization network"}, {"title": "Efficient video object segmentation via network modulation"}, {"title": "Arbitrary style transfer in real-time with adaptive instance normalization"}, {"title": "Highway networks"}, {"title": "Long short-term memory"}, {"title": "Squeeze-and-Excitation networks"}, {"title": "On the state of the art of evaluation in neural language models"}, {"title": "Language modeling with gated convolutional networks"}, {"title": "Convolution sequence-to-sequence learning"}, {"title": "Gated-attention readers for text comprehension"}, {"title": "Gated-attention architectures for task-oriented language grounding"}, {"title": "Vizdoom: A doom-based AI research platform for visual reinforcement learning"}, {"title": "Learning to follow language instructions with adversarial reward induction"}, {"title": "Neural module networks"}, {"title": "Overcoming catastrophic forgetting in neural networks"}, {"title": "Unsupervised representation learning with deep convolutional generative adversarial networks"}, {"title": "Generative adversarial nets"}, {"title": "Conditional image generation with PixelCNN decoders"}, {"title": "WaveNet: A generative model for raw audio"}, {"title": "Dynamic layer normalization for adaptive neural acoustic modeling in speech recognition"}, {"title": "Adaptive batch normalization for practical domain adaptation"}, {"title": "TADAM: Task dependent adaptive metric for improved few-shot learning"}, {"title": "Prototypical networks for few-shot learning"}, {"title": "Devise: A deep visual-semantic embedding model"}, {"title": "Zero-shot learning through cross-modal transfer"}, {"title": "Zero-shot learning by convex combination of semantic embeddings"}, {"title": "HyperNetworks"}, {"title": "Separating style and content with bilinear models"}, {"title": "Visualizing data using t-SNE"}, {"title": "A dataset and architecture for visual reasoning with a working memory"}, {"title": "A parallel computation that assigns canonical object-based frames of reference"}, {"title": "The correlation theory of brain function"}, {"title": "Generating text with recurrent neural networks"}, {"title": "Robust boltzmann machines for recognition and denoising"}, {"title": "Factored conditional restricted Boltzmann machines for modeling motion style"}, {"title": "Combining discriminative features to infer complex trajectories"}, {"title": "Learning where to attend with deep architectures for image tracking"}, {"title": "Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis"}, {"title": "Convolutional learning of spatio-temporal features"}, {"title": "Learning to relate images"}, {"title": "Incorporating side information by adaptive convolution"}, {"title": "Learning multiple visual domains with residual adapters"}, {"title": "Predicting deep zero-shot convolutional neural networks using textual descriptions"}, {"title": "Zero-shot task generalization with multi-task deep reinforcement learning"}, {"title": "Separating style and content"}, {"title": "Facial expression space learning"}, {"title": "Personalized recommendation on dynamic content using 
predictive bilinear models"}, {"title": "Like like alike: joint friendship and interest propagation in social networks"}, {"title": "Matrix factorization techniques for recommender systems"}, {"title": "Bilinear CNN models for fine-grained visual recognition"}, {"title": "Convolutional two-stream network fusion for video action recognition"}, {"title": "Multimodal compact bilinear pooling for visual question answering and visual grounding"}], "id": "f1eb807aa226d7d91aa13ef44f8c17c7"} +{"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Visualizing memorization in RNNs", "authors": ["Andreas Madsen"], "date_published": "2019-03-25", "abstract": " Memorization in Recurrent Neural Networks (RNNs) continues to pose a challenge in many applications. We’d like RNNs to be able to store information over many timesteps and retrieve it when it becomes relevant — but vanilla RNNs often struggle to do this. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00016", "text": "\n\n Memorization in Recurrent Neural Networks (RNNs) continues to pose a challenge\n\n in many applications. We'd like RNNs to be able to store information over many\n\n \n\n as Long-Short-Term Memory (LSTM)\n\n units and Gated Recurrent Units (GRU).\n\n However, the practical problem of memorization still poses a challenge.\n\n As such, developing new recurrent units that are better at memorization\n\n continues to be an active field of research.\n\n \n\n To compare a recurrent unit against its alternatives, both past and recent\n\n papers, such as the Nested LSTM paper by Monzi et al.\n\n , heavily rely on quantitative\n\n comparisons. These comparisons often measure accuracy or\n\n cross entropy loss on standard problems such as Penn Treebank, Chinese\n\n Poetry Generation, or text8, where the task is to predict the\n\n next character given existing input.\n\n \n\n While quantitative comparisons are useful, they only provide partial\n\n insight into the how a recurrent unit memorizes. A model can, for \n\nexample,\n\n achieve high accuracy and cross entropy loss by just providing \n\nhighly accurate\n\n predictions in cases that only require short-term memorization, \n\nwhile\n\n being inaccurate at predictions that require long-term\n\n memorization.\n\n For example, when autocompleting words in a sentence, a model with \n\nonly short-term understanding could still exhibit high accuracy \n\ncompleting the ends of words once most of the characters are present.\n\n However, without longer term contextual understanding it won't be \n\nable to predict words when only a few characters are known.\n\n \n\n This article presents a qualitative visualization method for comparing\n\n recurrent units with regards to memorization and contextual understanding.\n\n \n\nRecurrent Units\n\n---------------\n\n The networks that will be analyzed all use a simple RNN structure:\n\n \n\nhℓth\\_{\\ell}^{t}hℓt​\n\n Output for layer ℓ\\ellℓ at time ttt.\n\n \n\n===\n\nUnit\\mathrm{Unit}Unit\n\n Recurrent unit of choice.\n\n \n\nyty^tyt\n\n===\n\nSoftmax\\mathrm{Softmax}Softmax\n\n(hLt)(h\\_L^t)(hLt​)\n\n In theory, the time dependency allows it in each iteration to know\n\n about every part of the sequence that came before. 
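Before looking at why this structure struggles with long-term dependencies, here is a minimal sketch of the stacked recurrent structure described above. The plain tanh cell stands in for the "recurrent unit of choice" (GRU, LSTM, or Nested LSTM in this article), and the small dimensions are illustrative; the models compared later use two layers of 600 units each.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

class VanillaCell:
    """Placeholder for the recurrent unit of choice (GRU, LSTM, Nested LSTM, ...)."""
    def __init__(self, in_dim, hidden, rng):
        self.Wx = rng.normal(scale=0.1, size=(in_dim, hidden))
        self.Wh = rng.normal(scale=0.1, size=(hidden, hidden))

    def __call__(self, x, h_prev):
        return np.tanh(x @ self.Wx + h_prev @ self.Wh)

rng = np.random.default_rng(0)
vocab, hidden, layers, T = 50, 64, 2, 10
cells = [VanillaCell(vocab if l == 0 else hidden, hidden, rng) for l in range(layers)]
W_out = rng.normal(scale=0.1, size=(hidden, vocab))

h = [np.zeros(hidden) for _ in range(layers)]       # hidden state of each layer
xs = np.eye(vocab)[rng.integers(0, vocab, size=T)]  # one-hot input sequence

for x_t in xs:
    below = x_t
    for l, cell in enumerate(cells):   # each layer reads the layer below and its own past state
        h[l] = cell(below, h[l])
        below = h[l]
    y_t = softmax(h[-1] @ W_out)       # output distribution from the top layer
print(y_t.shape)  # (50,)
```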
However, this time\n\n dependency typically causes a vanishing gradient problem that results in\n\n long-term dependencies being ignored during training\n\n .\n\n \n\n**Vanishing Gradient:** where the contribution from the\n\n earlier steps becomes insignificant in the gradient for the vanilla RNN\n\n unit.\n\n \n\nSoftmax LayerRecurrent LayerRecurrent LayerInput Layer\n\n Several solutions to the vanishing gradient problem have been proposed over\n\n the years. The most popular are the aforementioned LSTM and GRU units, but this\n\n is still an area of active research. Both LSTM and GRU are well known\n\n  — an explanation of Nested LSTMs\n\n can be found [in the appendix](#appendix-nestedlstm).\n\n \n\n* Nested LSTM\n\n* LSTM\n\n* GRU\n\n**Recurrent Unit, Nested LSTM:** makes the cell update depend on another\n\n LSTM unit, supposedly this allows more long-term memory compared to\n\n stacking LSTM layers.\n\n \n\n**Recurrent Unit, LSTM:** allows for long-term\n\n memorization by gateing its update, thereby solving the vanishing gradient\n\n problem.\n\n \n\n**Recurrent Unit, GRU:** solves the vanishing gradient\n\n problem without depending on an internal memory state.\n\n \n\n It is not entirely clear why one recurrent unit performs better than another\n\n in some applications, while in other applications it is another type of\n\n recurrent unit that performs better. Theoretically they all solve the vanishing\n\n gradient problem, but in practice their performance is highly application\n\n dependent.\n\n \n\n Understanding why these differences occur is likely an opaque and\n\n challenging problem. The purpose of this article is to demonstrate a\n\n visualization technique that can better highlight what these differences\n\n are. Hopefully, such an understanding can lead to a deeper understanding.\n\n \n\nComparing Recurrent Units\n\n-------------------------\n\n loss. Differences in these high-level quantitative measures\n\n can have many explanations and may only be because of some small improvement\n\n in predictions that only requires short-term contextual understanding,\n\n while it is often the long-term contextual understanding that is of interest.\n\n \n\n### A problem for qualitative analysis\n\n Therefore a good problem for qualitatively analyzing contextual\n\n understanding should have a human-interpretable output and depend both on\n\n long-term and short-term contextual understanding. The typical problems\n\n that are often used, such as Penn Treebank, Chinese Poetry Generation, or\n\n text8 generation do not have outputs that are easy to reason about, as they\n\n require an extensive understanding of either grammar, Chinese poetry, or\n\n only output a single letter.\n\n \n\n \n\n The autocomplete problem is quite similar to the text8 generation\n\n problem: the only difference is that instead of predicting the next letter,\n\n the model predicts an entire word. This makes the output much more\n\n interpretable. 
Finally, because of its close relation to text8 generation,\n\n existing literature on text8 generation is relevant and comparable,\n\n in the sense that models that work well on text8 generation should work\n\n well on the autocomplete problem.\n\n \n\nUser types input sequence.\n\nRecurrent neural network processes the sequence.\n\nThe output for the last character is used.\n\nThe most likely suggestions are extracted.\n\n parts of north af \n\ncustom input, loading ...\n\nafrica(85.30%)africans(1.40%)african(8.90%)\n\n**Autocomplete:** An application that has a humanly\n\n interpretable output, while depending on both short and long-term\n\n contextual understanding. In this case, the network uses past information\n\n and understands the next word should be a country.\n\n \n\n The output in this figure was produced by the GRU model;\n\n all model setups are [described in the appendix](#appendix-autocomplete).\n\n \n\n Try [removing the last letters](javascript:arDemoShort();) to see\n\n that the network continues to give meaningful suggestions.\n\n \n\nYou can also type in your own text.\n\n ([reset](javascript:arDemoReset();)).\n\n \n\n The autocomplete dataset is constructed from the full\n\n [text8](http://mattmahoney.net/dc/textdata.html) dataset. The\n\n recurrent neural networks used to solve the problem have two layers, each\n\n with 600 units. There are three models, using GRU, LSTM, and Nested LSTM.\n\n See [the appendix](#appendix-autocomplete) for more details.\n\n \n\n### Connectivity in the Autocomplete Problem\n\n In the recently published Nested LSTM paper\n\n , they qualitatively compared their\n\n Nested LSTM unit to other recurrent units, to show how it memorizes in\n\n comparison, by visualizing individual cell activations.\n\n \n\n This visualization was inspired by Karpathy et al.\n\n where they identify cells\n\n that capture a specific feature. To identify a specific\n\n feature, this visualization approach works well. However, it is not a useful\n\n argument for memorization in general as the output is entirely dependent\n\n on what feature the specific cell captures.\n\n \n\n Instead, to get a better idea of how well each model memorizes and uses\n\n memory for contextual understanding, the connectivity between the desired\n\n output and the input is analyzed. This is calculated as:\n\n \n\nconnectivity(\textrm{connectivity}(connectivity(\n\nttt\n\n Input time index.\n\n \n\n,,,\n\nt~\tilde{t}t~\n\n Output time index.\n\n \n\n)=) =)=\n\n xtx^txt.\n\n \n\n Exploring the connectivity gives a surprising amount of insight into the\n\n different models' ability for long-term contextual understanding. 
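The exact connectivity formula in the original interactive figure did not survive conversion here, so the sketch below assumes a saliency-style definition: the magnitude of the gradient of the output at the predicted time step with respect to the input at each earlier time step, approximated by finite differences so that it stays framework-free. The `model` interface and the toy model are illustrative assumptions, not the article's code.

```python
import numpy as np

def connectivity(model, xs, t_out, eps=1e-4):
    """Estimate connectivity(t, t_out) for every input position t as the norm
    of d output[t_out] / d x[t], using finite differences."""
    base = model(xs)[t_out]
    strengths = np.zeros(len(xs))
    for t in range(len(xs)):
        for j in range(xs.shape[1]):
            bumped = xs.copy()
            bumped[t, j] += eps
            grad_j = (model(bumped)[t_out] - base) / eps
            strengths[t] += np.sum(grad_j ** 2)
    return np.sqrt(strengths)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))

def toy_model(xs):
    # Stand-in "model" whose output at each step depends on all earlier inputs.
    return np.cumsum(xs, axis=0) @ W

print(connectivity(toy_model, rng.normal(size=(5, 3)), t_out=4))
```

In practice the gradient would be computed with automatic differentiation rather than by perturbation, and the resulting per-position strengths are what the figure visualizes as the connection strength between the selected output and each input character.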
Try and\n\n interact with the figure below yourself to see what information the\n\n different models use for their predictions.\n\n \n\n**Connectivity:** the connection strength between\n\n ([reset](javascript:connectivitySetIndex(null);)).\n\n *Hover over or tap the text to change the selected character.*\n\n Let's highlight three specific situations:\n\n \n\n1\n\n Observe how the models predict the word \"learning\" with [only the first two\n\n information and thus only suggests common words starting with the letter \"l\".\n\n \n\n In contrast, the LSTM and GRU models both suggests the word \"learning\".\n\n The GRU model shows stronger connectivity with the word \"advanced\",\n\n \n\n2\n\n Examine how the models predict the word \"grammar\".\n\n Thus, no model suggests \"grammar\" until it has\n\n [seen at least 4 characters](javascript:connectivitySetIndex(32);).\n\n \n\n When \"grammar\" appears for the second time, the models have more context.\n\n need [at least 4 characters](javascript:connectivitySetIndex(162);).\n\n \n\n3\n\n Finally, let's look at predicting the word \"schools\"\n\n the GRU model seems better at using past information for\n\n contextual understanding.\n\n \n\n What makes this case noteworthy is how the LSTM model appears to\n\n use words from almost the entire sentence as context. However,\n\n its suggestions are far from correct and have little to do\n\n with the previous words it seems to use in its prediction.\n\n This suggests that the LSTM model in this setup is capable of\n\n long-term memorization, but not long-term contextual understanding.\n\n \n\n1\n\n2\n\n3\n\n These observations show that the connectivity visualization is a powerful tool\n\n However, it is only possible to compare models on the same dataset, and\n\n on a specific example. As such, while these observations may show that\n\n these observations may not generalize to other datasets or hyperparameters.\n\n \n\n### Future work; quantitative metric\n\n From the above observations it appears that short-term contextual understanding\n\n using previously seen letters from the word itself, as more letters become\n\n GRU network — use previously seen words as context for the prediction.\n\n \n\n This observation suggests a quantitative metric: measure the accuracy given\n\n how many letters from the word being predicted are already known.\n\n \n\n**Accuracy Graph**: shows the accuracy\n\n given a fixed number of characters in a word that the RNN has seen.\n\n 0 characters mean that the RNN has only seen the space leading up\n\n to the word, including all the previous text which should provide context.\n\n The different line styles, indicates if the correct word should appear\n\n among the top 1, 2, or 3 suggestions.\n\n \n\n These results suggest that the GRU model is better at long-term contextual\n\n understanding, while the LSTM model is better at short-term contextual\n\n understanding. These observations are valuable, as it justifies why the\n\n the GRU model is far better at long-term contextual understanding.\n\n \n\n While more detailed quantitative metrics like this provides new insight,\n\n qualitative analysis like the connectivity figure presented\n\n intuitive understanding of how the model works, which a quantitative metric\n\n cannot. 
It also shows that a wrong prediction can still be considered a\n\n useful prediction, such as a synonym or a contextually reasonable\n\n prediction.\n\n \n\nConclusion\n\n----------\n\n Looking at overall accuracy and cross entropy loss in itself is not that\n\n interesting. Different models may prioritize either long-term or\n\n short-term contextual understanding, while both models can have similar\n\n accuracy and cross entropy.\n\n \n\n A qualitative analysis, where one looks at how previous input is used in\n\n the prediction is therefore also important when judging models. In this\n\n case, the connectivity visualization together with the autocomplete\n\n predictions, reveals that the GRU model is much more capable of long-term\n\n contextual understanding, compared to LSTM and Nested LSTM. In the case of\n\n LSTM, the difference is much higher than one would guess from just looking\n\n at the overall accuracy and cross entropy loss alone. This observation is\n\n not that interesting in itself as it is likely very dependent on the\n\n hyperparameters, and the specific application.\n\n \n\n Much more valuable is that this visualization method makes it possible\n\n to intuitively understand how the models are different, to a much higher\n\n degree than just looking at accuracy and cross entropy. For this application,\n\n it is clear that the GRU model uses repeating words and semantic meaning\n\n of past words to make its prediction, to a much higher degree than the LSTM\n\n and Nested LSTM models. This is both a valuable insight when choosing the\n\n final model, but also essential knowledge when developing better models\n\n in the future.\n\n \n\n", "bibliography_bib": [{"title": "Long short-term memory"}, {"title": "Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation"}, {"title": "Nested LSTMs"}, {"title": "The Penn Treebank: Annotating Predicate Argument Structure"}, {"title": "text8 Dataset"}, {"title": "On the difficulty of training recurrent neural networks"}, {"title": "Visualizing and Understanding Recurrent Networks"}], "id": "196ef7a48ed3bba7ac5ce1a210f4eff5"} +{"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "A Gentle Introduction to Graph Neural Networks", "authors": ["Benjamin Sanchez-Lengeling", "Emily Reif", "Adam Pearce", "Alexander B. Wiltschko"], "date_published": "2021-09-02", "abstract": "This article is one of two Distill publications about graph neural networks. Take a look at Understanding Convolutions on Graphs to understand how convolutions over images generalize naturally to convolutions over graphs.", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00033", "text": "\n\nGraphs are all around us; real world objects are often defined in \n\nterms of their connections to other things. A set of objects, and the \n\nconnections between them, are naturally expressed as a *graph*. \n\nResearchers have developed neural networks that operate on graph data \n\n(called graph neural networks, or GNNs) for over a decade.\n\n Recent developments have increased their capabilities and expressive \n\npower. We are starting to see practical applications in areas such as \n\nThis article explores and explains modern graph neural networks. We \n\ndivide this work into four parts. First, we look at what kind of data is\n\n most naturally phrased as a graph, and some common examples. 
Second, we\n\n explore what makes graphs different from other types of data, and some \n\nof the specialized choices we have to make when using graphs. Third, we \n\nbuild a modern GNN, walking through each of the parts of the model, \n\nstarting with historic modeling innovations in the field. We move \n\ngradually from a bare-bones implementation to a state-of-the-art GNN \n\nmodel. Fourth and finally, we provide a GNN playground where you can \n\nplay around with a real-word task and dataset to build a stronger \n\nintuition of how each component of a GNN model contributes to the \n\npredictions it makes.\n\nThree types of attributes we might find in a graph, hover over to \n\nhighlight each attribute. Other types of graphs and attributes are \n\n They can also be undirected, where there is no notion of source or \n\ndestination nodes, and information flows both directions. Note that \n\nhaving a single undirected edge is equivalent to having one directed \n\nGraphs are very flexible data structures, and if this seems abstract \n\nnow, we will make it concrete with examples in the next section. \n\nGraphs and where to find them\n\n-----------------------------\n\nYou're probably already familiar with some types of graph data, such \n\nas social networks. However, graphs are an extremely powerful and \n\ngeneral representation of data, we will show two types of data that you \n\nmight not think could be modeled as graphs: images and text. Although \n\ncounterintuitive, one can learn more about the symmetries and structure \n\nof images and text by viewing them as graphs,, and build an intuition \n\nthat will help understand other less grid-like graph data, which we will\n\n discuss later.\n\n### Images as graphs\n\nWe typically think of images as rectangular grids with image \n\nchannels, representing them as arrays (e.g., 244x244x3 floats). Another \n\nway to think of images is as graphs with regular structure, where each \n\npixel represents a node and is connected via an edge to adjacent pixels.\n\n Each non-border pixel has exactly 8 neighbors, and the information \n\nstored at each node is a 3-dimensional vector representing the RGB value\n\n of the pixel.\n\n with an entry if two nodes share an edge. Note that each of these three\n\n representations below are different views of the same piece of data. \n\n### Text as graphs\n\nWe can digitize text by associating indices to each character, word, \n\nor token, and representing text as a sequence of these indices. This \n\ncreates a simple directed graph, where each character or index is a node\n\n and is connected via an edge to the node that follows it.\n\n→→→→→→→→→→→→→→→→→→→→GraphsareallaroundusGraphsareallaroundus\n\nEdit the text above to see how the graph representation changes.\n\nOf course, in practice, this is not usually how text and images are \n\nencoded: these graph representations are redundant since all images and \n\nall text will have very regular structures. For instance, images have a \n\nbanded structure in their adjacency matrix because all nodes (pixels) \n\nare connected in a grid. The adjacency matrix for text is just a \n\ndiagonal line, because each word only connects to the prior word, and to\n\n the next one. 
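As a minimal sketch of the text-as-graph construction just described, here is the example sentence from the figure above turned into nodes, directed edges, and the corresponding adjacency matrix; word tokens are used for readability, though the same construction applies to characters or other tokens.

```python
import numpy as np

# Each token is a node; each directed edge points from a token to the one that follows it.
tokens = "Graphs are all around us".split()
nodes = list(range(len(tokens)))                      # node i carries tokens[i]
edges = [(i, i + 1) for i in range(len(tokens) - 1)]

# The dense adjacency matrix is zero everywhere except just above the diagonal,
# which is why it looks like a diagonal line.
adj = np.zeros((len(tokens), len(tokens)), dtype=int)
for i, j in edges:
    adj[i, j] = 1

print(edges)  # [(0, 1), (1, 2), (2, 3), (3, 4)]
print(adj)
```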
\n\nThis representation (a sequence of character tokens) refers to the way \n\ntext is often represented in RNNs; other models, such as Transformers, \n\ncan be considered to view text as a fully connected graph where we learn\n\n### Graph-valued data in the wild\n\nGraphs are a useful tool to describe data you might already be \n\nfamiliar with. Let's move on to data which is more heterogeneously \n\nstructured. In these examples, the number of neighbors to each node is \n\nvariable (as opposed to the fixed neighborhood size of images and text).\n\n This data is hard to phrase in any other way besides a graph.\n\n**Molecules as graphs.** Molecules are the building \n\nblocks of matter, and are built of atoms and electrons in 3D space. All \n\nparticles are interacting, but when a pair of atoms are stuck in a \n\nstable distance from each other, we say they share a covalent bond. \n\nDifferent pairs of atoms and bonds have different distances (e.g. \n\nsingle-bonds, double-bonds). It's a very convenient and common \n\nabstraction to describe this 3D object as a graph, where nodes are atoms\n\n(Left) 3d representation of the Citronellal molecule (Center) Adjacency \n\nmatrix of the bonds in the molecule (Right) Graph representation of the \n\nmolecule.\n\n(Left) 3d representation of the Caffeine molecule (Center) Adjacency \n\nmatrix of the bonds in the molecule (Right) Graph representation of the \n\nmolecule.\n\n**Social networks as graphs.** Social networks are tools\n\n to study patterns in collective behaviour of people, institutions and \n\norganizations. We can build a graph representing groups of people by \n\nmodelling individuals as nodes, and their relationships as edges. \n\n(Left) Image of a scene from the play \"Othello\". (Center) Adjacency \n\nmatrix of the interaction between characters in the play. (Right) Graph \n\nrepresentation of these interactions.\n\n(Left) Image of karate tournament. (Center) Adjacency matrix of the \n\ninteraction between people in a karate club. (Right) Graph \n\nrepresentation of these interactions.\n\n**Citation networks as graphs.** Scientists routinely \n\ncite other scientists' work when publishing papers. We can visualize \n\nthese networks of citations as a graph, where each paper is a node, and \n\neach *directed* edge is a citation between one paper and another.\n\n Additionally, we can add information about each paper into each node, \n\nsuch as a word embedding of the abstract. (see ,  , ). \n\n**Other examples.** In computer vision, we sometimes \n\nwant to tag objects in visual scenes. We can then build graphs by \n\n can also be phrased as graphs, where the variables are nodes, and edges\n\n are operations that have these variables as input and output. You might\n\n see the term \"dataflow graph\" used in some of these contexts.\n\nThe structure of real-world graphs can vary greatly between different\n\n types of data — some graphs have many nodes with few connections \n\nbetween them, or vice versa. 
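The per-node statistics reported in the table that follows (minimum, mean, and maximum edges per node) can be computed in a few lines. The sketch below assumes undirected edges counted at both endpoints; as the table caption notes, the published numbers also depend on featurization decisions.

```python
import numpy as np

def degree_stats(num_nodes, edges):
    """Minimum, mean, and maximum number of edges per node."""
    degree = np.zeros(num_nodes, dtype=int)
    for i, j in edges:
        degree[i] += 1
        degree[j] += 1  # undirected: count the edge at both endpoints
    return degree.min(), degree.mean(), degree.max()

# Toy graph with 4 nodes and 3 edges.
print(degree_stats(4, [(0, 1), (1, 2), (1, 3)]))  # (1, 1.5, 3)
```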
Graph datasets can vary widely (both within\n\n a given dataset, and between datasets) in terms of the number of nodes,\n\n edges, and the connectivity of nodes.\n\n Edges per node (degree)\n\n \n\n Dataset \n\n Domain \n\n graphs \n\n nodes \n\n edges \n\n min \n\n mean \n\n max \n\n karate club\n\n Social network \n\n 1 \n\n 34 \n\n 78 \n\n \n\n 4.5 \n\n 17 \n\n qm9 \n\n Small molecules \n\n 134k \n\n ≤ 9 \n\n ≤26 \n\n 1 \n\n 2 \n\n 5 \n\n Cora \n\n Citation network \n\n 1 \n\n 23,166 \n\n 91,500 \n\n 1 \n\n 7.8 \n\n 379 \n\n Wikipedia links, English \n\n Knowledge graph \n\n 1 \n\n 12M \n\n 378M \n\n \n\n 62.24 \n\n 1M \n\nSummary statistics on graphs found in the real world. Numbers are \n\ndependent on featurization decisions. More useful statistics and graphs \n\ncan be found in KONECT\n\nWhat types of problems have graph structured data?\n\n--------------------------------------------------\n\nWe have described some examples of graphs in the wild, but what tasks\n\n do we want to perform on this data? There are three general types of \n\nprediction tasks on graphs: graph-level, node-level, and edge-level. \n\nIn a graph-level task, we predict a single property for a whole \n\ngraph. For a node-level task, we predict some property for each node in a\n\n graph. For an edge-level task, we want to predict the property or \n\npresence of edges in a graph.\n\nFor the three levels of prediction problems described above \n\n(graph-level, node-level, and edge-level), we will show that all of the \n\nfollowing problems can be solved with a single model class, the GNN. But\n\n first, let's take a tour through the three classes of graph prediction \n\nproblems in more detail, and provide concrete examples of each.\n\n### Graph-level task\n\nIn a graph-level task, our goal is to predict the property of an \n\nentire graph. For example, for a molecule represented as a graph, we \n\nmight want to predict what the molecule smells like, or whether it will \n\nbind to a receptor implicated in a disease.\n\nThis is analogous to image classification problems with MNIST and \n\nCIFAR, where we want to associate a label to an entire image. With text,\n\n a similar problem is sentiment analysis where we want to identify the \n\nmood or emotion of an entire sentence at once.\n\n### Node-level task\n\nA classic example of a node-level prediction problem is Zach's karate club.\n\n The dataset is a single social network graph made up of individuals \n\nthat have sworn allegiance to one of two karate clubs after a political \n\nrift. As the story goes, a feud between Mr. Hi (Instructor) and John H \n\n(Administrator) creates a schism in the karate club. The nodes represent\n\n individual karate practitioners, and the edges represent interactions \n\nbetween these members outside of karate. The prediction problem is to \n\nclassify whether a given member becomes loyal to either Mr. Hi or John \n\nH, after the feud. In this case, distance between a node to either the \n\nInstructor or Administrator is highly correlated to this label.\n\nOn the left we have the initial conditions of the problem, on the right \n\nwe have a possible solution, where each node has been classified based \n\non the alliance. The dataset can be used in other graph problems like \n\nunsupervised learning. \n\n where we are trying to label the role of each pixel in an image. With \n\ntext, a similar task would be predicting the parts-of-speech of each \n\nword in a sentence (e.g. 
noun, verb, adverb, etc).\n\n### Edge-level task\n\nThe remaining prediction problem in graphs is *edge prediction*. \n\nOne example of edge-level inference is in image scene understanding. \n\nBeyond identifying objects in an image, deep learning models can be used\n\n to predict the relationship between them. We can phrase this as an \n\nedge-level classification: given nodes that represent the objects in the\n\n image, we wish to predict which of these nodes share an edge or what \n\nthe value of that edge is. If we wish to discover connections between \n\nentities, we could consider the graph fully connected and based on their\n\n predicted value prune edges to arrive at a sparse graph.\n\n![](A%20Gentle%20Introduction%20to%20Graph%20Neural%20Networks_files/merged.png)\n\nIn (b), above, the original image (a) has been segmented into five \n\nentities: each of the fighters, the referee, the audience and the mat. \n\n(C) shows the relationships between these entities. \n\nOn the left we have an initial graph built from the previous visual \n\nscene. On the right is a possible edge-labeling of this graph when some \n\nconnections were pruned based on the model's output.\n\nThe challenges of using graphs in machine learning\n\n--------------------------------------------------\n\nSo, how do we go about solving these different graph tasks with \n\nneural networks? The first step is to think about how we will represent \n\ngraphs to be compatible with neural networks.\n\nMachine learning models typically take rectangular or grid-like \n\narrays as input. So, it's not immediately intuitive how to represent \n\nthem in a format that is compatible with deep learning. Graphs have up \n\nto four types of information that we will potentially want to use to \n\nmake predictions: nodes, edges, global-context and connectivity. The \n\nfirst three are relatively straightforward: for example, with nodes we \n\nHowever, representing a graph's connectivity is more complicated. \n\nPerhaps the most obvious choice would be to use an adjacency matrix, \n\nsince this is easily tensorisable. However, this representation has a \n\nfew drawbacks. From the [example dataset table](#table), we \n\nsee the number of nodes in a graph can be on the order of millions, and \n\nthe number of edges per node can be highly variable. Often, this leads \n\nto very sparse adjacency matrices, which are space-inefficient.\n\nAnother problem is that there are many adjacency matrices that can \n\nencode the same connectivity, and there is no guarantee that these \n\ndifferent matrices would produce the same result in a deep neural \n\nnetwork (that is to say, they are not permutation invariant).\n\nLearning permutation invariant operations is an area of recent research.\n\n from before can be described equivalently with these two adjacency \n\nmatrices. It can also be described with every other possible \n\npermutation of the nodes.\n\nTwo adjacency matrices representing the same graph.\n\nThe example below shows every adjacency matrix that can describe this\n\n small graph of 4 nodes. This is already a significant number of \n\nadjacency matrices–for larger examples like Othello, the number is \n\nuntenable.\n\nAll of these adjacency matrices represent the same graph. Click on an \n\nedge to remove it on a \"virtual edge\" to add it and the matrices will \n\nupdate accordingly.\n\nOne elegant and memory-efficient way of representing sparse matrices \n\n as a tuple (i,j) in the k-th entry of an adjacency list. 
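As a sketch of this adjacency-list style of representation, here is a small four-node example written out as per-node, per-edge, and global attributes plus an adjacency list; scalar attributes are used for brevity, and the memory comparison uses the Wikipedia-links numbers from the table above.

```python
import numpy as np

# Attribute-based representation of a small 4-node, 3-edge graph.
nodes = np.array([0.0, 1.0, 0.0, 1.0])     # one attribute per node
edges = np.array([0.5, 0.2, 0.9])          # one attribute per edge
adjacency_list = [(0, 1), (1, 2), (1, 3)]  # edge k connects nodes (i, j)
global_context = np.array([0.0])           # one attribute for the whole graph

# Storage scales with the number of edges rather than num_nodes**2.
n, m = 12_000_000, 378_000_000             # Wikipedia links, English (see table above)
print(2 * m)   # adjacency-list entries: 756 million
print(n * n)   # dense adjacency-matrix entries: 1.44e14
```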
Since we \n\nexpect the number of edges to be much lower than the number of entries \n\nNodes \n\n[ \n\n \n\nEdges \n\n[2100000001000000010000002110 \n\n \n\nAdjacency List \n\n \n\nGlobal \n\n0\n\nHover and click on the edges, nodes, and global graph marker to view and\n\n change attribute representations. On one side we have a small graph and\n\n on the other the information of the graph in a tensor representation.\n\nIt should be noted that the figure uses scalar values per \n\nnode/edge/global, but most practical tensor representations have vectors\n\nGraph Neural Networks\n\n---------------------\n\nNow that the graph's description is in a matrix format that is \n\npermutation invariant, we will describe using graph neural networks \n\n(GNNs) to solve graph prediction tasks. **A GNN is an optimizable \n\ntransformation on all attributes of the graph (nodes, edges, \n\nglobal-context) that preserves graph symmetries (permutation \n\n GNNs adopt a \"graph-in, graph-out\" architecture meaning that these \n\nmodel types accept a graph as input, with information loaded into its \n\nnodes, edges and global-context, and progressively transform these \n\nembeddings, without changing the connectivity of the input graph. \n\n### The simplest GNN\n\n (with vectors instead of scalars), we are now ready to build a GNN. We \n\nwill start with the simplest GNN architecture, one where we learn new \n\nembeddings for all graph attributes (nodes, edges, global), but where we\n\n do not yet use the connectivity of the graph.\n\nFor simplicity, the previous diagrams used scalars to represent graph \n\nattributes; in practice feature vectors, or embeddings, are much more \n\nuseful. \n\nThis GNN uses a separate multilayer perceptron (MLP) (or your \n\nfavorite differentiable model) on each component of a graph; we call \n\nthis a GNN layer. For each node vector, we apply the MLP and get back a \n\nlearned node-vector. We do the same for each edge, learning a per-edge \n\nembedding, and also for the global-context vector, learning a single \n\nembedding for the entire graph.\n\nA single layer of a simple GNN. A graph is the input, and each component\n\n (V,E,U) gets updated by a MLP to produce a new graph. Each function \n\nsubscript indicates a separate function for a different graph attribute \n\nat the n-th layer of a GNN model.\n\nBecause a GNN does not update the connectivity of the input graph, we\n\n can describe the output graph of a GNN with the same adjacency list and\n\n the same number of feature vectors as the input graph. But, the output \n\ngraph has updated embeddings, since the GNN has updated each of the \n\nnode, edge and global-context representations.\n\n### GNN Predictions by Pooling Information\n\nWe will consider the case of binary classification, but this \n\nframework can easily be extended to the multi-class or regression case. \n\nIf the task is to make binary predictions on nodes, and the graph \n\nalready contains node information, the approach is straightforward — for\n\n each node embedding, apply a linear classifier.\n\nWe could imagine a social network, where we wish to anonymize user data \n\n(nodes) by not using them, and only using relational data (edges). One \n\n subsection. In the Karate club example, this would be just using the \n\nnumber of meetings between people to determine the alliance to Mr. Hi or\n\n John H.\n\nHowever, it is not always so simple. 
For instance, you might have information in the graph stored in edges, but no information in nodes, but still need to make predictions on nodes. We need a way to collect information from edges and give it to nodes for prediction. We can do this by *pooling*. Pooling proceeds in two steps:\n\n1. For each item to be pooled, *gather* each of their embeddings and concatenate them into a matrix.\n\n2. The gathered embeddings are then *aggregated*, usually via a sum operation.\n\nHover over a node (black node) to visualize which edges are gathered and aggregated to produce an embedding for that target node.\n\nSo if we only have edge-level features, and are trying to predict binary node information, we can use pooling to route (or pass) information to where it needs to go. The model looks like this.\n\nLikewise, if we only have node-level features and are trying to predict binary edge-level information, we can pool information from adjacent nodes; this is the situation in the image scene understanding example from an earlier subsection. Nodes can be recognized as image entities, and we are trying to predict if the entities share a relationship (binary edges).\n\nIf we only have node-level features, and need to predict a binary global property, we need to gather all available node information together and aggregate it. This is a common scenario for predicting molecular properties. For example, we have atomic information, connectivity and we would like to know the toxicity of a molecule (toxic/not toxic), or if it has a particular odor (rose/not rose).\n\nIn our examples, the classification model *c* can easily be replaced with any differentiable model, or adapted to multi-class classification using a generalized linear model.\n\nAn end-to-end prediction task with a GNN model.\n\nNow we've demonstrated that we can build a simple GNN model, and make binary predictions by routing information between different parts of the graph. This pooling technique will serve as a building block for constructing more sophisticated GNN models. If we have new graph attributes, we just have to define how to pass information from one attribute to another.\n\nNote that in this simplest GNN formulation, we're not using the connectivity of the graph at all inside the GNN layer. Each node is processed independently, as is each edge, as well as the global context. We only use connectivity when pooling information for prediction.\n\n### Passing messages between parts of the graph\n\nWe could make more sophisticated predictions by using pooling within the GNN layer, in order to make our learned embeddings aware of graph connectivity. We can do this using *message passing*, where neighboring nodes or edges exchange information and influence each other's updated embeddings. Message passing works in three steps:\n\n1. For each node in the graph, *gather* all the neighboring node embeddings (or messages).\n\n2. Aggregate all messages via an aggregate function (like sum).\n\n3. All pooled messages are passed through an *update function*, usually a learned neural network.\n\nThese steps are key for leveraging the connectivity of graphs. We will build more elaborate variants of message passing in GNN layers that yield GNN models of increasing expressiveness and power.\n\nThis is reminiscent of standard convolution: in essence, message passing and convolution are operations to aggregate and process the information of an element's neighbors in order to update the element's value. In graphs, the element is a node, and in images, the element is a pixel. 
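A minimal sketch of these three steps for node-to-node message passing, assuming the adjacency-list representation sketched earlier, NumPy, and an arbitrary callable standing in for the learned update function:

import numpy as np

def message_passing(node_features, adjacency_list, update_fn):
    """One round of message passing: gather neighbor embeddings, sum them, update."""
    n_nodes, dim = node_features.shape
    messages = np.zeros((n_nodes, dim))
    # Steps 1 and 2: each edge delivers its endpoints' embeddings to the
    # opposite endpoint, and messages arriving at the same node are summed.
    for i, j in adjacency_list:
        messages[i] += node_features[j]
        messages[j] += node_features[i]
    # Step 3: the pooled messages (together with the node's own state)
    # go through a learned update function, e.g. an MLP.
    return update_fn(np.concatenate([node_features, messages], axis=-1))

# Example with a toy elementwise nonlinearity standing in for an MLP:
nodes = np.array([[1.0], [2.0], [3.0]])
edges = np.array([[0, 1], [1, 2]])
print(message_passing(nodes, edges, np.tanh))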
However, the number of neighboring nodes in a graph can be \n\nvariable, unlike in an image where each pixel has a set number of \n\nneighboring elements.\n\nBy stacking message passing GNN layers together, a node can \n\neventually incorporate information from across the entire graph: after \n\nthree layers, a node has information about the nodes three steps away \n\nfrom it.\n\nSchematic for a GCN architecture, which updates node representations of a\n\n graph by pooling neighboring nodes at a distance of one degree.\n\n### Learning edge representations\n\nOur dataset does not always contain all types of information (node, \n\nedge, and global context). \n\nWhen we want to make a prediction on nodes, but our dataset only has \n\nedge information, we showed above how to use pooling to route \n\ninformation from edges to nodes, but only at the final prediction step \n\nof the model. We can share information between nodes and edges within \n\nthe GNN layer using message passing.\n\nWe can incorporate the information from neighboring edges in the same\n\n way we used neighboring node information earlier, by first pooling the \n\nedge information, transforming it with an update function, and storing \n\nit.\n\nHowever, the node and edge information stored in a graph are not \n\nnecessarily the same size or shape, so it is not immediately clear how \n\nto combine them. One way is to learn a linear mapping from the space of \n\nedges to the space of nodes, and vice versa. Alternatively, one may \n\nconcatenate them together before the update function.\n\nArchitecture schematic for Message Passing layer. The first step \n\n\"prepares\" a message composed of information from an edge and it's \n\nconnected nodes and then \"passes\" the message to the node.\n\nWhich graph attributes we update and in which order we update them is\n\n one design decision when constructing GNNs. We could choose whether to \n\nupdate node embeddings before edge embeddings, or the other way around. \n\nThis is an open area of research with a variety of solutions– for \n\nexample we could update in a 'weave' fashion\n\n where we have four updated representations that get combined into new \n\nnode and edge representations: node to node (linear), edge to edge \n\n(linear), node to edge (edge layer), edge to node (node layer).\n\n### Adding global representations\n\nThere is one flaw with the networks we have described so far: nodes \n\nthat are far away from each other in the graph may never be able to \n\nefficiently transfer information to one another, even if we apply \n\nmessage passing several times. For one node, If we have k-layers, \n\ninformation will propagate at most k-steps away. This can be a problem \n\nfor situations where the prediction task depends on nodes, or groups of \n\nnodes, that are far apart. One solution would be to have all nodes be \n\nable to pass information to each other. \n\nUnfortunately for large graphs, this quickly becomes computationally \n\nexpensive (although this approach, called 'virtual edges', has been used\n\n for small graphs such as molecules).\n\n or context vector. This global context vector is connected to all other\n\n nodes and edges in the network, and can act as a bridge between them to\n\n pass information, building up a representation for the graph as a \n\nwhole. This creates a richer and more complex representation of the \n\ngraph than could have otherwise been learned. 
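To make the idea concrete, a node update in this spirit might pool adjacent edge embeddings, append the broadcast global vector, and pass everything through a learned function. The sketch below uses our own naming and NumPy; it illustrates the idea rather than the exact formulation of any published architecture:

import numpy as np

def update_nodes_with_global(node_feats, edge_feats, adjacency_list,
                             global_feat, node_fn):
    """Condition each node update on its adjacent edges and the global context."""
    n_nodes = node_feats.shape[0]
    pooled_edges = np.zeros((n_nodes, edge_feats.shape[1]))
    # Pool every edge's embedding into both of its endpoints.
    for k, (i, j) in enumerate(adjacency_list):
        pooled_edges[i] += edge_feats[k]
        pooled_edges[j] += edge_feats[k]
    # The single global vector is shared by (broadcast to) every node.
    global_tiled = np.tile(global_feat, (n_nodes, 1))
    stacked = np.concatenate([node_feats, pooled_edges, global_tiled], axis=-1)
    return node_fn(stacked)  # node_fn: learned per-node update, e.g. an MLP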
\n\nSchematic of a Graph Nets architecture leveraging global representations.\n\nIn this view all graph attributes have learned representations, so we\n\n can leverage them during pooling by conditioning the information of our\n\n attribute of interest with respect to the rest. For example, for one \n\nnode we can consider information from neighboring nodes, connected edges\n\n and the global information. To condition the new node embedding on all \n\nthese possible sources of information, we can simply concatenate them. \n\nAdditionally we may also map them to the same space via a linear map and\n\nSchematic for conditioning the information of one node based\n\n on three other embeddings (adjacent nodes, adjacent edges, global). \n\nThis step corresponds to the node operations in the Graph Nets Layer. \n\nGNN playground\n\n--------------\n\nWe've described a wide range of GNN components here, but how do they \n\nactually differ in practice? This GNN playground allows you to see how \n\nthese different components and architectures contribute to a GNN's \n\nability to learn a real task. \n\n which is composed of molecules with associated odor percepts (labels). \n\nPredicting the relation of a molecular structure (graph) to its smell is\n\n a 100 year-old problem straddling chemistry, physics, neuroscience, and\n\n machine learning.\n\nTo simplify the problem, we consider only a single binary label per \n\nmolecule, classifying if a molecular graph smells \"pungent\" or not, as \n\nlabeled by a professional perfumer. We say a molecule has a \"pungent\" \n\nscent if it has a strong, striking smell. For example, garlic and \n\nWe represent each molecule as a graph, where atoms are nodes \n\ncontaining a one-hot encoding for its atomic identity (Carbon, Nitrogen,\n\n Oxygen, Fluorine) and bonds are edges containing a one-hot encoding its\n\n bond type (single, double, triple or aromatic). \n\nOur general modeling template for this problem will be built up using\n\n sequential GNN layers, followed by a linear model with a sigmoid \n\nactivation for classification. The design space for our GNN has many \n\nlevers that can customize the model:\n\n1. The number of GNN layers, also called the *depth*.\n\n2. The dimensionality of each attribute when updated. The update \n\nfunction is a 1-layer MLP with a relu activation function and a layer \n\nnorm for normalization of activations.\n\n3. The aggregation function used in pooling: max, mean or sum.\n\n4. The graph attributes that get updated, or styles of message \n\npassing: nodes, edges and global representation. We control these via \n\nboolean toggles (on or off). A baseline model would be a \n\ngraph-independent GNN (all message-passing off) which aggregates all \n\ndata at the end into a single global attribute. Toggling on all \n\nmessage-passing functions yields a GraphNets architecture.\n\nTo better understand how a GNN is learning a task-optimized \n\nrepresentation of a graph, we also look at the penultimate layer \n\nactivations of the GNN. These 'graph embeddings' are the outputs of the \n\nGNN model right before prediction. Since we are using a generalized \n\nlinear model for prediction, a linear mapping is enough to allow us to \n\nsee how we are learning representations around the decision boundary. \n\nSince these are high dimensional vectors, we reduce them to 2D via \n\nprincipal component analysis (PCA). 
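This projection step might look like the following sketch, assuming the penultimate-layer graph embeddings have been stacked into a single array (the random stand-in values and the use of scikit-learn here are ours, purely for illustration):

import numpy as np
from sklearn.decomposition import PCA

# One penultimate-layer embedding per molecule; random stand-in values.
graph_embeddings = np.random.randn(300, 50)

# Reduce the 50-dimensional embeddings to 2D for the scatter plot.
embeddings_2d = PCA(n_components=2).fit_transform(graph_embeddings)
print(embeddings_2d.shape)  # (300, 2)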
A perfect model would visibly separate labeled data, but since we are reducing dimensionality and also have imperfect models, this boundary might be harder to see.\n\nPlay around with different model architectures to build your intuition. For example, see if you can edit the molecule on the left to make the model prediction increase. Do the same edits have the same effects for different model architectures?\n\nFor this **graph level prediction task**, each graph is a molecule. The task is to predict whether or not a molecule will smell pungent. In the scatter plot, each point represents a graph.\n\nThe playground's model options are the depth (1 to 4 layers), the aggregation function (mean, sum or max), and the node, edge and global embedding sizes (node and global: 25, 50 or 100; edge: 5, 10 or 20); it reports the model AUC along with the model prediction and ground truth for the selected molecule, which can be edited atom by atom (Carbon, Nitrogen, Oxygen, Sulphur) and bond by bond (single, double, triple, aromatic).\n\nEdit the molecule to see how the prediction changes, or change the model params to load a different model. Select a different molecule in the scatter plot.\n\n### Some empirical GNN design lessons\n\nWhen exploring the architecture choices above, you might have found some models have better performance than others. Are there some clear GNN design choices that will give us better performance? For example, do deeper GNN models perform better than shallower ones? Or is there a clear choice between aggregation functions? The answers are going to depend on the data and the task.\n\nWith the following interactive figure, we explore the space of GNN architectures and the performance of this task across a few major design choices: style of message passing, the dimensionality of embeddings, number of layers, and aggregation operation type.\n\nEach point in the scatter plot represents a model: the x axis is the number of trainable variables, and the y axis is the performance. Hover over a point to see the GNN architecture parameters.\n\nScatterplot of each model's performance vs its number of trainable variables. Hover over a point to see the GNN architecture parameters.\n\nThe first thing to notice is that, surprisingly, a higher number of parameters does correlate with higher performance. GNNs are a very parameter-efficient model type: for even a small number of parameters (3k) we can already find models with high performance.\n\nNext, we can look at the distributions of performance aggregated based on the dimensionality of the learned representations for different graph attributes.\n\nChart of model performance vs the dimensionality of the learned representations for each graph attribute.\n\nWe can notice that models with higher dimensionality tend to have better mean and lower bound performance but the same trend is not found for the maximum. Some of the top-performing models can be found for smaller dimensions. 
Since higher dimensionality is going to also involve a higher number of parameters, these observations go hand in hand with the previous figure.\n\nNext we can see the breakdown of performance based on the number of GNN layers.\n\nChart of number of layers vs model performance, and scatterplot of model performance vs number of parameters. Each point is colored by the number of layers. Hover over a point to see the GNN architecture parameters.\n\nThe box plot shows a similar trend: while the mean performance tends to increase with the number of layers, the best performing models do not have three or four layers, but two. Furthermore, the lower bound for performance decreases with four layers. This effect has been observed before: GNNs with a higher number of layers will broadcast information at a higher distance and can risk having their node representations 'diluted' by many successive iterations.\n\nDoes our dataset have a preferred aggregation operation? Our following figure breaks down performance in terms of aggregation type.\n\nChart of aggregation type vs model performance, and scatterplot of model performance vs number of parameters. Each point is colored by aggregation type. Hover over a point to see the GNN architecture parameters.\n\nOverall it appears that sum has a very slight improvement on the mean performance, but max or mean can give equally good models. This is discussed further when we compare aggregation operations in a later section.\n\nThe previous explorations have given mixed messages. We can find mean trends where more complexity gives better performance, but we can find clear counterexamples where models with fewer parameters, layers, or dimensions perform better. One trend that is much clearer is about the number of attributes that are passing information to each other.\n\nHere we break down performance based on the style of message passing. On both extremes, we consider models that do not communicate between graph entities (\"none\") and models that have messages passed between nodes, edges, and globals.\n\nChart of message passing vs model performance, and scatterplot of model performance vs number of parameters. Each point is colored by message passing. Hover over a point to see the GNN architecture parameters.\n\nOverall we see that the more graph attributes are communicating, the better the performance of the average model. Our task is centered on global representations, so explicitly learning this attribute also tends to improve performance. Our node representations also seem to be more useful than edge representations, which makes sense since more information is loaded in these attributes.\n\nThere are many directions you could go from here to get better performance. We wish to highlight two general directions, one related to more sophisticated graph algorithms and another towards the graph itself.\n\nUp until now, our GNN is based on a neighborhood-based pooling operation. There are some graph concepts that are harder to express in this way, for example a linear graph path (a connected chain of nodes). 
\n\nDesigning new mechanisms in which graph information can be extracted, \n\nexecuted and propagated in a GNN is one current research area , , , .\n\nOne of the frontiers of GNN research is not making new models and \n\narchitectures, but \"how to construct graphs\", to be more precise, \n\nimbuing graphs with additional structure or relations that can be \n\nleveraged. As we loosely saw, the more graph attributes are \n\ncommunicating the more we tend to have better models. In this particular\n\n case, we could consider making molecular graphs more feature rich, by \n\nadding additional spatial relationships between nodes, adding edges that\n\n are not bonds, or explicit learnable relationships between subgraphs.\n\nSee more in [Other types of graphs](#Other-types-of-graphs ).\n\nInto the Weeds\n\n--------------\n\nWhile we only described graphs with vectorized information for each \n\nattribute, graph structures are more flexible and can accommodate other \n\ntypes of information. Fortunately, the message passing framework is \n\nflexible enough that often adapting GNNs to more complex graph \n\nstructures is about defining how information is passed and updated by \n\nnew graph attributes. \n\nFor example, we can consider multi-edge graphs or *multigraphs*,\n\n where a pair of nodes can share multiple types of edges, this happens \n\nwhen we want to model the interactions between nodes differently based \n\non their type. For example with a social network, we can specify edge \n\ntypes based on the type of relationships (acquaintance, friend, family).\n\n A GNN can be adapted by having different types of message passing steps\n\n for each edge type. \n\nWe can also consider nested graphs, where for example a node represents a\n\n graph, also called a hypernode graph.\n\n Nested graphs are useful for representing hierarchical information. For\n\n example, we can consider a network of molecules, where a node \n\nrepresents a molecule and an edge is shared between two molecules if we \n\nhave a way (reaction) of transforming one to the other .\n\nIn this case, we can learn on a nested graph by having a GNN that learns\n\n representations at the molecule level and another at the reaction \n\nnetwork level, and alternate between them during training.\n\nAnother type of graph is a hypergraph,\n\n where an edge can be connected to multiple nodes instead of just two. \n\nFor a given graph, we can build a hypergraph by identifying communities \n\nof nodes and assigning a hyper-edge that is connected to all nodes in a \n\ncommunity.\n\nSchematic of more complex graphs. On the left we have an \n\nexample of a multigraph with three edge types, including a directed \n\nedge. On the right we have a three-level hierarchical graph, the \n\nintermediate level nodes are hypernodes.\n\n### Sampling Graphs and Batching in GNNs\n\nA common practice for training neural networks is to update network \n\nparameters with gradients calculated on randomized constant size (batch \n\nsize) subsets of the training data (mini-batches). This practice \n\npresents a challenge for graphs due to the variability in the number of \n\nnodes and edges adjacent to each other, meaning that we cannot have a \n\nconstant batch size. The main idea for batching with graphs is to create\n\n subgraphs that preserve essential properties of the larger graph. This \n\ngraph sampling operation is highly dependent on context and involves \n\nsub-selecting nodes and edges from a graph. 
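As one concrete example of such a sub-selection, the sketch below (using our own edge-list conventions, not code from the article) keeps a seed node's k-hop neighborhood and the edges internal to it:

import numpy as np

def k_hop_subgraph(adjacency_list, seed, k):
    """Sub-select the nodes within k hops of `seed` and the edges between them."""
    keep = {seed}
    for _ in range(k):
        frontier = {j for i, j in adjacency_list if i in keep}
        frontier |= {i for i, j in adjacency_list if j in keep}
        keep |= frontier
    sub_edges = [(i, j) for i, j in adjacency_list if i in keep and j in keep]
    return sorted(keep), sub_edges

# Example: 1-hop neighborhood of node 2 in a small chain graph.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(k_hop_subgraph(edges, seed=2, k=1))  # ([1, 2, 3], [(1, 2), (2, 3)])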
These operations might make \n\nsense in some contexts (citation networks) and in others, these might be\n\n too strong of an operation (molecules, where a subgraph simply \n\nrepresents a new, smaller molecule). How to sample a graph is an open \n\nresearch question. \n\nIf we care about preserving structure at a neighborhood level, one way \n\n Each neighborhood can be considered an individual graph and a GNN can \n\nbe trained on batches of these subgraphs. The loss can be masked to only\n\n consider the node-set since all neighboring nodes would have incomplete\n\n neighborhoods.\n\nA more efficient strategy might be to first randomly sample a single \n\nnode, expand its neighborhood to distance k, and then pick the other \n\nnode within the expanded set. These operations can be terminated once a \n\ncertain amount of nodes, edges, or subgraphs are constructed.\n\nIf the context allows, we can build constant size neighborhoods by \n\npicking an initial node-set and then sub-sampling a constant number of \n\nnodes (e.g randomly, or via a random walk or Metropolis algorithm).\n\nFour different ways of sampling the same graph. Choice of \n\nsampling strategy depends highly on context since they will generate \n\ndifferent distributions of graph statistics (# nodes, #edges, etc.). For\n\n highly connected graphs, edges can be also subsampled. \n\nSampling a graph is particularly relevant when a graph is large \n\nenough that it cannot be fit in memory. Inspiring new architectures and \n\n### Inductive biases\n\nWhen building a model to solve a problem on a specific kind of data, \n\nwe want to specialize our models to leverage the characteristics of that\n\n data. When this is done successfully, we often see better predictive \n\nperformance, lower training time, fewer parameters and better \n\ngeneralization. \n\nWhen labeling on images, for example, we want to take advantage of \n\nthe fact that a dog is still a dog whether it is in the top-left or \n\nbottom-right corner of an image. Thus, most image models use \n\nconvolutions, which are translation invariant. For text, the order of \n\nthe tokens is highly important, so recurrent neural networks process \n\ndata sequentially. Further, the presence of one token (e.g. the word \n\n'not') can affect the meaning of the rest of a sentence, and so we need \n\ncomponents that can 'attend' to other parts of the text, which \n\ntransformer models like BERT and GPT-3 can do. These are some examples \n\nof inductive biases, where we are identifying symmetries or regularities\n\n in the data and adding modelling components that take advantage of \n\nthese properties.\n\nIn the case of graphs, we care about how each graph component (edge, \n\nnode, global) is related to each other so we seek models that have a \n\nrelational inductive bias. A \n\nmodel should preserve explicit relationships between entities (adjacency\n\n matrix) and preserve graph symmetries (permutation invariance). We \n\nexpect problems where the interaction between entities is important will\n\n benefit from a graph structure. Concretely, this means designing \n\ntransformation on sets: the order of operation on nodes or edges should \n\nnot matter and the operation should work on a variable number of \n\ninputs. \n\n### Comparing aggregation operations\n\nPooling information from neighboring nodes and edges is a critical \n\nstep in any reasonably powerful GNN architecture. 
Because each node has a variable number of neighbors, and because we want a differentiable method of aggregating this information, we want to use a smooth aggregation operation that is invariant to node ordering and the number of nodes provided.\n\nA desirable property of an aggregation operation is that similar inputs provide similar aggregated outputs, and vice-versa. Some very simple candidate permutation-invariant operations are sum, mean, and max. Summary statistics like variance also work. All of these take a variable number of inputs, and provide an output that is the same, no matter the input ordering. Let's explore the difference between these operations.\n\nExamples comparing max, mean and sum aggregation over small neighborhoods.\n\nThere is no operation that is uniformly the best choice. The mean operation can be useful when nodes have a highly-variable number of neighbors or you need a normalized view of the features of a local neighborhood. The max operation can be useful when you want to highlight single salient features in local neighborhoods. Sum provides a balance between these two, by providing a snapshot of the local distribution of features, but because it is not normalized, can also highlight outliers. In practice, sum is commonly used.\n\nPrincipal Neighbourhood Aggregation takes into account several aggregation operations by concatenating them and adding a scaling function that depends on the degree of connectivity of the entity to aggregate. Meanwhile, domain specific aggregation operations can also be designed. One example lies with the \"Tetrahedral Chirality\" aggregation operators.\n\n### GCN as subgraph function approximators\n\nAnother way to see a GCN (or MPNN) of k layers with a 1-degree neighbor lookup is as a neural network that operates on learned embeddings of subgraphs of size k.\n\nWhen focusing on one node, after k layers, the updated node representation has a limited viewpoint of all neighbors up to k-distance, essentially a subgraph representation. The same is true for edge representations.\n\nSo a GCN is collecting all possible subgraphs of size k and learning vector representations from the vantage point of one node or edge. The number of possible subgraphs can grow combinatorially, so enumerating these subgraphs from the beginning vs building them dynamically as in a GCN, might be prohibitive.\n\n### Edges and the Graph Dual\n\nOne thing to note is that edge predictions and node predictions, while seemingly different, often reduce to the same problem: an edge prediction task on a graph G can be rephrased as a node-level prediction task on G's dual.\n\nTo obtain G's dual, we can convert nodes to edges (and edges to nodes). A graph and its dual contain the same information, just expressed in a different way. Sometimes this property makes solving problems easier in one representation than another, like frequencies in Fourier space.\n\nWe've talked a lot about graph convolutions and message passing, and of course, this raises the question: how do we implement these operations in practice? For this section, we explore some of the properties of matrix multiplication, message passing, and its connection to traversing a graph.\n\nIt should be noted that this kind of message passing is not updating the representation of the node features, just pooling neighboring node features. In a sparse graph, most entries of the adjacency matrix are zero, so most terms of a dense matrix product contribute nothing. As long as we have an operation to gather values based on an index, we should be able to just retrieve positive entries. 
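A small sketch of these two views with NumPy and toy values of our own: the dense product of the adjacency matrix with the node features pools each node's neighbors, while the gather-style loop touches only the stored edges and skips the zeros.

import numpy as np

# Dense view: A @ X sums each node's neighbors (pooling only, no learned update).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # adjacency matrix of a 3-node chain
X = np.array([[1.0], [2.0], [3.0]])      # node features
pooled_dense = A @ X

# Gather view: iterate only over the stored edges, skipping the zero entries of A.
edges = [(0, 1), (1, 2)]
pooled_gather = np.zeros_like(X)
for i, j in edges:
    pooled_gather[i] += X[j]
    pooled_gather[j] += X[i]

assert np.allclose(pooled_dense, pooled_gather)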
\n\nAdditionally, this matrix multiply-free approach frees us from using \n\nsummation as an aggregation operation. \n\nWe can imagine that applying this operation multiple times allows us \n\nto propagate information at greater distances. In this sense, matrix \n\nmultiplication is a form of traversing over a graph. This relationship \n\nThere are deeper connections on how we can view matrices as graphs to explore .\n\n### Graph Attention Networks\n\n For example, when we consider the sum-aggregation of a node and its \n\n1-degree neighboring nodes we could also consider using a weighted \n\nsum.The challenge then is to associate weights in a permutation \n\ninvariant fashion. One approach is to consider a scalar scoring function\n\n In this case, the scoring function can be interpreted as a function \n\nthat measures how relevant a neighboring node is in relation to the \n\ncenter node. Weights can be normalized, for example with a softmax \n\nfunction to focus most of the weight on a neighbor most relevant for a \n\nnode in relation to a task. This concept is the basis of Graph Attention\n\n Networks (GAT) and Set Transformers.\n\n Permutation invariance is preserved, because scoring works on pairs of \n\nnodes. A common scoring function is the inner product and nodes are \n\noften transformed before scoring into query and key vectors via a linear\n\n map to increase the expressivity of the scoring mechanism. Additionally\n\n for interpretability, the scoring weights can be used as a measure of \n\nthe importance of an edge in relation to a task. \n\nSchematic of attention over one node with respect to it's \n\nadjacent nodes. For each edge an interaction score is computed, \n\nnormalized and used to weight node embeddings.\n\nAdditionally, transformers can be viewed as GNNs with an attention mechanism .\n\n Under this view, the transformer models several elements (i.g. \n\ncharacter tokens) as nodes in a fully connected graph and the attention \n\nmechanism is assigning edge embeddings to each node-pair which are used \n\nto compute attention weights. The difference lies in the assumed pattern\n\n of connectivity between entities, a GNN is assuming a sparse pattern \n\nand the Transformer is modelling all connections.\n\n### Graph explanations and attributions\n\nWhen deploying GNN in the wild we might care about model \n\ninterpretability for building credibility, debugging or scientific \n\ndiscovery. The graph concepts that we care to explain vary from context \n\nto context. For example, with molecules we might care about the presence\n\n or absence of particular subgraphs,\n\n while in a citation network we might care about the degree of \n\nconnectedness of an article. Due to the variety of graph concepts, there\n\n assign ranked importance values to parts of a graph that are relevant \n\nfor a task. Because realistic and challenging graph problems can be \n\ngenerated synthetically, GNNs can serve as a rigorous and repeatable \n\ntestbed for evaluating attribution techniques .\n\nSchematic of some explanability techniques on graphs. \n\nAttributions assign ranked values to graph attributes. Rankings can be \n\nused as a basis to extract connected subgraphs that might be relevant to\n\n a task.\n\n### Generative modelling\n\nBesides learning predictive models on graphs, we might also care \n\nabout learning a generative model for graphs. 
With a generative model we\n\n can generate new graphs by sampling from a learned distribution or by \n\ncompleting a graph given a starting point. A relevant application is in \n\nthe design of new drugs, where novel molecular graphs with specific \n\nproperties are desired as candidates to treat a disease.\n\nA key challenge with graph generative models lies in modelling the \n\n term can be avoided by only predicting known edges and a subset of the \n\nedges that are not present. The graphVAE learns to model positive \n\npatterns of connectivity and some patterns of non-connectivity in the \n\nadjacency matrix.\n\nAnother approach is to build a graph sequentially, by starting with a\n\n graph and applying discrete actions such as addition or subtraction of \n\nnodes and edges iteratively. To avoid estimating a gradient for discrete\n\n actions we can use a policy gradient. This has been done via an \n\nFinal thoughts\n\n--------------\n\nGraphs are a powerful and rich structured data type that have \n\nstrengths and challenges that are very different from those of images \n\nand text. In this article, we have outlined some of the milestones that \n\nresearchers have come up with in building neural network based models \n\nthat process graphs. We have walked through some of the important design\n\n choices that must be made when using these architectures, and hopefully\n\n the GNN playground can give an intuition on what the empirical results \n\nof these design choices are. The success of GNNs in recent years creates\n\n a great opportunity for a wide range of new problems, and we are \n\nexcited to see what the field will bring. \n\n", "bibliography_bib": [{"title": "Understanding Convolutions on Graphs"}, {"title": "The Graph Neural Network Model"}, {"title": "A Deep Learning Approach to Antibiotic Discovery"}, {"title": "Learning to simulate complex physics with graph networks"}, {"title": "Fake News Detection on Social Media using Geometric Deep Learning"}, {"title": "Traffic prediction with advanced Graph Neural Networks"}, {"title": "Pixie: A System for Recommending 3+ Billion Items to 200+ Million Users in {Real-Time}"}, {"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints"}, {"title": "Distributed Representations of Words and Phrases and their Compositionality"}, {"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"}, {"title": "Glove: Global Vectors for Word Representation"}, {"title": "Learning to Represent Programs with Graphs"}, {"title": "Deep Learning for Symbolic Mathematics"}, {"title": "KONECT"}, {"title": "An Information Flow Model for Conflict and Fission in Small Groups"}, {"title": "Learning Latent Permutations with Gumbel-Sinkhorn Networks"}, {"title": "Janossy Pooling: Learning Deep Permutation-Invariant Functions for Variable-Size Inputs"}, {"title": "Neural Message Passing for Quantum Chemistry"}, {"title": "Relational inductive biases, deep learning, and graph networks"}, {"title": "Deep Sets"}, {"title": "Molecular graph convolutions: moving beyond fingerprints"}, {"title": "Feature-wise transformations"}, {"title": "Leffingwell Odor Dataset"}, {"title": "Machine Learning for Scent: Learning Generalizable Perceptual Representations of Small Molecules"}, {"title": "Benchmarking Graph Neural Networks"}, {"title": "Design Space for Graph Neural Networks"}, {"title": "Principal Neighbourhood Aggregation for Graph Nets"}, {"title": "Graph Traversal with Tensor Functionals: A Meta-Algorithm for 
Scalable Learning"}, {"title": "Graph Neural Tangent Kernel: Fusing Graph Neural Networks with Graph Kernels"}, {"title": "Representation Learning on Graphs with Jumping Knowledge Networks"}, {"title": "Neural Execution of Graph Algorithms"}, {"title": "Graph Theory"}, {"title": "A nested-graph model for the representation and manipulation of complex objects"}, {"title": "Modeling polypharmacy side effects with graph convolutional networks"}, {"title": "Machine learning in chemical reaction space"}, {"title": "Graphs and Hypergraphs"}, {"title": "HyperGCN: A New Method of Training Graph Convolutional Networks on Hypergraphs"}, {"title": "Hierarchical Message-Passing Graph Neural Networks"}, {"title": "Little Ball of Fur"}, {"title": "Sampling from large graphs"}, {"title": "Metropolis Algorithms for Representative Subgraph Sampling"}, {"title": "Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks"}, {"title": "GraphSAINT: Graph Sampling Based Inductive Learning Method"}, {"title": "How Powerful are Graph Neural Networks?"}, {"title": "Rep the Set: Neural Networks for Learning Set Representations"}, {"title": "Message Passing Networks for Molecules with Tetrahedral Chirality"}, {"title": "N-Gram Graph: Simple Unsupervised Representation for Graphs, with Applications to Molecules"}, {"title": "Dual-Primal Graph Convolutional Networks"}, {"title": "Viewing matrices & probability as graphs"}, {"title": "Graphs and Matrices"}, {"title": "Modern Graph Theory"}, {"title": "Attention Is All You Need"}, {"title": "Graph Attention Networks"}, {"title": "Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks"}, {"title": "Transformers are Graph Neural Networks"}, {"title": "Using Attribution to Decode Dataset Bias in Neural Network Models for Chemistry"}, {"title": "GNNExplainer: Generating Explanations for Graph Neural Networks"}, {"title": "Explainability Methods for Graph Convolutional Neural Networks"}, {"title": "Evaluating Attribution for Graph Neural Networks"}, {"title": "Variational Graph Auto-Encoders"}, {"title": "GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models"}, {"title": "Optimization of Molecules via Deep Reinforcement Learning"}, {"title": "Self-Referencing Embedded Strings (SELFIES): A 100% robust molecular string representation"}, {"title": "GraphGen: A Scalable Approach to Domain-agnostic Labeled Graph Generation"}], "id": "ce80220242c025fdbc11a238dee295d9"} +{"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Growing Neural Cellular Automata", "authors": ["Alexander Mordvintsev", "Ettore Randazzo", "Eyvind Niklasson", "Michael Levin"], "date_published": "2020-02-11", "abstract": " This article is part of the Differentiable Self-organizing Systems Thread, an experimental format collecting invited short articles delving into differentiable self-organizing systems, interspersed with critical commentary from several experts in adjacent fields. 
", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00023", "text": "\n\n### Contents\n\n[Model](#model)\n\n[Experiments](#experiment-1)\n\n* [Learning to Grow](#experiment-1)\n\n* [What persists, exists](#experiment-2)\n\n* [Learning to regenerate](#experiment-3)\n\n* [Rotating the perceptive field](#experiment-4)\n\n[Related Work](#related-work)\n\n[Discussion](#discussion)\n\n![](Growing%20Neural%20Cellular%20Automata_files/multiple-pages.svg)\n\n This article is part of the\n\n an experimental format collecting invited short articles delving into\n\n differentiable self-organizing systems, interspersed with critical\n\n commentary from several experts in adjacent fields.\n\n \n\n[Self-classifying MNIST Digits](https://distill.pub/2020/selforg/mnist/)\n\n Most multicellular organisms begin their life as a single egg cell - a\n\n single cell whose progeny reliably self-assemble into highly complex\n\n anatomies with many organs and tissues in precisely the same arrangement\n\n each time. The ability to build their own bodies is probably the most\n\n fundamental skill every living creature possesses. Morphogenesis (the\n\n process of an organism's shape development) is one of the most striking\n\n examples of a phenomenon called *self-organisation*. Cells, the tiny\n\n building blocks of bodies, communicate with their neighbors to decide the\n\n shape of organs and body plans, where to grow each organ, how to\n\n interconnect them, and when to eventually stop. Understanding the interplay\n\n of the emergence of complex outcomes from simple rules and\n\n homeostatic\n\n Self-regulatory feedback loops trying maintain the body in a stable state\n\n or preserve its correct overall morphology under external\n\n perturbations\n\n feedback loops is an active area of research\n\n . What is clear\n\n is that evolution has learned to exploit the laws of physics and computation\n\n to implement the highly robust morphogenetic software that runs on\n\n genome-encoded cellular hardware.\n\n \n\n This process is extremely robust to perturbations. Even when the organism is\n\n fully developed, some species still have the capability to repair damage - a\n\n process known as regeneration. Some creatures, such as salamanders, can\n\n fully regenerate vital organs, limbs, eyes, or even parts of the brain!\n\n Morphogenesis is a surprisingly adaptive process. Sometimes even a very\n\n atypical development process can result in a viable organism - for example,\n\n when an early mammalian embryo is cut in two, each half will form a complete\n\n individual - monozygotic twins!\n\n \n\n The biggest puzzle in this field is the question of how the cell collective\n\n knows what to build and when to stop. The sciences of genomics and stem cell\n\n biology are only part of the puzzle, as they explain the distribution of\n\n specific components in each cell, and the establishment of different types\n\n of cells. While we know of many genes that are *required* for the\n\n process of regeneration, we still do not know the algorithm that is\n\n *sufficient* for cells to know how to build or remodel complex organs\n\n to a very specific anatomical end-goal. Thus, one major lynch-pin of future\n\n work in biomedicine is the discovery of the process by which large-scale\n\n anatomy is specified within cell collectives, and how we can rewrite this\n\n information to have rational control of growth and form. 
It is also becoming\n\n clear that the software of life possesses numerous modules or subroutines,\n\n such as \"build an eye here\", which can be activated with simple signal\n\n triggers. Discovery of such subroutines and a\n\n mapping out of the developmental logic is a new field at the intersection of\n\n developmental biology and computer science. An important next step is to try\n\n to formulate computational models of this process, both to enrich the\n\n conceptual toolkit of biologists and to help translate the discoveries of\n\n biology into better robotics and computational technology.\n\n \n\n Imagine if we could design systems of the same plasticity and robustness as\n\n biological life: structures and machines that could grow and repair\n\n themselves. Such technology would transform the current efforts in\n\n regenerative medicine, where scientists and clinicians seek to discover the\n\n inputs or stimuli that could cause cells in the body to build structures on\n\n demand as needed. To help crack the puzzle of the morphogenetic code, and\n\n also exploit the insights of biology to create self-repairing systems in\n\n real life, we try to replicate some of the desired properties in an\n\n *in silico* experiment.\n\n \n\nModel\n\n-----\n\n Those in engineering disciplines and researchers often use many kinds of\n\n simulations incorporating local interaction, including systems of partial\n\n derivative equation (PDEs), particle systems, and various kinds of Cellular\n\n Automata (CA). We will focus on Cellular Automata models as a roadmap for\n\n the effort of identifying cell-level rules which give rise to complex,\n\n regenerative behavior of the collective. CAs typically consist of a grid of\n\n cells being iteratively updated, with the same set of rules being applied to\n\n each cell at every step. The new state of a cell depends only on the states\n\n of the few cells in its immediate neighborhood. Despite their apparent\n\n simplicity, CAs often demonstrate rich, interesting behaviours, and have a\n\n long history of being applied to modeling biological phenomena.\n\n \n\n Let's try to develop a cellular automata update rule that, starting from a\n\n single cell, will produce a predefined multicellular pattern on a 2D grid.\n\n This is our analogous toy model of organism development. To design the CA,\n\n we must specify the possible cell states, and their update function. Typical\n\n CA models represent cell states with a set of discrete values, although\n\n variants using vectors of continuous values exist. The use of continuous\n\n values has the virtue of allowing the update rule to be a differentiable\n\n function of the cell's neighbourhood's states. The rules that guide\n\n individual cell behavior based on the local environment are analogous to the\n\n low-level hardware specification encoded by the genome of an organism.\n\n Running our model for a set amount of steps from a starting configuration\n\n will reveal the patterning behavior that is enabled by such hardware.\n\n \n\n So - what is so special about differentiable update rules? They will allow\n\n us to use the powerful language of loss functions to express our wishes, and\n\n the extensive existing machinery around gradient-based numerical\n\n optimization to fulfill them. The art of stacking together differentiable\n\n functions, and optimizing their parameters to perform various tasks has a\n\n long history. 
In recent years it has flourished under various names, such as\n\n (Deep) Neural Networks, Deep Learning or Differentiable Programming.\n\n \n\nA single update step of the model.\n\n### Cell State\n\n We will represent each cell state as a vector of 16 real values (see the\n\n figure above). The first three channels represent the cell color visible to\n\n and an αlphaα equal to 1.0 for foreground pixels, and 0.0 for background.\n\n \n\n The alpha channel (αlphaα) has a special meaning: it demarcates living\n\n cells, those belonging to the pattern being grown. In particular, cells\n\n cells are \"dead\" or empty and have their state vector values explicitly set\n\n can become mature if their alpha passes the 0.1 threshold.\n\n \n\n![](Growing%20Neural%20Cellular%20Automata_files/alive2.svg)\n\n Hidden channels don't have a predefined meaning, and it's up to the update\n\n rule to decide what to use them for. They can be interpreted as\n\n concentrations of some chemicals, electric potentials or some other\n\n signaling mechanism that are used by cells to orchestrate the growth. In\n\n terms of our biological analogy - all our cells share the same genome\n\n (update rule) and are only differentiated by the information encoded the\n\n chemical signalling they receive, emit, and store internally (their state\n\n vectors).\n\n \n\n### Cellular Automaton rule\n\n Now it's time to define the update rule. Our CA runs on a regular 2D grid of\n\n 16-dimensional vectors, essentially a 3D array of shape [height, width, 16].\n\n We want to apply the same operation to each cell, and the result of this\n\n operation can only depend on the small (3x3) neighborhood of the cell. This\n\n is heavily reminiscent of the convolution operation, one of the cornerstones\n\n of signal processing and differential programming. Convolution is a linear\n\n operation, but it can be combined with other per-cell operations to produce\n\n a complex update rule, capable of learning the desired behaviour. Our cell\n\n update rule can be split into the following phases, applied in order:\n\n \n\n**Perception.** This step defines what each cell perceives of\n\n the environment surrounding it. We implement this via a 3x3 convolution with\n\n a fixed kernel. One may argue that defining this kernel is superfluous -\n\n after all we could simply have the cell learn the requisite perception\n\n kernel coefficients. Our choice of fixed operations are motivated by the\n\n fact that real life cells often rely only on chemical gradients to guide the\n\n organism development. Thus, we are using classical Sobel filters to estimate\n\n the partial derivatives of cell state channels in the x⃗ec{x}x⃗ and\n\n y⃗ec{y}y⃗​ directions, forming a 2D gradient vector in each direction, for\n\n each state channel. 
We concatenate those gradients with the cells own\n\n rather *percepted vector,* for each cell.\n\n \n\ndef perceive(state\\_grid):\n\nsobel\\_x = [[-1, 0, +1],\n\n[-2, 0, +2],\n\n[-1, 0, +1]]\n\nsobel\\_y = transpose(sobel\\_x)\n\n# Convolve sobel filters with states\n\n# in x, y and channel dimension.\n\ngrad\\_x = conv2d(sobel\\_x, state\\_grid)\n\ngrad\\_y = conv2d(sobel\\_y, state\\_grid)\n\n# Concatenate the cell's state channels,\n\n# the gradients of channels in x and\n\n# the gradient of channels in y.\n\nperception\\_grid = concat(\n\nstate\\_grid, grad\\_x, grad\\_y, axis=2)\n\nreturn perception\\_grid\n\n**Update rule.** Each cell now applies a series of operations\n\n to the perception vector, consisting of typical differentiable programming\n\n building blocks, such as 1x1-convolutions and ReLU nonlinearities, which we\n\n call the cell's \"update rule\". Recall that the update rule is learned, but\n\n every cell runs the same update rule. The network parametrizing this update\n\n rule consists of approximately 8,000 parameters. Inspired by residual neural\n\n networks, the update rule outputs an incremental update to the cell's state,\n\n which applied to the cell before the next time step. The update rule is\n\n designed to exhibit \"do-nothing\" initial behaviour - implemented by\n\n initializing the weights of the final convolutional layer in the update rule\n\n with zero. We also forego applying a ReLU to the output of the last layer of\n\n the update rule as the incremental updates to the cell state must\n\n necessarily be able to both add or subtract from the state.\n\n \n\ndef update(perception\\_vector):\n\n# The following pseudocode operates on\n\n# a single cell's perception vector.\n\n# Our reference implementation uses 1D\n\n# convolutions for performance reasons.\n\nx = dense(perception\\_vector, output\\_len=128)\n\nx = relu(x)\n\nds = dense(x, output\\_len=16, weights\\_init=0.0)\n\nreturn ds\n\n**Stochastic cell update.** Typical cellular automata update\n\n all cells simultaneously. This implies the existence of a global clock,\n\n synchronizing all cells. Relying on global synchronisation is not something\n\n one expects from a self-organising system. We relax this requirement by\n\n assuming that each cell performs an update independently, waiting for a\n\n random time interval between updates. To model this behaviour we apply a\n\n random per-cell mask to update vectors, setting all update values to zero\n\n with some predefined probability (we use 0.5 during training). This\n\n operation can be also seen as an application of per-cell dropout to update\n\n vectors.\n\n \n\ndef stochastic\\_update(state\\_grid, ds\\_grid):\n\n# Zero out a random fraction of the updates.\n\nrand\\_mask = cast(random(64, 64) < 0.5, float32)\n\nds\\_grid = ds\\_grid \\* rand\\_mask\n\nreturn state\\_grid + ds\\_grid\n\n**Living cell masking.** We want to model the growth process\n\n that starts with a single cell, and don't want empty cells to participate in\n\n computations or carry any hidden state. We enforce this by explicitly\n\n setting all channels of empty cells to zeros. 
A cell is considered empty if\n\n there is no \"mature\" (alpha>0.1) cell in its 3x3 neightborhood.\n\n \n\ndef alive\\_masking(state\\_grid):\n\n# Take the alpha channel as the measure of \"life\".\n\nalive = max\\_pool(state\\_grid[:, :, 3], (3,3)) > 0.1\n\nstate\\_grid = state\\_grid \\* cast(alive, float32)\n\nreturn state\\_grid\n\nExperiment 1: Learning to Grow\n\n------------------------------\n\nTraining regime for learning a target pattern.\n\n In our first experiment, we simply train the CA to achieve a target image\n\n after a random number of updates. This approach is quite naive and will run\n\n into issues. But the challenges it surfaces will help us refine future\n\n attempts.\n\n \n\n We initialize the grid with zeros, except a single seed cell in the center,\n\n which will have all channels except RGB\n\n We set RGB channels of the seed to zero because we want it to be visible\n\n on the white background.\n\n set to one. Once the grid is initialized, we iteratively apply the update\n\n rule. We sample a random number of CA steps from the [64, 96]\n\n This should be a sufficient number of steps to grow the pattern of the\n\n size we work with (40x40), even considering the stochastic nature of our\n\n update rule.\n\n range for each training step, as we want the pattern to be stable across a\n\n number of iterations. At the last step we apply pixel-wise L2 loss between\n\n RGBA channels in the grid and the target pattern. This loss can be\n\n differentiably optimized\n\n We observed training instabilities, that were manifesting themselves as\n\n sudden jumps of the loss value in the later stages of the training. We\n\n managed to mitigate them by applying per-variable L2 normalization to\n\n parameter gradients. This may have the effect similar to the weight\n\n normalization . Other training\n\n parameters are available in the accompanying source code.\n\n with respect to the update rule parameters by backpropagation-through-time,\n\n the standard method of training recurrent neural networks.\n\n \n\n Once the optimisation converges, we can run simulations to see how our\n\n learned CAs grow patterns starting from the seed cell. Let's see what\n\n happens when we run it for longer than the number of steps used during\n\n training. The animation below shows the behaviour of a few different models,\n\n trained to generate different emoji patterns.\n\n \n\n Your browser does not support the video tag.\n\n \n\n Many of the patterns exhibit instability for longer time periods.\n\n \n\n \n\n We can see that different training runs can lead to models with drastically\n\n different long term behaviours. Some tend to die out, some don't seem to\n\n know how to stop growing, but some happen to be almost stable! How can we\n\n steer the training towards producing persistent patterns all the time?\n\n \n\nExperiment 2: What persists, exists\n\n-----------------------------------\n\n One way of understanding why the previous experiment was unstable is to draw\n\n a parallel to dynamical systems. We can consider every cell to be a\n\n dynamical system, with each cell sharing the same dynamics, and all cells\n\n being locally coupled amongst themselves. When we train our cell update\n\n model we are adjusting these dynamics. Our goal is to find dynamics that\n\n satisfy a number of properties. Initially, we wanted the system to evolve\n\n from the seed pattern to the target pattern - a trajectory which we achieved\n\n in Experiment 1. 
Now, we want to avoid the instability we observed - which\n\n in our dynamical system metaphor consists of making the target pattern an\n\n attractor.\n\n \n\n One strategy to achieve this is letting the CA iterate for much longer time\n\n and periodically applying the loss against the target, training the system\n\n by backpropagation through these longer time intervals. Intuitively we claim\n\n that with longer time intervals and several applications of loss, the model\n\n is more likely to create an attractor for the target shape, as we\n\n iteratively mold the dynamics to return to the target pattern from wherever\n\n the system has decided to venture. However, longer time periods\n\n substantially increase the training time and more importantly, the memory\n\n requirements, given that the entire episode's intermediate activations must\n\n be stored in memory for a backwards-pass to occur.\n\n \n\n Instead, we propose a \"sample pool\" based strategy to a similar effect. We\n\n define a pool of seed states to start the iterations from, initially filled\n\n with the single black pixel seed state. We then sample a batch from this\n\n pool which we use in our training step. To prevent the equivalent of\n\n \"catastrophic forgetting\" we replace one sample in this batch with the\n\n original, single-pixel seed state. After concluding the training step , we\n\n replace samples in the pool that were sampled for the batch with the output\n\n states from the training step over this batch. The animation below shows a\n\n random sample of the entries in the pool every 20 training steps.\n\n \n\ndef pool\\_training():\n\n# Set alpha and hidden channels to (1.0).\n\nseed = zeros(64, 64, 16)\n\nseed[64//2, 64//2, 3:] = 1.0\n\ntarget = targets['lizard']\n\npool = [seed] \\* 1024\n\nfor i in range(training\\_iterations):\n\nidxs, batch = pool.sample(32)\n\n# Sort by loss, descending.\n\nbatch = sort\\_desc(batch, loss(batch))\n\n# Replace the highest-loss sample with the seed.\n\nbatch[0] = seed\n\n# Perform training.\n\noutputs, loss = train(batch, target)\n\n# Place outputs back in the pool.\n\npool[idxs] = outputs\n\n Your browser does not support the video tag.\n\n \n\n A random sample of the patterns in the pool during training, sampled\n\n every 20 training steps. \n\n \n\n Early on in the training process, the random dynamics in the system allow\n\n the model to end up in various incomplete and incorrect states. As these\n\n states are sampled from the pool, we refine the dynamics to be able to\n\n recover from such states. Finally, as the model becomes more robust at going\n\n from a seed state to the target state, the samples in the pool reflect this\n\n and are more likely to be very close to the target pattern, allowing the\n\n training to refine these almost completed patterns further.\n\n \n\n Essentially, we use the previous final states as new starting points to\n\n force our CA to learn how to persist or even improve an already formed\n\n pattern, in addition to being able to grow it from a seed. This makes it\n\n possible to add a periodical loss for significantly longer time intervals\n\n than otherwise possible, encouraging the generation of an attractor as the\n\n target shape in our coupled system. 
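For concreteness, the train call in the pseudocode above could be sketched roughly as follows, assuming TensorFlow, a ca_step function implementing one update of the rule described earlier, and the pixel-wise L2 loss on the RGBA channels; every name here is illustrative rather than taken from the reference implementation:

import tensorflow as tf

def train(batch, target, ca_step, variables, optimizer):
    # batch: [B, H, W, 16] cell states sampled from the pool.
    x = tf.convert_to_tensor(batch)
    n_steps = tf.random.uniform([], 64, 96, dtype=tf.int32)  # random episode length
    with tf.GradientTape() as tape:
        for _ in tf.range(n_steps):
            x = ca_step(x)  # one stochastic update of every cell
        # Pixel-wise L2 loss between the grown RGBA channels and the target.
        loss = tf.reduce_mean(tf.square(x[..., :4] - target))
    grads = tape.gradient(loss, variables)
    # Per-variable gradient normalization, which the authors report
    # mitigates training instabilities.
    grads = [g / (tf.norm(g) + 1e-8) for g in grads]
    optimizer.apply_gradients(zip(grads, variables))
    return x, loss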
We also noticed that reseeding the\n\n highest loss sample in the batch, instead of a random one, makes training\n\n more stable at the initial stages, as it helps to clean up the low quality\n\n states from the pool.\n\n \n\n Here is what a typical training progress of a CA rule looks like. The cell\n\n rule learns to stabilize the pattern in parallel to refining its features.\n\n \n\n Your browser does not support the video tag.\n\n \n\n CA behaviour at training steps 100, 500, 1000, 4000. \n\n \n\nExperiment 3: Learning to regenerate\n\n------------------------------------\n\n In addition to being able to grow their own bodies, living creatures are\n\n great at maintaining them. Not only does worn out skin get replaced with new\n\n skin, but very heavy damage to complex vital organs can be regenerated in\n\n some species. Is there a chance that some of the models we trained above\n\n have regenerative capabilities?\n\n \n\n Your browser does not support the video tag.\n\n \n\n Patterns exhibit some regenerative properties upon being damaged, but\n\n not full re-growth. \n\n \n\n The animation above shows three different models trained using the same\n\n settings. We let each of the models develop a pattern over 100 steps, then\n\n damage the final state in five different ways: by removing different halves\n\n of the formed pattern, and by cutting out a square from the center. Once\n\n again, we see that these models show quite different out-of-training mode\n\n behaviour. For example \"the lizard\" develops quite strong regenerative\n\n capabilities, without being explicitly trained for it!\n\n \n\n Since we trained our coupled system of cells to generate an attractor\n\n towards a target shape from a single cell, it was likely that these systems,\n\n once damaged, would generalize towards non-self-destructive reactions.\n\n That's because the systems were trained to grow, stabilize, and never\n\n entirely self-destruct. Some of these systems might naturally gravitate\n\n towards regenerative capabilities, but nothing stops them from developing\n\n different behaviors such as explosive mitoses (uncontrolled growth),\n\n unresponsiveness to damage (overstabilization), or even self destruction,\n\n especially for the more severe types of damage.\n\n \n\n If we want our model to show more consistent and accurate regenerative\n\n capabilities, we can try to increase the basin of attraction for our target\n\n pattern - increase the space of cell configurations that naturally gravitate\n\n towards our target shape. We will do this by damaging a few pool-sampled\n\n states before each training step. The system now has to be capable of\n\n regenerating from states damaged by randomly placed erasing circles. Our\n\n hope is that this will generalize to regenerational capabilities from\n\n various types of damage.\n\n \n\n Your browser does not support the video tag.\n\n \n\n Damaging samples in the pool encourages the learning of robust\n\n regenerative qualities. Row 1 are samples from the pool, Row 2 are their\n\n respective states after iterating the model. \n\n \n\n The animation above shows training progress, which includes sample damage.\n\n We sample 8 states from the pool. Then we replace the highest-loss sample\n\n (top-left-most in the above) with the seed state, and damage the three\n\n lowest-loss (top-right-most) states by setting a random circular region\n\n within the pattern to zeros. The bottom row shows states after iteration\n\n from the respective top-most starting state. 
As in Experiment 2, the resulting states get injected back into the pool.

Your browser does not support the video tag.

Patterns exposed to damage during training exhibit astounding regenerative capabilities.

As we can see from the animation above, models that were exposed to damage during training are much more robust, including to types of damage not experienced in the training process (for instance, the rectangular damage above).

Experiment 4: Rotating the perceptive field
-------------------------------------------

As previously described, we model the cell's perception of its neighbouring cells by estimating the gradients of state channels in $\vec{x}$ and $\vec{y}$ using Sobel filters. A convenient analogy is that each agent has two sensors (chemosensory receptors, for instance) pointing in orthogonal directions that can sense the gradients in the concentration of certain chemicals along the axis of the sensor. What happens if we rotate those sensors? We can do this by rotating the Sobel kernels:

$$
\begin{bmatrix} K_x' \\ K_y' \end{bmatrix} =
\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}
\begin{bmatrix} K_x \\ K_y \end{bmatrix}
$$

where $K_x$ and $K_y$ are the original Sobel kernels and $\theta$ is the rotation angle.

This simple modification of the perceptive field produces rotated versions of the pattern for an angle of our choosing, without retraining, as seen below.

![](Growing%20Neural%20Cellular%20Automata_files/rotation.png)

Rotating the axis along which the perception step computes gradients brings about rotated versions of the pattern.

In a perfect world, not quantized by individual cells in a pixel lattice, this would not be too surprising: after all, one would expect the pattern simply to grow rotated by that same angle - a simple change of frame of reference. However, things are not as simple in a pixel-based model. Rotating pixel-based graphics involves computing a mapping that is not necessarily bijective, and classically involves interpolating between pixels to achieve the desired result, because a single pixel, once rotated, will likely overlap several pixels. The successful growth of patterns as above suggests a certain robustness to conditions outside of those experienced during training.

Related Work
------------

### CA and PDEs

There exists an extensive body of literature describing the various flavours of cellular automata and PDE systems, and their applications to modelling physical, biological or even social systems. Although it would be impossible to present a fair overview of this field in a few lines, we will describe some prominent examples that inspired this work. Alan Turing introduced his famous Turing patterns back in 1952, suggesting how reaction-diffusion systems can be a valid model for chemical behaviours during morphogenesis. A particularly inspiring reaction-diffusion model that has stood the test of time is the Gray-Scott model, which shows an extreme variety of behaviours controlled by just a few variables.

Ever since von Neumann introduced CAs as models for self-replication, they have captivated researchers, who observed extremely complex behaviours emerging from very simple rules. Likewise, a broader audience outside of academia was seduced by CAs' life-like behaviours thanks to Conway's Game of Life.
Perhaps\n\n motivated in part by the proof that something as simple as the Rule 110 is\n\n Turing complete, Wolfram's \"*A New Kind of Science\"*\n\n asks for a paradigm shift centered\n\n around the extensive usage of elementary computer programs such as CA as\n\n tools for understanding the world.\n\n \n\n More recently, several researchers generalized Conway's Game of life to work\n\n on more continuous domains. We were particularly inspired by Rafler's\n\n SmoothLife and Chan's Lenia\n\n , the latter of\n\n which also discovers and classifies entire species of \"lifeforms\".\n\n \n\n A number of researchers have used evolutionary algorithms to find CA rules\n\n that reproduce predefined simple patterns\n\n .\n\n For example, J. Miller proposed an\n\n experiment similar to ours, using evolutionary algorithms to design a CA\n\n rule that could build and regenerate the French flag, starting from a seed\n\n cell.\n\n \n\n### Neural Networks and Self-Organisation\n\n The close relation between Convolutional Neural Networks and Cellular\n\n Automata has already been observed by a number of researchers\n\n . The\n\n connection is so strong it allowed us to build Neural CA models using\n\n components readily available in popular ML frameworks. Thus, using a\n\n different jargon, our Neural CA could potentially be named \"Recurrent\n\n Residual Convolutional Networks with 'per-pixel' Dropout\".\n\n \n\n The Neural GPU\n\n offers\n\n a computational architecture very similar to ours, but applied in the\n\n context of learning multiplication and a sorting algorithm.\n\n \n\n Looking more broadly, we think that the concept of self-organisation is\n\n finding its way into mainstream machine learning with popularisation of\n\n Graph Neural Network models.\n\n Typically, GNNs run a repeated computation across vertices of a (possibly\n\n dynamic) graph. Vertices communicate locally through graph edges, and\n\n aggregate global information required to perform the task over multiple\n\n rounds of message exchanges, just as atoms can be thought of as\n\n communicating with each other to produce the emergent properties of a\n\n molecule , or even points of a point\n\n cloud talk to their neighbors to figure out their global shape\n\n .\n\n \n\n Self-organization also appeared in fascinating contemporary work using more\n\n traditional dynamic graph networks, where the authors evolved\n\n Self-Assembling Agents to solve a variety of virtual tasks\n\n .\n\n \n\n### Swarm Robotics\n\n One of the most remarkable demonstrations of the power of self-organisation\n\n is when it is applied to swarm modeling. Back in 1987, Reynolds' Boids\n\n simulated the flocking behaviour of birds with\n\n just a tiny set of handcrafted rules. Nowadays, we can embed tiny robots\n\n with programs and test their collective behavior on physical agents, as\n\n demonstrated by work such as Mergeable Nervous Systems\n\n and Kilobots\n\n . To the best of our knowledge, programs\n\n embedded into swarm robots are currently designed by humans. We hope our\n\n work can serve as an inspiration for the field and encourage the design of\n\n collective behaviors through differentiable modeling.\n\n \n\nDiscussion\n\n----------\n\n### Embryogenetic Modeling\n\n Your browser does not support the video tag.\n\n \n\n Regeneration-capable 2-headed planarian, the creature that inspired this\n\n work \n\n \n\n \n\n This article describes a toy embryogenesis and regeneration model. 
This is a\n\n major direction for future work, with many applications in biology and\n\n beyond. In addition to the implications for understanding the evolution and\n\n control of regeneration, and harnessing this understanding for biomedical\n\n repair, there is the field of bioengineering. As the field transitions from\n\n synthetic biology of single cell collectives to a true synthetic morphology\n\n of novel living machines , it\n\n will be essential to develop strategies for programming system-level\n\n capabilities, such as anatomical homeostasis (regenerative repair). It has\n\n long been known that regenerative organisms can restore a specific\n\n anatomical pattern; however, more recently it's been found that the target\n\n morphology is not hard coded by the DNA, but is maintained by a\n\n physiological circuit that stores a setpoint for this anatomical homeostasis\n\n . Techniques are\n\n now available for re-writing this setpoint, resulting for example\n\n in 2-headed flatworms\n\n that, when cut into pieces in plain water (with no more manipulations)\n\n result in subsequent generations of 2-headed regenerated worms (as shown\n\n above). It is essential to begin to develop models of the computational\n\n processes that store the system-level target state for swarm behavior\n\n , so that efficient strategies can be developed for rationally editing this\n\n information structure, resulting in desired large-scale outcomes (thus\n\n defeating the inverse problem that holds back regenerative medicine and many\n\n other advances).\n\n \n\n### Engineering and machine learning\n\n The models described in this article run on the powerful GPU of a modern\n\n computer or a smartphone. Yet, let's speculate about what a \"more physical\"\n\n implementation of such a system could look like. We can imagine it as a grid\n\n of tiny independent computers, simulating individual cells. Each of those\n\n computers would require approximately 10Kb of ROM to store the \"cell\n\n genome\": neural network weights and the control code, and about 256 bytes of\n\n RAM for the cell state and intermediate activations. The cells must be able\n\n to communicate their 16-value state vectors to neighbors. Each cell would\n\n also require an RGB-diode to display the color of the pixel it represents. A\n\n single cell update would require about 10k multiply-add operations and does\n\n not have to be synchronised across the grid. We propose that cells might\n\n wait for random time intervals between updates. The system described above\n\n is uniform and decentralised. Yet, our method provides a way to program it\n\n to reach the predefined global state, and recover this state in case of\n\n multi-element failures and restarts. We therefore conjecture this kind of\n\n modeling may be used for designing reliable, self-organising agents. On the\n\n more theoretical machine learning front, we show an instance of a\n\n decentralized model able to accomplish remarkably complex tasks. 
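One aspect of this decentralisation - the absence of a global clock - is easy to simulate in software: on every step, each cell independently decides whether to apply the learned update, which approximates cells waiting random intervals between updates. The 0.5 fire rate and the `update_rule` argument below are illustrative assumptions; this is a sketch of the idea, not the exact training-time mechanism.

import numpy as np

def async_step(state, update_rule, fire_rate=0.5, rng=np.random):
    # state: (height, width, channels) grid; update_rule returns a state delta.
    delta = update_rule(state)
    # Each cell fires independently with probability fire_rate, so updates
    # are never synchronised across the grid.
    fire_mask = (rng.random(state.shape[:2]) < fire_rate)[..., None]
    return state + delta * fire_mask

state = async_step(np.zeros((64, 64, 16)), lambda s: np.full_like(s, 0.01))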
We believe\n\n this direction to be opposite to the more traditional global modeling used\n\n in the majority of contemporary work in the deep learning field, and we hope\n\n this work to be an inspiration to explore more decentralized learning\n\n modeling.\n\n \n\n![](Growing%20Neural%20Cellular%20Automata_files/multiple-pages.svg)\n\n This article is part of the\n\n an experimental format collecting invited short articles delving into\n\n differentiable self-organizing systems, interspersed with critical\n\n commentary from several experts in adjacent fields.\n\n \n\n[Self-classifying MNIST Digits](https://distill.pub/2020/selforg/mnist/)\n\n", "bibliography_bib": [{"title": "Top-down models in biology: explanation and control of complex living systems above the molecular level"}, {"title": "Re-membering\n the body: applications of computational neuroscience to the top-down \ncontrol of regeneration of limbs and other complex organs"}, {"title": "Transmembrane voltage potential controls embryonic eye patterning in Xenopus laevis"}, {"title": "Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks"}, {"title": "The chemical basis of morphogenesis"}, {"title": "Complex Patterns in a Simple System"}, {"title": "Theory of Self-Reproducing Automata"}, {"title": "MATHEMATICAL GAMES"}, {"title": "A New Kind of Science"}, {"title": "Generalization of Conway's \"Game of Life\" to a continuous domain - SmoothLife"}, {"title": "Lenia: Biology of Artificial Life"}, {"title": "Intrinsically Motivated Exploration for Automated Discovery of Patterns in Morphogenetic Systems"}, {"title": "Evolving Self-organizing Cellular Automata Based on Neural Network Genotypes"}, {"title": "CA-NEAT: Evolved Compositional Pattern Producing Networks for Cellular Automata Morphogenesis and Replication"}, {"title": "Evolving a Self-Repairing, Self-Regulating, French Flag Organism"}, {"title": "Learning Cellular Automaton Dynamics with Neural Networks"}, {"title": "Cellular automata as convolutional neural networks"}, {"title": "Neural GPUs Learn Algorithms"}, {"title": "Improving the Neural GPU Architecture for Algorithm Learning"}, {"title": "A Comprehensive Survey on Graph Neural Networks"}, {"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints"}, {"title": "Dynamic Graph CNN for Learning on Point Clouds"}, {"title": "Learning to Control Self- Assembling Morphologies: A Study of Generalization via Modularity"}, {"title": "Flocks, Herds and Schools: A Distributed Behavioral Model"}, {"title": "Mergeable nervous systems for robots"}, {"title": "Kilobot: A low cost scalable robot system for collective behaviors"}, {"title": "What Bodies Think About: Bioelectric Computation Outside the Nervous System"}, {"title": "A scalable pipeline for designing reconfigurable organisms"}, {"title": "Perspective: The promise of multi-cellular engineered living systems"}, {"title": "Physiological inputs regulate species-specific anatomy during embryogenesis and regeneration"}, {"title": "Long-range neural and gap junction protein-mediated cues control polarity during planarian regeneration"}, {"title": "Long-Term, Stochastic Editing of Regenerative Anatomy via Targeting Endogenous Bioelectric Gradients"}, {"title": "Pattern Regeneration in Coupled Networks"}, {"title": "Bioelectrical\n control of positional information in development and regeneration: A \nreview of conceptual and computational advances"}, {"title": "Modeling Cell Migration in a Simulated Bioelectrical 
Signaling Network for Anatomical Regeneration"}, {"title": "Investigating the effects of noise on a cell-to-cell communication mechanism for structure regeneration"}, {"title": "Social Intelligence"}, {"title": "Inceptionism: Going deeper into neural networks"}], "id": "24e3b9f91f80c9c56f12547c2a206e10"} +{"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Using Artificial Intelligence to Augment Human Intelligence", "authors": ["Shan Carter", "Michael Nielsen"], "date_published": "2017-12-04", "abstract": " Historically, different answers to this question – that is, different visions of computing – have helped inspire and determine the computing systems humanity has ultimately built. Consider the early electronic computers. ENIAC, the world’s first general-purpose electronic computer, was commissioned to compute artillery firing tables for the United States Army. Other early computers were also used to solve numerical problems, such as simulating nuclear explosions, predicting the weather, and planning the motion of rockets. The machines operated in a batch mode, using crude input and output devices, and without any real-time interaction. It was a vision of computers as number-crunching machines, used to speed up calculations that would formerly have taken weeks, months, or more for a team of humans. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00009", "text": "\n\n What are computers for?\n\n-------------------------\n\n Historically, different answers to this question – that is,\n\n different visions of computing – have helped inspire and\n\n determine the computing systems humanity has ultimately\n\n built. Consider the early electronic computers. ENIAC, the\n\n world's first general-purpose electronic computer, was\n\n commissioned to compute artillery firing tables for the United\n\n States Army. Other early computers were also used to solve\n\n numerical problems, such as simulating nuclear explosions,\n\n predicting the weather, and planning the motion of rockets. The\n\n machines operated in a batch mode, using crude input and output\n\n devices, and without any real-time interaction. It was a vision\n\n of computers as number-crunching machines, used to speed up\n\n calculations that would formerly have taken weeks, months, or more\n\n for a team of humans.\n\n \n\n In the 1950s a different vision of what computers are for began to\n\n develop. That vision was crystallized in 1962, when Douglas\n\n Engelbart proposed that computers could be used as a way\n\n of augmenting human\n\n intellect. In this view, computers weren't primarily\n\n tools for solving number-crunching problems. Rather, they were\n\n real-time interactive systems, with rich inputs and outputs, that\n\n humans could work with to support and expand their own\n\n problem-solving process. This vision of intelligence augmentation\n\n (IA) deeply influenced many others, including researchers such as\n\n Alan Kay at Xerox PARC, entrepreneurs such as Steve Jobs at Apple,\n\n and led to many of the key ideas of modern computing systems. Its\n\n ideas have also deeply influenced digital art and music, and\n\n fields such as interaction design, data visualization,\n\n computational creativity, and human-computer interaction.\n\n \n\n Research on IA has often been in competition with research on\n\n artificial intelligence (AI): competition for funding, competition\n\n for the interest of talented researchers. 
Although there has\n\n always been overlap between the fields, IA has typically focused\n\n on building systems which put humans and machines to work\n\n together, while AI has focused on complete outsourcing of\n\n intellectual tasks to machines. In particular, problems in AI are\n\n often framed in terms of matching or surpassing human performance:\n\n beating humans at chess or Go; learning to recognize speech and\n\n images or translating language as well as humans; and so on.\n\n \n\n This essay describes a new field, emerging today out of a\n\n synthesis of AI and IA. For this field, we suggest the\n\n name *artificial intelligence augmentation* (AIA): the use\n\n of AI systems to help develop new methods for intelligence\n\n augmentation. This new field introduces important new fundamental\n\n questions, questions not associated to either parent field. We\n\n believe the principles and systems of AIA will be radically\n\n different to most existing systems.\n\n \n\n Our essay begins with a survey of recent technical work hinting at\n\n artificial intelligence augmentation, including work\n\n on *generative interfaces* – that is, interfaces\n\n which can be used to explore and visualize generative machine\n\n learning models. Such interfaces develop a kind of cartography of\n\n generative models, ways for humans to explore and make meaning\n\n from those models, and to incorporate what those models\n\n \"know\" into their creative work.\n\n \n\n Our essay is not just a survey of technical work. We believe now\n\n is a good time to identify some of the broad, fundamental\n\n questions at the foundation of this emerging field. To what\n\n extent are these new tools enabling creativity? Can they be used\n\n to generate ideas which are truly surprising and new, or are the\n\n ideas cliches, based on trivial recombinations of existing ideas?\n\n Can such systems be used to develop fundamental new interface\n\n primitives? How will those new primitives change and expand the\n\n way humans think?\n\n \n\n Using generative models to invent meaningful creative operations\n\n------------------------------------------------------------------\n\n Let's look at an example where a machine learning model makes a\n\n new type of interface possible. To understand the interface,\n\n imagine you're a type designer, working on creating a new\n\n fontWe shall egregiously abuse the distinction between\n\n a font and a typeface. Apologies to any type designers who may be\n\n reading.. After sketching some initial designs, you\n\n wish to experiment with bold, italic, and condensed variations.\n\n Let's examine a tool to generate and explore such variations, from\n\n any initial design. For reasons that will soon be explained the\n\n quality of results is quite crude; please bear with us.\n\n \n\n#### Starting Font\n\n …\n\n …\n\n …\n\n …\n\n …\n\n …\n\n#### Modifications\n\nBold\n\nItalic\n\nCondensed\n\n#### Result\n\n Of course, varying the bolding (i.e., the weight), italicization\n\n and width are just three ways you can vary a font. Imagine that\n\n instead of building specialized tools, users could build their own\n\n tool merely by choosing examples of existing fonts. For instance,\n\n suppose you wanted to vary the degree of serifing on a font. In\n\n the following, please select 5 to 10 sans-serif fonts from the top\n\n box, and drag them to the box on the left. Select 5 to 10 serif\n\n fonts and drag them to the box on the right. 
As you do this, a machine learning model running in your browser will automatically infer from these examples how to interpolate your starting font in either the serif or sans-serif direction:

(Interactive demo: drag sans-serif example fonts into one box and serif example fonts into the other, then apply your new tool to the starting font and inspect the result.)

In fact, we used this same technique to build the earlier bolding, italicizing, and condensing tools. To do so, we used the following examples of bold and non-bold fonts, of italic and non-italic fonts, and of condensed and non-condensed fonts:

(Panels showing the bold, italic, and condensed example fonts used.)

To build these tools, we used what's called a *generative model*; the particular model we use was trained by James Wexler. To understand generative models, consider that *a priori* describing a font appears to require a lot of data. For instance, if the font is 64 by 64 pixels, then we'd expect to need 64 × 64 = 4,096 numbers to describe each glyph. But we can use a generative model to find a much simpler description.

We do this by building a neural network which takes a small number of input variables, called *latent variables*, and produces as output the entire glyph. For the particular model we use, we have 40 latent space dimensions, and map that into the 4,096-dimensional space describing all the pixels in the glyph. In other words, the idea is to map a low-dimensional space into a higher-dimensional space:

(Diagram: a neural network mapping the 40-dimensional latent space to the 4,096-dimensional pixel space.)

The generative model we use is a type of neural network known as a variational autoencoder (VAE). For our purposes, the details of the generative model aren't so important. The important thing is that by changing the latent variables used as input, it's possible to get different fonts as output. So one choice of latent variables will give one font, while another choice will give a different font:

(Examples of glyphs decoded from different latent vectors.)

You can think of the latent variables as a compact, high-level representation of the font. The neural network takes that high-level representation and converts it into the full pixel data. It's remarkable that just 40 numbers can capture the apparent complexity in a glyph, which originally required 4,096 variables.

The generative model we use is learnt from a training set of more than 50 thousand fonts that Bernhardsson scraped from the open web. During training, the weights and biases in the network are adjusted so that the network can output a close approximation to any desired font from the training set, provided a suitable choice of latent variables is made. In some sense, the model is learning a highly compressed representation of all the training fonts.

In fact, the model doesn't just reproduce the training fonts. It can also generalize, producing fonts not seen in training. By being forced to find a compact description of the training examples, the neural net learns an abstract, higher-level model of what a font is.
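To give a feel for the kind of mapping involved, here is a toy sketch of a decoder that maps a 40-dimensional latent vector to a 64×64 glyph. The layer sizes, the use of plain dense layers, and the random (untrained) weights are purely illustrative; the actual model trained by Wexler is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)
# Illustrative decoder weights: 40 latent dimensions -> 64*64 = 4,096 pixels.
W1, b1 = rng.normal(0, 0.1, (40, 256)), np.zeros(256)
W2, b2 = rng.normal(0, 0.1, (256, 4096)), np.zeros(4096)

def decode(z):
    h = np.tanh(z @ W1 + b1)                       # hidden layer (size is an assumption)
    pixels = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # pixel intensities in [0, 1]
    return pixels.reshape(64, 64)

glyph = decode(rng.normal(size=40))  # a different z gives a different glyph

Of course, the real decoder has trained weights rather than random ones; the point is the shape of the mapping - a handful of latent numbers in, a full glyph out.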
That higher-level model makes it possible to\n\n generalize beyond the training examples already seen, to produce\n\n realistic-looking fonts.\n\n \n\n Ideally, a good generative model would be exposed to a relatively\n\n small number of training examples, and use that exposure to\n\n generalize to the space of all possible human-readable fonts.\n\n That is, for any conceivable font – whether existing or\n\n perhaps even imagined in the future – it would be possible\n\n to find latent variables corresponding exactly to that font. Of\n\n course, the model we're using falls far short of this ideal\n\n – a particularly egregious failure is that many fonts\n\n generated by the model omit the tail on the capital\n\n \"Q\" (you can see this in the examples above). Still,\n\n it's useful to keep in mind what an ideal generative model would\n\n do.\n\n \n\n Such generative models are similar in some ways to how scientific\n\n theories work. Scientific theories often greatly simplify the\n\n description of what appear to be complex phenomena, reducing large\n\n numbers of variables to just a few variables from which many\n\n aspects of system behavior can be deduced. Furthermore, good\n\n scientific theories sometimes enable us to generalize to discover\n\n new phenomena.\n\n \n\n As an example, consider ordinary material objects. Such objects\n\n have what physicists call a *phase* – they may be a\n\n liquid, a solid, a gas, or perhaps something more exotic, like a\n\n superconductor\n\n or [Bose-Einstein\n\n complex, involving perhaps 102310^{23}1023 or so molecules. But the\n\n laws of thermodynamics and statistical mechanics enable us to find\n\n a simpler description, reducing that complexity to just a few\n\n variables (temperature, pressure, and so on), which encompass much\n\n of the behavior of the system. Furthermore, sometimes it's\n\n possible to generalize, predicting unexpected new phases of\n\n matter. For example, in 1924, physicists used thermodynamics and\n\n statistical mechanics to predict a remarkable new phase of matter,\n\n Bose-Einstein condensation, in which a collection of atoms may all\n\n occupy identical quantum states, leading to surprising large-scale\n\n quantum interference effects. We'll come back to this predictive\n\n ability in our later discussion of creativity and generative\n\n models.\n\n \n\n Returning to the nuts and bolts of generative models, how can we\n\n use such models to do example-based reasoning like that in the\n\n tool shown above? Let's consider the case of the bolding tool. In\n\n that instance, we take the average of all the latent vectors for\n\n the user-specified bold fonts, and the average for all the\n\n user-specified non-bold fonts. We then compute the difference\n\n between these two average vectors:\n\n \n\nbolding vectoraverage ofnon-bold fontsaverage ofbold fonts\n\n We'll refer to this as the *bolding vector*. To make some\n\n given font bolder, we simply add a little of the bolding vector to\n\n the corresponding latent vector, with the amount of bolding vector\n\n added controlling the boldness of the resultIn\n\n practice, sometimes a slightly different procedure is used. In\n\n some generative models the latent vectors satisfy some constraints\n\n – for instance, they may all be of the same length. When\n\n that's the case, as in our model, a more sophisticated\n\n \"adding\" operation must be used, to ensure the length\n\n remains the same. 
But conceptually, the picture of adding the\n\n bolding vector is the right way to think.:\n\n \n\n This technique was introduced\n\n by Larsen *et al*, and\n\n vectors like the bolding vector are sometimes called\n\n *attribute vectors*. The same idea is use to implement all\n\n the tools we've shown. That is, we use example fonts to creating\n\n a bolding vector, an italicizing vector, a condensing vector, and\n\n a user-defined serif vector. The interface thus provides a way of\n\n exploring the latent space in those four directions.\n\n \n\n The tools we've shown have many drawbacks. Consider the following\n\n example, where we start with an example glyph, in the middle, and\n\n either increase or decrease the bolding (on the right and left,\n\n respectively):\n\n \n\noriginal\n\n Examining the glyphs on the left and right we see many unfortunate\n\n artifacts. Particularly for the rightmost glyph, the edges start to get\n\n rough, and the serifs begin to disappear. A better generative\n\n model would reduce those artifacts. That's a good long-term\n\n research program, posing many intriguing problems. But even with\n\n the model we have, there are also some striking benefits to the\n\n use of the generative model.\n\n \n\n To understand these benefits, consider a naive approach to\n\n bolding, in which we simply add some extra pixels around a glyph's\n\n edges, thickening it up. While this thickening perhaps matches a\n\n non-expert's way of thinking about type design, an expert does\n\n something much more involved. In the following we show the\n\n results of this naive thickening procedure versus what is actually\n\n done, for Georgia and Helvetica:\n\n \n\n#### \n\n#### Naive bolding\n\n#### Actual bolding\n\nGeorgia\n\nHelvetica\n\n As you can see, the naive bolding procedure produces quite\n\n different results, in both cases. For example, in Georgia, the\n\n left stroke is only changed slightly by bolding, while the right\n\n stroke is greatly enlarged, but only on one side. In both\n\n fonts, bolding doesn't change the height of the font, while the\n\n naive approach does.\n\n \n\n As these examples show, good bolding is *not* a trivial\n\n process of thickening up a font. Expert type designers have many\n\n heuristics for bolding, heuristics inferred from much previous\n\n experimentation, and careful study of historical\n\n examples. Capturing all those heuristics in a conventional program\n\n would involve immense work. The benefit of using the generative\n\n model is that it automatically learns many such heuristics.\n\n \n\n For example, a naive bolding tool would rapidly fill in the\n\n enclosed negative space in the enclosed upper region of the letter\n\n \"A\". The font tool doesn't do this. Instead, it goes\n\n to some trouble to preserve the enclosed negative space, moving\n\n the A's bar down, and filling out the interior strokes more slowly\n\n than the exterior. This principle is evident in the examples\n\n shown above, especially Helvetica, and it can also be seen in the\n\n operation of the font tool:\n\n \n\noriginal\n\noriginal\n\nHeight is constant\n\noriginal\n\nEven non-standard fonts are properly bolded\n\n The heuristic of preserving enclosed negative space is not *a\n\n priori* obvious. However, it's done in many professionally\n\n designed fonts. If you examine examples like those shown above\n\n it's easy to see why: it improves legibility. 
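Spelling the attribute-vector recipe out in code: given an `encode` function from glyph images to latent vectors (an assumed stand-in for the trained model), the bolding vector is just a difference of means, and "adding a little of it" optionally re-normalises the result so the latent vector keeps its length, as described in the footnote above.

import numpy as np

def attribute_vector(with_examples, without_examples, encode):
    # e.g. user-chosen bold fonts versus non-bold fonts.
    z_with = np.mean([encode(x) for x in with_examples], axis=0)
    z_without = np.mean([encode(x) for x in without_examples], axis=0)
    return z_with - z_without

def apply_attribute(z, direction, amount, renormalise=True):
    z_new = z + amount * direction
    if renormalise:
        # Keep the latent vector's length unchanged, for models that require it.
        z_new *= np.linalg.norm(z) / np.linalg.norm(z_new)
    return z_new

# bolder = decode(apply_attribute(encode(glyph), bolding_vector, 0.5))

With that recipe in mind, let's return to the enclosed-negative-space example.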
During training,\n\n our generative model has automatically inferred this principle\n\n from the examples it's seen. And our bolding interface then makes\n\n this available to the user.\n\n \n\n In fact, the model captures many other heuristics. For instance,\n\n in the above examples the heights of the fonts are (roughly)\n\n preserved, which is the norm in professional font design. Again,\n\n what's going on isn't just a thickening of the font, but rather\n\n the application of a more subtle heuristic inferred by the\n\n generative model. Such heuristics can be used to create fonts\n\n with properties which would otherwise be unlikely to occur to\n\n users. Thus, the tool expands ordinary people's ability to\n\n explore the space of meaningful fonts.\n\n \n\n The font tool is an example of a kind of cognitive technology. In\n\n particular, the primitive operations it contains can be\n\n internalized as part of how a user thinks. In this it resembles a\n\n program such as *Photoshop* or a spreadsheet or 3D graphics\n\n programs. Each provides a novel set of interface primitives,\n\n primitives which can be internalized by the user as fundamental\n\n new elements in their thinking. This act of internalization of new\n\n primitives is fundamental to much work on intelligence\n\n augmentation.\n\n \n\n The ideas shown in the font tool can be extended to other domains.\n\n Using the same interface, we can use a generative model to\n\n manipulate images of human faces using qualities such as\n\n expression, gender, or hair color. Or to manipulate sentences\n\n using length, sarcasm, or tone. Or to manipulate molecules using\n\n chemical properties:\n\n \n\n #alternate-uses .root {\n\n max-width: 300px;\n\n margin: 0 auto;\n\n padding: 0 20px;\n\n }\n\n @media(min-width: 600px) {\n\n #alternate-uses .root {\n\n max-width: 760px;\n\n display: grid;\n\n grid-template-columns: 1fr 1fr 1fr;\n\n grid-gap: 50px;\n\n }\n\n }\n\n .note {\n\n margin-top: 12px;\n\n grid-column-end: span 2;\n\n font-size: 10px;\n\n line-height: 1.5em;\n\n text-align: left;\n\n color: rgba(0, 0, 0, 0.4);\n\n }\n\n \n\n#### Faces\n\nSmiling\n\n Images from *Sampling Generative Networks* by White.\n\n \n\n#### Sentences\n\n\"Everyone knows that a rich, single man must want a wife.\"\n\n\"A rich, single man obviously must want a wife.\"\n\n\"A single man must want a wife.\"\n\nLength\n\n#### Molecules\n\nDelayed fluorescence decay rate\n\n \n\n Such generative interfaces provide a kind of cartography of\n\n generative models, ways for humans to explore and make meaning\n\n using those models.\n\n \n\n We saw earlier that the font model automatically infers relatively\n\n deep principles about font design, and makes them available to\n\n users. While it's great that such deep principles can be\n\n inferred, sometimes such models infer other things that are wrong,\n\n or undesirable. For example, White\n\n points out the addition of a smile vector in some face\n\n models will make faces not just smile more, but also appear more\n\n feminine. Why? Because in the training data more women than men\n\n were smiling. So these models may not just learn deep facts about\n\n the world, they may also internalize prejudices or erroneous\n\n beliefs. Once such a bias is known, it is often possible to make\n\n corrections. 
But to find those biases requires careful auditing\n\n of the models, and it is not yet clear how we can ensure such\n\n audits are exhaustive.\n\n \n\n More broadly, we can ask why attribute vectors work, when they\n\n work, and when they fail? At the moment, the answers to these\n\n questions are poorly understood.\n\n \n\n For the attribute vector to work requires that taking any starting\n\n font, we can construct the corresponding bold version by adding\n\n the *same* vector in the latent space. However, *a\n\n priori* there is no reason using a single constant vector to\n\n displace will work. It may be that we should displace in many\n\n different ways. For instance, the heuristics used to bold serif\n\n and sans-serif fonts are quite different, and so it seems likely\n\n that very different displacements would be involved:\n\n \n\n#### Assumption\n\n#### Reality\n\n Of course, we could do something more sophisticated than using a\n\n single constant attribute vector. Given pairs of example fonts\n\n (unbold, bold) we could train a machine learning algorithm to take\n\n as input the latent vector for the unbolded version and output the\n\n latent vector for the bolded version. With additional training\n\n data about font weights, the machine learning algorithm could\n\n learn to generate fonts of arbitrary weight. Attribute vectors\n\n are just an extremely simple approach to doing these kinds of\n\n operation.\n\n \n\n For these reasons, it seems unlikely that attribute vectors will\n\n last as an approach to manipulating high-level features. Over the\n\n next few years much better approaches will be developed. However,\n\n we can still expect interfaces offering operations broadly similar\n\n to those sketched above, allowing access to high-level and\n\n potentially user-defined concepts. That interface pattern doesn't\n\n depend on the technical details of attribute vectors.\n\n \n\n Interactive Generative Adversarial Models\n\n-------------------------------------------\n\n Let's look at another example using machine learning models to\n\n augment human creativity. It's the interactive generative\n\n adversarial networks, or iGANs, introduced\n\n by Zhu *et al* in 2016.\n\n \n\n One of the examples of Zhu *et al* is the use of iGANs in\n\n an interface to generate images of consumer products such as\n\n shoes. Conventionally, such an interface would require the\n\n programmer to write a program containing a great deal of knowledge\n\n about shoes: soles, laces, heels, and so on. Instead of doing\n\n this, Zhu *et al* train a generative model using 505050\n\n thousand images of shoes, downloaded from Zappos. They then use\n\n that generative model to build an interface that lets a user\n\n roughly sketch the shape of a shoe, the sole, the laces, and so\n\n on:\n\n \n\nExcerpted from Zhu *et\n\n al*.\n\n The visual quality is low, in part because the generative model\n\n Zhu *et al* used is outdated by modern (2017) standards\n\n – with more modern models, the visual quality would be much\n\n higher.\n\n \n\n But the visual quality is not the point. Many interesting things\n\n are going on in this prototype. For instance, notice how the\n\n overall shape of the shoe changes considerably when the sole is\n\n filled in – it becomes narrower and sleeker. Many small\n\n details are filled in, like the black piping on the top of the\n\n white sole, and the red coloring filled in everywhere on the\n\n shoe's upper. 
These and other facts are automatically deduced\n\n from the underlying generative model, in a way we'll describe\n\n shortly.\n\n \n\n The same interface may be used to sketch landscapes. The only\n\n difference is that the underlying generative model has been\n\n trained on landscape images rather than images of shoes. In this\n\n case it becomes possible to sketch in just the colors associated\n\n to a landscape. For example, here's a user sketching in some green\n\n grass, the outline of a mountain, some blue sky, and snow on the\n\n mountain:\n\n \n\nExcerpted from Zhu *et\n\n al*.\n\n The generative models used in these interfaces are different than\n\n for our font model. Rather than using variational autoencoders,\n\n they're based on generative\n\n adversarial networks (GANs). But the underlying idea is\n\n still to find a low-dimensional latent space which can be used to\n\n represent (say) all landscape images, and map that latent space to\n\n a corresponding image. Again, we can think of points in the\n\n latent space as a compact way of describing landscape images.\n\n \n\n Roughly speaking, the way the iGANs works is as follows. Whatever\n\n the current image is, it corresponds to some point in the latent\n\n space:\n\n \n\n Suppose, as happened in the earlier video, the user now sketches\n\n in a stroke outlining the mountain shape. We can think of the\n\n stroke as a constraint on the image, picking out a subspace of the\n\n latent space, consisting of all points in the latent space whose\n\n image matches that outline:\n\n \n\nSubspace of all images that satisfy the mountain constraint\n\n The way the interface works is to find a point in the latent space\n\n which is near to the current image, so the image is not changed\n\n too much, but also coming close to satisfying the imposed\n\n constraints. This is done by optimizing an objective function\n\n which combines the distance to each of the imposed constraints, as\n\n well as the distance moved from the current point. If there's\n\n just a single constraint, say, corresponding to the mountain\n\n stroke, this looks something like the following:\n\n \n\n We can think of this, then, as a way of applying constraints to\n\n the latent space to move the image around in meaningful ways.\n\n \n\n The iGANs have much in common with the font tool we showed\n\n earlier. Both make available operations that encode much subtle\n\n knowledge about the world, whether it be learning to understand\n\n what a mountain looks like, or inferring that enclosed negative\n\n space should be preserved when bolding a font. Both the iGANs and\n\n the font tool provide ways of understanding and navigating a\n\n high-dimensional space, keeping us on the natural space of fonts\n\n or shoes or landscapes. As Zhu *et al* remark:\n\n \n\n> \n\n> [F]or most of us, even a simple image manipulation in Photoshop\n\n> presents insurmountable difficulties… any less-than-perfect\n\n> edit immediately makes the image look completely unrealistic. To\n\n> put another way, classic visual manipulation paradigm does not\n\n> prevent the user from \"falling off\" the manifold of\n\n> natural images.\n\n> \n\n Like the font tool, the iGANs is a cognitive technology. Users\n\n can internalize the interface operations as new primitive elements\n\n in their thinking. In the case of shoes, for example, they can\n\n learn to think in terms of the difference they want to apply,\n\n adding a heel, or a higher top, or a special highlight. 
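As an aside, the latent-space editing loop described above can be sketched in a few lines: starting from the current latent point, we descend on a weighted sum of the constraint penalties and a term that penalises moving too far from where we started. The finite-difference gradients, the weights, and the toy constraint below are all illustrative assumptions, not the authors' implementation.

import numpy as np

def edit_latent(z_current, constraint_losses, proximity=1.0, step=0.05, iters=200, eps=1e-3):
    # Minimise the sum of constraint losses plus distance from the current point.
    z = z_current.copy()
    def objective(v):
        return sum(L(v) for L in constraint_losses) + proximity * np.sum((v - z_current) ** 2)
    for _ in range(iters):
        grad = np.zeros_like(z)
        for i in range(len(z)):
            dz = np.zeros_like(z)
            dz[i] = eps
            grad[i] = (objective(z + dz) - objective(z - dz)) / (2 * eps)
        z -= step * grad
    return z

# Toy example: one "stroke" constraint that prefers the first latent coordinate near 2.
z_edited = edit_latent(np.zeros(8), [lambda v: (v[0] - 2.0) ** 2])

In the interface, all of this is hidden: the user simply draws a stroke and watches the image move to a nearby point that respects it.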
This is\n\n richer than the traditional way non-experts think about shoes\n\n (\"Size 11, black\" *etc*). To the extent that\n\n non-experts do think in more sophisticated ways –\n\n \"make the top a little higher and sleeker\" –\n\n they get little practice in thinking this way, or seeing the\n\n consequences of their choices. Having an interface like this\n\n enables easier exploration, the ability to develop idioms and the\n\n ability to plan, to swap ideas with friends, and so on.\n\n \n\n Two models of computation\n\n---------------------------\n\n Let's revisit the question we began the essay with, the question\n\n of what computers are for, and how this relates to intelligence\n\n augmentation.\n\n \n\n One common conception of computers is that they're problem-solving\n\n machines: \"computer, what is the result of firing this\n\n artillery shell in such-and-such a wind [and so on]?\";\n\n \"computer, what will the maximum temperature in Tokyo be in\n\n 5 days?\"; \"computer, what is the best move to take\n\n when the Go board is in this position?\"; \"computer,\n\n how should this image be classified?\"; and so on.\n\n \n\n This is a conception common to both the early view of computers as\n\n number-crunchers, and also in much work on AI, both historically\n\n and today. It's a model of a computer as a way of outsourcing\n\n cognition. In speculative depictions of possible future AI,\n\n this *cognitive outsourcing* model often shows up in the\n\n view of an AI as an oracle, able to solve some large class of\n\n problems with better-than-human performance.\n\n \n\n But a very different conception of what computers are for is\n\n possible, a conception much more congruent with work on\n\n intelligence augmentation.\n\n \n\n To understand this alternate view, consider our subjective\n\n experience of thought. For many people, that experience is verbal:\n\n they think using language, forming chains of words in their heads,\n\n similar to sentences in speech or written on a page. For other\n\n people, thinking is a more visual experience, incorporating\n\n representations such as graphs and maps. Still other people mix\n\n mathematics into their thinking, using algebraic expressions or\n\n diagrammatic techniques, such as Feynman diagrams and Penrose\n\n diagrams.\n\n \n\n In each case, we're thinking using representations invented by\n\n other people: words, graphs, maps, algebra, mathematical diagrams,\n\n and so on. We internalize these cognitive technologies as we grow\n\n up, and come to use them as a kind of substrate for our thinking.\n\n \n\n For most of history, the range of available cognitive technologies\n\n has changed slowly and incrementally. A new word will be\n\n introduced, or a new mathematical symbol. More rarely, a radical\n\n new cognitive technology will be developed. For example, in 1637\n\n Descartes published his \"Discourse on Method\",\n\n explaining how to represent geometric ideas using algebra, and\n\n vice versa:\n\n \n\n-1.5-1-0.50.511.5-1.5-1-0.50.511.5\n\nx2 + y2 = 1\n\nWhile it may seem obvious now, it is remarkable that a \n\ngeometric shape and an equation can represent the same underlying \n\nmathematical object.\n\n This enabled a radical change and expansion in how we think about\n\n both geometry and algebra.\n\n \n\n Historically, lasting cognitive technologies have been invented\n\n only rarely. But modern computers are a meta-medium enabling the\n\n rapid invention of many new cognitive technologies. 
Consider a\n\n relatively banal example, such\n\n as *Photoshop*. Adept *Photoshop* users routinely\n\n have formerly impossible thoughts such as: \"let's apply the\n\n clone stamp to the such-and-such layer.\". That's an\n\n instance of a more general class of thought: \"computer, [new\n\n type of action] this [new type of representation for a newly\n\n imagined class of object]\". When that happens, we're using\n\n computers to expand the range of thoughts we can think.\n\n \n\n It's this kind of *cognitive transformation* model which\n\n underlies much of the deepest work on intelligence augmentation.\n\n Rather than outsourcing cognition, it's about changing the\n\n operations and representations we use to think; it's about\n\n changing the substrate of thought itself. And so while cognitive\n\n outsourcing is important, this cognitive transformation view\n\n offers a much more profound model of intelligence augmentation.\n\n It's a view in which computers are a means to change and expand\n\n human thought itself.\n\n \n\n Historically, cognitive technologies were developed by human\n\n inventors, ranging from the invention of writing in Sumeria and\n\n Mesoamerica, to the modern interfaces of designers such as Douglas\n\n Engelbart, Alan Kay, and others.\n\n \n\n Examples such as those described in this essay suggest that AI\n\n systems can enable the creation of new cognitive technologies.\n\n Things like the font tool aren't just oracles to be consulted when\n\n you want a new font. Rather, they can be used to explore and\n\n discover, to provide new representations and operations, which can\n\n be internalized as part of the user's own thinking. And while\n\n these examples are in their early stages, they suggest AI is not\n\n just about cognitive outsourcing. A different view of AI is\n\n possible, one where it helps us invent new cognitive technologies\n\n which transform the way we think.\n\n \n\n In this essay we've focused on a small number of examples, mostly\n\n involving exploration of the latent space. There are many other\n\n examples of artificial intelligence augmentation. To give some\n\n flavor, without being comprehensive:\n\n the sketch-rnn system, for neural\n\n network assisted drawing;\n\n the Wekinator, which enables\n\n users to rapidly build new musical instruments and artistic\n\n systems; TopoSketch, for developing\n\n animations by exploring latent spaces; machine learning models for\n\n designing overall typographic\n\n layout; and a generative model which enables\n\n interpolation between musical\n\n phrases. In each case, the systems use machine learning\n\n to enable new primitives which can be integrated into the user's\n\n thinking. More broadly, artificial intelligence augmentation will\n\n draw on fields such as computational\n\n creativity and interactive machine\n\n learning.\n\n \n\n Finding powerful new primitives of thought\n\n--------------------------------------------\n\n We've argued that machine learning systems can help create\n\n representations and operations which serve as new primitives in\n\n human thought. What properties should we look for in such new\n\n primitives? This is too large a question to be answered\n\n comprehensively in a short essay. But we will explore it briefly.\n\n \n\n Historically, important new media forms often seem strange when\n\n introduced. 
Many such stories have passed into popular culture:\n\n the near riot at the premiere of Stravinsky and Nijinksy's\n\n \"Rite of Spring\"; the consternation caused by the\n\n early cubist paintings, leading\n\n *The New York Times* to\n\n comment: \"What do they mean? Have those\n\n responsible for them taken leave of their senses? Is it art or\n\n madness? Who knows?\"\n\n \n\n Another example comes from physics. In the 1940s, different\n\n formulations of the theory of quantum electrodynamics were\n\n developed independently by the physicists Julian Schwinger,\n\n Shin'ichirō Tomonaga, and Richard Feynman. In their work,\n\n Schwinger and Tomonaga used a conventional algebraic approach,\n\n along lines similar to the rest of physics. Feynman used a more\n\n radical approach, based on what are now known as Feynman diagrams,\n\n for depicting the interaction of light and matter:\n\n \n\nImage by [Joel\n\n Attribution-Share Alike 3.0 Unported license\n\n \n\n Initially, the Schwinger-Tomonaga approach was easier for other\n\n physicists to understand. When Feynman and Schwinger presented\n\n their work at a 1948 workshop, Schwinger was immediately\n\n acclaimed. By contrast, Feynman left his audience mystified. As\n\n James Gleick put it in his biography of\n\n Feynman:\n\n \n\n> \n\n> It struck Feynman that everyone had a favorite principle or\n\n> theorem and he was violating them all… Feynman knew he had\n\n> failed. At the time, he was in anguish. Later he said simply:\n\n> \"I had too much stuff. My machines came from too far\n\n> away.\"\n\n> \n\n Of course, strangeness for strangeness's sake alone is not\n\n useful. But these examples suggest that breakthroughs in\n\n representation often appear strange at first. Is there any\n\n underlying reason that is true?\n\n \n\n Part of the reason is because if some representation is truly new,\n\n then it will appear different than anything you've ever seen\n\n before. Feynman's diagrams, Picasso's paintings, Stravinsky's\n\n music: all revealed genuinely new ways of making meaning. Good\n\n representations sharpen up such insights, eliding the familiar to\n\n show that which is new as vividly as possible. But because of\n\n that emphasis on unfamiliarity, the representation will seem\n\n strange: it shows relationships you've never seen before. In some\n\n sense, the task of the designer is to identify that core\n\n strangeness, and to amplify it as much as possible.\n\n \n\n Strange representations are often difficult to understand. At\n\n first, physicists preferred Schwinger-Tomonaga to Feynman. But as\n\n Feynman's approach was slowly understood by physicists, they\n\n realized that although Schwinger-Tomonaga and Feynman were\n\n mathematically equivalent, Feynman was more powerful. As Gleick\n\n puts it:\n\n \n\n> \n\n> Schwinger's students at Harvard were put at a competitive\n\n> disadvantage, or so it seemed to their fellows elsewhere, who\n\n> suspected them of surreptitiously using the diagrams anyway. This\n\n> was sometimes true… Murray Gell-Mann later spent a semester\n\n> staying in Schwinger's house and loved to say afterward that he\n\n> had searched everywhere for the Feynman diagrams. He had not\n\n> found any, but one room had been locked…\n\n> \n\n These ideas are true not just of historical representations, but\n\n also of computer interfaces. 
However, our advocacy of strangeness\n\n in representation contradicts much conventional wisdom about\n\n interfaces, especially the widely-held belief that they should be\n\n \"user friendly\", i.e., simple and immediately useable\n\n by novices. That most often means the interface is cliched, built\n\n from conventional elements combined in standard ways. But while\n\n using a cliched interface may be easy and fun, it's an ease\n\n similar to reading a formulaic romance novel. It means the\n\n interface does not reveal anything truly surprising about its\n\n subject area. And so it will do little to deepen the user's\n\n understanding, or to change the way they think. For mundane tasks\n\n that is fine, but for deeper tasks, and for the longer term, you\n\n want a better interface.\n\n \n\n Ideally, an interface will surface the deepest principles\n\n underlying a subject, revealing a new world to the user. When you\n\n learn such an interface, you internalize those principles, giving\n\n you more powerful ways of reasoning about that world. Those\n\n principles are the diffs in your understanding. They're all you\n\n really want to see, everything else is at best support, at worst\n\n unimportant dross. The purpose of the best interfaces isn't to be\n\n user-friendly in some shallow sense. It's to be user-friendly in\n\n a much stronger sense, reifying deep\n\n principles about the world, making them the working\n\n conditions in which users live and create. At that point what once\n\n appeared strange can instead becomes comfortable and familiar,\n\n part of the pattern of thoughtA powerful instance of\n\n these ideas is when an interface reifies general-purpose\n\n principles. An example is an\n\n interface one of us developed\n\n based on the principle of conservation of energy. Such\n\n general-purpose principles generate multiple unexpected\n\n relationships between the entities of a subject, and so are a\n\n particularly rich source of insights when reified in an\n\n interface..\n\n \n\n What does this mean for the use of AI models for intelligence\n\n augmentation?\n\n \n\n Aspirationally, as we've seen, our machine learning models will\n\n help us build interfaces which reify deep principles in ways\n\n meaningful to the user. For that to happen, the models have to\n\n discover deep principles about the world, recognize those\n\n principles, and then surface them as vividly as possible in an\n\n interface, in a way comprehensible by the user.\n\n \n\n Of course, this is a tall order! The examples we've shown are just\n\n barely beginning to do this. It's true that our models do\n\n sometimes discover relatively deep principles, like the\n\n preservation of enclosed negative space when bolding a font. But\n\n this is merely implicit in the model. And while we've built a tool\n\n which takes advantage of such principles, it'd be better if the\n\n model automatically inferred the important principles learned, and\n\n found ways of explicitly surfacing them through the interface.\n\n (Encouraging progress toward this has been made\n\n by InfoGANs, which use\n\n information-theoretic ideas to find structure in the latent\n\n space.) Ideally, such models would start to get at true\n\n explanations, not just in a static form, but in a dynamic form,\n\n manipulable by the user. 
But we're a long way from that point.\n\n \n\n Do these interfaces inhibit creativity?\n\n-----------------------------------------\n\n It's tempting to be skeptical of the expressiveness of the\n\n interfaces we've described. If an interface constrains us to\n\n explore only the natural space of images, does that mean we're\n\n merely doing the expected? Does it mean these interfaces can only\n\n be used to generate visual cliches? Does it prevent us from\n\n generating anything truly new, from doing truly creative work?\n\n \n\n To answer these questions, it's helpful to identify two different\n\n modes of creativity. This two-mode model is over-simplified:\n\n creativity doesn't fit so neatly into two distinct categories. Yet\n\n the model nonetheless clarifies the role of new interfaces in\n\n creative work.\n\n \n\n The first mode of creativity is the everyday creativity of a\n\n craftsperson engaged in their craft. Much of the work of a font\n\n designer, for example, consists of competent recombination of the\n\n best existing practices. Such work typically involves many\n\n creative choices to meet the intended design goals, but not\n\n developing key new underlying principles.\n\n \n\n For such work, the generative interfaces we've been discussing are\n\n promising. While they currently have many limitations, future\n\n research will identity and fix many deficiencies. This is\n\n happening rapidly with GANs: the original\n\n GANs had many limitations,\n\n but models soon appeared that were better adapted to\n\n images, improved the\n\n resolution, reduced artifactsSo much work has been\n\n done on improving resolution and reducing artifacts it seems\n\n unfair to single out any small set of papers, and to omit the many\n\n others., and so on. With enough iterations it's\n\n plausible these generative interfaces will become powerful tools\n\n for craft work.\n\n \n\n The second mode of creativity aims toward developing new\n\n principles that fundamentally change the range of creative\n\n expression. One sees this in the work of artists such as Picasso\n\n or Monet, who violated existing principles of painting, developing\n\n new principles which enabled people to see in new ways.\n\n \n\n Is it possible to do such creative work, while using a generative\n\n interface? Don't such interfaces constrain us to the space of\n\n natural images, or natural fonts, and thus actively prevent us\n\n from exploring the most interesting new directions in creative\n\n work?\n\n \n\n The situation is more complex than this.\n\n \n\n In part, this is a question about the power of our generative\n\n models. In some cases, the model can only generate recombinations\n\n of existing ideas. This is a limitation of an ideal GAN, since a\n\n perfectly trained GAN generator will reproduce the training\n\n distribution. Such a model can't directly generate an image based\n\n on new fundamental principles, because such an image wouldn't look\n\n anything like it's seen in its training data.\n\n \n\n Artists such as [Mario\n\n Klingemann](http://quasimondo.com/) and [Mike\n\n Tyka](http://www.miketyka.com/) are now using GANs to create interesting\n\n artwork. They're doing that using \"imperfect\" GAN\n\n models, which they seem to be able to use to explore interesting\n\n new principles; it's perhaps the case that bad GANs may be more\n\n artistically interesting than ideal GANs. 
Furthermore, nothing\n\n says an interface must only help us explore the latent space.\n\n Perhaps operations can be added which deliberately take us out\n\n of the latent space, or to less probable (and so more\n\n surprising) parts of the space of natural images.\n\n \n\n Of course, GANs are not the only generative models. In a\n\n sufficiently powerful generative model, the generalizations\n\n discovered by the model may contain ideas going beyond what humans\n\n have discovered. In that case, exploration of the latent space may\n\n enable us to discover new fundamental principles. The model would\n\n have discovered stronger abstractions than human experts. Imagine\n\n a generative model trained on paintings up until just before the\n\n time of the cubists; might it be that by exploring that model it\n\n would be possible to discover cubism? It would be an analogue to\n\n something like the prediction of Bose-Einstein condensation, as\n\n discussed earlier in the essay. Such invention is beyond today's\n\n generative models, but seems a worthwhile aspiration for future\n\n models.\n\n \n\n Our examples so far have all been based on generative models. But\n\n there are some illuminating examples which are not based on\n\n generative models. Consider the pix2pix system developed\n\n by Isola *et al*. This\n\n system is trained on pairs of images, e.g., pairs showing the\n\n edges of a cat, and the actual corresponding cat. Once trained,\n\n it can be shown a set of edges and asked to generate an image for\n\n an actual corresponding cat. It often does this quite well:\n\n \n\n .cat-grid .row {\n\n grid-template-columns: 1fr 1fr 0.5fr;\n\n align-items: center;\n\n }\n\n .cat-grid .row {\n\n border-bottom: 1px solid rgba(0, 0, 0, 0.1);\n\n }\n\n .cat-grid .row:last-child {\n\n border-bottom: none;\n\n }\n\n \n\n#### Input\n\n#### Output\n\n[Live demo by Christopher Hesse](https://affinelayer.com/pixsrv/)\n\n When supplied with unusual constraints, pix2pix can produce\n\n striking images:\n\n \n\n[Bread cat by Ivy Tsai](https://twitter.com/ivymyt/status/834174687282241537)\n\n[Cat beholder by Marc Hesse](https://affinelayer.com/pixsrv/beholder.jpg)\n\nSpiral cat\n\n This is perhaps not high creativity of a Picasso-esque level. But\n\n it is still surprising. It's certainly unlike images most of us\n\n have ever seen before. How does pix2pix and its human user achieve\n\n this kind of result?\n\n \n\n Unlike our earlier examples, pix2pix is not a generative model.\n\n This means it does not have a latent space or a corresponding\n\n space of natural images. Instead, there is a neural network,\n\n called, confusingly, a generator – this is not meant in the\n\n same sense as our earlier generative models – that takes as\n\n input the constraint image, and produces as output the filled-in\n\n image.\n\n \n\n The generator is trained adversarially against a discriminator\n\n network, whose job is to distinguish between pairs of images\n\n generated from real data, and pairs of images generated by the\n\n generator.\n\n \n\n While this sounds similar to a conventional GAN, there is a\n\n crucial difference: there is no latent vector input to the\n\n generatorActually, Isola *et\n\n al* experimented with adding such a latent vector to\n\n the generator, but found it made little difference to the\n\n resulting images.. Rather, there is simply an input\n\n constraint. 
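To make the setup just described concrete, here is a minimal sketch of a pix2pix-style training step. This is not the authors' code: the networks are toy stand-ins (the published system uses a U-Net generator, a patch-based discriminator, and an L1 reconstruction term alongside the adversarial loss), and PyTorch and the specific hyperparameters are our own choices for illustration.

```python
# Minimal pix2pix-style sketch: a "generator" maps a constraint image directly
# to an output image (no latent vector), trained against a discriminator that
# judges (constraint, output) pairs.
import torch
import torch.nn as nn

gen = nn.Sequential(                      # constraint image -> filled-in image
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())
disc = nn.Sequential(                     # (constraint, image) pair -> real/fake score
    nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1))

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)

def train_step(edges, photo):
    """One adversarial update on a batch of (constraint, target) image pairs."""
    fake = gen(edges)

    # Discriminator: real pairs -> 1, generated pairs -> 0.
    d_real = disc(torch.cat([edges, photo], dim=1))
    d_fake = disc(torch.cat([edges, fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator, plus an L1 term pulling toward the target
    # (the weight 100.0 is illustrative only).
    d_fake = disc(torch.cat([edges, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, photo)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```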
When a human inputs a constraint unlike anything seen\n\n in training, the network is forced to improvise, doing the best it\n\n can to interpret that constraint according to the rules it has\n\n previously learned. The creativity is the result of a forced\n\n merger of knowledge inferred from the training data, together with\n\n novel constraints provided by the user. As a result, even\n\n relatively simple ideas – like the bread- and beholder-cats\n\n – can result in striking new types of images, images not\n\n within what we would previously have considered the space of\n\n natural images.\n\n \n\nConclusion\n\n----------\n\n It is conventional wisdom that AI will change how we interact with\n\n computers. Unfortunately, many in the AI community greatly\n\n underestimate the depth of interface design, often regarding it as\n\n a simple problem, mostly about making things pretty or\n\n easy-to-use. In this view, interface design is a problem to be\n\n handed off to others, while the hard work is to train some machine\n\n learning system.\n\n \n\n This view is incorrect. At its deepest, interface design means\n\n developing the fundamental primitives human beings think and\n\n create with. This is a problem whose intellectual genesis goes\n\n back to the inventors of the alphabet, of cartography, and of\n\n musical notation, as well as modern giants such as Descartes,\n\n Playfair, Feynman, Engelbart, and Kay. It is one of the hardest,\n\n most important and most fundamental problems humanity grapples\n\n with.\n\n \n\n As discussed earlier, in one common view of AI our computers will\n\n continue to get better at solving problems, but human beings will\n\n remain largely unchanged. In a second common view, human beings\n\n will be modified at the hardware level, perhaps directly through\n\n neural interfaces, or indirectly through whole brain emulation.\n\n \n\n We've described a third view, in which AIs actually change\n\n humanity, helping us invent new cognitive technologies, which\n\n expand the range of human thought. Perhaps one day those\n\n cognitive technologies will, in turn, speed up the development of\n\n AI, in a virtuous feedback cycle:\n\n \n\n It would not be a Singularity in machines. Rather, it would be a\n\n Singularity in humanity's range of thought. Of course, this loop\n\n is at present extremely speculative. The systems we've described\n\n can help develop more powerful ways of thinking, but there's at\n\n most an indirect sense in which those ways of thinking are being\n\n used in turn to develop new AI systems.\n\n \n\n Of course, over the long run it's possible that machines will\n\n exceed humans on all or most cognitive tasks. Even if that's the\n\n case, cognitive transformation will still be a valuable end, worth\n\n pursuing in its own right. There is pleasure and value involved\n\n in learning to play chess or Go well, even if machines do it\n\n better. And in activities such as story-telling the benefit often\n\n isn't so much the artifact produced as the process of construction\n\n itself, and the relationships forged. There is intrinsic value in\n\n personal change and growth, apart from instrumental benefits.\n\n \n\n The interface-oriented work we've discussed is outside the\n\n narrative used to judge most existing work in artificial\n\n intelligence. It doesn't involve beating some benchmark for a\n\n classification or regression problem. It doesn't involve\n\n impressive feats like beating human champions at games such as\n\n Go. 
Rather, it involves a much more subjective and\n\n difficult-to-measure criterion: is it helping humans think and\n\n create in new ways?\n\n \n\n This creates difficulties for doing this kind of work,\n\n particularly in a research setting. Where should one publish?\n\n What community does one belong to? What standards should be\n\n applied to judge such work? What distinguishes good work from\n\n bad?\n\n \n\n We believe that over the next few years a community will emerge\n\n which answers these questions. It will run workshops and\n\n conferences. It will publish work in venues such as Distill. Its\n\n standards will draw from many different communities: from the\n\n artistic and design and musical communities; from the mathematical\n\n community's taste in abstraction and good definition; as well as\n\n from the existing AI and IA communities, including work on\n\n computational creativity and human-computer interaction. The\n\n long-term test of success will be the development of tools which\n\n are widely used by creators. Are artists using these tools to\n\n develop remarkable new styles? Are scientists in other fields\n\n using them to develop understanding in ways not otherwise\n\n possible? These are great aspirations, and require an approach\n\n that builds on conventional AI work, but also incorporates very\n\n different norms.\n\n \n\n", "bibliography_bib": [{"title": "Augmenting Human Intellect: A Conceptual Framework"}, {"title": "deeplearn.js font demo"}, {"title": "Auto-encoding variational Bayes"}, {"title": "Analyzing 50k fonts using deep neural networks"}, {"title": "Autoencoding beyond pixels using a learned similarity metric"}, {"title": "Sampling Generative Networks"}, {"title": "Writing with the Machine"}, {"title": "Automatic chemical design using a data-driven continuous representation of molecules"}, {"title": "Generative visual manipulation on the natural image manifold"}, {"title": "Generative adversarial nets"}, {"title": "A Neural Representation of Sketch Drawings"}, {"title": "Real-time human interaction with supervised learning algorithms for music composition and performance"}, {"title": "TopoSketch: Drawing in Latent Space"}, {"title": "Taking The Robots To Design School, Part 1"}, {"title": "Hierarchical Variational Autoencoders for Music"}, {"title": "Computational creativity: the final frontier?"}, {"title": "Interactive machine learning: letting users build classifiers"}, {"title": "Eccentric School of Painting Increased Its Vogue in the Current Art Exhibition — What Its Followers Attempt to Do"}, {"title": "Genius: The Life and Science of Richard Feynman"}, {"title": "Thought as a Technology"}, {"title": "InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets"}, {"title": "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks"}, {"title": "Image-to-Image Translation with Conditional Adversarial Networks"}], "id": "7a93cf004149a62f56db3a137c077c68"} +{"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Activation Atlas", "authors": ["Shan Carter", "Zan Armstrong", "Ludwig Schubert", "Ian Johnson", "Chris Olah"], "date_published": "2019-03-06", "abstract": "Neural networks can learn to classify images more accurately than any system humans directly design. This raises a natural question: What have these networks learned that allows them to classify images so well? 
", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00015", "text": "\n\nIntroduction\n\n------------\n\nNeural networks can learn to classify images more accurately than \n\nany system humans directly design. This raises a natural question: What \n\nhave these networks learned that allows them to classify images so well?\n\n \n\nFeature visualization is a thread of research that tries to \n\nanswer this question by letting us \"see through the eyes\" of the network\n\n and trying to determine what they respond to. Because neurons don't \n\n But there was still a problem — what combinations of neurons should we \n\n \n\nThese approaches are exciting because they can make the hidden \n\nlayers of networks comprehensible. These layers are the heart of how \n\nneural networks outperform more traditional approaches to machine \n\nlearning and historically, we've had little understanding of what \n\n \n\nUnfortunately, visualizing activations has a major \n\nweakness — it is limited to seeing only how the network sees a single \n\ninput. Because of this, it doesn't give us a big picture view of the \n\nnetwork. When what we want is a map of an entire forest, inspecting one \n\ntree at a time will not suffice.\n\n \n\n gives a global view of a dataset by taking each image and organizing\n\n them by their activation values from a neural network.\n\n Showing which images the model sees as similar does help us infer \n\nsome ideas about what features the network is responding to,\n\n but feature visualization makes those connections much more \n\nexplicit.\n\n Nguyen, et al use t-SNE to make more diverse neuron visualizations,\n\n \n\nIn this article we introduce *activation atlases* to this \n\nquiver of techniques. (An example is shown at the top of this article.) \n\nBroadly speaking, we use a technique similar to the one in CNN codes, \n\nbut instead of showing input data, we show feature visualizations of \n\naveraged activations. By combining these two techniques, we can get the \n\nadvantages of each in one view — a global map seen through the eyes of \n\nthe network.\n\n \n\n would give us the global view of a network that we are seeking. In \n\npractice, however, neurons are rarely used by the network in isolation, \n\nand it may be difficult to understand them that way. As an analogy, \n\nwhile the 26 letters in the alphabet provide a basis for English, seeing\n\n how letters are commonly combined to make words gives far more insight \n\ninto the concepts that can be expressed than the letters alone. 
\n\nSimilarly, activation atlases give us a bigger picture view by showing \n\ncommon combinations of neurons.\n\n \n\n#### Individual Neurons\n\n![](Activation%20Atlas_files/overview-neuron.jpg)\n\n![diagram](Activation%20Atlas_files/manifold-1.jpg)\n\n make hidden layers somewhat meaningful, but misses interactions between\n\n neurons — it only shows us one-dimensional, orthogonal probes of the \n\nhigh-dimensional activation space.\n\n#### Pairwise Interactions\n\n![](Activation%20Atlas_files/overview-pairwise.jpg)\n\n![diagram](Activation%20Atlas_files/manifold-2.jpg)\n\n reveal interaction effects, but they only show two-dimensional slices \n\nof a space that has hundreds of dimensions, and many of the combinations\n\n are not realistic.\n\n#### Spatial Activations\n\n![](Activation%20Atlas_files/activation-grid.png)\n\n![diagram](Activation%20Atlas_files/manifold-3.jpg)\n\n show us important combinations of many neurons by sampling the \n\nsub-manifold of likely activations, but they are limited to those that \n\noccur in the given example image.\n\n#### Activation Atlas\n\n![](Activation%20Atlas_files/overview-atlas.jpg)\n\n![diagram](Activation%20Atlas_files/manifold-4.jpg)\n\n These atlases not only reveal visual abstractions within a model, \n\nbut later in the article we will show that they can reveal high-level \n\nmisunderstandings in a model that can be exploited. For example, by \n\nlooking at an activation atlas we will be able to see why a picture of a\n\n baseball can switch the classification of an image from \"grey whale\" to\n\n \"great white shark\".\n\n \n\n![great white shark / grey whale](Activation%20Atlas_files/whale-baseball.png)\n\n| | | |\n\n| --- | --- | --- |\n\n| 1. | grey whale | 91.0% |\n\n| 2. | killer whale | 7.5% |\n\n| 3. | great white shark | 0.7% |\n\n| 4. | gar | 0.4% |\n\n| | | |\n\n| --- | --- | --- |\n\n| 1. | great white shark | 66.7% |\n\n| 2. | baseball | 7.4% |\n\n| 3. | grey whale | 4.1% |\n\n| 4. | sombrero | 3.2% |\n\nOf course, activation atlases do have limitations. In particular, \n\nthey're dependent on the distribution of the data we choose to sample \n\nactivations from (in our examples, we use one million images chosen at \n\nrandom from the ImageNet dataset \n\n training data). As a result, they will only show the activations that \n\nexist within the distribution of the sample data. However, while it's \n\nimportant to be aware of these limitations — we'll discuss them in much \n\nmore depth later! — Activation Atlases still give us a new kind of \n\noverview of what neural networks can represent.\n\nLooking at a Single Image\n\n-------------------------\n\nBefore we dive into Activation Atlases, let's briefly review how we\n\n use feature visualization to make activation vectors meaningful (\"see \n\n \n\n (also known as \"GoogLeNet\").\n\n \n\n InceptionV1 consists of a number of layers, which we refer to as \n\n\"mixed3a\", \"mixed3b\", \"mixed4a\", etc., and sometimes shortened to just \n\n\"3a\". Each layer successively builds on the previous layers. \n\n \n\n![model](Activation%20Atlas_files/model.svg)\n\nTo visualize how InceptionV1 sees an image, the first step is to \n\nfeed the image into the network and run it through to the layer of \n\ninterest. Then we collect the activations — the numerical values of how \n\nmuch each neuron fired. 
If a neuron is excited by what it is shown, its \n\nactivation value will be positive.\n\n \n\nUnfortunately these vectors of activation values are just \n\nvectors of unitless numbers and not particularly interpretable by \n\n comes in.\n\n Roughly speaking, we can think of feature visualization as creating an\n\n idealized image of what the network thinks would produce a particular \n\nactivation vector. Whereas we normally use a network to transform an \n\nimage into an activation vector, in feature visualization we go in the \n\nopposite direction. Starting with an activation vector at a particular \n\nlayer, we create an image through an iterative optimization process.\n\n \n\n there is not just one activation vector per layer per image.\n\n This means that the same neurons are run on each patch of the previous\n\n layer.\n\n Thus, when we pass an entire image through the network, each neuron \n\nwill be evaluated hundreds of times, once for each overlapping patch of \n\nthe image. We can consider the vectors of how much each neuron fired for\n\n each patch separately.\n\n \n\nThe result is a grid of feature visualizations, one for each\n\n patch. This shows us how the network sees different parts of the input \n\nimage.\n\n \n\n![input image](Activation%20Atlas_files/dogcat.jpg)\n\nInput image from ImageNet.\n\n![activation grid](Activation%20Atlas_files/dogcat-grid.jpg)\n\nActivation grid from InceptionV1, layer mixed4d.\n\nAggregating Multiple Images\n\n---------------------------\n\nActivation grids show how the network sees a single image, but what\n\n if we want to see more? What if we want to understand how it reacts to \n\nmillions of images?\n\n \n\nOf course, we could look at individual activation grids for \n\nthose images one by one. But looking at millions of examples doesn't \n\nscale, and human brains aren't good at comparing lots of examples \n\nwithout structure. In the same way that we need a tool like a histogram \n\nin order to understand millions of numbers, we need a way to aggregate \n\nand organize activations if we want to see meaningful patterns in \n\nmillions of them.\n\n \n\n This gives us one million activation vectors. Each of the vectors is \n\nhigh-dimensional, perhaps 512 dimensions! With such a complex set of \n\ndata, we need to organize and aggregate it if we want a big picture \n\nview.\n\n \n\n can project high-dimensional data like our collection of activation \n\nvectors into useful 2D layouts, preserving some of the local structure \n\nof the original space. This takes care of organizing our activation \n\nvectors, but we also need to aggregate into a more manageable number of \n\nelements — one million dots would be hard to interpret. We'll do this by\n\n drawing a grid over the 2D layout we created with dimensionality \n\nreduction. For each cell in our grid, we average all the activations \n\nthat lie within the boundaries of that cell, and use feature \n\nvisualization to create an iconic representation.\n\n \n\n[0.700, 0.498, …]\n\n [0.818, 0.437, …]\n\n [0.421, 0.027, …]\n\n \n\nThe activations are fed through UMAP to \n\nreduce them to two dimensions. They are then plotted, with similar \n\nactivations placed near each other.\n\nWe then draw a grid and average the \n\nactivations that fall within a cell and run feature inversion on the \n\naveraged activation. 
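As a rough sketch of this aggregation step (not the authors' exact pipeline), the code below assumes `activations` is an (N, D) array of activation vectors already collected from one layer; rendering each averaged vector with feature visualization is omitted.

```python
# Reduce many activation vectors to 2D, overlay a grid, and average the
# activations that land in each cell.
import numpy as np
import umap  # pip install umap-learn

def atlas_cells(activations, grid_size=20):
    # 2D layout of the activation vectors.
    xy = umap.UMAP(n_components=2).fit_transform(activations)

    # Normalize the layout into the unit square, then bucket into a grid.
    xy = (xy - xy.min(axis=0)) / (xy.max(axis=0) - xy.min(axis=0))
    col = np.minimum((xy[:, 0] * grid_size).astype(int), grid_size - 1)
    row = np.minimum((xy[:, 1] * grid_size).astype(int), grid_size - 1)

    cells = {}
    for r in range(grid_size):
        for c in range(grid_size):
            mask = (row == r) & (col == c)
            if mask.any():
                # Averaged activation for this cell, plus the count of
                # activations it covers (usable later for sizing the icon).
                cells[(r, c)] = (activations[mask].mean(axis=0), int(mask.sum()))
    return cells
```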
We also optionally size the grid cells according to\n\n the density of the number of activations that are averaged within.\n\n However, we use a slightly non-standard objective.\n\n We find it helpful to use an objective that emphasizes angle more \n\nheavily by multiplying the dot product by cosine similarity, leading to \n\nobjectives of the following form:\n\n We don't yet fully understand this phenomenon.\n\n \n\n For each activation vector, we also compute an *attribution* \n\nvector.\n\n The attribution vector has an entry for each class, and approximates\n\n the amount that the activation vector influenced the logit for each \n\nclass.\n\n Attribution vectors generally depend on the surrounding context.\n\n This is similar to Grad-CAM ,\n\n but without the spatial averaging of the gradient.\n\n Instead, we reduce noise in the gradient by using a continuous \n\nrelaxation of the gradient for max pooling in computing the gradient (as\n\n in ).\n\n \n\n This average attribution can be thought of as showing what classes \n\nthat cell tends to support, marginalizing over contexts.\n\n At early layers, the average attribution is very small and the top \n\nclasses are fairly arbitrary because low-level visual features like \n\ntextures tend to not be very discriminative without context.\n\n \n\n \n\nGrid size:\n\n 20x20\n\n 40x40\n\n 80x80\n\n 160x160\n\n attribution labels\n\nThis atlas can be a bit overwhelming at first \n\nglance — there's a lot going on! This diversity is a reflection of the \n\nvariety of abstractions and concepts the model has developed. Let's take\n\n a tour to examine this atlas in more depth.\n\n \n\nIf we look at the top-left of the atlas, we see images which \n\nlook like animal heads. There is some differentiation between different \n\ntypes of animals, but it seems to be more a collection of elements of \n\ngeneric mammals — eyes, fur, noses — rather than a collection of \n\ndifferent classes of animals. We've also added labels that show which \n\nclass each averaged activation most contributes to. Please note, in some\n\n areas of a layer this early in the network these attribution labels can\n\n![thumbnail for mixed4c](Activation%20Atlas_files/thumbnail-mixed4c.jpg)\n\n \n\n![thumbnail for mixed4c](Activation%20Atlas_files/thumbnail-mixed4c.jpg)\n\n \n\n![thumbnail for mixed4c](Activation%20Atlas_files/thumbnail-mixed4c.jpg)\n\nBelow the feet we start to lose any identifiable parts of \n\nanimals, and see isolated grounds and floors. We see attribution toward \n\nenvironments like \"sandbar\" and also toward things that are found on the\n\n ground, like \"doormat\" or \"ant\".\n\n \n\n![thumbnail for mixed4c](Activation%20Atlas_files/thumbnail-mixed4c.jpg)\n\nThese sandy, rocky backgrounds slowly blend into beaches and\n\n bodies of water. Here we see lakes and oceans, both above and below \n\nwater. Though the network does have certain classes like \"seashore\", we \n\nsee attribution toward many sea animals, without any visual references \n\nto the animals themselves. While not unexpected, it is reassuring to see\n\n that the activations that are used to identify the sea for the class \n\n\"seashore\" are the same ones used when classifying \"starfish\" or \"sea \n\nlion\". 
There is also no real distinction at this point between lakes and\n\n ocean — \"lakeside\" and \"hippopotamus\" attributions are intermingled \n\nwith \"starfish\" and \"stingray\".\n\n \n\n![thumbnail for mixed4c](Activation%20Atlas_files/thumbnail-mixed4c.jpg)\n\nNow let's jump to the other side of the atlas, where we can \n\nsee many variations of text detectors. These will be useful when \n\nidentifying classes such as \"menu\", \"web site\" or \"book jacket\".\n\n \n\n![thumbnail for mixed4c](Activation%20Atlas_files/thumbnail-mixed4c.jpg)\n\nMoving upward, we see many variations of people. There are \n\nvery few classes that specifically identify people in ImageNet, but \n\npeople are present in lots of the images. We see attribution toward \n\nthings people use (\"hammer\", \"flute\"), clothes that people wear (\"bow \n\ntie\", \"maillot\") and activities that people participate in \n\n(\"basketball\"). There is a uniformity to the skin color in these \n\nvisualizations which we suspect is a reflection of the distribution of \n\nthe data used for training. (You can browse the ImageNet training data \n\n \n\n![thumbnail for mixed4c](Activation%20Atlas_files/thumbnail-mixed4c.jpg)\n\nAnd finally, moving back to the left, we can see round food \n\nand fruit organized mostly by colors — we see attribution toward \n\n\"lemon\", \"orange\" and \"fig\".\n\n \n\n![thumbnail for mixed4c](Activation%20Atlas_files/thumbnail-mixed4c.jpg)\n\nWe can also trace curved paths through this manifold that \n\nwe've created. Not only are regions important, but certain movements \n\nthrough the space seem to correspond to human interpretable qualities. \n\nWith the fruit, we can trace a path that seems to correlate with the \n\nsize and number of fruits in the frame.\n\n \n\n![thumbnail for mixed4c](Activation%20Atlas_files/thumbnail-mixed4c.jpg)\n\nSimilarly, with people, we can trace a path that seems to \n\ncorrespond to how many people are in the frame, whether it's a single \n\nperson or a crowd.\n\n \n\n![thumbnail for mixed4c](Activation%20Atlas_files/thumbnail-mixed4c.jpg)\n\n \n\n![thumbnail for mixed4c](Activation%20Atlas_files/thumbnail-mixed4c.jpg)\n\nIn the plants region, we can trace a path that seems to \n\ncorrespond to how blurry the plant is. This could possibly be used to \n\ndetermine relative size of objects because of the typical focal lengths \n\nof cameras. Close up photos of small insects have more opportunity for \n\nblurry background foliage than photos of larger animals, like monkeys.\n\n \n\n![thumbnail for mixed4c](Activation%20Atlas_files/thumbnail-mixed4c.jpg)\n\nIt is important to note that these paths are constructed \n\nafter the fact in the low-dimensional projection. They are smooth paths \n\nin this reduced projection but we don't necessarily know how the paths \n\noperate in the original higher-dimensional activation space.\n\nLooking at Multiple Layers\n\n--------------------------\n\nIn the previous section we focused on one layer of the network, \n\nmixed4c, which is in the middle of our network. Convolutional networks \n\nare generally deep, consisting of many layers that progressively build \n\nup more powerful abstractions. 
In order to get a holistic view, we must \n\nlook at how the model's abstractions develop over several layers.\n\n \n\n![model](Activation%20Atlas_files/model2.svg)\n\nTo start, let's compare three layers from different areas of\n\n the network to try to get a sense for the different personalities of \n\n \n\n .vertical-layer-comparison {\n\n display: grid;\n\n grid-column: page;\n\n grid-template-columns: 1fr 1fr 1fr;\n\n grid-gap: 16px 20px;\n\n }\n\n .vertical-layer-comparison h4 {\n\n margin-bottom: 0;\n\n padding-bottom: 4px;\n\n width: 100%;\n\n border-bottom: solid #CCC 1px;\n\n }\n\n .vertical-layer-comparison.progress {\n\n display: block;\n\n max-width: 704px;\n\n padding: 0 20px;\n\n margin: 0 auto;\n\n }\n\n .vertical-layer-comparison.progress > div {\n\n display: grid;\n\n grid-template-columns: 2.5fr 1fr;\n\n grid-gap: 20px;\n\n margin-bottom: 16px;\n\n }\n\n .vertical-layer-comparison.progress h4 {\n\n margin-top: 0;\n\n margin-bottom: 16px;\n\n }\n\n .vertical-layer-comparison.progress .figcaption {\n\n margin-top: 8px;\n\n }\n\n \n\n#### Mixed3b\n\n#### Mixed4c\n\n#### Mixed5b\n\n![](Activation%20Atlas_files/layers-plant-0.jpg)\n\n![thumbnail for mixed3b](Activation%20Atlas_files/thumbnail-mixed3b.jpg)\n\nYou'll immediately notice that \n\nthe early layer is very nonspecific in comparison to the others. The \n\nicons that emerge are of patterns and splotches of color. It is \n\nsuggestive of the final class, but not particularly evocative.\n\n![](Activation%20Atlas_files/layers-plant-1.jpg)\n\n![thumbnail for mixed4c](Activation%20Atlas_files/thumbnail-mixed4c.jpg)\n\nBy the middle layer, icons \n\ndefinitely resemble leaves, but they could be any type of plant. \n\nAttributions are focused on plants, but are a little all over the board.\n\n![](Activation%20Atlas_files/layers-plant-2.jpg)\n\n![thumbnail for mixed5b](Activation%20Atlas_files/thumbnail-mixed5b.jpg)\n\nHere we see foliage with \n\ntextures that are specific to cabbage, and curved into rounded balls. \n\nThere are full heads of cabbage rather than individual leaves.\n\nAs you move through the network, the later layers seem to \n\nget much more specific and complex. This is to be expected, as each \n\nlayer builds its activations on top of the preceding layer's \n\nactivations. The later layers also tend to have larger receptive fields \n\nthan the ones that precede them (meaning they are shown larger subsets \n\nof the image) so the concepts seem to encompass more of the whole of \n\nobjects.\n\n \n\nThere is another phenomenon worth noting: not only are concepts\n\n being refined, but new concepts are appearing out of combinations of \n\nold ones. Below, you can see how sand and water are distinct concepts in\n\n a middle layer, mixed4c, both with strong attributions to the \n\nclassification of \"sandbar\". Contrast this with a later layer, mixed5b, \n\nwhere the two ideas seem to be fused into one activation.\n\n \n\n#### Mixed4c\n\n#### Mixed5b\n\n![](Activation%20Atlas_files/layers-water-0.jpg)\n\n![thumbnail for mixed4c](Activation%20Atlas_files/thumbnail-mixed4c.jpg)\n\n![](Activation%20Atlas_files/layers-water-1.jpg)\n\n![thumbnail for mixed4c](Activation%20Atlas_files/thumbnail-mixed4c.jpg)\n\n…and a separate area of **watery textures**.\n\n![](Activation%20Atlas_files/layers-water-2.jpg)\n\n![thumbnail for mixed5b](Activation%20Atlas_files/thumbnail-mixed5b.jpg)\n\nFinally, if we zoom out a little, we can see how the broader\n\n shape of the activation space changes from layer to layer. 
By looking \n\nat similar regions in several consecutive layers, we can see concepts \n\ngetting refined and differentiated — In mixed4a we see very vague, \n\ngeneric blob, which gets refined into much more specific \"peninsulas\" by\n\n mixed4e.\n\n![mixed4a](Activation%20Atlas_files/progress-mixed4a.jpg)\n\n#### Mixed4a\n\n In mixed4a there is a vague \"mammalian\" area.\n\n \n\n![mixed4b](Activation%20Atlas_files/progress-mixed4b.jpg)\n\n#### Mixed4b\n\n \n\n![mixed4c](Activation%20Atlas_files/progress-mixed4c.jpg)\n\n#### Mixed4c\n\n \n\n![mixed4d](Activation%20Atlas_files/progress-mixed4d.jpg)\n\n#### Mixed4d\n\n The specialization continues in mixed4d.\n\n \n\n![mixed4e](Activation%20Atlas_files/progress-mixed4e.jpg)\n\n#### Mixed4e\n\n And further still in mixed4e.\n\n \n\nBelow you can browse many more of the layers of InceptionV1. You can compare the\n\n [curved edge detectors of mixed4a](#) with the\n\n [bowls and cups of mixed5b](#). Mixed4b has some\n\n [fabrics](#). In later layers, you'll see\n\n [specific types of clothing](#).\n\n \n\nClass Filter\n\n------------\n\nshow all\n\nfireboat\n\nstreetcar\n\nspeedboat\n\nscuba diver\n\nsnorkel\n\ndalmatian\n\nLabrador retriever\n\nSiberian husky\n\nEnglish setter\n\nDoberman\n\nTibetan terrier\n\nsea lion\n\ngreat white shark\n\ngoldfish\n\nlionfish\n\noil filter\n\ndigital clock\n\ndigital watch\n\ncomic book\n\nspider web\n\nbarn spider\n\nhead cabbage\n\nartichoke\n\nyellow lady's slipper\n\ncroquet ball\n\nvolleyball\n\nparachute\n\nsleeping bag\n\nlakeside\n\ncoral reef\n\nseashore\n\nvine snake\n\nwater snake\n\nIndian cobra\n\ngreen mamba\n\ngoldfinch\n\nhouse finch\n\nvulture\n\nlorikeet\n\npeacock\n\nLayer\n\n-----\n\nmixed3a\n\nmixed3b\n\nmixed4a\n\nmixed4b\n\nmixed4c\n\nmixed4d\n\nmixed4e\n\nmixed5a\n\nmixed5b\n\n![thumbnail for mixed4e](Activation%20Atlas_files/thumbnail-mixed4e.jpg)\n\n attribution labels\n\n show tooltip\n\n scroll to zoom\n\n### Grid size\n\n 20x20\n\n 40x40\n\n 80x80\n\n 160x160\n\n 320x320\n\n auto\n\nauto threshold: 0.5\n\n### Icon density: 1\n\n### Class filter\n\n positive influence\n\n negative influence\n\nIntensity: 1\n\n### Location\n\nx: 0.500\n\ny: 0.500\n\nscale: 1.000\n\nfewer options\n\nFocusing on a Single Classification\n\n-----------------------------------\n\nLooking at an atlas of all activations can be a little \n\noverwhelming, especially when you're trying to understand how the \n\nnetwork goes about ranking one particular class. For instance, let's \n\ninvestigate how the network classifies a \"fireboat\".\n\n \n\n![fireboat](Activation%20Atlas_files/fireboat-01.jpg)\n\nAn image labeled \"fireboat\" from ImageNet.\n\nWe'll start by looking at an atlas for the last layer, \n\nmixed5b. Instead of showing all the activations, however, we'll \n\ncalculate the amount that each activation contributes toward a \n\nclassification of \"fireboat\" and then map that value to the opacity of \n\nthe activation icon.In the case of \n\nmixed5b, determining this contribution is fairly straightforward because\n\n the relationship between activations at mixed5b and the logit values is\n\n linear. When there are multiple layers between our present one and the \n\noutput — and as a result, the relationship is non-linear — it's a little\n\n less clear what to do. In this article, we take the simple approach of \n\nforming a linear approximation of these future layers and use it to \n\napproximate the effect of our activations. 
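Concretely, the highlighting might be computed along the following lines. This is a sketch under the assumption that `w_class` is a vector linearly mapping this layer's activations to the logit of the class of interest (exact at mixed5b, a linear approximation of the later layers otherwise); the normalization into opacities is our own choice.

```python
import numpy as np

def cell_opacities(cell_activations, w_class):
    """cell_activations: (num_cells, D); returns per-cell opacities in [0, 1]."""
    contribution = cell_activations @ w_class   # per-cell attribution to the class
    # Negative or negligible contributions become fully transparent;
    # the strongest contributors get full opacity.
    scaled = np.clip(contribution, 0.0, None)
    return scaled / scaled.max() if scaled.max() > 0 else scaled
```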
The areas that \n\ncontribute a lot toward a classification of \"fireboat\" will be clearly \n\nvisible, whereas the areas that contribute very little (or even \n\ncontribute negatively) will be completely transparent.\n\n \n\n![](Activation%20Atlas_files/focus-1-1.jpg)\n\n![](Activation%20Atlas_files/focus-1-2.jpg)\n\nWhen we map opacity to the \n\namount that each activation contributes to \"fireboat\" in the layer \n\nmixed5b, we see a main cluster of icons showing red boats and splashing,\n\n spraying water. While there are some stray areas elsewhere, it seems \n\nthat this is region of the atlas that is dedicated specifically to \n\nclassifying red boats with splashing water nearby.\n\nThe layer we just looked at, mixed5b, is located just before\n\n the final classification layer so it seems reasonable that it would be \n\nclosely aligned with the final classes. Let's look at a layer a little \n\nearlier in the network, say mixed4d, and see how it differs.\n\n \n\n![](Activation%20Atlas_files/focus-2-1.jpg)\n\n3\n\n2\n\n1\n\n![](Activation%20Atlas_files/focus-2-4.jpg)\n\n1\n\n![](Activation%20Atlas_files/focus-2-3.jpg)\n\n2\n\n![](Activation%20Atlas_files/focus-2-2.jpg)\n\n3\n\nIn mixed4d we see we see the \n\nattribution toward \"fireboat\" is high in several clusters located in \n\ndifferent positions around the atlas. One is very focused on windows, \n\nanother on geysers and splashing water, and yet another on crane-like \n\nobjects.\n\nHere we see a much different pattern. If we look at some \n\nmore input examples, this seems entirely reasonable. It's almost as if \n\nwe can see a collection of the component concepts the network will use \n\nin later layers to classify \"fireboat\". Windows + crane + water = \n\n\"fireboat\".\n\n \n\n![fireboat](Activation%20Atlas_files/fireboat-10.jpg)\n\n![fireboat](Activation%20Atlas_files/fireboat-02.jpg)\n\n![fireboat](Activation%20Atlas_files/fireboat-03.jpg)\n\nOne of the clusters, the one with windows, has strong \n\nattribution to \"fireboat\", but taken on its own, it has an even stronger\n\n attribution toward \"streetcar\". So, let's go back to the atlas at \n\nmixed4d, but isolate \"streetcar\" and compare it to the patterns seen for\n\n \"fireboat\". Let's look more closely at the four highlighted areas: the \n\nthree areas we highlighted for fireboat plus one additional area that is\n\n highly activated for streetcars.\n\n \n\n#### Activations for Fireboat\n\n![](Activation%20Atlas_files/focus-2-1.jpg)\n\n1234\n\n#### Activations for Streetcar\n\n![](Activation%20Atlas_files/focus-3-1.jpg)\n\n1234\n\nIf we zoom in, we can get a better look at what \n\ndistinguishes the two classifications at this layer. 
(We've \n\ncherry-picked these examples for brevity, but you can explore all the \n\nlayers and activations in detail in a explorable playground below.)\n\n \n\n#### Fireboat\n\n![](Activation%20Atlas_files/focus-2-2.jpg)\n\n1![](Activation%20Atlas_files/focus-2-3.jpg)\n\n2![](Activation%20Atlas_files/focus-2-4.jpg)\n\n3![](Activation%20Atlas_files/focus-2-5.jpg)\n\n4\n\n#### Streetcar\n\n![](Activation%20Atlas_files/focus-3-2.jpg)\n\n1![](Activation%20Atlas_files/focus-3-3.jpg)\n\n2![](Activation%20Atlas_files/focus-3-4.jpg)\n\n3![](Activation%20Atlas_files/focus-3-5.jpg)\n\n4\n\n\"Fireboat\" activations have \n\nmuch stronger attributions from water than \"streetcar\", where there is \n\nvirtually no positive evidence.\n\nIf we look at a couple of input examples, we can see how \n\nbuildings and water backgrounds are an easy way to differentiate between\n\n a \"fireboat\" and a \"streetcar\".\n\n \n\n![fireboat](Activation%20Atlas_files/fireboat-01.jpg)\n\n![streetcar](Activation%20Atlas_files/streetcar-01.jpg)\n\nImages from ImageNet\n\nBy isolating the activations that contribute strongly to one\n\n class and comparing it to other class activations, we can see which \n\nactivations are conserved among classes and which are recombined to form\n\n more complex activations in later layers. Below you can explore the \n\nactivation patterns of many classes in ImageNet through several layers \n\nof InceptionV1. You can even explore negative attributions, which we \n\nignored in this discussion.\n\n \n\nClass Filter\n\n------------\n\nshow all\n\nfireboat\n\nstreetcar\n\nspeedboat\n\nscuba diver\n\nsnorkel\n\ndalmatian\n\nLabrador retriever\n\nSiberian husky\n\nEnglish setter\n\nDoberman\n\nTibetan terrier\n\nsea lion\n\ngreat white shark\n\ngoldfish\n\nlionfish\n\noil filter\n\ndigital clock\n\ndigital watch\n\ncomic book\n\nspider web\n\nbarn spider\n\nhead cabbage\n\nartichoke\n\nyellow lady's slipper\n\ncroquet ball\n\nvolleyball\n\nparachute\n\nsleeping bag\n\nlakeside\n\ncoral reef\n\nseashore\n\nvine snake\n\nwater snake\n\nIndian cobra\n\ngreen mamba\n\ngoldfinch\n\nhouse finch\n\nvulture\n\nlorikeet\n\npeacock\n\nLayer\n\n-----\n\nmixed3a\n\nmixed3b\n\nmixed4a\n\nmixed4b\n\nmixed4c\n\nmixed4d\n\nmixed4e\n\nmixed5a\n\nmixed5b\n\n![thumbnail for mixed4d](Activation%20Atlas_files/thumbnail-mixed4d.jpg)\n\n attribution labels\n\n show tooltip\n\n scroll to zoom\n\n### Grid size\n\n 20x20\n\n 40x40\n\n 80x80\n\n 160x160\n\n 320x320\n\n auto\n\nauto threshold: 0.5\n\n### Icon density: 1\n\n### Class filter\n\n positive influence\n\n negative influence\n\nIntensity: 1\n\n### Location\n\nx: 0.500\n\ny: 0.500\n\nscale: 1.000\n\nfewer options\n\nFurther Isolating Classes\n\n-------------------------\n\nHighlighting the class-specific activations in situ of a full atlas\n\n is helpful for seeing how that class relates to the full space of what a\n\n network \"can see.\" However, if we want to really isolate the \n\nactivations that contribute to a specific class we can remove all the \n\nother activations rather than just dimming them, creating what we'll \n\n class activations we generally have better results from using t-SNE for\n\n the dimensionality reduction step rather than UMAP. 
We suspect it is \n\n \n\n#### Class Activation Atlas for \"snorkel\" from mixed5b\n\n scuba diver\n\n snorkel\n\n red fox\n\n ibex\n\n giant panda\n\n lesser panda\n\n zebra\n\n tiger\n\n fireboat\n\n dalmatian\n\n English setter\n\n great white shark\n\n goldfish\n\n lionfish\n\n comic book\n\n plunger\n\n knot\n\n tile roof\n\n pill bottle\n\n measuring cup\n\n crossword puzzle\n\n cauliflower\n\n head cabbage\n\n artichoke\n\n green mamba\n\n goldfinch\n\n house finch\n\n vulture\n\n chainlink fence\n\n wok\n\n grey whale\n\n waffle iron\n\n conch\n\n snowmobile\n\n sax\n\n French horn\n\n stage\n\n spider web\n\n barn spider\n\n grasshopper\n\n monarch\n\nA class activation atlas gives us a much clearer view of \n\nwhich detectors the network is using to rank a specific class. In the \n\n\"snorkel\" example we can clearly see ocean, underwater, and colorful \n\nmasks.\n\n \n\nIn the previous example, we are only showing those activations \n\nwhose strongest attribution is toward the class in question. This will \n\nshow us activations that contribute mostly to our class in question, \n\neven if their overall strength is low (like in background detectors). In\n\n some cases, though, there are strong correlations that we'd like to see\n\n (like fish with snorkelers). These activations on their own might \n\ncontribute more strongly to a different class than the one we're \n\ninterested in, but their existence can also contribute strongly to our \n\nclass of interest. For these we need to choose a different filtering \n\nmethod.\n\n#### \"snorkel\" filtered by top rank\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\nWe pluck only those \n\nactivations whose top attribution is toward the class in question. The \n\nresults are often much more focused and isolated, exclusive to the \n\nclass. Some are low magnitude, like backgrounds, and we miss \n\ncorrelations or concepts that are shared among many classes.\n\n#### \"snorkel\" filtered by overall magnitude\n\ndrake\n\nred-breasted merganser\n\nswimming trunks\n\nmaillot\n\nbalance beam\n\ndugong\n\nkiller whale\n\nsnorkel\n\nsnorkel\n\nsnorkel\n\ncanoe\n\nrock beauty\n\npuffer\n\ndugong\n\nsnorkel\n\nsnorkel\n\nbathing cap\n\nlifeboat\n\ncanoe\n\nrock beauty\n\nvine snake\n\nagama\n\nscuba diver\n\nsnorkel\n\nsnorkel\n\ngreen mamba\n\nsnorkel\n\nsnorkel\n\nbathing cap\n\nneck brace\n\nsnorkel\n\noxygen mask\n\ngasmask\n\nfootball helmet\n\nneck brace\n\nwater bottle\n\noxygen mask\n\noxygen mask\n\nsunglass\n\nballpoint\n\nswab\n\nsunglass\n\nsunglasses\n\nHere we sort all the \n\nactivations by the magnitude toward the class in question (independent \n\nof other classes) and take the top 2,000 activations. We see more \n\ncorrelated activations that could, on their own, contribute to another \n\nclassification (We label each activation with the class that it \n\ncontributes toward most). Notice that certain fish that are commonly \n\nseen while snorkeling now show up with that class, and a \"coffee mug\" \n\nnow shows up with \"crossword puzzle\". 
Some of them appear spurious, \n\nhowever.\n\n scuba diver\n\n \n\n snorkel\n\n \n\n red fox\n\n \n\n ibex\n\n \n\n giant panda\n\n \n\n lesser panda\n\n \n\n zebra\n\n \n\n tiger\n\n \n\n fireboat\n\n \n\n dalmatian\n\n \n\n English setter\n\n \n\n great white shark\n\n \n\n goldfish\n\n \n\n lionfish\n\n \n\n comic book\n\n \n\n plunger\n\n \n\n knot\n\n \n\n tile roof\n\n \n\n pill bottle\n\n \n\n measuring cup\n\n \n\n crossword puzzle\n\n \n\n cauliflower\n\n \n\n head cabbage\n\n \n\n artichoke\n\n \n\n green mamba\n\n \n\n goldfinch\n\n \n\n house finch\n\n \n\n vulture\n\n \n\n chainlink fence\n\n \n\n wok\n\n \n\n grey whale\n\n \n\n waffle iron\n\n \n\n conch\n\n \n\n snowmobile\n\n \n\n sax\n\n \n\n French horn\n\n \n\n stage\n\n \n\n spider web\n\n \n\n barn spider\n\n \n\n grasshopper\n\n \n\n monarch\n\n \n\nUsing the magnitude filtering method, let's try to compare two \n\nrelated classes and see if we can more easily see what distinguishes \n\nthem. (We could have instead used rank, or a combination of the two, but\n\n magnitude will suffice to show us a good variety of concepts).\n\n \n\n#### \"snorkel\"\n\n#### \"scuba diver\"\n\nIt can be a little hard to immediately understand all the \n\ndifferences between classes. To help make the comparison easier, we can \n\ncombine the two views into one. We'll plot the difference between the \n\nattributions of the \"snorkel\" and \"scuba diver\" horizontally, and use \n\nt-SNE to cluster similar activations vertically.\n\n \n\nIn this comparison we can see some bird-like creatures and \n\nclear tubes on the left, implying a correlation with \"snorkel\", and some\n\n shark-like creatures and something round, shiny, and metallic on the \n\nright, implying correlation with \"scuba diver\" (This activation has a \n\nstrong attribution toward the class \"steam locomotive\"). Let's take an \n\nimage from the ImageNet dataset labeled as \"snorkel\" and add something \n\nthat resembles this icon to see how it affects the classification \n\nscores.\n\n \n\n![snorkel / scuba diver](Activation%20Atlas_files/snorkel-train.png)\n\n| | | |\n\n| --- | --- | --- |\n\n| 1. | snorkel | 55.6% |\n\n| 2. | coral reef | 18.6% |\n\n| 3. | scuba diver | 13.5% |\n\n| 4. | loggerhead | 5.5% |\n\n| 5. | lionfish | 1.7% |\n\n| 6. | sea snake | 1.4% |\n\n![snorkel / scuba diver](Activation%20Atlas_files/snorkel-train-medium.png)\n\n| | | |\n\n| --- | --- | --- |\n\n| 1. | scuba diver | 71.3% |\n\n| 2. | coral reef | 14.7% |\n\n| 3. | snorkel | 4.5% |\n\n| 4. | lionfish | 3.2% |\n\n| 5. | sea snake | 2.4% |\n\n| 6. | loggerhead | 0.9% |\n\nBy adding a picture of one \n\nof the concepts seen in the visualization above we can change the \n\nclassification. With an added picture of a steam train the confidence of\n\n \"scuba diver\" classification rises and \"snorkel\" drops significantly.\n\n![snorkel / scuba diver](Activation%20Atlas_files/snorkel-train-large.png)\n\n| | | |\n\n| --- | --- | --- |\n\n| 1. | steam locomotive | 89.6% |\n\n| 2. | snowplow | 6.2% |\n\n| 3. | jeep | 0.8% |\n\n| 4. | tractor | 0.5% |\n\n| 5. | scuba diver | 0.4% |\n\n| 6. | passenger car | 0.3% |\n\nThe failure mode here seems to be that the model is using \n\nits detectors for the class \"steam locomotive\" to identify air tanks to \n\nhelp classify \"scuba diver\". We'll call these \"multi-use\" \n\nfeatures — detectors that react to very different concepts that are \n\nnonetheless visually similar. 
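A sketch of how such a comparison layout might be computed (our own paraphrase of the description above, assuming the per-activation attribution values for the two classes have already been calculated):

```python
# Spread activations horizontally by the difference between their attributions
# to two classes, and vertically by a 1-D t-SNE of the activation vectors so
# that similar detectors cluster together.
import numpy as np
from sklearn.manifold import TSNE

def comparison_layout(activations, attr_class_a, attr_class_b):
    """activations: (N, D); attr_class_*: (N,) attribution toward each class."""
    x = attr_class_a - attr_class_b    # more positive = more associated with class A
    y = TSNE(n_components=1).fit_transform(activations)[:, 0]
    return np.stack([x, y], axis=1)    # (N, 2) plot coordinates
```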
Let's look at the differences between a \n\n\"grey whale\" and a \"great white shark\" to see another example of this \n\nissue.\n\n \n\nIn this example we see another detector that seems to be \n\nplaying two roles: detecting red stitching on a baseball and a sharks's \n\n \n\n \n\n \n\n![great white shark / grey whale](Activation%20Atlas_files/whale-baseball.png)\n\n| | | |\n\n| --- | --- | --- |\n\n| 1. | grey whale | 91.0% |\n\n| 2. | killer whale | 7.5% |\n\n| 3. | great white shark | 0.7% |\n\n| 4. | gar | 0.4% |\n\n| 5. | sea lion | 0.1% |\n\n| 6. | tiger shark | 0.1% |\n\n| | | |\n\n| --- | --- | --- |\n\n| 1. | great white shark | 66.7% |\n\n| 2. | baseball | 7.4% |\n\n| 3. | grey whale | 4.1% |\n\n| 4. | sombrero | 3.2% |\n\n| 5. | sea lion | 3.1% |\n\n| 6. | killer whale | 2.7% |\n\n| | | |\n\n| --- | --- | --- |\n\n| 1. | baseball | 100.0% |\n\n| 2. | rugby ball | 0.0% |\n\n| 3. | golf ball | 0.0% |\n\n| 4. | ballplayer | 0.0% |\n\n| 5. | drum | 0.0% |\n\n| 6. | sombrero | 0.0% |\n\nThe results follow the pattern in previous examples pretty \n\nclosely.\n\n Adding a small-sized baseball does change the top classification to \n\n\"great white shark\", and as it gets bigger it overpowers the \n\nclassification, so the top slot goes to \"baseball\".\n\n \n\nLet's look at one more example: \"frying pan\" and \"wok\".\n\n \n\nOne difference stands out here — the type of related foods \n\npresent. On the right we can clearly see something resembling noodles \n\n(which have a strong attribution toward the class \"carbonara\"). Let's \n\ntake a picture from ImageNet labeled as \"frying pan\" and add an inset of\n\n some noodles.\n\n \n\n![frying pan / wok](Activation%20Atlas_files/frying-pan-noodles.png)\n\n| | | |\n\n| --- | --- | --- |\n\n| 1. | frying pan | 76.5% |\n\n| 2. | wok | 15.8% |\n\n| 3. | stove | 5.4% |\n\n| 4. | spatula | 1.0% |\n\n| 5. | Dutch oven | 0.5% |\n\n| 6. | mixing bowl | 0.2% |\n\n![frying pan / wok](Activation%20Atlas_files/frying-pan-noodles-medium.png)\n\n| | | |\n\n| --- | --- | --- |\n\n| 1. | wok | 63.2% |\n\n| 2. | frying pan | 35.1% |\n\n| 3. | spatula | 0.6% |\n\n| 4. | hot pot | 0.5% |\n\n| 5. | mixing bowl | 0.1% |\n\n| 6. | stove | 0.1% |\n\n![frying pan / wok](Activation%20Atlas_files/frying-pan-noodles-large.png)\n\n| | | |\n\n| --- | --- | --- |\n\n| 1. | carbonara | 96.4% |\n\n| 2. | plate | 2.3% |\n\n| 3. | wok | 0.6% |\n\n| 4. | meat loaf | 0.1% |\n\n| 5. | dishwasher | 0.1% |\n\n| 6. | rotisserie | 0.1% |\n\nAs we make the picture of \n\nnoodles larger, its influence overpowers the other classifications, but \n\n\"wok\" remains ranked above \"frying pan\".\n\nHere the patch was not as effective at lowering the initial \n\nclassification, which makes sense since the noodle-like icons were \n\nplotted more toward the center of the visualization thus having less of a\n\n difference in attribution. We suspect that the training set simply \n\ncontained more images of woks with noodles than frying pans with \n\nnoodles.\n\n \n\n### Testing dozens of patches on thousands of images\n\nSo far we've only shown single examples of these patches. Below we \n\nshow the result of ten sample patches (each set includes the one example\n\n we explored above), run on 1,000 images from the ImageNet training set \n\nfor the class in question. While they aren't effective in all cases, \n\nthey do flip the image classification to the target class in about 2 in 5\n\n images. 
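The test itself can be sketched as follows, with `classify` standing in for a hypothetical call to the model that returns the top-1 class name; the full evaluation also sweeps patch sizes and corner positions, and compares against a random-noise patch.

```python
from PIL import Image

def flip_rate(image_paths, patch, target_class, classify, corner=(0, 0), size=100):
    """Fraction of images whose top-1 prediction flips to `target_class`
    after pasting `patch` (a PIL image) into one corner."""
    patch = patch.resize((size, size))
    flips = 0
    for path in image_paths:
        img = Image.open(path).convert("RGB")
        img.paste(patch, corner)          # overlay the patch at the given corner
        if classify(img) == target_class:
            flips += 1
    return flips / len(image_paths)
```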
The success rate reaches about 1 in 2 images if we also allow \n\nto position the patch in the best of the four corners of the image (top \n\nleft, top right, bottom left, bottom right) at the most effective size. \n\nTo ensure our attack isn't just blocking out evidence for the original \n\nclass, we also compare each attack to a random noise image patch.\n\n #adversarial-analysis {\n\n display: none;\n\n grid-column: text;\n\n margin-bottom: 0;\n\n }\n\n #adversarial-analysis table {\n\n width: 100%;\n\n text-align: right;\n\n }\n\n #adversarial-analysis table th {\n\n vertical-align: initial;\n\n }\n\n #adversarial-analysis table th .figcaption {\n\n display: block;\n\n font-weight: initial;\n\n }\n\n #adversarial-analysis table th img {\n\n width: initial;\n\n }\n\n #adversarial-analysis table .detail {\n\n /\\* padding-left: 1em; \\*/\n\n font-size: 85%;\n\n color: rgba(0, 0, 0, 0.6);\n\n }\n\n #adversarial-analysis table .synset {\n\n font-family: monospace;\n\n white-space: nowrap;\n\n }\n\n #adversarial-analysis table .from,\n\n #adversarial-analysis table .to {\n\n white-space: nowrap;\n\n }\n\n #adversarial-analysis table .overall {\n\n font-weight: bold;\n\n }\n\n @supports (font-variant-numeric: tabular-nums) {\n\n #adversarial-analysis table {\n\n font-variant-numeric: tabular-nums;\n\n }\n\n }\n\n \n\n#### Dynamic corner position\n\n| icon\n\nChoosing the strongest size and corner position for each example. | Success rate\n\nMean probability assigned to the target class before and after patch. |\n\n| --- | --- | --- |\n\n| snorkel → scuba diver | 85.0% | 7,031 / 8,270 | +56.8% | 3.7% → 60.5% |\n\n| frying pan → wok | 44.9% | 2,799 / 6,240 | +24.1% | 10.3% → 34.4% |\n\n#### Consistent corner position\n\n| icon\n\n| --- | --- | --- |\n\n| snorkel → scuba diver | 65.4% | 5,409 / 8,270 | +43.0% | 3.7% → 46.7% |\n\n| frying pan → wok | 25.6% | 1,595 / 6,240 | +12.7% | 10.3% → 23.0% |\n\n#### details for each patch tested\n\n| snorkel → scuba divergrey whale → great white sharkfrying pan → wok\n\n| --- | --- | --- |\n\n| patch 0\n\nPatch 01 | 88.1% | 729 / 827 | +60.3% | 3.7% → 64.0% |\n\n| patch 1\n\nPatch 02 | 83.4% | 690 / 827 | +53.0% | 3.7% → 56.7% |\n\n| patch 2\n\nPatch 03 | 92.4% | 764 / 827 | +66.9% | 3.7% → 70.6% |\n\n| patch 3\n\nPatch 04 | 93.0% | 769 / 827 | +66.7% | 3.7% → 70.4% |\n\n| patch 4\n\nPatch 05 | 88.0% | 728 / 827 | +58.8% | 3.7% → 62.5% |\n\n| patch 5\n\nPatch 06 | 85.5% | 707 / 827 | +52.0% | 3.7% → 55.7% |\n\n| patch 6\n\nPatch 07 | 86.2% | 713 / 827 | +57.4% | 3.7% → 61.2% |\n\n| patch 7\n\nPatch 08 | 56.1% | 464 / 827 | +30.7% | 3.7% → 34.5% |\n\n| patch 8\n\nPatch 09 | 92.0% | 761 / 827 | +64.8% | 3.7% → 68.5% |\n\n| patch 9\n\nPatch 10 | 85.4% | 706 / 827 | +57.6% | 3.7% → 61.3% |\n\n| patch undefined\n\nRandom noise | 13.9% | 115 / 827 | +9.5% | 3.7% → 13.2% |\n\n Our \"attacks\" can be seen as part of a larger trend (eg. ) of researchers\n\n In many ways, our attacks are most similar to adversarial patches ,\n\n which also add a small patch to the input image.\n\n From this perspective, adversarial patches are far more effective, \n\nworking much more reliably. 
Instead, we see our attacks as interesting \n\nbecause they are synthesized by humans from their understanding of the \n\nmodel,\n\n and seem to be attacking the model at a higher level of abstraction.\n\n \n\n We also want to emphasize that not all class comparisons reveal \n\nthese type of patches and not all icons in the visualization have the \n\nsame (or any) effectiveness and we've only tested them on one model. If \n\nwe wanted to find these patches more systematically, a different \n\napproach would most likely be more effective. However, the class \n\nactivation atlas technique was what revealed the existence of these \n\npatches before we knew to look for them. If you'd like to explore your \n\nown comparisons and search for your own patches, we've provided a \n\nConclusion and Future Work\n\n--------------------------\n\nActivation atlases give us a new way to peer into convolutional \n\nvision networks. They give us a global, hierarchical, and \n\nhuman-interpretable overview of concepts within the hidden layers. Not \n\nonly does this allow us to better see the inner workings of these \n\ncomplicated systems, but it's possible that it could enable new \n\ninterfaces for working with images.\n\n \n\n### Surfacing Inner Properties of Models\n\nThe vast majority of neural network research focuses on \n\nquantitative evaluations of network behavior. How accurate is the model?\n\n What's the precision-recall curve?\n\n \n\n it behaves the way it does. To truly understand why a network behaves \n\nthe way it does, we would need to fully understand the rich inner world \n\nof the network — it's hidden layers. For example, understanding better \n\nhow InceptionV1 builds up a classifier for a fireboat from component \n\nparts in mixed4d can help us build confidence in our models and can \n\nsurface places where it isn't doing what we want.\n\n \n\nEngaging with this inner world also invites us to do deep \n\nlearning research in a new way. Normally, each neural network experiment\n\n gives only a few bits of feedback — whether the loss went up or \n\ndown — to inform the next round of experiments. We design architectures \n\nby almost blind trial and error, guided by vague intuitions that we \n\nbuild up over years. In the future, we hope that researchers will get \n\nrich feedback on what each layer in their model is doing in a way that \n\nwill make our current approach seem like stumbling in the dark.\n\n \n\nActivation atlases, as they presently stand, are inadequate to \n\nreally help researchers iterate on models, in part because they aren't \n\ncomparable. If you look at atlases for two slightly different models, \n\nit's hard to take away anything. In future work, we will explore how \n\nsimilar visualizations can compare models, showing similarities and \n\ndifferences beyond error rates.\n\n \n\n### New interfaces\n\nMachine learning models are usually deployed as black boxes that \n\nautomate a specific task, executing it on their own. But there's a \n\ngrowing sense that there might be an alternate way for us to relate to \n\nthem: that instead of increasingly automating a task, they could be used\n\n more directly by a person. 
One vision of this augmentation that we find\n\n particularly compelling is the idea that the internal representations \n\n \n\n and music\n\n \n\n  .\n\n \n\nWe think of activation atlases as revealing a machine-learned \n\nalphabet for images — a collection of simple, atomic concepts that are \n\ncombined and recombined to form much more complex visual ideas. In the \n\nsame way that we use word processors to turn letters into words, and \n\nwords into sentences, we can imagine a tool that would allow us to \n\ncreate images from a machine-learned language system for images. Similar\n\n to GAN painting , imagine \n\nusing something like an activation atlas as a palette — one could dip a \n\nbrush into a \"tree\" activation and paint with it. A palette of concepts \n\nrather than colors.\n\n \n\n has shown that this is entirely possible. In this particular instance, \n\nwe imagine constructing a grid of activations by selecting them from an \n\natlas (or some derivation), then optimizing an output image that would \n\ncorrespond to the user's constructed activation matrix.\n\n \n\n have shown us that we can use these vision networks to create nuanced \n\nvisual expression outside the explicit distribution of visual data it \n\nwas trained on. We speculate that activation atlases could be helpful in\n\n manipulating artistic styles without having to find an existing \n\nreference image, or they could help in guiding and modifying automated \n\nstyle transfer techniques.\n\n \n\nWe could also use these atlases to query large image datasets. \n\nIn the same way that we probe large corpuses of text with words, we \n\ncould, too, use activation atlases to find types of images in large \n\nimage datasets. Using words to search for something like a \"tree\" is \n\nquite powerful, but as you get more specific, human language is often \n\nill-suited to describe specific visual characteristics. In contrast, the\n\n hidden layers of neural networks are a language optimized for the sole \n\npurpose of representing visual concepts. Instead of using the proverbial\n\n thousand words to uniquely specify the image one is seeking, we can \n\nimagine someone using the language of the activation atlas.\n\n \n\nAnd lastly, we can also liken activation atlases to histograms.\n\n In the same way that traditional histograms give us good summaries of \n\nlarge datasets, activation atlases can be used to summarize large \n\nnumbers of images.\n\n \n\nIn the examples in this article we used the same dataset for \n\ntraining the model as we did for collecting the activations. But, if we \n\nuse a different dataset to collect the activations, we could use the \n\natlas as a way of inspecting an unknown dataset. An activation atlas \n\ncould show us a histogram of *learned concepts* that exist within\n\n the images. Such a tool could show us the semantics of the data and not\n\n just visual similarities, like showing histograms of common pixel \n\nvalues.\n\n \n\nWhile we are excited about the potential of activation atlases,\n\n we are even more excited at the possibility of developing similar \n\ntechniques for other types of models. 
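One way such a query might work (a sketch under our own assumptions, not an existing tool): rank the images in a dataset by the similarity between their activations and the averaged activation of a chosen atlas cell. Here `image_activations` is assumed to be an (N, D) array with one (e.g. spatially averaged) activation vector per image.

```python
import numpy as np

def query_by_cell(image_activations, cell_activation, top_k=10):
    # Cosine similarity between each image's activation and the cell's activation.
    a = image_activations / np.linalg.norm(image_activations, axis=1, keepdims=True)
    b = cell_activation / np.linalg.norm(cell_activation)
    similarity = a @ b
    return np.argsort(-similarity)[:top_k]   # indices of the closest images
```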
Imagine having an array of machine\n\n learned, but human interpretable, languages for images, audio and text.\n\n", "bibliography_bib": [{"title": "Visualizing higher-layer features of a deep network"}, {"title": "Feature Visualization"}, {"title": "Deep inside convolutional networks: Visualising image classification models and saliency maps"}, {"title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images"}, {"title": "Inceptionism: Going deeper into neural networks"}, {"title": "Plug & play generative networks: Conditional iterative generation of images in latent space"}, {"title": "Understanding deep image representations by inverting them"}, {"title": "The Building Blocks of Interpretability"}, {"title": "t-SNE visualization of CNN codes"}, {"title": "Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks"}, {"title": "Imagenet: A large-scale hierarchical image database"}, {"title": "Going deeper with convolutions"}, {"title": "Imagenet large scale visual recognition challenge"}, {"title": "Visualizing data using t-SNE"}, {"title": "UMAP: Uniform Manifold Approximation and Projection"}, {"title": "Grad-cam: Why did you say that? visual explanations from deep networks via gradient-based localization"}, {"title": "Adversarial Patch"}, {"title": "Adversarial examples in the physical world"}, {"title": "A rotation and a translation suffice: Fooling cnns with simple transformations"}, {"title": "Unrestricted adversarial examples"}, {"title": "Intriguing properties of neural networks"}, {"title": "Using Artificial Intelligence to Augment Human Intelligence"}, {"title": "Visualizing Representations: Deep Learning and Human Beings"}, {"title": "Machine Learning for Visualization"}, {"title": "Image-to-image translation with conditional adversarial networks"}, {"title": "TopoSketch: Drawing in Latent Space"}, {"title": "Generative visual manipulation on the natural image manifold"}, {"title": "ML as Collaborator: Composing Melodic Palettes with Latent Loops"}, {"title": "A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music"}, {"title": "GAN Dissection: Visualizing and Understanding Generative Adversarial Networks"}, {"title": "A neural algorithm of artistic style"}, {"title": "Exploring Histograms"}], "id": "557ae730d8b51dddf8aba5a4b337ee01"} +{"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "An Overview of Early Vision in InceptionV1", "authors": ["Chris Olah", "Nick Cammarata", "Ludwig Schubert", "Gabriel Goh", "Michael Petrov", "Shan Carter"], "date_published": "2020-04-01", "abstract": " This article is part of the Circuits thread, a collection of short articles and commentary by an open scientific collaboration delving into the inner workings of neural networks. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00024.002", "text": "\n\n a collection of short articles and commentary by an open scientific \n\ncollaboration delving into the inner workings of neural networks. 
\n\n[Curve Detectors](https://distill.pub/2020/circuits/curve-detectors/)\n\ninput\n\nsoftmax\n\n[0](#conv2d0)\n\n[1](#conv2d1)\n\n[2](#conv2d2)\n\n5a\n\n4d\n\n5b\n\n[3b](#mixed3b)\n\n4e\n\n4c\n\n4b\n\n[3a](#mixed3a)\n\n4a\n\n Over the course of these layers, we see the network go from raw pixels\n\n Along the way, we see a variety of interesting intermediate features,\n\n Firstly, it's particularly easy to study:\n\n common for vision models to have on the order of 64 channels in their \n\ninitial convolutional layers, which are applied at many spatial \n\npositions. So while there are many neurons, the number of unique neurons\n\n is orders of magnitude smaller. and the features seem \n\nquite simple.\n\n Secondly, early vision seems most likely to be universal: to have the \n\nsame features and circuits form across different architectures and \n\ntasks.\n\n Before we dive into detailed explorations of different parts of early \n\nvision, we wanted to give a broader overview of how we presently \n\nunderstand it.\n\n This article sketches out our understanding, as an annotated \n\ncollection of what we call \"neuron groups.\"\n\n We also provide illustrations of selected circuits at each layer.\n\n will not discuss the \"bottleneck\" neurons in mixed3a/mixed3b, which we \n\ngenerally think of as low-rank connections to the previous layer.\n\n But our experience is that a thousand neurons is more than enough to \n\nbe disorienting when one begins studying a model.\n\n Our hope is that this article will help readers avoid this \n\ndisorientation by providing some structure and handholds for thinking \n\nabout them.\n\n### Playing Cards with Neurons\n\n Dmitri Mendeleev is often accounted to have discovered the Periodic \n\nTable by playing \"chemical solitaire,\" writing the details of each \n\nelement on a card and patiently fiddling with different ways of \n\nclassifying and organizing them.\n\n Some modern historians are skeptical about the cards, but Mendeleev's \n\nstory is a compelling demonstration of that there can be a lot of value \n\nin simply organizing phenomena, even when you don't have a theory or \n\nfirm justification for that organization yet.\n\n Mendeleev is far from unique in this.\n\n For example, in biology, taxonomies of species preceded genetics and \n\nthe theory of evolution giving them a theoretical foundation.\n\n Our experience is that many neurons in vision models seem to fall into\n\n families of similar features.\n\n For example, it's not unusual to see a dozen neurons detecting the \n\nsame feature in different orientations or colors.\n\n Perhaps even more strikingly, the same \"neuron families\" seem to recur\n\n across models!\n\n Of course, it's well known that Gabor filters and color contrast \n\ndetectors reliably comprise neurons in the first layer of convolutional \n\nneural networks, but we were quite surprised to see this generalize to \n\nlater layers.\n\n This article shares our working categorization of units in the first \n\nfive layers of InceptionV1 into neuron families.\n\n These families are ad-hoc, human defined collections of features that \n\nseem to be similar in some way.\n\n We've found these helpful for communicating among ourselves and \n\nbreaking the problem of understanding InceptionV1 into smaller chunks.\n\n While there are some families we suspect are \"real\", many others are \n\ncategories of convenience, or categories we have low-confidence about.\n\n The main goal of these families is to help researchers orient 
\n\nthemselves.\n\n In constructing this categorization, our understanding of individual \n\nneurons was developed by looking at feature visualizations, dataset \n\nexamples, how a feature is built from the previous layer, how it is used\n\n by the next layer, and other analysis.\n\n It's worth noting that the level of attention we've given to \n\nindividual neurons varies greatly: we've dedicated entire forthcoming \n\narticles to detailed analysis some of these units, while many others \n\nhave only received a few minutes of cursory investigation.\n\n which correlates neurons with a pre-defined set of features and groups \n\nthem into categories like color, texture, and object.\n\n This has the advantage of removing subjectivity and being much more \n\nscalable.\n\n At the same time, it also has downsides: correlation can be misleading\n\n and the pre-defined taxonomy may miss the true feature types.\n\n[Net Dissect](http://netdissect.csail.mit.edu/) \n\n was very elegant work which advanced our ability to systematically talk\n\n about network features.\n\n However, to understand the differences between correlating features \n\nwith a pre-defined taxonomy and individually studying them, it may be \n\nillustrative to consider how it classifies some features.\n\n Net Dissect doesn't include the canonical InceptionV1, but it does \n\ninclude a variant of it.\n\n we see many units which appear from dataset examples likely to be \n\nfamiliar feature types like curve detectors, divot detectors, boundary \n\ndetectors, eye detector, and so forth, but are classified as weakly \n\ncorrelated with another feature — often objects that it seems unlikely \n\ncould be detected at such an early layer.\n\n Or in another fun case, there is a feature (372) which is most \n\ncorrelated with a cat detector, but appears to be detecting \n\nleft-oriented whiskers!\n\n \n\n frequency detectors — the fact that they are unanticipated makes them \n\nimpossible to include in a set of pre-defined features.\n\n The only way to discover them is the laborious process of manually \n\ninvestigating each feature.\n\n In the future, you could imagine hybrid approaches, where a human \n\ninvestigator is saved time by having many features sorted into a \n\n#### Caveats\n\n* This is a broad overview and our understanding of many of these \n\nunits is low-confidence. We fully expect, in retrospect, to realize we \n\nmisunderstood some units and categories.\n\n* Many neuron groups are catch-all categories or convenient \n\norganizational categories that we don't think reflect fundamental \n\nstructure.\n\n* Even for neuron groups we suspect do reflect a fundamental \n\nstructure (eg. 
some can be recovered from factorizing the layer's weight\n\n matrices) the boundaries of these groups can be blurry and some neurons\n\n inclusion involve judgement calls.\n\n### Presentation of Neurons\n\n In order to talk about neurons, we need to somehow represent them.\n\n We use small amounts of transformation robustness when visualizing \n\nthe first few layers, because it has a larger proportional affect on \n\ntheir small receptive fields, and increase as we move to higher layers.\n\n For low layers, we use L2 regularization to push pixels towards \n\ngray.\n\n For the first layer, we follow the convention of other papers and \n\njust show the weights, which for the special case of the first layer are\n\n equivalent to feature visualization with the right L2 penalty.\n\n \n\n When we represent a neuron with a feature visualization, we don't \n\nintend to claim that the feature visualization captures the entirety of \n\nthe neuron's behavior.\n\n Rather, the role of a feature visualization is like a variable name in\n\n understanding a program.\n\n It replaces an arbitrary number with a more meaningful symbol .\n\n### Presentation of Circuits\n\n Although this article is focused on giving an overview of the features\n\n which exist in early vision, we're also interested in understanding how\n\n they're computed from earlier features.\n\n consisting of a neuron, the units it has the strongest (L2 norm) \n\nweights to in the previous layer, and the weights between them.\n\n Some neurons in `mixed3a` and `mixed3b` are in\n\n branches consisting of a \"bottleneck\" 1x1 conv that greatly reduces the\n\n number of channels followed by a 5x5 conv. Although there is a ReLU \n\nbetween them, we generally think of them as a low rank factorization of a\n\n single weight matrix and visualize the product of the two weights. 
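As a rough illustration of how such circuit diagrams could be assembled, the numpy sketch below picks a unit's most strongly connected input channels by kernel L2 norm and collapses a 1x1 "bottleneck" conv followed by a 5x5 conv into a single effective weight (ignoring the ReLU between them). The shapes, the choice of top six, and the einsum-based collapse are assumptions for illustration, not the code used to produce the figures.

```python
import numpy as np

def strongest_inputs(weights, unit, k=6):
    """weights: (out_ch, in_ch, kh, kw). Return the k input channels whose
    spatial kernel into `unit` has the largest L2 norm."""
    norms = np.linalg.norm(weights[unit].reshape(weights.shape[1], -1), axis=1)
    return np.argsort(-norms)[:k]

def collapse_bottleneck(w_1x1, w_5x5):
    """w_1x1: (mid_ch, in_ch, 1, 1); w_5x5: (out_ch, mid_ch, 5, 5).
    Treat the pair as a low-rank factorization and return the product:
    an effective (out_ch, in_ch, 5, 5) weight."""
    return np.einsum("omhw,mi->oihw", w_5x5, w_1x1[:, :, 0, 0])

# Toy usage with random weights standing in for a real layer.
w = np.random.randn(480, 256, 3, 3)
print(strongest_inputs(w, unit=0, k=6))
```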
\n\nAdditionally, some neurons in these layers are in a branch consisting of\n\n maxpooling followed by a 1x1 conv; we present these units as their \n\nweights replicated over the region of their maxpooling.\n\n \n\n In some cases, we've also included a few neurons that have weaker \n\nconnections if they seem to have particular pedagogical value; in these \n\ncases, we've mentioned doing so in the caption.\n\n Neurons are visually displayed by their feature visualizations, as \n\ndiscussed above.\n\n Weights are represented using a color map with red as positive and \n\nblue as negative.\n\npositive (excitation)\n\npositive (excitation)\n\nnegative (inhibition)\n\nnegative (inhibition)\n\nClick on the feature visualization of any neuron to see more weights!\n\n At any point, you can click on a neuron's feature visualization to see\n\n its weights to the 50 neurons in the previous layer it is most \n\nconnected to (that is, how it assembled from the previous layer, and \n\nalso the 50 neurons in the next layer it is most connected to (that is, \n\nhow it is used going forward).\n\n This allows further investigation, and gives you an unbiased view of \n\nthe weights if you're concerned about cherry-picking.\n\n \n\n \n\n---\n\n \n\n \n\n`conv2d0`\n\n---------\n\n The first conv layer of every vision model we've looked at is mostly \n\ncomprised of two kinds of features: color-contrast detectors and Gabor \n\nfilters.\n\n In contrast to other models, however, the features aren't perfect \n\ncolor contrast detectors and Gabor filters.\n\n For lack of a better word, they're messy.\n\n We have no way of knowing, but it seems likely this is a result of the\n\n gradient not reaching the early layers very well during training.\n\n Note that InceptionV1 predated the adoption of modern techniques like \n\nbatch norm and Adam, which make it much easier to train deep models \n\nwell.\n\n The weights for the units in the first layer of the TF-Slim\n\n version of InceptionV1, which adds BatchNorm. (Units are sorted by the \n\nfirst principal component of the adjacency matrix between the first and \n\nsecond layers.) These features are typical of a well trained conv net. \n\nNote how, unlike the canonical InceptionV1, these units have a crisp \n\ndivision between black and white Gabors, color Gabors, color-contrast \n\nunits and color center-surround units. \n\n \n\n One subtlety that's worth noting here is that Gabor filters almost \n\nalways come in pairs of weights which are negative versions of each \n\nother, both in InceptionV1 and other vision models.\n\n A single Gabor filter can only detect edges at some offsets, but the \n\nnegative version fills in holes, allowing for the formation of complex \n\nGabor filters in the next layer.\n\n### [**Gabor Filters** 44%](#group_conv2d0_gabor_filters)\n\nShow all 28 neurons.\n\nCollapse neurons.\n\nGabor filters are a simple edge \n\ndetector, highly sensitive to the alignment of the edge. They're almost \n\nuniversally found in the fist layer of vision models. Note that Gabor \n\nfilters almost always come in pairs of negative reciprocals.\n\n### [**Color Contrast** 42%](#group_conv2d0_color_contrast)\n\nShow all 27 neurons.\n\nCollapse neurons.\n\nThese units detect a color one side of \n\ntheir receptive field, and the opposite color on the other side. 
Compare\n\n### [**Other Units** 14%](#group_conv2d0_other_units)\n\n \n\nUnits that don't fit in another category.\n\n \n\n---\n\n \n\n \n\n`conv2d1`\n\n---------\n\n**Complex Gabors:**\n\n A nice example of this is the \"Complex Gabor\" feature family.\n\n Like simple Gabor filters, complex Gabors detect edges.\n\n But unlike simple Gabors, they are relatively invariant to the exact \n\nposition of the edge or which side is dark or light.\n\n This is achieved by being excited by multiple Gabor filters in similar\n\n orientations — and most critically, by being excited by \"reciprocal \n\nGabor filters\" that detect the same pattern with dark and light \n\nswitched.\n\n This can be seen as an early example of the \"union over cases\" motif.\n\n both positive (excitation) and negative (inhibition).\n\n Click on a neuron to see its forwards and backwards weights.\n\n Note that `conv2d1` is a 1x1 convolution, so there's only a\n\n single weight — a single line, in this diagram — between each channel \n\nin the previous and this one.\n\n There is a pooling layer between them, so the features it connects \n\nto are pooled versions of the previous layer rather than original \n\nfeatures.\n\n This plays an important role in determining the features we observe: \n\nin models with larger convolutions in their second layer, we often see a\n\n jump to crude versions of the larger more complex features we'll see in\n\n the following layers.\n\n In addition to Complex Gabors, we see a variety of other features, \n\nincluding\n\n more invariant color contrast detectors, Gabor-like features that are \n\nless selective for a single orientation, and lower-frequency features.\n\n### [**Low Frequency** 27%](#group_conv2d1_low_frequency)\n\nShow all 17 neurons.\n\nCollapse neurons.\n\n### [**Gabor Like** 17%](#group_conv2d1_gabor_like)\n\nShow all 11 neurons.\n\nCollapse neurons.\n\nThese units respond to edges stimuli, \n\nbut seem to respond to a wider range of orientations, and also respond \n\nto color contrasts that align with the edge. We haven't studied them \n\nvery carefully.\n\n### [**Color Contrast** 16%](#group_conv2d1_color_contrast)\n\n \n\nThese units detect a color on one side \n\nof the receptive field, and a different color on the opposite side. \n\nComposed of lower-level color contrast detectors, they often respond to \n\ncolor transitions in a range of translation and orientation variations. \n\n### [**Multicolor** 14%](#group_conv2d1_multicolor)\n\n \n\n### [**Complex Gabor** 14%](#group_conv2d1_complex_gabor)\n\n \n\nLike Gabor Filters, but fairly invariant\n\n to the exact position, formed by adding together multiple Gabor \n\ndetectors in the same orientation but different phases. We call these \n\n### [**Color** 6%](#group_conv2d1_color)\n\n \n\nTwo of these units seem to track \n\nbrightness (bright vs dark), while the other two units seem to mostly \n\ntrack hue, dividing the space of hues between them. One responds to \n\nred/orange/yellow, while the other responds to purple/blue/turqoise. 
\n\nUnfortunately, their circuits seem to heavily rely on the existence of a\n\n### [**Other Units** 5%](#group_conv2d1_other_units)\n\n \n\nUnits that don't fit in another category.\n\n### [**hatch** 2%](#group_conv2d1_hatch)\n\n \n\n \n\n \n\n---\n\n \n\n \n\n`conv2d2`\n\n---------\n\nIn `conv2d2` we see the emergence of very simple shape \n\npredecessors.\n\nThis layer sees the first units that might be described as \"line \n\ndetectors\", preferring a single longer line to a Gabor pattern and \n\naccounting for about 25% of units.\n\nWe also see tiny curve detectors, corner detectors, divergence \n\ndetectors, and a single very tiny circle detector.\n\nOne fun aspect of these features is that you can see that they are \n\nassembled from Gabor detectors in the feature visualizations, with \n\ncurves being built from small piecewise Gabor segments.\n\nAll of these units still moderately fire in response to incomplete \n\nversions of their feature, such as a small curve running tangent to the \n\nedge detector.\n\nSince `conv2d2` is a 3x3 convolution, our understanding of \n\nthese shape precursor features (and some texture features) maps to \n\nparticular ways Gabor and lower-frequency edges are being spatially \n\nassembled into new features.\n\nAt a high-level, we see a few primary patterns:\n\nLine\n\nCurve\n\nShifted Line\n\nGabor Texture\n\nCorner / Lisp\n\nHatch Texture\n\nDivergence\n\n We also begin to see various kinds of texture and color detectors \n\nstart to become a major constituent of the layer, including \n\ncolor-contrast and color center surround features, as well as \n\nGabor-like, hatch, low-frequency and high-frequency textures.\n\n A handful of units look for different textures on different sides of \n\ntheir receptive field.\n\n### [**Color Contrast** 21%](#group_conv2d2_color_contrast)\n\nShow all 40 neurons.\n\nCollapse neurons.\n\nThese units detect a color on one side \n\nof the receptive field, and a different color on the opposite side. \n\nComposed of lower-level color contrast detectors, they often respond to \n\ncolor transitions in a range of translation and orientation variations. \n\n### [**Line** 17%](#group_conv2d2_line)\n\nShow all 33 neurons.\n\nCollapse neurons.\n\nThese units are beginning to look for a \n\nsingle primary line. Some look for different colors on each side. Many \n\nexhibit \"combing\" (small perpendicular lines along the main one), a very\n\n common but not presently understood phenomenon in line-like features \n\n### [**Shifted Line** 8%](#group_conv2d2_shifted_line)\n\nShow all 16 neurons.\n\nCollapse neurons.\n\nThese units look for edges \"shifted\" to \n\nthe side of the receptive field instead of the middle. This may be \n\n### [**Textures** 8%](#group_conv2d2_textures)\n\nShow all 15 neurons.\n\nCollapse neurons.\n\nA broad category of units detecting repeating local structure.\n\n### [**Other Units** 7%](#group_conv2d2_other_units)\n\nShow all 14 neurons.\n\nCollapse neurons.\n\nCatch-all category for all other units.\n\n### [**Color Center-Surround** 7%](#group_conv2d2_color_center_surround)\n\nShow all 13 neurons.\n\nCollapse neurons.\n\nThese units look for one color in the \n\nmiddle and another (typically opposite) on the boundary. Genereally more\n\n### [**Tiny Curves** 6%](#group_conv2d2_tiny_curves)\n\nShow all 12 neurons.\n\nCollapse neurons.\n\nVery small curve (and one circle) \n\ndetectors. 
Many of these units respond to a range of curvatures all the \n\n### [**Early Brightness Gradient** 6%](#group_conv2d2_early_brightness_gradient)\n\nShow all 12 neurons.\n\nCollapse neurons.\n\nThese units detect oriented gradients in\n\n brightness. They support a variety of similar units in the next layer. \n\n### [**Gabor Textures** 6%](#group_conv2d2_gabor_textures)\n\nShow all 12 neurons.\n\nCollapse neurons.\n\n### [**Texture Contrast** 4%](#group_conv2d2_texture_contrast)\n\n \n\n### [**Hatch Textures** 3%](#group_conv2d2_hatch_textures)\n\n \n\n### [**Color/Multicolor** 3%](#group_conv2d2_color_multicolor)\n\n \n\n### [**Corners** 2%](#group_conv2d2_corners)\n\n \n\n### [**Line Divergence** 1%](#group_conv2d2_line_divergence)\n\n \n\nThese units detect lines diverging from a point.\n\n \n\n \n\n---\n\n \n\n \n\n`mixed3a`\n\n---------\n\n`mixed3a` has a significant increase in the diversity of features we observe.\n\nand will be discussed again in later articles in great detail.\n\nPrior to `mixed3a`, color contrast detectors look for \n\ntransitions of a color to near complementary colors (eg. blue vs \n\nyellow).\n\nFrom this layer on, however, we'll often see color detectors which \n\ncompare a color to the absence of color.\n\nAdditionally, black and white detectors can allow the detection of \n\ngreyscale images, which may be correlated with ImageNet categories (see\n\nThe circuit for our black and white detector is quite simple:\n\nalmost all of its large weights are negative, detecting the absence of colors.\n\n \n\ninhibiting\n\npositive (excitation)\n\nnegative (inhibition)\n\n The sixteen strongest magnitude weights to the previous layer are \n\nshown.\n\n For simplicity, only one spatial weight for positive and negative \n\nhave been shown, but they all have almost identical structure.\n\n Click on a neuron to see its forwards and backwards weights.\n\nBut there's lots of other interesting examples.\n\npositive (excitation)\n\npositive (excitation)\n\nnegative (inhibition)\n\nnegative (inhibition)\n\npositive (excitation)\n\nnegative (inhibition)\n\n The circuit constructing a triangle detector.\n\n The choice of which neurons in the previous layer to show is slightly \n\ncherrypicked for pedagogy.\n\n The six neurons with the highest magnitude weights to the triangle are\n\n shown, plus one other neuron with slightly weaker weights.\n\n (Left leaning edges have slightly higher weights than right ones, but \n\nit seemed more illustrative to show two of both.) Click on neurons to \n\nsee the full weights.\n\n However, in practice, these triangle detectors (and other angle units)\n\n seem to often just be used as multi-edge detectors downstream,\n\n or in conjunction with many other units to detect convex boundaries.\n\n Below, we provide a taxonomized overview of all of them:\n\n### [**Texture** 25%](#group_mixed3a_texture)\n\nShow all 65 neurons.\n\nCollapse neurons.\n\nThis is a broad, not very well defined \n\ncategory for units that seem to look for simple local structures over a \n\nwide receptive field, including mixtures of colors. Many live in a \n\nbranch consisting of a maxpool followed by a 1x1 conv, which \n\nstructurally encourages this.Maxpool \n\nbranches (ie. maxpool 5x5 stride 1 -> conv 1x1) have large receptive \n\nfields, but can't control where in in their receptive field each feature\n\n they detect is, nor the relative position of these features. In early \n\nvision, this unstructured of feature detection makes them a good fit for\n\n textures. 
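For readers who prefer code, here is a minimal PyTorch sketch of the maxpool-branch structure mentioned in the note above (maxpool 5x5, stride 1, followed by a 1x1 conv). The channel counts and input size are placeholders, not InceptionV1's actual dimensions.

```python
import torch
import torch.nn as nn

branch = nn.Sequential(
    # Each input channel is max-pooled over a 5x5 neighborhood first, so only
    # "is this feature present somewhere nearby?" survives ...
    nn.MaxPool2d(kernel_size=5, stride=1, padding=2),
    # ... and the 1x1 conv can then only mix those presence signals per position,
    # with no control over the relative arrangement of the input features --
    # which is why such units tend to behave like texture detectors.
    nn.Conv2d(in_channels=480, out_channels=64, kernel_size=1),
    nn.ReLU(),
)

x = torch.randn(1, 480, 28, 28)   # toy activation map from the previous layer
print(branch(x).shape)            # torch.Size([1, 64, 28, 28])
```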
\n\n### [**Color Center-Surround** 12%](#group_mixed3a_color_center_surround)\n\nShow all 30 neurons.\n\nCollapse neurons.\n\nThese units look for one color in the \n\ncenter, and another (usually opposite) color surrounding it. They are \n\ntypically much more sensitive to the center color than the surrounding \n\none. In visual neuroscience, center-surround units are classically an \n\nextremely low-level feature, but we see them in the later parts of early\n\n### [**High-Low Frequency** 6%](#group_mixed3a_high_low_frequency)\n\nShow all 15 neurons.\n\nCollapse neurons.\n\n \n\n A detailed article on these is forthcoming.\n\n### [**Brightness Gradient** 6%](#group_mixed3a_brightness_gradient)\n\nShow all 15 neurons.\n\nCollapse neurons.\n\nThese units detect brightness gradients.\n\n Among other things they will help detect specularity (shininess), \n\n### [**Color Contrast** 5%](#group_mixed3a_color_contrast)\n\nShow all 14 neurons.\n\nCollapse neurons.\n\nThese units look for one color on one \n\nside of their receptive field, and another (usually opposite) color on \n\nthe opposing side. They typically don't care about the exact position or\n\n### [**Complex Center-Surround** 5%](#group_mixed3a_complex_center_surround)\n\nShow all 14 neurons.\n\nCollapse neurons.\n\nThis is a broad, not very well defined \n\ncategory for center-surround units that detect a pattern or complex \n\ntexture in their center.\n\n### [**Line Misc.** 5%](#group_mixed3a_line_misc.)\n\nShow all 14 neurons.\n\nCollapse neurons.\n\nBroad, low confidence organizational category.\n\n### [**Lines** 5%](#group_mixed3a_lines)\n\nShow all 14 neurons.\n\nCollapse neurons.\n\nUnits used to detect extended lines, \n\noften further excited by different colors on each side. A few are highly\n\n combed line detectors that aren't obviously such at first glance. The \n\ndecision to include a unit was often decided by whether it seems to be \n\nused by downstream client units as a line detector.\n\n### [**Other Units** 5%](#group_mixed3a_other_units)\n\nShow all 14 neurons.\n\nCollapse neurons.\n\nCatch-all category for all other units.\n\n### [**Repeating patterns** 5%](#group_mixed3a_repeating_patterns)\n\nShow all 12 neurons.\n\nCollapse neurons.\n\n### [**Curves** 4%](#group_mixed3a_curves)\n\nShow all 11 neurons.\n\nCollapse neurons.\n\nThese curve detectors detect \n\nsignificantly larger radii curves than their predecessors. They will be \n\nrefined into more specific, larger curve detectors in the next layer. \n\n \n\n### [**BW vs Color** 4%](#group_mixed3a_bw_vs_color)\n\n \n\nThese \"black and white\" detectors \n\nrespond to absences of color. Prior to this, color detectors contrast to\n\n the opposite hue, but from this point on we'll see many compare to the \n\n### [**Angles** 3%](#group_mixed3a_angles)\n\n \n\nUnits that detect multiple lines, \n\nforming angles, triangles and squares. They generally respond to any of \n\nthe individual lines, and more strongly to them together.\n\n### [**Fur Precursors** 3%](#group_mixed3a_fur_precursors)\n\n \n\nThese units are not yet highly selective\n\n for fur (they also fire for other high-frequency patterns), but their \n\n At the 224x224 image resolution, individual fur hairs are generally not\n\n detectable, but tufts of fur are. These units use Gabor textures to \n\ndetect those tufts in different orientations. 
The also detect lower \n\nfrequency edges or changes in lighting perpendicular to the tufts.\n\n### [**Eyes / Small Circles** 2%](#group_mixed3a_eyes_small_circles)\n\n \n\n### [**Crosses / Diverging Lines** 2%](#group_mixed3a_crosses_diverging_lines)\n\n \n\n### [**Thick Lines** 1%](#group_mixed3a_thick_lines)\n\n \n\nLow confidence organizational category.\n\n### [**Line Ends** 1%](#group_mixed3a_line_ends)\n\n \n\n \n\n \n\n---\n\n \n\n \n\n`mixed3b`\n\n---------\n\n`mixed3b` straddles two levels of abstraction.\n\nOn the one hand, it has some quite sophisticated features that don't \n\nreally seem like they should be characterized as \"early\" or \"low-level\":\n\n object boundary detectors, early head detectors, and more sophisticated\n\n part of shape detectors.\n\nOn the other hand, it also has many units that still feel quite \n\nlow-level, such as color center-surround units.\n\n When you first look at the feature visualizations and dataset examples,\n\n you might think these are just another iteration of edge or curve detectors.\n\npositive (excitation)\n\nnegative (inhibition)\n\nHigh-low frequency detectors\n\nEdges\n\nEnd of Line\n\nColor Contrasts\n\n We sometimes find it useful to think about the \"goal\" of early vision.\n\n**Curve-based Features:**\n\n These include more sophisticated curves,\n\n [divots](#group_mixed3b_divots), and [\"evolutes\"](#group_mixed3b_evolute)\n\nCurve\n\nCircle\n\nSpiral\n\nEvolute\n\npositive (excitation)\n\nnegative (inhibition)\n\n Again, these circuits only scratch the surface of `mixed3b`.\n\n Since it's a larger layer with lots of families, we'll go through a \n\ncouple particularly interesting and well understood families first:\n\n### [**Boundary** 8%](#group_mixed3b_boundary)\n\nShow all 36 neurons.\n\nCollapse neurons.\n\nThese units use multiple cues to detect \n\nthe boundaries of objects. They vary in orientation, detecting \n\nconvex/concave/straight boundaries, and detecting artificial vs fur \n\nforegrounds. Cues they rely on include line detectors, high-low \n\nfrequency detectors, and color contrast.\n\n### [**Proto-Head** 3%](#group_mixed3b_proto_head)\n\nShow all 12 neurons.\n\nCollapse neurons.\n\nThe tiny eye detectors, along with \n\ntexture detectors for fur, hair and skin developed at the previous layer\n\n enable these early head detectors, which will continue to be refined in\n\n the next layer.\n\n### [**Generic, Oriented Fur** 2%](#group_mixed3b_generic_oriented_fur)\n\n \n\nWe don't typically think of fur as an \n\noriented feature, but it is. These units detect fur parting in various \n\nways, much like how hair on your head parts.\n\n### [**Curves** 2%](#group_mixed3b_curves)\n\n \n\nThe third iteration of curve detectors. \n\nThey detect larger radii curves than their predecessors, and are the \n\nfirst to not slightly fire for curves rotated 180 degrees. 
Compare to \n\n \n\n### [**Divots** 2%](#group_mixed3b_divots)\n\n \n\nCurve-like detectors for sharp corners or bumps.\n\n### [**Square / Grid** 2%](#group_mixed3b_square_grid)\n\n \n\nUnits detecting grid patterns.\n\n### [**Brightness Gradients** 1%](#group_mixed3b_brightness_gradients)\n\n \n\n### [**Eyes** 1%](#group_mixed3b_eyes)\n\n \n\n### [**Shallow Curves** 1%](#group_mixed3b_shallow_curves)\n\n \n\n### [**Curve Shapes** 1%](#group_mixed3b_curve_shapes)\n\n \n\nSimple shapes created by composing curves, such as spirals and S-curves.\n\n### [**Circles / Loops** 1%](#group_mixed3b_circles_loops)\n\n \n\n### [**Circle Cluster** 1%](#group_mixed3b_circle_cluster)\n\n \n\n### [**Double Curves** 1%](#group_mixed3b_double_curves)\n\n \n\n### [**Evolute** 0.2%](#group_mixed3b_evolute)\n\n \n\n \n\n One frustrating issue is that `mixed3b` has many units that\n\n don't have a simple low-level articulation, but also are not yet very \n\nspecific to a high-level feature.\n\n For example, there are units which seem to be developing towards \n\ndetecting certain animal body parts, but still respond to many other \n\nstimuli as well and so are difficult to describe.\n\n### [**Color Center-Surround** 16%](#group_mixed3b_color_center_surround)\n\nShow all 77 neurons.\n\nCollapse neurons.\n\nThese units look for one color in the \n\ncenter, and another color surrounding it. These units likely have many \n\nsubtleties about the range of hues, texture preferences, and \n\ninteractions that similar neurons in earlier layers may not have. Note \n\nhow many units detect the absence (or generic presence) of color, \n\n### [**Complex Center-Surround** 15%](#group_mixed3b_complex_center_surround)\n\nShow all 73 neurons.\n\nCollapse neurons.\n\nThis is a broad, not very well defined \n\ncategory for center-surround units that detect a pattern or complex \n\ntexture in their center.\n\n### [**Texture** 9%](#group_mixed3b_texture)\n\nShow all 44 neurons.\n\nCollapse neurons.\n\nThis is a broad, not very well defined \n\ncategory for units that seem to look for simple local structures over a \n\nwide receptive field, including mixtures of colors. \n\n### [**Other Units** 9%](#group_mixed3b_other_units)\n\nShow all 42 neurons.\n\nCollapse neurons.\n\nUnits that don't fall in any other category.\n\n### [**Color Contrast/Gradient** 5%](#group_mixed3b_color_contrast_gradient)\n\nShow all 24 neurons.\n\nCollapse neurons.\n\nUnits which respond to different colors \n\non each side. These units look for one color in the center, and another \n\ncolor surrounding it. These units likely have many subtleties about the \n\nrange of hues, texture preferences, and interactions that similar \n\nneurons in earlier layers may not have. Compare to earlier color \n\n### [**Texture Contrast** 3%](#group_mixed3b_texture_contrast)\n\nShow all 12 neurons.\n\nCollapse neurons.\n\nUnits that detect one texture on one side and a different texture on the other.\n\n### [**Other Fur** 2%](#group_mixed3b_other_fur)\n\n \n\nUnits which seem to detect fur but, \n\nunlike the oriented fur detectors, don't seem to detect it parting in a \n\nparticular way. 
Many of these seem to prefer a particular fur pattern.\n\n### [**Lines** 2%](#group_mixed3b_lines)\n\n \n\n### [**Cross / Corner Divergence** 2%](#group_mixed3b_cross_corner_divergence)\n\n \n\n### [**Pattern** 2%](#group_mixed3b_pattern)\n\n \n\nLow confidence category.\n\n### [**Bumps** 2%](#group_mixed3b_bumps)\n\n \n\nLow confidence category.\n\n### [**Double Boundary** 1%](#group_mixed3b_double_boundary)\n\n \n\n### [**Bar / Line-Like** 1%](#group_mixed3b_bar_line_like)\n\n \n\nLow confidence category.\n\n### [**Boundary Misc** 1%](#group_mixed3b_boundary_misc)\n\n \n\nBoundary-related units we didn't know what else to do with.\n\n### [**Star** 1%](#group_mixed3b_star)\n\n \n\nLow confidence category.\n\n### [**Line Grad** 1%](#group_mixed3b_line_grad)\n\n \n\nLow confidence category.\n\n### [**Scales** 1%](#group_mixed3b_scales)\n\n \n\nWe don't really understand these units.\n\n### [**Curves misc.** 1%](#group_mixed3b_curves_misc.)\n\n \n\nLow confidence organizational category.\n\n### [**Shiny** 0.4%](#group_mixed3b_shiny)\n\n \n\nUnits that seem to detect shiny, specular surfaces.\n\n### [**Pointy** 0.4%](#group_mixed3b_pointy)\n\n \n\nLow confidence category.\n\n \n\n---\n\n \n\n \n\nConclusion\n\n----------\n\n The goal of this essay was to give a high-level overview of our \n\npresent understanding of early vision in InceptionV1.\n\n Every single feature discussed in this article is a potential topic of\n\n deep investigation.\n\n For example, are curve detectors really curve detectors? What types of\n\n curves do they fire for? How do they behave on various edge cases? How \n\nare they built?\n\n Over the coming articles, we'll do deep dives rigorously investigating\n\n these questions for a few features, starting with curves.\n\n Is there a better taxonomy, or another way to understand the space of features?\n\n Why do features often seem to form in families?\n\n To what extent do the same features families form across models?\n\n Is there a \"periodic table of low-level visual features\", in some sense?\n\n To what extent do later features admit a similar taxonomy?\n\n We think these could be interesting questions for future work.\n\n a collection of short articles and commentary by an open scientific \n\ncollaboration delving into the inner workings of neural networks. 
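As a hint of what rigorously investigating a curve detector might look like, here is a minimal sketch that renders synthetic arcs of varying radius and records one channel's mean response. The model (torchvision's GoogLeNet as a stand-in for InceptionV1), the layer, the unit index, and the rendering details are all illustrative assumptions.

```python
import torch
from PIL import Image, ImageDraw
from torchvision import transforms
from torchvision.models import googlenet

model = googlenet(weights="DEFAULT").eval()
acts = {}
model.inception3b.register_forward_hook(lambda m, i, o: acts.update(feat=o))
to_tensor = transforms.ToTensor()

def arc_image(radius, angle_deg, size=224):
    """Draw a short arc of the given radius passing through the image center."""
    img = Image.new("RGB", (size, size), "gray")
    draw = ImageDraw.Draw(img)
    cx, cy = size / 2, size / 2 + radius   # center below the midpoint
    box = [cx - radius, cy - radius, cx + radius, cy + radius]
    draw.arc(box, start=angle_deg - 40, end=angle_deg + 40, fill="black", width=4)
    return to_tensor(img).unsqueeze(0)

unit = 81                                  # hypothetical channel index
for radius in [20, 40, 80, 160]:
    with torch.no_grad():
        model(arc_image(radius, angle_deg=270))
    response = acts["feat"][0, unit].mean().item()
    print(f"radius={radius:4d}  mean activation={response:.3f}")
```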
\n\n[Curve Detectors](https://distill.pub/2020/circuits/curve-detectors/)\n\n", "bibliography_bib": [{"title": "Going deeper with convolutions"}, {"title": "Network dissection: Quantifying interpretability of deep visual representations"}, {"title": "Visualizing higher-layer features of a deep network"}, {"title": "Feature Visualization"}, {"title": "Deep inside convolutional networks: Visualising image classification models and saliency maps"}, {"title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images"}, {"title": "Inceptionism: Going deeper into neural networks"}, {"title": "Plug & play generative networks: Conditional iterative generation of images in latent space"}, {"title": "The Building Blocks of Interpretability"}, {"title": "TensorFlow-Slim: A lightweight library for defining, training and evaluating complex models in TensorFlow"}], "id": "77795ada44660483ec77756958351db8"} +{"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Communicating with Interactive Articles", "authors": ["Fred Hohman", "Matthew Conlen", "Jeffrey Heer", "Duen Horng (Polo) Chau"], "date_published": "2020-09-11", "abstract": " Computing has changed how people communicate. The transmission of news, messages, and ideas is instant. Anyone’s voice can be heard. In fact, access to digital communication technologies such as the Internet is so fundamental to daily life that their disruption by government is condemned by the United Nations Human Rights Council . But while the technology to distribute our ideas has grown in leaps and bounds, the interfaces have remained largely the same. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00028", "text": "\n\n### Contents\n\n[Introduction](#introduction)\n\n[Interactive Articles: Theory & Practice](#interactive-articles)\n\n* [Connecting People and Data](#connecting-people-and-data)\n\n* [Making Systems Playful](#making-systems-playful)\n\n* [Prompting Self-Reflection](#prompting-self-reflection)\n\n* [Personalizing Reading](#personalizing-reading)\n\n* [Reducing Cognitive Load](#reducing-cognitive-load)\n\n[Challenges for Authoring Interactives](#challenges)\n\n[Critical Reflections](#critical-reflections)\n\n[Looking Forward](#looking-forward)\n\n Computing has changed how people communicate. The transmission \n\nof news, messages, and ideas is instant. Anyone's voice can be heard. In\n\n fact, access to digital communication technologies such as the Internet\n\n is so fundamental to daily life that their disruption by government is \n\n \n\n Parallel to the development of the internet, researchers like \n\nAlan Kay and Douglas Engelbart worked to build technology that would \n\nempower individuals and enhance cognition. Kay imagined the Dynabook \n\n in the hands of children across the world. Engelbart, while best \n\nremembered for his \"mother of all demos,\" was more interested in the \n\nability of computation to augment human intellect .\n\n Neal Stephenson wrote speculative fiction that imagined interactive \n\npaper that could display videos and interfaces, and books that could \n\nteach and respond to their readers .\n\n \n\n More recent designs (though still historical by personal computing\n\n standards) point to a future where computers are connected and assist \n\npeople in decision-making and communicating using rich graphics and \n\n unfortunately, many others have not. 
The most popular publishing \n\nplatforms, for example WordPress and Medium, choose to prioritize social\n\n features and ease-of-use while limiting the ability for authors to \n\ncommunicate using the dynamic features of the web.\n\n \n\n In the spirit of previous computer-assisted cognition \n\ntechnologies, a new type of computational communication medium has \n\nemerged that leverages active reading techniques to make ideas more \n\naccessible to a broad range of people. These interactive articles build \n\n \n\n In this work, for the the first time, we connect the dots between \n\ninteractive articles such as those featured in this journal and \n\npublications like *The New York Times* and the techniques, \n\ntheories, and empirical evaluations put forth by academic researchers \n\nacross the fields of education, human-computer interaction, information \n\nvisualization, and digital journalism. We show how digital designers are\n\n operationalizing these ideas to create interactive articles that help \n\nboost learning and engagement for their readers compared to static \n\nalternatives.\n\n \n\n Conducting\n\n novel research requires deep understanding and expertise in a specific \n\narea. Once achieved, researchers continue contributing new knowledge for\n\n future researchers to use and build upon. Over time, this consistent \n\naddition of new knowledge can build up, contributing to what some have \n\ncalled research debt. Not everyone is an expert in every field, and it \n\ncan be easy to lose perspective and forget the bigger picture. Yet \n\nresearch should be understood by many. Interactive articles can be used \n\nto distill the latest progress in various research fields and make their\n\n methods and results accessible and understandable to a broader \n\naudience. #### Opportunities\n\n * Engage and excite broader audience with latest research progress\n\n* Remove research debt, onboard new researchers\n\n* Make faster and clearer research progress\n\n #### Challenges\n\n * No clear incentive structure for researchers\n\n* Little funding for bespoke research dissemination and communication\n\n companion piece to a traditional research paper that uses interactive \n\nvisualizations to let readers adjust a machine learning model's behavior\n\n PhD thesis that contributes a programming language abstraction for \n\nunderstanding how programs access the context or environment in which \n\nthey execute, and walks readers through the work using two simple \n\n crash course in complex systems science, created by leading experts, \n\npractitioners, and students in the field, with accompanying interactive \n\n Interactive articles are applicable to variety of domains, such as \n\nresearch dissemination, journalism, education, and policy and decision \n\n Today there is a growing excitement around the use of interactive \n\narticles for communication since they offer unique capabilities to help \n\npeople learn and engage with complex ideas that traditional media lacks.\n\n After describing the affordances of interactive articles, we provide \n\ncritical reflections from our own experience with open-source, \n\ninteractive publishing at scale. We conclude with discussing practical \n\nchallenges and open research directions for authoring, designing, and \n\npublishing interactive articles.\n\n \n\n This style of communication — and the platforms which support \n\nit — are still in their infancy. When choosing where to publish this \n\nwork, we wanted the medium to reflect the message. 
Journals like *Distill*\n\n are not only pushing the boundaries of machine learning research but \n\nalso offer a space to put forth new interfaces for dissemination. This \n\nwork ties together the theory and practice of authoring and publishing \n\ninteractive articles. It demonstrates the power that the medium has for \n\nproviding new representations and interactions to make systems and ideas\n\n more accessible to broad audiences.\n\n \n\nInteractive Articles: Theory & Practice\n\n---------------------------------------\n\n Interactive articles draw from and connect many types of media, \n\nfrom static text and images to movies and animations. But in contrast to\n\n these existing forms, they also leverage interaction techniques such as\n\n details-on demand, belief elicitation, play, and models and simulations\n\n to enhance communication.\n\n \n\n While the space of possible designs is far too broad to be solved \n\nwith one-size-fits-all guidelines, by connecting the techniques used in \n\nthese articles back to underlying theories presented across disparate \n\nfields of research we provide a missing foundation for designers to use \n\nwhen considering the broad space of interactions that could be added to a\n\n born-digital article.\n\n \n\n We draw from a corpus of over sixty interactive articles to \n\nhighlight the breadth of techniques available and analyze how their \n\nauthors took advantage of a digital medium to improve the reading \n\nexperience along one or more dimensions, for example, by reducing the \n\noverall cognitive load, instilling positive affect, or improving \n\ninformation recall.\n\n \n\n| Title *arrow\\_downward* | Publication or Author | Tags | Year |\n\n| --- | --- | --- | --- |\n\n Because diverse communities create interactive content, this \n\nmedium goes by many different names and has not yet settled on a \n\n In newsrooms, data journalists, developers, and designers work together\n\n to make complex news and investigative reporting clear and engaging \n\nusing interactive stories . Educators\n\n use interactive textbooks as an alternative learning format to give \n\nstudents hands-on experience with learning material .\n\n \n\n Besides these groups, others such as academics, game developers, \n\nweb developers, and designers blend editorial, design, and programming \n\n While these all slightly differ in their technical approach and target \n\naudience, they all largely leverage the interactivity of the modern web.\n\n \n\n We focus on five unique affordances of interactive articles, \n\nlisted below. In-line videos and example interactive graphics are \n\npresented alongside this discussion to demonstrate specific techniques.\n\n \n\n### Connecting People and Data\n\n an audience which finds content to be aesthetically pleasing is more \n\nlikely to have a positive attitude towards it. This in turn means people\n\n will spend more time engaging with content and ultimately lead to \n\nimproved learning outcomes. 
While engagement itself may not be an end \n\ngoal of most research communications, the ability to influence both \n\naudience attitude and the amount of time that is spent is a useful lever\n\n to improve learning: we know from education research that both time \n\nspent and emotion are predictive of learning outcomes.\n\n \n\n Animations can also be used to improve engagement .\n\n While there is debate amongst researchers if animations in general are \n\nable to more effectively convey the same information compared to a well \n\n while the series of still images may be more effective for answering \n\nspecific questions like, \"Does a horse lift all four of its feet off the\n\n ground when it runs?\" watching the animation in slow motion gives the \n\nviewer a much more visceral sense of how it runs. A more modern example \n\n \n\n 1878, Eadweard Muybridge settled Leland Stanford's hotly debated \n\nquestion of whether all four feet of a horse lifted off the ground \n\nduring a trot using multiple cameras to capture motion in stop-motion \n\n Passively, animation can be used to add drama to a graphic \n\ndisplaying important information, but which readers may otherwise find \n\ndry. Scientific data which is inherently time varying may be shown using\n\n an animation to connect viewers more closely with the original data, as\n\n compared to seeing an abstracted static view. For example, Ed Hawkins \n\ndesigned \"Climate Spirals,\" which shows the average global temperature \n\nchange over time . This \n\npresentation of the data resonated with a large public audience, so much\n\n so that it was displayed at the opening ceremony at the 2016 Rio \n\nOlympics. In fact, many other climate change visualizations of this same\n\n dataset use animation to build suspense and highlight the recent spike \n\nin global temperatures .\n\n \n\n Preview,\n\n click to play\n\n Full Video.\n\n By adding variation over time, authors have access to a new \n\ndimension to encode information and an even wider design space to work \n\nin. Consider the animated graphic in *The New York Times* story \n\n\"Extensive Data Shows Punishing Reach of Racism for Black Boys,\" which \n\n by utilizing animation, it became possible for the authors to design a \n\nunit visualization in which each data point shown represented an \n\nindividual, reminding readers that the data in this story was about real\n\n peoples' lives.\n\n \n\n Unit visualizations have also been used to evoke empathy in \n\n Using person-shaped glyphs (as opposed to abstract symbols like circles\n\n or squares) has been shown not to produce additional empathic responses\n\n using visualizations. Correll argues that much of the power of \n\nvisualization comes from abstraction, but quantization stymies empathy .\n\n He instead suggests anthropomorphizing data, borrowing journalistic and\n\n rhetoric techniques to create novel designs or interventions to foster \n\nempathy in readers when viewing visualizations .\n\n \n\n Regarding the format of interactive articles, an ongoing debate \n\nwithin the data journalism community has been whether articles which \n\nutilize scroll-based graphics (scrollytelling) are more effective than \n\nthose which use step-based graphics (slideshows). McKenna et al. \n\n found that their study participants largely preferred content to be \n\ndisplayed with a step- or scroll-based navigation as opposed to \n\ntraditional static articles, but did not find a significant difference \n\nin engagement between the two layouts. 
In related work, Zhi et al. found\n\n that performance on comprehension tasks was better in slideshow layouts\n\n than in vertical scroll-based layouts .\n\n Both studies focused on people using desktop (rather than mobile) \n\ndevices. More work is needed to evaluate the effectiveness of various \n\nlayouts on mobile devices, however the interviews conducted by MckEnna \n\net al. suggest that additional features, such as supporting navigation \n\nthrough swipe gestures, may be necessary to facilitate the mobile \n\nreading experience.\n\n \n\nreaders play the role of a pirate commander, giving them a unique look \n\nat the economics that led to rise in piracy off the coast of Somalia. Playing\n\n Preview,\n\n click to play\n\n Full Video.\n\n \n\n the reading experience becomes closer to that of playing a game. For \n\nexample, the critically acclaimed explorable explanation \"Parable of the\n\n Polygons\" puts play at the center of the story, letting a reader \n\nmanually run an algorithm that is later simulated in the article to \n\ndemonstrate how a population of people with slight personal biases \n\nagainst diversity leads to social segregation .\n\n \n\n### Making Systems Playful\n\n Interactive articles utilize an underlying computational \n\ninfrastructure, allowing authors editorial control over the \n\ncomputational processes happening on a page. This access to computation \n\nallows interactive articles to engage readers in an experience they \n\ncould not have with traditional media. For example, in \"Drawing Dynamic \n\nVisualizations\", Victor demonstrates how an interactive visualization \n\ncan allow readers to build an intuition about the behavior of a system, \n\nleading to a fundamentally different understanding of an underlying \n\n \n\n Complex systems often require extensive setup to allow for proper \n\nstudy: conducting scientific experiments, training machine learning \n\nmodels, modeling social phenomenon, digesting advanced mathematics, and \n\nresearching recent political events, all require the configuration of \n\nsophisticated software packages before a user can interact with a system\n\n at all, even just to tweak a single parameter. This barrier to entry \n\ncan deter people from engaging with complex topics, or explicitly \n\nprevent people who do not have the necessary resources, for example, \n\ncomputer hardware for intense machine learning tasks. Interactive \n\narticles drastically lower these barriers.\n\n \n\n Science that utilizes physical and computational experiments \n\nrequires systematically controlling and changing parameters to observe \n\ntheir effect on the modeled system. In research, dissemination is \n\ntypically done through static documents, where various figures show and \n\ncompare the effect of varying particular parameters. However, efforts \n\nhave been made to leverage interactivity in academic publishing, \n\n gives readers control over the reporting of the research findings and \n\nshows great promise in helping readers both digest new ideas and learn \n\nabout existing fields that are built upon piles of research debt .\n\n \n\n Beyond reporting statistics, interactive articles are extremely \n\npowerful when the studied systems can be modeled or simulated in \n\nreal-time with interactive parameters without setup, e.g., in-browser \n\nsandboxes. Consider the example in [4](#simulation-vis)\n\n of a Boids simulation that models how birds flock together. 
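For readers curious what such a sandbox computes under the hood, here is a minimal Boids-style update step in Python. The three classic rules (cohesion, separation, alignment) and every constant below are illustrative assumptions; the embedded demo's actual rules and parameters may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_boids = 50                                   # the "Boid Count" slider
pos = rng.uniform(0, 100, size=(n_boids, 2))   # positions in a 100x100 world
vel = rng.uniform(-1, 1, size=(n_boids, 2))

def step(pos, vel, cohesion=0.005, separation=0.05, alignment=0.05, max_speed=2.0):
    new_vel = vel.copy()
    for i in range(n_boids):
        others = np.arange(n_boids) != i
        offsets = pos[others] - pos[i]
        dists = np.linalg.norm(offsets, axis=1)
        neighbors = dists < 20                 # perception radius
        if neighbors.any():
            # steer toward the local center of mass (cohesion)
            new_vel[i] += cohesion * offsets[neighbors].mean(axis=0)
            # steer toward the average heading of neighbors (alignment)
            new_vel[i] += alignment * (vel[others][neighbors].mean(axis=0) - vel[i])
        too_close = dists < 5
        if too_close.any():
            # steer away from boids that are crowding this one (separation)
            new_vel[i] -= separation * offsets[too_close].sum(axis=0)
        speed = np.linalg.norm(new_vel[i])
        if speed > max_speed:
            new_vel[i] *= max_speed / speed
    return (pos + new_vel) % 100, new_vel      # wrap around the world edges

for _ in range(100):
    pos, vel = step(pos, vel)
```

Exposing quantities like the boid count, perception radius, and rule weights as sliders is exactly the kind of parameter manipulation the embedded figure invites.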
Complex \n\nsystems such as these have many different parameters that change the \n\nresulting simulation. These sandbox simulations allow readers to play \n\nwith parameters to see their effect without worrying about technical \n\noverhead or other experimental consequences.\n\n \n\nInteract with live simulations—no setup required. This\n\n Boids visualization models the movement of a flock of birds, and \n\nexposes parameters that a reader can manipulate to change the behavior \n\nof the simulation. Boid Count \n\n At the top, drag the slider to change the number of boids in the \n\nsimulation. Underneath, adjust the different parameters to find \n\ninteresting configurations.\n\n This is a standout design pattern within interactive articles, and\n\n many examples exist ranging in complexity. \"How You Will Die\" visually \n\nsimulates the average life expectancy of different groups of people, \n\nwhere a reader can choose the gender, race, and age of a person .\n\n \"On Particle Physics\" allows readers to experiment with accelerating \n\ndifferent particles through electric and magnetic fields to build \n\nintuition behind electromagnetism foundations such as the Lorentz force \n\nand Maxwell's equations — the experiments backing these simulations \n\ncannot be done without multi-million dollar machinery .\n\n \"Should Prison Sentences Be Based On Crimes That Haven't Been Committed\n\n Yet?\" shows the outcome of calculating risk assessments for recidivism \n\nwhere readers adjust the thresholds for determining who gets parole .\n\n \n\n reader uses their own live video camera to train a machine learning \n\nimage classifier in-browser without any extra computational resources. Playing\n\n Preview,\n\n click to play\n\n Full Video.\n\n The dissemination of modern machine learning techniques has been \n\nbolstered by interactive models and simulations. Three articles, \"How to\n\n show the effect that hyperparameters and different dimensionality \n\nreduction techniques have on creating low dimensional embeddings of \n\nhigh-dimensional data. A popular approach is to demonstrate how machine \n\n Other examples are aimed at technical readers who wish to learn about \n\nspecific concepts within deep learning. Here, interfaces allow readers \n\nto choose model hyperparameters, datasets, and training procedures that,\n\n once selected, visualize the training process and model internals to \n\ninspect the effect of varying the model configuration .\n\n \n\n Interactive articles commonly communicate a single idea or concept\n\n using multiple representations. The same information represented in \n\ndifferent forms can have different impact. For example, in mathematics \n\noften a single object has both an algebraic and a geometric \n\nrepresentation. A clear example of this is the definition of a circle .\n\n Both are useful, inform one another, and lead to different ways of \n\nthinking. Examples of interactive articles that demonstrate this include\n\n various media publications' political election coverage that break down\n\n the same outcome in multiple ways, for example, by voter demographics, \n\ngeographical location, and historical perspective .\n\n \n\n as people can process information through both a visual channel and \n\nauditory channel simultaneously. Popular video creators such as \n\n3Blue1Brown and Primer \n\n exemplify these principles by using rich animation and simultaneous \n\nnarration to break down complex topics. 
These videos additionally take \n\nadvantage of the Redundancy Principle by including complementary \n\ninformation in the narration and in the graphics rather than repeating \n\nthe same information in both channels .\n\n \n\n Preview,\n\n click to play\n\n Full Video.\n\n While these videos are praised for their approachability and \n\nrich exposition, they are not interactive. One radical extension from \n\ntraditional video content is also incorporating user input into the \n\nvideo while narration plays. A series of these interactive videos on \n\n\"Visualizing Quaternions\" lets a reader listen to narration of a live \n\nanimation on screen, but at any time the viewer can take control of the \n\nvideo and manipulate the animation and graphics while simultaneously \n\nlistening to the narration .\n\n \n\n Utilizing multiple representations allows a reader to see \n\ndifferent abstractions of a single idea. Once these are familiar and \n\nknown, an author can build interfaces from multiple representations and \n\nlet readers interact with them simultaneously, ultimately leading to \n\ninteractive experiences that demonstrate the power of computational \n\ncommunication mediums. Next, we discuss such experiences where \n\ninteractive articles have transformed communication and learning by \n\nmaking live models and simulations of complex systems and phenomena \n\naccessible.\n\n \n\n### Prompting Self-Reflection\n\n Asking a student to reflect on material that they are studying and\n\n explain it back to themselves — a learning technique called \n\nself-explanation — is known to have a positive impact on learning \n\noutcomes . By generating explanations\n\n and refining them as new information is obtained, it is hypothesized \n\nthat a student will be more engaged with the processes which they are \n\nstudying . When writing for an \n\ninteractive environment, components can be included which prompt readers\n\n to make a prediction or reflection about the material and cause them to\n\n engage in self-explanation .\n\n \n\n While these prompts may take the form of text entry or other \n\nstandard input widgets, one of the most prominent examples of this \n\n In these visualizations, readers are prompted to complete a trendline \n\non a chart, causing them to generate an explanation based on their \n\ncurrent beliefs for why they think the trend may move in a certain \n\ndirection. Only after readers make their prediction are they shown the \n\nactual data. Kim et al. showed that using visualizations as a prompt is \n\nan effective way to encourage readers to engage in self explanation and \n\nimprove their recall of the information . [5](#you-draw-it)\n\n shows one these visualizations for CO₂ emissions from burning fossil \n\nfuels. After clicking and dragging to guess the trend, your guess will \n\nbe compared against the actual data.\n\n \n\nComplete the trend of CO₂ emissions from burning fossil fuels. Letting\n\n a reader first guess about data and only showing the ground truth \n\nafterwards challenges a reader's prior beliefs and has been shown to \n\nreaders are tasked to type the names of celebrities with challenging \n\nspellings. After submitting a guess, a visualization shows the reader's \n\nentry against everyone else's, scaled by the frequency of different \n\nspellings. 
Playing\n\n Preview,\n\n click to play\n\n Full Video.\n\n In the case of \"You Draw It,\" readers were also shown the \n\npredictions that others made, adding a social comparison element to the \n\nexperience. This additional social information was not shown to \n\nnecessarily be effective for improving recall .\n\n However, one might hypothesize that this social aspect may have other \n\nbenefits such as improving reader engagement, due to the popularity of \n\n \n\n Prompting readers to remember previously presented material, for\n\n example through the use of quizzes, can be an effective way to improve \n\n While testing may call to mind stressful educational experiences for \n\nmany, quizzes included in web articles can be low stakes: there is no \n\nneed to record the results or grade readers. The effect is enhanced if \n\nfeedback is given to the quiz-takers, for example by providing the \n\ncorrect answer after the user has recorded their response .\n\n \n\n Preview,\n\n click to play\n\n Full Video.\n\n assuming readers are willing to participate in the process. The idea of\n\n spaced repetition has been a popular foundation for memory building \n\napplications, for example in the Anki flash card system. More recently, \n\nauthors have experimented with building spaced repetition directly into \n\n \n\n### Personalizing Reading\n\n Content personalization — automatically modifying text and \n\nmultimedia based on a reader's individual features or input (e.g., \n\ndemographics or location) — is a technique that has been shown to \n\nincrease engagement and learning within readers and support behavioral change .\n\n The PersaLog system gives developers tools to build personalized \n\ncontent and presents guidelines for personalization based on user \n\nresearch from practicing journalists .\n\n Other work has shown that \"personalized spatial analogies,\" presenting \n\ndistance measurements in regions where readers are geographically \n\nfamiliar with, help people more concretely understand new distance \n\nmeasurements within news stories .\n\n \n\n reader enters their birthplace and birth year and is shown multiple \n\nvisualizations describing the impact of climate on their hometown. Playing\n\n Preview,\n\n click to play\n\n Full Video.\n\n Personalization alone has also been used as the standout feature \n\nof multiple interactive articles. Both \"How Much Hotter Is Your Hometown\n\n Than When You Were Born?\" and \"Human Terrain\" \n\n use a reader's location to drive stories relating to climate change and\n\n population densities respectively. Other examples ask for explicit \n\nreader input, such as a story that visualizes a reader's net worth to \n\nchallenge a reader's assumptions if they are wealthy or not (relative to\n\n In this visualization, professions are plotted to determine their \n\nlikelihood of being automated against their average annual wage. The \n\narticle encourages readers to use the search bar to type in their own \n\nprofession to highlight it against the others.\n\n \n\n An interactive medium has the potential to offer readers an \n\nexperience other than static, linear text. Non-linear stories, where a \n\nreader can choose their own path through the content, have the potential\n\n to provide a more personalized experience and focus on areas of user \n\n a technology focused news television program. 
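One lightweight way to reason about such branching content is as a directed graph of story segments. The sketch below is purely hypothetical (the segment names and choices are invented for illustration), but enumerating the paths through even a small graph makes the authoring burden discussed next concrete.

```python
# Hypothetical branching story: each segment maps to the choices a reader can take next.
story = {
    "intro":             ["context", "jump-to-data"],
    "context":           ["data-dive"],
    "jump-to-data":      ["data-dive"],
    "data-dive":         ["optimistic-ending", "cautionary-ending"],
    "optimistic-ending": [],
    "cautionary-ending": [],
}

def all_paths(graph, node="intro", prefix=()):
    """Enumerate every reading path an author has to account for."""
    prefix = prefix + (node,)
    if not graph[node]:                      # a terminal segment
        return [prefix]
    paths = []
    for nxt in graph[node]:
        paths.extend(all_paths(graph, nxt, prefix))
    return paths

print(len(all_paths(story)), "distinct reading paths")
```

Even this toy graph yields four distinct reading paths; real non-linear pieces branch far more, which is exactly the authoring challenge described next.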
Non-linear stories \n\npresent challenges for authors, as they must consider the myriad \n\npossible paths through the content, and consider the different possible \n\nexperiences that the audience would have when pursuing different \n\nbranches.\n\n \n\n [8](#matuschak2019quantum): In \"[Quantum Country](https://quantum.country/)\" , \n\nthe interactive textbook uses spaced repetition and allows a reader to \n\nopt-in and save their progress while reading through dense material and \n\nmathematical notion over time. Playing\n\n Preview,\n\n click to play\n\n Full Video.\n\n Another technique interactive articles often use is segmenting \n\ncontent into small pieces to be read in-between or alongside other \n\ngraphics. While we have already discussed cognitive load theory, the \n\nSegmenting Theory, the idea that complex lessons are broken into \n\nsmaller, bit-sized parts , also\n\n supports personalization within interactive articles. Providing a \n\nreader the ability to play, pause, and scrub content allows the reader \n\nto move at their own speed, comprehending the information at a speed \n\nthat works best for them. Segmenting also engages a reader's essential \n\nprocessing without overloading their cognitive system .\n\n \n\n Multiple studies have been conducted showing that learners perform\n\n better when information is segmented, whether it be only within an \n\nanimation or within an interface with textual descriptions .\n\n One excellent example of using segmentation and animation to \n\npersonalize content delivery is \"A Visual Introduction to Machine \n\nLearning,\" which introduces fundamental concepts within machine learning\n\n in bite-sized pieces, while transforming a single dataset into a \n\ntrained machine learning model . \n\nExtending this idea, in \"Quantum Country,\" an interactive textbook \n\ncovering quantum computing, the authors implemented a user account \n\nsystem, allowing readers to save their position in the text and consume \n\n \n\n### Reducing Cognitive Load\n\n Authors must calibrate the detail at which to discuss ideas and \n\ncontent to their readers expertise and interest to not overload them. \n\nWhen topics become multifaceted and complex, a balance must be struck \n\nbetween a high-level overview of a topic and its lower-level details. \n\nOne interaction technique used to prevent a cognitive overload within a \n\nreader is \"details-on-demand.\"\n\n \n\n Details-on-demand has become an ubiquitous design pattern. For \n\nexample, modern operating systems offer to fetch dictionary definitions \n\nwhen a word is highlighted. When applied to visualization, this \n\ntechnique allows users to select parts of a dataset to be shown in more \n\ndetail while maintaining a broad overview. This is particularly useful \n\nwhen a change of view is not required, so that users can inspect \n\nelements of interest on a point-by-point basis in the context of the \n\nwhole . Below we highlight areas \n\nwhere details-on-demand has been successfully applied to reduce the \n\namount of information present within an interface at once.\n\n \n\n#### Data Visualization\n\n Successful visualizations not only provide the base representations and\n\n techniques for these three steps, but also bridge the gaps between them\n\n . In practice, the \n\nsolidified standard for details-on-demand in data visualization \n\nmanifests as a tooltip, typically summoned on a cursor mouseover, that \n\npresents extra information in an overlay. 
Given that datasets often \n\ncontain multiple attributes, tooltips can show the other attributes that\n\n \n\n#### Illustration\n\n Details-on-demand is also used in illustrations, interactive \n\ntextbooks, and museum exhibits, where highlighted segments of a figure \n\ncan be selected to display additional information about the particular \n\nsegment. For example, in \"How does the eye work?\" readers can select \n\nsegments of an anatomical diagram of the human eye to learn more about \n\nspecific regions, e.g., rods and cones .\n\n Another example is \"Earth Primer,\" an interactive textbook on tablets \n\nthat allows readers to inspect the Earth's interior, surface, and biomes\n\n \n\n#### Mathematical Notation\n\n Formal mathematics, a historically static medium, can benefit from\n\n details-on-demand, for example, to elucidate a reader with intuition \n\nabout a particular algebraic term, present a geometric interpretation of\n\n an equation, or to help a reader retain high-level context while \n\n For example, in \"Why Momentum Really Works,\" equation layout is done \n\nusing Gestalt principles plus annotation to help a reader easily \n\nidentify specific terms. In \"Colorized\n\n Math Equations,\" the Fourier transform equation is presented in both \n\nmathematical notation and plain text, but the two are linked through a \n\nmouseover that highlights which term in the equation corresponds to \n\nwhich word in the text . \n\nAnother example that visualizes mathematics and computation is the \n\n\"Image Kernels\" tutorial where a reader can mouseover a real image and \n\nobserve the effect and exact computation for applying a filter over the \n\nimage .\n\n \n\n Instead of writing down long arithmetic sums, the interface allows\n\n one of Maxwell's equations is shown. Click the two buttons to reveal, \n\nor remind yourself, what each notation mark and variable represent.\n\n \n\nEnhancing mathematics design with annotation and interactivity. Working\n\n with equations requires a sharp working memory. Optional interactivity \n\ncan help people remember specific notation and variable defintions, only\n\n#### Text\n\n While not as pervasive, text documents and other long-form textual\n\n mediums have also experimented with letting readers choose a variable \n\nlevel of detail to read. This idea was explored as early as the 1960s in\n\n StretchText, a hypertext feature that allows a reader to reveal a more \n\ndescriptive or exhaustive explanation of something by expanding or \n\n One challenge that has limited this technique's adoption is the burden \n\nit places on authors to write multiple versions of their content. For \n\nexample, drag the slider in [9](#details-text)\n\n to read descriptions of the Universal Approximation Theorem in \n\nincreasing levels of detail. For other examples of details-on-demand for\n\n text, such as application in code documentation, see this small \n\ncollection of examples .\n\n \n\n networks can approximate any function that exists. However, we do not \n\nhave a guaranteed way to obtain such a neural network for every \n\nfunction.\n\n#### Previewing Content\n\n Details-on-demand can also be used as a method for previewing \n\ncontent without committing to another interaction or change of view. 
For\n\n example, when hovering over a hyperlink on Wikipedia, a preview card is\n\n shown that can contain an image and brief description; this gives \n\nreaders a quick preview of the topic without clicking through and \n\n within hypertext that present information about a particular topic in a\n\n location that does not obscure the source material. Both older and \n\nmodern preview techniques use perceptually-based animation and simple \n\ntooltips to ensure their interactions are natural and lightweight \n\nfeeling to readers .\n\n \n\nChallenges for Authoring Interactives\n\n-------------------------------------\n\n*If interactive articles provide clear benefits over other \n\nmediums for communicating complex ideas, then why aren't they more \n\nprevalent?* \n\n Unfortunately, creating interactive articles today is difficult. \n\nDomain-specific diagrams, the main attraction of many interactive \n\narticles, must be individually designed and implemented, often from \n\nscratch. Interactions need to be intuitive and performant to achieve a \n\nnice reading experience. Needless to say, the text must also be \n\nwell-written, and, ideally, seamlessly integrated with the graphics.\n\n \n\n The act of creating a successful interactive article is closer to \n\nbuilding a website than writing a blog post, often taking significantly \n\nmore time and effort than a static article, or even an academic \n\n Most interactive articles are created using general purpose \n\nweb-development frameworks which, while expressive, can be difficult to \n\nwork with for authors who are not also web developers. Even for expert \n\nweb developers, current tools offer lower levels of abstraction than may\n\n be desired to prototype and iterate on designs.\n\n \n\n can help authors start writing quickly and even enable rapid iteration \n\nthrough various designs (for example, letting an author quickly compare \n\nbetween sequencing content using a \"scroller\" or \"stepper\" based \n\nlayout). However, Idyll does not offer any design guidance, help authors\n\n think through where interactivity would be most effectively applied, \n\nnor highlight how their content could be improved to increase its \n\nreadability and memorability. For example, Idyll encodes no knowledge of\n\n the positive impact of self-explanation, instead it requires authors to\n\n be familiar with this research and how to operationalize it.\n\n \n\n To design an interactive article successfully requires a diverse \n\nset of editorial, design, and programming skills. While some individuals\n\n are able to author these articles on their own, many interactive \n\narticles are created by a collective team consisting of multiple members\n\n with specialized skills, for example, data analysts, scripters, \n\neditors, journalists, graphic designers, and typesetters, as outlined in\n\n requires one to clone its source code using git, install \n\nproject-specific dependencies using a terminal, and be comfortable \n\nediting HTML files. All of this complexity is incidental to task of \n\nediting text.\n\n \n\n Publishing to the web brings its own challenges: while interactive\n\n articles are available to anyone with a browser, they are burdened by \n\nrapidly changing web technologies that could break interactive content \n\nafter just a few years. 
For this reason, easy and accessible interactive\n\n article archival is important for authors to know their work can be \n\n Authoring interactive articles also requires designing for a diverse \n\nset of devices, for example, ensuring bespoke content can be adapted for\n\n desktop and mobile screen sizes with varying connection speeds, since \n\naccessing interactive content demands more bandwidth.\n\n \n\n There are other non-technical limitations for publishing \n\ninteractive articles. For example, in non-journalism domains, there is a\n\n mis-aligned incentive structure for authoring and publishing \n\ninteractive content: why should a researcher spend time on an \"extra\" \n\ninteractive exposition of their work when they could instead publish \n\nmore papers, a metric by which their career depends on? While different \n\ngroups of people seek to maximize their work's impact, legitimizing \n\ninteractive artifacts requires buy-in from a collective of communities.\n\n \n\n Making interactive articles accessible to people with disabilities\n\n is an open challenge. The dynamic medium exacerbates this problem \n\ncompared to traditional static writing, especially when articles combine\n\n multiple formats like audio, video, and text. Therefore, ensuring \n\ninteractive articles are accessible to everyone will require alternative\n\n modes of presenting content (e.g. text-to-speech, video captioning, \n\ndata physicalization, data sonification) and careful interaction design.\n\n \n\n It is also important to remember that not everything needs to be \n\ninteractive. Authors should consider the audience and context of their \n\nwork when deciding if use of interactivity would be valuable. In the \n\nworst case, interactivity may be distracting to readers or the \n\nfunctionality may go unused, the author having wasted their time \n\nimplementing it. However, even in a domain where the potential \n\n \n\nCritical Reflections\n\n--------------------\n\n We write this article not as media theorists, but as \n\npractitioners, researchers, and tool builders. While it has never been \n\neasier for writers to share their ideas online, current publishing tools\n\n largely support only static authoring and do not take full advantage of\n\n the fact that the web is a dynamic medium. We want that to change, and \n\nwe are not alone. Others from the explorable explanations community have\n\n identified design patterns that help share complex ideas through play .\n\n \n\n an annually published digital magazine that showcases the expository \n\npower that interactive dynamic media can have when effectively combined .\n\n In late 2018, we invited writers to respond to a call for proposals for\n\n our first issue focusing on exploring scientific and technological \n\nphenomena that stand to shape society at large. We sought to cover \n\ntopics that would benefit from using the interactive or otherwise \n\ndynamic capabilities of the web. Given the challenges of authoring \n\ninteractive articles, we did not ask authors to submit fully developed \n\npieces. Instead, we accepted idea submissions, and collaborated with the\n\n authors over the course of four months to develop the issue, offering \n\ntechnical, design, and editorial assistance collectively to the authors \n\nthat lacked experience in one of these areas. For example, we helped a \n\nwriter implement visualizations, a student frame a cohesive narrative, \n\nand a scientist recap history and disseminate to the public. 
Multiple \n\nviews from one article are shown in [10](#parametric).\n\n \n\n The article used techniques like animation, data visualizations, \n\nexplanatory diagrams, margin notes, and interactive simulations to \n\nexplain how biases occur in machine learning systems.\n\n We see *The Parametric Press* as a crucial connection between\n\n the often distinct worlds of research and practice. The project serves \n\nas a platform through which to operationalize the theories put forth by \n\neducation, journalism, and HCI researchers. Tools like Idyll which are \n\ndesigned in a research setting need to be validated and tested to ensure\n\n that they are of practical use; *The Parametric Press* facilitates\n\n this by allowing us to study its use in a real-world setting, by \n\nauthors who are personally motivated to complete their task of \n\nconstructing a high-quality interactive article, and only have secondary\n\n concerns and care about the tooling being used, if at all.\n\n \n\n \n\n| | Research | Practice |\n\n| --- | --- | --- |\n\n As researchers we can treat the project as a series of case \n\nstudies, where we were observers of the motivation and workflows which \n\nwere used to craft the stories, from their initial conception to their \n\npublication. Motivation to contribute to the project varied by author. \n\nWhere some authors had personal investment in an issue or dataset they \n\nwanted to highlight and raise awareness to broadly, others were drawn \n\ntowards the medium, recognizing its potential but not having the \n\nexpertise or support to communicate interactively. We also observed how \n\nresearch software packages like Apparatus , Idyll , and D3 \n\n fit into the production of interactive articles, and how authors must \n\ncombine these disparate tools to create an engaging experience for \n\nreaders. In one article, \"On Particle Physics,\" an author combined two \n\ntools in a way that allowed him to create and embed dynamic graphics \n\ndirectly into his article without writing any code beyond basic markup. \n\nOne of the creators of Apparatus had not considered this type of \n\n fantastic! Reading that article, I had no idea that Apparatus was used.\n\n This is a very exciting proof-of-concept for unconventional \n\nexplorable-explanation workflows.\"*\n\n We were able to provide editorial guidance to the authors drawing \n\non our knowledge of empirical studies done in the multimedia learning \n\nand information visualization communities to recommend graphical \n\nstructures and page layouts, helping each article's message be \n\ncommunicated most effectively. One of the most exciting outcomes of the \n\nproject is that we saw authors develop interactive communication skills \n\nlike any other skill: through continued practice, feedback, and \n\niteration. We also observed the challenges that are inherent in \n\npublishing dynamic content on the web and identified the need for \n\nimproved tooling in this area, specifically around the archiving of \n\ninteractive articles. Will an article's code still run a year from now? \n\nTen years from now? To address interactive content archival, we set up a\n\n system to publish a digital archive of all of our articles at the time \n\nthat they are first published to the site. At the top of each article on\n\n *The Parametric Press* is an archive link that allows readers to \n\ndownload a WARC (Web ARChive) file that can \"played back\" without \n\nrequiring any web infrastructure. 
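As a rough illustration of the format, and not a description of the Parametric Press pipeline, a page can be captured into a WARC record with the open-source warcio library along the following lines; the URL and filename here are placeholders.

```python
import requests
from warcio.warcwriter import WARCWriter
from warcio.statusandheaders import StatusAndHeaders

url = "https://example.com/interactive-article"   # placeholder URL

with open("article-archive.warc.gz", "wb") as output:
    writer = WARCWriter(output, gzip=True)
    resp = requests.get(url, stream=True)
    # Wrap the live HTTP response as a WARC "response" record.
    http_headers = StatusAndHeaders("200 OK", resp.raw.headers.items(),
                                    protocol="HTTP/1.0")
    record = writer.create_warc_record(url, "response",
                                       payload=resp.raw,
                                       http_headers=http_headers)
    writer.write_record(record)
```

A capture like this can later be replayed locally with tools such as pywb, without depending on the original site staying online.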
While our first iteration of the \n\nproject relied on ad-hoc solutions to these problems, we hope to show \n\nhow digital works such as ours can be published confidently knowing that\n\n they will be preserved indefinitely.\n\n \n\n As practitioners we pushed the boundaries of the current \n\ngeneration of tools designed to support the creation of interactive \n\narticles on the web. We found bugs and limitations in Idyll, a tool \n\nwhich was originally designed to support the creation of one-off \n\narticles that we used as a content management system to power an entire \n\nmagazine issue. We were forced to write patches and plugins to work \n\n We were also forced to craft designs under a more realistic set of \n\nconstraints than academics usually deal with: when creating a \n\nvisualization it is not enough to choose the most effective visual \n\nencodings, the graphics also had to be aesthetically appealing, adhere \n\nto a house style, have minimal impact on page load time and runtime \n\nperformance, be legible on both mobile and desktop devices, and not be \n\noverly burdensome to implement. Any extra hour spent implementing one \n\ngraphic was an hour that was not spent improving some other part of the \n\nissue, such as the clarity of the text, or the overall site design.\n\n \n\n There are relatively few outlets that have the skills, technology,\n\n and desire to publish interactive articles. From its inception, one of \n\nthe objectives of *The Parametric Press* is to showcase the new \n\nforms of media and publishing that are possible with tools available \n\ntoday, and inspire others to create their own dynamic writings. For \n\n told us he had wanted to write this interactive article for years yet \n\nnever had the opportunity, support, or incentive to create it. His \n\narticle drew wide interest and critical acclaim.\n\n \n\n We also wanted to take the opportunity as an independent \n\npublication to serve as a concrete example for others to follow, to \n\nrepresent a set of best practices for publishing interactive content. To\n\n that end, we made available all of the software that runs the site, \n\nincluding reusable components, custom data visualizations, and the \n\npublishing engine itself.\n\n \n\nLooking Forward\n\n---------------\n\n A diverse community has emerged to meet these challenges, \n\nexploring and experimenting with what interactive articles could be. The\n\n [Explorable Explanations community](https://explorabl.es/) \n\nis a \"disorganized 'movement' of artists, coders & educators who \n\nwant to reunite play and learning.\" Their online hub contains 170+ \n\ninteractive articles on topics ranging from art, natural sciences, \n\nsocial sciences, journalism, and civics. The curious can also find \n\ntools, tutorials, and meta-discussion around learning, play, and \n\n \n\n Many interactive articles are self-published due to a lack of \n\nplatforms that support interactive publishing. Creating more outlets \n\nthat allow authors to publish interactive content will help promote \n\ntheir development and legitimization. The few existing examples, \n\n help, but currently target a narrow group of authors, namely those who \n\nhave programming skills. Such platforms should also provide clear paths \n\nto submission, quality and editorial standards, and authoring \n\nguidelines. For example, news outlets have clear instructions for \n\npitching written pieces, yet this is under-developed for interactive \n\narticles. 
Lastly, there is little funding available to support the \n\ndevelopment of interactive articles and the tools that support them. \n\nResearchers do not receive grants to communicate their work, and \n\npractitioners outside of the largest news outlets are not able to afford\n\n the time and implementation investment. Providing more funding for \n\nenabling interactive articles incentivizes their creation and can \n\ncontribute to a culture where readers expect digital communications to \n\nbetter utilize the dynamic medium.\n\n \n\n We have already discussed the breadth of skills required to author\n\n an interactive article. Can we help lower the barrier to entry? While \n\nthere have been great, practical strides in this direction ,\n\n there is still opportunity for creating tools to design, develop, and \n\nevaluate interactive articles in the wild. Specific features should \n\ninclude supporting mobile-friendly adaptations of interactive graphics \n\n(for example ),\n\n creating content for different platforms besides just the web, and \n\ntools that allow people to create interactive content without code.\n\n \n\n The usefulness of interactive articles is predicated on the \n\nassumption that these interactive articles actually facilitate \n\ncommunication and learning. There is limited empirical evaluation of the\n\n effectiveness of interactive articles. The problem is exacerbated by \n\nthe fact that large publishers are unwilling to share internal metrics, \n\n provided one of the few available data points, stating that only a \n\nfraction of readers interact with non-static content, and suggested that\n\n designers should move away from interactivity .\n\n However, other research found that many readers, even those on mobile \n\ndevices, are interested in utilizing interactivity when it is a core \n\npart of the article's message . This statement from *The New York Times*\n\n has solidified as a rule-of-thumb for designers and many choose not to \n\nutilize interactivity because of it, despite follow-up discussion that \n\ncontextualizes the original point and highlights scenarios where \n\ninteractivity can be beneficial .\n\n This means designers are potentially choosing a suboptimal presentation\n\n of their story due to this anecdote. 
More research is needed in order \n\nto identify the cases in which interactivity is worth the cost of \n\ncreation.\n\n \n\n We believe in the power and untapped potential of interactive \n\narticles for sparking reader's desire to learn and making complex ideas \n\naccessible and understandable to all.\n\n \n\n", "bibliography_bib": [{"title": "Report on the role of digital access providers"}, {"title": "A personal computer for children of all ages"}, {"title": "Augmenting human intellect: A conceptual framework"}, {"title": "The diamond age"}, {"title": "The knowledge navigator"}, {"title": "Getting it out of our system"}, {"title": "PLATO"}, {"title": "PhET interactive simulations"}, {"title": "Explorable explanations"}, {"title": "How y’all, youse and you guys talk"}, {"title": "Snow fall: The avalanche at tunnel creek"}, {"title": "Why outbreaks like coronavirus spread exponentially, and how to 'flatten the curve'"}, {"title": "Attacking discrimination with smarter machine learning"}, {"title": "Coeffects: Context-aware programming languages"}, {"title": "Complexity explained"}, {"title": "What's really warming the world"}, {"title": "You draw it: How family income predicts children’s college chances"}, {"title": "The Uber Game"}, {"title": "Let's learn about waveforms"}, {"title": "The book of shaders"}, {"title": "EconGraphs"}, {"title": "To build a better ballot"}, {"title": "The atlas of redistricting"}, {"title": "Is it better to rent or buy"}, {"title": "Explorable explanation"}, {"title": "Increasing the transparency of research papers with explorable multiverse analyses"}, {"title": "Workshop on Visualization for AI Explainability"}, {"title": "Exploranation: A new science communication paradigm"}, {"title": "Research debt"}, {"title": "More than telling a story: Transforming data into visually shared stories"}, {"title": "\"Concrete\" computer manipulatives in mathematics education"}, {"title": "Cybertext: Perspectives on ergodic literature"}, {"title": "Interactive non-fiction: Towards a new approach for storytelling in digital journalism"}, {"title": "Active essays on the web"}, {"title": "Newsgames: Journalism at play"}, {"title": "Simply bells and whistles? Cognitive effects of visual aesthetics in digital longforms"}, {"title": "Learning as a function of time"}, {"title": "Emotional design in multimedia learning"}, {"title": "Hooked on data videos: assessing the effect of animation and pictographs on viewer engagement"}, {"title": "Animation: Can it facilitate?"}, {"title": "Animated transitions in statistical data graphics"}, {"title": "Hypothetical outcome plots outperform error bars and violin plots for inferences about reliability of variable ordering"}, {"title": "La perception de la causalité.(Etudes Psychol.), Vol. VI"}, {"title": "The illusion of life: Disney animation"}, {"title": "The horse in motion"}, {"title": "Emergent tool use from multi-agent autocurricula"}, {"title": "Climate spirals"}, {"title": "Earth's relentless warming sets a brutal new record in 2017"}, {"title": "Global temperature"}, {"title": "It's not your imagination. 
Summers are getting hotter."}, {"title": "Extensive data shows punishing reach of racism for black boys"}, {"title": "Disagreements"}, {"title": "Gun deaths in america"}, {"title": "The fallen of World War II"}, {"title": "Showing people behind data: Does anthropomorphizing visualizations elicit more empathy for human rights data?"}, {"title": "Beyond memorability: Visualization recognition and recall"}, {"title": "What makes a visualization memorable?"}, {"title": "What if the data visualization is actually people"}, {"title": "Ethical dimensions of visualization research"}, {"title": "A walk among the data"}, {"title": "Visual narrative flow: Exploring factors shaping data visualization story reading experiences"}, {"title": "Linking and layout: Exploring the integration of text and visualization in storytelling"}, {"title": "Video games and learning"}, {"title": "Cutthroat Capitalism: The Game"}, {"title": "Combining software games with education: Evaluation of its educational effectiveness"}, {"title": "Narrative visualization: Telling stories with data"}, {"title": "Parable of the polygons"}, {"title": "Drawing dynamic visualizations"}, {"title": "How to read a book: The classic guide to intelligent reading"}, {"title": "Scientific communication as sequential art"}, {"title": "How you will die"}, {"title": "On Particle Physics"}, {"title": "Should prison sentences be based on crimes that haven’t been committed yet"}, {"title": "How to use t-SNE effectively"}, {"title": "The beginner's guide to dimensionality reduction"}, {"title": "Understanding UMAP"}, {"title": "Tensorflow.js: Machine learning for the web and beyond"}, {"title": "Designing (and learning from) a teachable machine"}, {"title": "Experiments in handwriting with a neural network"}, {"title": "Direct-manipulation visualization of deep networks"}, {"title": "Gan lab: Understanding complex deep generative models using interactive visual experimentation"}, {"title": "Using artificial intelligence to augment human intelligence"}, {"title": "Who will win the presidency"}, {"title": "Who will be president"}, {"title": "Live results: Presidential election"}, {"title": "Multimedia learning"}, {"title": "3Blue1Brown"}, {"title": "Primer"}, {"title": "Revising the redundancy principle in multimedia learning"}, {"title": "Visualizing quaternions: An explorable video series"}, {"title": "Self-explanations: How students study and use examples in learning to solve problems"}, {"title": "Self-explaining expository texts: The dual processes of generating inferences and repairing mental models"}, {"title": "Explaining the gap: Visualizing one's predictions improves recall and comprehension of data"}, {"title": "You draw it: Just how bad is the drug overdose epidemic"}, {"title": "You draw it: What got better or worse during Obama's presidency"}, {"title": "They draw it!"}, {"title": "Data through others' eyes: The impact of visualizing others' expectations on visualization interpretation"}, {"title": "The Gyllenhaal experiment"}, {"title": "How do you draw a circle? 
We analyzed 100,000 drawings to show how culture shapes our instincts"}, {"title": "Recitation as a factor in memorizing"}, {"title": "The power of testing memory: Basic research and implications for educational practice"}, {"title": "Khan Academy"}, {"title": "The instructional effect of feedback in test-like events"}, {"title": "The critical importance of retrieval for learning"}, {"title": "How to remember anything for forever-ish"}, {"title": "Quantum country"}, {"title": "Intrinsic motivation and the process of learning: Beneficial effects of contextualization, personalization, and choice."}, {"title": "Authoring and generation of individualized patient education materials"}, {"title": "PersaLog: Personalization of news article content"}, {"title": "Generating personalized spatial analogies for distances and areas"}, {"title": "How much hotter is your hometown than when you were born"}, {"title": "Human terrain"}, {"title": "Are you rich? This income-rank quiz might change how you see yourself"}, {"title": "Quiz: Let us predict whether you’re a democrat or a republican"}, {"title": "Find Out If Your Job Will Be Automated"}, {"title": "Booze calculator: What's your drinking nationality"}, {"title": "Click 1,000: How the pick-your-own-path episode was made"}, {"title": "E-learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning"}, {"title": "Multimedia learning in an interactive self-explaining environment: What works in the design of agent-based microworlds?"}, {"title": "Pictorial aids for learning by doing in a multimedia geology simulation game."}, {"title": "A visual introduction to machine learning"}, {"title": "Beyond guidelines: What can we learn from the visual information seeking mantra?"}, {"title": "The eyes have it: A task by data type taxonomy for information visualizations"}, {"title": "Information visualization and visual data mining"}, {"title": "How the recession shaped the economy, in 255 charts"}, {"title": "How does the eye work"}, {"title": "Earth primer"}, {"title": "Progressive growing of gans for improved quality, stability, and variation"}, {"title": "A style-based generator architecture for generative adversarial networks"}, {"title": "Why momentum really works"}, {"title": "Colorized math equations"}, {"title": "Image kernels"}, {"title": "Stretchtext – hypertext note #8"}, {"title": "On variable level-of-detail documents"}, {"title": "Call for proposals winter/spring 2019"}, {"title": "A UI that lets readers control how much information they see"}, {"title": "Wikipedia Preview Card"}, {"title": "Fluid links for informed and incremental link transitions"}, {"title": "Reading and writing fluid hypertext narratives"}, {"title": "Idyll: A markup language for authoring and publishing interactive articles on the web"}, {"title": "Apparatus: A hybrid graphics editor and programming environment for creating interactive diagrams"}, {"title": "Observable"}, {"title": "LOOPY: a tool for thinking in systems"}, {"title": "Webstrates: shareable dynamic media"}, {"title": "Neural networks and deep learning"}, {"title": "How I make explorable explanations"}, {"title": "Explorable explanations: 4 more design patterns"}, {"title": "Emerging and recurring data-driven storytelling techniques: Analysis of a curated collection of recent stories"}, {"title": "Issue 01: Science & Society"}, {"title": "The myth of the impartial machine"}, {"title": "D3 data-driven documents"}, {"title": "Launching the Parametric Press"}, {"title": 
"Techniques for flexible responsive visualization design"}, {"title": "A Comparative Evaluation of Animation and Small Multiples for Trend Visualization on Mobile Phones"}, {"title": "Visualizing ranges over time on mobile phones: a task-based crowdsourced evaluation"}, {"title": "Why we are doing fewer interactives"}, {"title": "Capture & analysis of active reading behaviors for interactive articles on the web"}, {"title": "In defense of interactive graphics"}], "id": "b7aff5fd90c3668d2cb039c8b69af2a3"} +{"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Weight Banding", "authors": ["Michael Petrov", "Chelsea Voss", "Ludwig Schubert", "Nick Cammarata", "Gabriel Goh", "Chris Olah"], "date_published": "2021-04-08", "abstract": " This article is part of the Circuits thread, an experimental format collecting invited short articles and critical commentary delving into the inner workings of neural networks. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00024.009", "text": "\n\n![](Weight%20Banding_files/multiple-pages.svg)\n\n an experimental format collecting invited short articles and critical \n\ncommentary delving into the inner workings of neural networks.\n\n \n\nIntroduction\n\n------------\n\n Open up any ImageNet conv net and look at the weights in the last \n\nlayer. You'll find a uniform spatial pattern to them, dramatically \n\nunlike anything we see elsewhere in the network. No individual weight is\n\n unusual, but the uniformity is so striking that when we first \n\ndiscovered it we thought it must be a bug. Just as different biological \n\ntissue types jump out as distinct under a microscope, the weights in \n\nthis final layer jump out as distinct when visualized with NMF. We call \n\nthis phenomenon *weight banding*.\n\n \n\nMicroscope slides of different tissues\n\nMuscle tissue\n\nEpithelial tissue\n\nTypical layer\n\nLayer with weight banding\n\nNMF of weights at different layers\n\n \n\n and small circuits. In contrast, weight banding is an example of what \n\nwe call a \"structural phenomenon,\" a larger-scale pattern in the \n\ncircuits and features of a neural network. Other examples of structural \n\n In the case of weight banding, we think of it as a structural \n\nphenomenon because the pattern appears at the scale of an entire layer.\n\n \n\n \n\n In addition to describing weight banding, we'll explore when and \n\nwhy it occurs. We find that there appears to be a causal link between \n\nwhether a model uses global average pooling or fully connected layers at\n\n the end, suggesting that weight banding is part of an algorithm for \n\npreserving information about larger scale structure in images. \n\nEstablishing causal links like this is a step towards closing the loop \n\nbetween practical decisions in training neural networks and the \n\nphenomena we observe inside them.\n\n \n\nWhere weight banding occurs\n\n---------------------------\n\n \n\n In order to see the bands, we need to visualize the spatial \n\nstructure of the weights, as shown below. We typically do this using \n\n Visualizing Weights. For each neuron, we take the weights connecting it\n\n to the previous layer. We then use NMF to reduce the number of \n\ndimensions corresponding to channels in the previous layer down to 3 \n\nfactors, which we can map to RGB channels. Since which factor is which \n\nis arbitrary, we use a heuristic to make the mapping consistent across \n\nneurons. 
This reveals a very prominent pattern of horizontalThe\n\n stripes aren't always perfectly horizontal - sometimes they exhibit a \n\nslight preference for extra weight in the center of the central band, as\n\n seen in some examples below. stripes.\n\n \n\n[2](#figure-2).\n\n These common networks have pooling operations before their fully\n\n connected layers and consistently show banding at their last\n\n convolutional layers.\n\n86 weights hidden\n\nInceptionV1 \n\nmixed 5b\n\n470 weights hidden\n\nResNet50 \n\nblock 4 unit 3\n\n470 weights hidden\n\nVGG19 \n\nconv5\n\n Interestingly, AlexNet does not exhibit this phenomenon.\n\n \n\n[3](#figure-3).\n\n AlexNet does not have a pooling operation before its fully connected\n\n layers and does not show banding at its last convolutional\n\n layer.\n\n \n\n To make it easier to look for groups of similar weights, we\n\n sorted the neurons at each layer by similarity of their reduced\n\n forms.\n\n \n\n160 weights hidden\n\nAlexNet \n\nconv5\n\n Unlike most modern vision models, AlexNet does not use global \n\naverage pooling. Instead, it has a fully connected layer directly \n\nconnected to its final convolutional layer, allowing it to treat \n\ndifferent positions differently. If one looks at the weights of this \n\nfully connected layer, the weights strongly vary as a function of the \n\nglobal y position.\n\n \n\n The horizontal stripes in weight banding mean that the filters \n\ndon't care about horizontal position, but are strongly encoding relative\n\n vertical position. Our hypothesis is that weight banding is a learned \n\nway to preserve spatial information as it gets lost through various \n\npooling operations.\n\n \n\n In the next section, we will construct our own simplified vision \n\nnetwork and investigate variations on its architecture in order to \n\nunderstand exactly which conditions are necessary to produce weight \n\nbanding.\n\n \n\nWhat affects banding\n\n--------------------\n\n We'd like to understand which architectural decisions affect \n\nweight banding. This will involve trying out different architectures and\n\n seeing whether weight banding persists.\n\n Since we will only want to change a single architectural parameter\n\n at a time, we will need a consistent baseline to apply our \n\nmodifications to. Ideally, this baseline would be as simple as possible.\n\n \n\n We created a simplified network architecture with 6 groups of \n\nconvolutions, separated by L2 pooling layers. At the end, it has a \n\nglobal average pooling operation that reduces the input to 512 values \n\nthat are then fed to a fully connected layer with 1001 outputs.\n\n \n\n[4](#figure-4). Our simplified vision network architecture.\n\n \n\n This simplified network reliably produces weight banding in its last layer\n\n (and usually in the two preceding layers as well).\n\n \n\n \n\n416 weights hidden\n\nsimplified model (`5b`), baseline\n\n In the rest of this section, we'll experiment with modifying this \n\narchitecture and its training settings and seeing if weight banding is \n\npreserved.\n\n \n\n### Rotating images 90 degrees\n\n To rule out bugs in training or some strange numerical problem, we decided\n\n to do a training run with the input rotated by 90 degrees. This sanity check\n\n yielded a very clear result showing *vertical* banding in the resulting\n\n \n\n416 weights hidden\n\n[6](#figure-6). 
simplified model (`5b`), 90º rotation\n\n### Fully connected layer without global average pooling\n\n We remove the global average pooling step in our simplified model,\n\n allowing the fully connected layer to see all spatial positions at \n\nonce. This model did **not** exhibit weight banding, but \n\nused 49x more parameters in the fully connected layer and overfit to the\n\n training set. This is pretty strong evidence that the use of aggressive\n\n pooling after the last convolutions in common models causes weight \n\nbanding. This result is also consistent with AlexNet not showing this \n\nbanding phenomenon (since it also does not have global average pooling).\n\n \n\n416 weights hidden\n\n### Average pooling along x-axis only\n\n We average out each row of the final convolutional layer, so that \n\nvertical absolute position is preserved but horizontal absolute position\n\n is not.Since this model has 7x7 spatial \n\npositions in the final convolutional layer, this modification increases \n\nthe number of parameters in the fully connected layer by 7x, but not the\n\n \n\n[8](#figure-8).\n\n \n\n416 weights hidden\n\n simplified model (`5a`), x-axis pooling\n\n416 weights hidden\n\nsimplified model (`5b`), x-axis pooling\n\n### Approaches where weight banding persisted\n\n \n\n* Global average pooling with learned spatial masks. By applying \n\nseveral different spatial masks and global average pooling, we can allow\n\n the model to preserve some spatial information. Intuitively, each mask \n\ncan select for a different subset of spatial positions.\n\n We tried experimental runs using each of 3, 5, or 16 different \n\nmasks.\n\n The masks that were learned corresponded to large-scale global \n\nstructure, but banding was still strongly present.\n\n `5b`.\n\n* Adding a 7x7x512 mask with learned weights after `5b`. The hope was that a\n\n mask would help each `5b` neuron focus on the right parts of the 7x7 image\n\n without a convolution.\n\n* Adding CoordConv channels to the inputs\n\n of `5a` and `5b`.\n\n* Splitting the output of `5b` into 16 7x7x32 channel groups and feeding\n\n concatenated into the input of the final 1001-class fully connected layer.\n\n by VGG).\n\n \n\nConfirming banding interventions in common architectures\n\n--------------------------------------------------------\n\n In the previous section, we observed two interventions that \n\nclearly affected weight banding: rotating the dataset by 90º and \n\nremoving the global average pooling before the fully connected layer.\n\n To confirm that these effects hold beyond our simplified model, we\n\n decided to make the same interventions to three\n\n common architectures (InceptionV1, ResNet50, VGG19) and train them\n\n from\n\n scratch.\n\n \n\nWith one exception, the effect holds in all three models.\n\n#### InceptionV1\n\n[9](#figure-9). Inception V1, layer `mixed_5c`, 5x5 convolution\n\n86 weights hidden\n\nbaseline\n\n86 weights hidden\n\n90º rotation\n\n86 weights hidden\n\nglobal average pooling layer removed\n\n#### ResNet50\n\n[10](#figure-10). ResNet50, last 3x3 convolutional layer\n\n470 weights hidden\n\nbaseline\n\n470 weights hidden\n\n90º rotation\n\n470 weights hidden\n\nglobal average pooling layer removed\n\n#### VGG19\n\n[11](#figure-11). 
VGG19, last 3x3 convolutional layer.\n\n470 weights hidden\n\nbaseline\n\n470 weights hidden\n\n90º rotation\n\n470 weights hidden\n\nglobal average pooling layer removed\n\n The one exception is VGG19, where the removal of the pooling \n\noperation before its set of fully connected layers did not eliminate \n\nweight banding as expected; these weights look fairly similar to the \n\nbaseline. However, it clearly responds to rotation.\n\n \n\nConclusion\n\n----------\n\n Once we really understand neural networks, one would expect us to \n\nbe able to leverage that understanding to design more effective neural \n\nnetworks architectures. Early papers, like Zeiler et al,\n\n emphasized this quite strongly, but it's unclear whether there have yet\n\n been any significant successes in doing this. This hints at significant\n\n limitations in our work. It may also be a missed opportunity: it seems \n\nlikely that if interpretability was useful in advancing neural network \n\ncapabilities, it would become more integrated into other research and \n\nget attention from a wider range of researchers.\n\n \n\n It's unclear whether weight banding is \"good\" or \"bad.\"On\n\n one hand, the 90º rotation experiment shows that weight banding is a \n\nproduct of the dataset and is encoding useful information into the \n\nweights. However, if spatial information could flow through the network \n\nin a different, more efficient way, then perhaps the channels would be \n\nable to focus on encoding relationships between features without needing\n\n to track spatial positions. We don't have any \n\nrecommendation or action to take away from it. However, it is an example\n\n of a consistent link between architecture decisions and the resulting \n\ntrained weights. It has the right sort of flavor for something that \n\ncould inform architectural design, even if it isn't particularly \n\nactionable itself.\n\n \n\n More generally, weight banding is an example of a large-scale \n\nstructure. One of the major limitations of circuits has been how \n\nsmall-scale it is. We're hopeful that larger scale structures like \n\nweight banding may help circuits form a higher-level story of neural \n\nnetworks.\n\n \n\n![](Weight%20Banding_files/multiple-pages.svg)\n\n an experimental format collecting invited short articles and critical \n\ncommentary delving into the inner workings of neural networks.\n\n \n\n", "bibliography_bib": [{"title": "Muscle Tissue: Cardiac Muscle"}, {"title": "Epithelial Tissues: Stratified Squamous Epithelium"}, {"title": "Deconvolution and Checkerboard Artifacts"}, {"title": "Going Deeper with Convolutions"}, {"title": "Deep Residual Learning for Image Recognition"}, {"title": "Very Deep Convolutional Networks for Large-Scale Image Recognition"}, {"title": "ImageNet Classification with Deep Convolutional Neural Networks"}, {"title": "An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution"}, {"title": "Visualizing and Understanding Convolutional Networks"}], "id": "f0f6b05c6cf4c0438680f0444b8b1f3d"} +{"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Exploring Bayesian Optimization", "authors": ["Apoorv Agnihotri", "Nipun Batra"], "date_published": "2020-05-05", "abstract": " Many modern machine learning algorithms have a large number of hyperparameters. To effectively use these algorithms, we need to pick good hyperparameter values. 
In this article, we talk about Bayesian Optimization, a suite of techniques often used to tune hyperparameters. More generally, Bayesian Optimization can be used to optimize any black-box function. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00026", "text": "\n\n Many modern machine learning algorithms have a large number of \n\nhyperparameters. To effectively use these algorithms, we need to pick \n\ngood hyperparameter values.\n\n In this article, we talk about Bayesian Optimization, a suite of \n\ntechniques often used to tune hyperparameters. More generally, Bayesian \n\nOptimization can be used to optimize any black-box function.\n\n \n\nMining Gold!\n\n============\n\n For now, we assume that the gold is distributed about a line. We \n\nwant to find the location along this line with the maximum gold while \n\nonly drilling a few times (as drilling is expensive).\n\n \n\n \n\n![](Exploring%20Bayesian%20Optimization_files/GT.svg)\n\n Initially, we have no idea about the gold distribution. We can \n\nlearn the gold distribution by drilling at different locations. However,\n\n \n\n We now discuss two common objectives for the gold mining problem.\n\n \n\n* **Problem 1: Best Estimate of Gold Distribution (Active Learning)** \n\n **Active Learning**.\n\n* **Problem 2: Location of Maximum Gold (Bayesian Optimization)** \n\n In this problem, we want to find the location of the maximum \n\ngold content. We, again, can not drill at every location. Instead, we \n\n **Bayesian Optimization**.\n\n We will soon see how these two problems are related, but not the same.\n\n \n\nActive Learning\n\n---------------\n\n For many machine learning problems, unlabeled data is readily \n\navailable. However, labeling (or querying) is often expensive. As an \n\nexample, for a speech-to-text task, the annotation requires expert(s) to\n\n label words and sentences manually. Similarly, in our gold mining \n\nproblem, drilling (akin to labeling) is expensive. \n\n \n\n Active learning minimizes labeling costs while maximizing modeling\n\n accuracy. While there are various methods in active learning \n\nliterature, we look at **uncertainty reduction**. This \n\nmethod proposes labeling the point whose model uncertainty is the \n\nhighest. Often, the variance acts as a measure of uncertainty.\n\n \n\n for the values our function takes elsewhere. This surrogate should be \n\nflexible enough to model the true function. Using a Gaussian Process \n\n(GP) is a common choice, both because of its flexibility and its ability\n\n to give us uncertainty estimates\n\n \n\n Gaussian Process supports setting of priors by using specific \n\nkernels and mean functions. One might want to look at this excellent \n\nDistill article on Gaussian Processes to learn more. \n\n \n\n .\n\n \n\n \n\n .\n\n \n\n![](Exploring%20Bayesian%20Optimization_files/prior2posterior.png)\n\n Each new data point updates our surrogate model, moving it closer \n\nto the ground truth. The black line and the grey shaded region indicate \n\n \n\n \n\n However, we want to minimize the number of evaluations. Thus, we \n\nshould choose the next query point \"smartly\" using active learning. \n\nAlthough there are many ways to pick smart points, we will be picking \n\nthe most uncertain one.\n\n \n\n This gives us the following procedure for Active Learning:\n\n \n\n2. Train on the new training set\n\n3. 
Go to #1 till convergence or budget elapsed\n\n \n\n![](Exploring%20Bayesian%20Optimization_files/0_004.png)\n\nThe visualization shows that one can estimate the true \n\ndistribution in a few iterations. Furthermore, the most uncertain \n\npositions are often the farthest points from the current evaluation \n\n \n\nBayesian Optimization\n\n---------------------\n\n In the previous section, we picked points in order to determine an\n\n accurate model of the gold content. But what if our goal is simply to \n\nfind the location of maximum gold content? Of course, we could do active\n\n learning to estimate the true function accurately and then find its \n\nmaximum. But that seems pretty wasteful — why should we use evaluations \n\nimproving our estimates of regions where the function expects low gold \n\ncontent when we only care about the maximum?\n\n \n\n This is the core question in Bayesian Optimization: \"Based on what\n\n we know so far, which point should we evaluate next?\" Remember that \n\nevaluating each point is expensive, so we want to pick carefully! In the\n\n active learning case, we picked the most uncertain point, exploring the\n\n function. But in Bayesian Optimization, we need to balance exploring \n\nuncertain regions, which might unexpectedly have high gold content, \n\nagainst focusing on regions we already know have higher gold content (a \n\nkind of exploitation).\n\n \n\n We make this decision with something called an acquisition \n\nfunction. Acquisition functions are heuristics for how desirable it is \n\n \n\n This brings us to how Bayesian Optimization works. At every step, \n\nwe determine what the best point to evaluate next is according to the \n\nacquisition function by optimizing it. We then update our model and \n\nrepeat this process to determine the next point to evaluate.\n\n \n\n You may be wondering what's \"Bayesian\" about Bayesian Optimization\n\n if we're just optimizing these acquisition functions. Well, at every \n\nstep we maintain a model describing our estimates and uncertainty at \n\n \n\n### Formalizing Bayesian Optimization\n\n We present the general constraints in Bayesian Optimization and \n\n * [Youtube talk](https://www.youtube.com/watch?v=c4KKvyWW_Xk),\n\n.\n\n| General Constraints | Constraints in Gold Mining example |\n\n| --- | --- |\n\n| fff's feasible set AAA is simple,\n\n| fff is continuous but lacks special structure,\n\n| fff is derivative-free:\n\n| fff is expensive to evaluate:\n\n the number of times we can evaluate it\n\n is severely limited. | Drilling is costly. |\n\nit is easy to incorporate normally distributed noise for GP regression). |\n\n To solve this problem, we will follow the following algorithm:\n\n \n\n### Acquisition Functions\n\n \n\n#### Probability of Improvement (PI)\n\n \n\nxt+1=argmax(αPI(x))=argmax(P(f(x)≥(f(x+)+ϵ)))\n\n x\\_{t+1} = argmax(lpha\\_{PI}(x)) = argmax(P(f(x) \\geq (f(x^+) +\\epsilon)))\n\n xt+1​=argmax(αPI​(x))=argmax(P(f(x)≥(f(x+)+ϵ)))\n\nxt+1=argmax(αPI(x))=argmax(P(f(x)≥(f(x+)+ϵ)))\n\n egin{aligned}\n\n x\\_{t+1} & = argmax(lpha\\_{PI}(x))\\\n\n & = argmax(P(f(x) \\geq (f(x^+) +\\epsilon)))\n\n \\end{aligned}\n\n xt+1​​=argmax(αPI​(x))=argmax(P(f(x)≥(f(x+)+ϵ)))​\n\n where, \n\n \n\n* P(⋅)P(\\cdot)P(⋅) indicates probability\n\n* ϵ\\epsilonϵ is a small positive number\n\n \n\n Looking closely, we are just finding the upper-tail probability \n\n(or the CDF) of the surrogate posterior. 
Moreover, if we are using a GP as a surrogate, the expression above converts to,

\alpha_{PI}(x) = \Phi\left(\frac{\mu_t(x) - f(x^+) - \epsilon}{\sigma_t(x)}\right)

where,

* Φ(⋅) indicates the CDF of the standard normal distribution
* μ_t(x) and σ_t(x) are the surrogate's posterior mean and standard deviation at x

The violet region shows the probability density at each point. The grey regions show the probability density below the current max. The "area" of the violet region at each point represents the "probability of improvement over the current maximum". The next point to evaluate via the PI criterion (shown by the dashed blue line) is x = 6.

![](Exploring%20Bayesian%20Optimization_files/density_pi.png)

##### Intuition behind ϵ in PI

PI uses ϵ to strike a balance between exploration and exploitation.

![](Exploring%20Bayesian%20Optimization_files/0_009.png)

This observation also shows that we do not need to construct an accurate estimate of the black-box function to find its maximum.

![](Exploring%20Bayesian%20Optimization_files/0_013.png)

What happens if we increase ϵ a bit more?

![](Exploring%20Bayesian%20Optimization_files/0_012.png)

We see that we made things worse! Our model now uses ϵ = 3, and we are unable to exploit when we land near the global maximum. Moreover, with high exploration, the setting becomes similar to active learning.

#### Expected Improvement (EI)

The idea is fairly simple — choose the next query point as the one that minimizes the expected distance to the objective evaluated at its maximum:

x_{t+1} = \operatorname{argmin}_x \, \mathbb{E}\left( \left\| h_{t+1}(x) - f(x^\star) \right\| \,\middle|\, \mathcal{D}_t \right)

where h_{t+1} denotes the surrogate after t+1 evaluations and 𝒟_t the data observed so far. In essence, we are trying to select the point that minimizes the distance to the objective evaluated at the maximum. Unfortunately, we do not know the ground truth function, f. Mockus proposed the following acquisition function to overcome the issue:

x_{t+1} = \operatorname{argmax}_x \, \mathbb{E}\left( \max\{0, \ h_{t+1}(x) - f(x^+)\} \,\middle|\, \mathcal{D}_t \right)

For a GP surrogate, this expectation can be computed in closed form:

EI(x) =
\begin{cases}
(\mu_t(x) - f(x^+) - \epsilon)\,\Phi(Z) + \sigma_t(x)\,\phi(Z), & \text{if } \sigma_t(x) > 0 \\
0, & \text{if } \sigma_t(x) = 0
\end{cases}

with Z = \frac{\mu_t(x) - f(x^+) - \epsilon}{\sigma_t(x)}, where Φ(⋅) indicates the CDF and ϕ(⋅) indicates the pdf of the standard normal distribution.

![](Exploring%20Bayesian%20Optimization_files/0_002.png)

We now increase ϵ to explore more.

![](Exploring%20Bayesian%20Optimization_files/0_010.png)

As we expected, increasing the value to ϵ = 0.3 makes the acquisition function explore more. Compared to the earlier evaluations, we see less exploitation. We see that it evaluates only two points near the global maxima.

Let us increase ϵ even more.

![](Exploring%20Bayesian%20Optimization_files/0_005.png)

#### PI vs EI

![](Exploring%20Bayesian%20Optimization_files/0.svg)

Each dot is a point in the search space. 
Additionally, the training set used\n\n \n\n### Thompson Sampling\n\n Another common acquisition function is Thompson Sampling .\n\n At every step, we sample a function from the surrogate's posterior and \n\noptimize it. For example, in the case of gold mining, we would sample a \n\nplausible distribution of the gold given the evidence and evaluate \n\n(drill) wherever it peaks.\n\n \n\n Below we have an image showing three sampled functions from the \n\nlearned surrogate posterior for our gold mining problem. The training \n\n \n\n![](Exploring%20Bayesian%20Optimization_files/thompson.svg)\n\n We can understand the intuition behind Thompson sampling by two observations:\n\n \n\n* Locations with high uncertainty (σ(x) \\sigma(x) σ(x))\n\n will show a large variance in the functional values sampled from the \n\nsurrogate posterior. Thus, there is a non-trivial probability that a \n\nsample can take high value in a highly uncertain region. Optimizing such\n\n samples can aid **exploration**.\n\n \n\n* The sampled functions must pass through the current max \n\nvalue, as there is no uncertainty at the evaluated locations. Thus, \n\n \n\n![](Exploring%20Bayesian%20Optimization_files/0.png)\n\nThe visualization above uses Thompson sampling for optimization. Again, \n\nwe can reach the global optimum in relatively few iterations.\n\n \n\n### Random\n\n We have been using intelligent acquisition functions until now.\n\n We can create a random acquisition function by sampling xxx\n\n randomly. \n\n![](Exploring%20Bayesian%20Optimization_files/0_014.png)\n\n The visualization above shows that the performance of the random \n\nacquisition function is not that bad! However, if our optimization was \n\nmore complex (more dimensions), then the random acquisition might \n\nperform poorly.\n\n### Summary of Acquisition Functions\n\n \n\n Let us now summarize the core ideas associated with acquisition \n\nfunctions: i) they are heuristics for evaluating the utility of a point;\n\n ii) they are a function of the surrogate posterior; iii) they combine \n\nexploration and exploitation; and iv) they are inexpensive to evaluate.\n\n#### Other Acquisition Functions\n\n \n\nWe have seen various acquisition functions until now. One \n\ntrivial way to come up with acquisition functions is to have a \n\nexplore/exploit combination.\n\n \n\n### Upper Confidence Bound (UCB)\n\n One such trivial acquisition function that combines the \n\nexploration/exploitation tradeoff is a linear combination of the mean \n\nand uncertainty of our surrogate model. The model mean signifies \n\nexploitation (of our model's knowledge) and model uncertainty signifies \n\nexploration (due to our model's lack of observations).\n\n α(x)=μ(x)+λ×σ(x)lpha(x) = \\mu(x) + \\lambda \times \\sigma(x)α(x)=μ(x)+λ×σ(x)\n\n The intuition behind the UCB acquisition function is weighing \n\nof the importance between the surrogate's mean vs. the surrogate's \n\n \n\n We can further form acquisition functions by combining the \n\nexisting acquisition functions though the physical interpretability of \n\nsuch combinations might not be so straightforward. One reason we might \n\nwant to combine two methods is to overcome the limitations of the \n\nindividual methods.\n\n \n\n One such combination can be a linear combination of PI and EI.\n\n We know PI focuses on the probability of improvement, whereas \n\nEI focuses on the expected improvement. 
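To make these acquisition functions concrete, here is a minimal Python sketch — ours, not the article's code — of PI, EI, UCB, Thompson sampling, and a λ-weighted PI/EI mix, all computed from a GP surrogate fit with scikit-learn. The Matérn kernel, the toy objective, and the ranges are illustrative assumptions.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def probability_of_improvement(mu, sigma, f_best, eps=0.01):
        # PI: upper-tail probability that the surrogate exceeds f_best + eps.
        z = (mu - f_best - eps) / np.maximum(sigma, 1e-12)
        return norm.cdf(z)

    def expected_improvement(mu, sigma, f_best, eps=0.01):
        # EI: closed form for a GP surrogate; defined to be 0 where sigma == 0.
        z = (mu - f_best - eps) / np.maximum(sigma, 1e-12)
        ei = (mu - f_best - eps) * norm.cdf(z) + sigma * norm.pdf(z)
        return np.where(sigma > 0, ei, 0.0)

    def upper_confidence_bound(mu, sigma, lam=2.0):
        # UCB: mean (exploitation) plus lambda-scaled uncertainty (exploration).
        return mu + lam * sigma

    def pi_ei_mix(mu, sigma, f_best, lam=0.5, eps=0.01):
        # A simple lambda-weighted combination of PI and EI, as discussed above.
        return (lam * probability_of_improvement(mu, sigma, f_best, eps)
                + (1 - lam) * expected_improvement(mu, sigma, f_best, eps))

    # Toy 1D "gold mining" setting: fit a GP to a few drillings, then pick the
    # next drilling location by maximizing an acquisition function on a grid.
    rng = np.random.default_rng(0)
    X_train = rng.uniform(0, 6, size=(4, 1))
    y_train = np.sin(X_train).ravel()  # stand-in for the true gold content
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5)).fit(X_train, y_train)

    X_grid = np.linspace(0, 6, 500).reshape(-1, 1)
    mu, sigma = gp.predict(X_grid, return_std=True)
    x_next = X_grid[np.argmax(expected_improvement(mu, sigma, y_train.max()))]

    # Thompson sampling: draw one function from the posterior and drill at its peak.
    sample = gp.sample_y(X_grid, n_samples=1, random_state=1).ravel()
    x_thompson = X_grid[np.argmax(sample)]

Swapping the acquisition function is a one-line change in such a loop, which is what the comparison below exploits.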
Such a combination could help in having a tradeoff between the two based on the value of λ.

### Gaussian Process Upper Confidence Bound (GP-UCB)

GP-UCB's formulation is given by:

\alpha_{GP\text{-}UCB}(x) = \mu_t(x) + \sqrt{\beta_t}\,\sigma_t(x)

where t is the timestep.

### Comparison

The comparison below is based on slides from Nando De Freitas. We have used the optimum hyperparameters for each acquisition function. We ran the random acquisition function several times with different seeds and plotted the mean gold sensed at every iteration.

![](Exploring%20Bayesian%20Optimization_files/comp.svg)

The gold sensed by the random strategy grows slowly. In comparison, the other acquisition functions can find a good solution in a small number of iterations. In fact, most acquisition functions reach fairly close to the global maxima in as few as three iterations.

Hyperparameter Tuning
---------------------

Before we talk about Bayesian optimization for hyperparameter tuning, we will quickly differentiate between hyperparameters and parameters: hyperparameters are set before learning and the parameters are learned from the data. To illustrate the difference, we take the example of Ridge regression.

\hat{\theta}_{ridge} = \operatorname{argmin}_{\theta \,\in\, \mathbb{R}^p} \sum_{i=1}^{n} \left(y_i - x_i^T\theta \right)^2 + \lambda \sum_{j=1}^{p} \theta_j^2

In Ridge regression, the weight vector θ is the parameter (learned from the data), while the regularization coefficient λ is the hyperparameter (set before training). If we solve the above regression problem via gradient descent optimization, we further introduce another optimization parameter, the learning rate α.

When training a model is not expensive and time-consuming, we can do a grid search to find the optimum hyperparameters. However, grid search is not feasible if function evaluations are costly, as in the case of a large neural network that takes days to train. Further, grid search scales poorly in terms of the number of hyperparameters.

### Example 1 — Support Vector Machine (SVM)

In this example, we use an SVM to classify sklearn's moons dataset and use Bayesian Optimization to optimize the SVM's hyperparameters.

Let us have a look at the dataset now, which has two classes and two features.

![](Exploring%20Bayesian%20Optimization_files/moons.svg)

Note that the surface plots you see for the Ground Truth Accuracies below were calculated for each possible hyperparameter combination for showcasing purposes only. We do not have these values in real applications.

![](Exploring%20Bayesian%20Optimization_files/0_003.png)
![](Exploring%20Bayesian%20Optimization_files/0_006.png)

### Comparison

![](Exploring%20Bayesian%20Optimization_files/comp3d.svg)

The random search method seemed to perform much better initially, but it could not reach the global optimum, whereas Bayesian Optimization was able to get fairly close. 
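For readers who want to set up a comparison like this themselves, below is a rough sketch — again ours, not the article's code — of Bayesian Optimization over two SVM hyperparameters on the moons dataset using scikit-optimize. Treating C and γ as the two hyperparameters, along with the ranges, cross-validation setup, and seeds, are illustrative assumptions.

    from sklearn.datasets import make_moons
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC
    from skopt import gp_minimize
    from skopt.space import Real
    from skopt.utils import use_named_args

    X, y = make_moons(n_samples=200, noise=0.3, random_state=0)

    # Search C and gamma on a log scale (illustrative ranges).
    space = [
        Real(1e-3, 1e3, prior='log-uniform', name='C'),
        Real(1e-4, 1e1, prior='log-uniform', name='gamma'),
    ]

    @use_named_args(space)
    def objective(C, gamma):
        acc = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()
        return -acc  # gp_minimize minimizes, so negate the accuracy

    result = gp_minimize(objective, space, acq_func='EI',
                         n_calls=25, random_state=0)
    print('best accuracy:', -result.fun, 'with C, gamma =', result.x)

A random-search baseline under the same evaluation budget can be obtained with skopt's `dummy_minimize` in place of `gp_minimize`.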
The initial subpar performance of Bayesian Optimization can be attributed to the initial exploration.

#### Other Examples

### Example 2 — Random Forest

Using Bayesian Optimization in a Random Forest Classifier.

We will continue now to train a Random Forest on the moons dataset we had used previously to learn the Support Vector Machine model. The primary hyperparameters of Random Forests we would like to optimize for accuracy are the **number of estimators** and the **maximum depth** of the trees.

We will again be using a Gaussian Process with a Matérn kernel to estimate and predict the accuracy function over the two hyperparameters.

![](Exploring%20Bayesian%20Optimization_files/0_007.png)

![](Exploring%20Bayesian%20Optimization_files/0_015.png)
![](Exploring%20Bayesian%20Optimization_files/0_008.png)

![](Exploring%20Bayesian%20Optimization_files/0_011.png)

Let us now use the Random acquisition function.

![](Exploring%20Bayesian%20Optimization_files/RFcomp3d.svg)

The optimization strategies seemed to struggle in this example. This can be attributed to the non-smooth ground truth. This shows that the effectiveness of Bayesian Optimization depends on the surrogate's ability to model the actual black-box function. It is interesting to notice that the Bayesian Optimization framework still beats the *random* strategy using various acquisition functions.

### Example 3 — Neural Networks

Let us take this example to get an idea of how to apply Bayesian Optimization to training a neural network. We use the scikit-optimize (`skopt`) package, which also provides support for optimizing functions over a search space of categorical, integer, and real variables. We will not be plotting the ground truth here, as it is extremely costly to do so. Below are some code snippets that show the ease of using Bayesian Optimization packages for hyperparameter tuning.

The code initially declares a search space for the optimization problem. We limit the search space to be the following:

* batch_size — This hyperparameter sets the number of training examples to combine to find the gradients for a single step in gradient descent. We search over its base-2 logarithm.
* learning rate — This hyperparameter sets the stepsize with which we will perform gradient descent in the neural network.
* activation — We will have one categorical variable, i.e. the activation to apply to our neural network layers. This variable can take on values in the set {relu, sigmoid}.

    from skopt import gp_minimize
    from skopt.space import Categorical, Integer, Real

    # Search space: log2(batch size), learning rate, activation function.
    log_batch_size = Integer(
        low=2,
        high=7,
        name='log_batch_size'
    )
    lr = Real(
        low=1e-6,
        high=1e0,
        prior='log-uniform',
        name='lr'
    )
    activation = Categorical(
        categories=['relu', 'sigmoid'],
        name='activation'
    )
    dimensions = [
        log_batch_size,
        lr,
        activation
    ]

    # Initial parameters (1st point): batch size 2^4 = 16, lr = 0.1, ReLU.
    default_parameters = [4, 1e-1, 'relu']

    # Bayesian optimization: `train` (defined elsewhere) builds and evaluates
    # the network for a given point in the search space and returns the value
    # to minimize.
    search_result = gp_minimize(
        func=train,
        dimensions=dimensions,
        acq_func='EI',  # Expected Improvement
        n_calls=11,
        x0=default_parameters
    )

![](Exploring%20Bayesian%20Optimization_files/conv.svg)

Looking at the above example, we can see that incorporating Bayesian Optimization is not difficult and can save a lot of time. Optimizing to get an accuracy of nearly one in around seven iterations is impressive! Let us get the numbers into perspective. 
If we had run this \n\n iterations. Whereas Bayesian Optimization only took seven iterations. \n\nEach iteration took around fifteen minutes; this sets the time required \n\nfor the grid search to complete around seventeen hours!\n\n \n\nConclusion and Summary\n\n======================\n\n In this article, we looked at Bayesian Optimization for \n\noptimizing a black-box function. Bayesian Optimization is well suited \n\nwhen the function evaluations are expensive, making grid or exhaustive \n\nsearch impractical. We looked at the key components of Bayesian \n\nOptimization. First, we looked at the notion of using a surrogate \n\nfunction (with a prior over the space of objective functions) to model \n\nour black-box function. Next, we looked at the \"Bayes\" in Bayesian \n\nOptimization — the function evaluations are used as data to obtain the \n\nsurrogate posterior. We look at acquisition functions, which are \n\nfunctions of the surrogate posterior and are optimized sequentially. \n\nThis new sequential optimization is in-expensive and thus of utility of \n\nus. We also looked at a few acquisition functions and showed how these \n\ndifferent functions balance exploration and exploitation. Finally, we \n\nlooked at some practical examples of Bayesian Optimization for \n\noptimizing hyper-parameters for machine learning models.\n\n \n\n \n\n", "bibliography_bib": [{"title": "A statistical approach to some basic mine valuation problems on the Witwatersrand "}, {"title": "Active Learning Literature Survey"}, {"title": "Active learning: theory and applications"}, {"title": "Taking the Human Out of the Loop: A Review of Bayesian Optimization"}, {"title": "A\n Tutorial on Bayesian Optimization of Expensive Cost Functions, with \nApplication to Active User Modeling and Hierarchical Reinforcement \nLearning"}, {"title": "A Visual Exploration of Gaussian Processes"}, {"title": "Gaussian Processes in Machine Learning"}, {"title": "Bayesian approach to global optimization and application to multiobjective and constrained problems"}, {"title": "On The Likelihood That One Unknown Probability Exceeds Another In View Of The Evidence Of Two Samples"}, {"title": "Using Confidence Bounds for Exploitation-Exploration Trade-Offs"}, {"title": "Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design"}, {"title": "Practical Bayesian Optimization of Machine Learning Algorithms"}, {"title": "Algorithms for Hyper-Parameter Optimization"}, {"title": "Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures"}, {"title": "Scikit-learn: Machine Learning in {P}ython"}, {"title": "Bayesian Optimization with Gradients"}, {"title": "Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization"}, {"title": "Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets"}, {"title": "Safe Exploration for Optimization with Gaussian Processes"}, {"title": "Scalable Bayesian Optimization Using Deep Neural Networks"}, {"title": "Portfolio Allocation for Bayesian Optimization"}, {"title": "Bayesian Optimization for Sensor Set Selection"}, {"title": "Constrained Bayesian Optimization with Noisy Experiments"}, {"title": "Parallel Bayesian Global Optimization of Expensive Functions"}], "id": "baf70f63659c5b806b761882b6360ff8"} +{"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Multimodal Neurons in Artificial Neural Networks", "authors": ["Gabriel Goh", "Nick Cammarata 
†", "Chelsea Voss †", "Shan Carter", "Michael Petrov", "Ludwig Schubert", "Alec Radford", "Chris Olah"], "date_published": "2021-03-04", "abstract": "In 2005, a letter published in Nature described human neurons responding to specific people, such as Jennifer Aniston or Halle Berry . The exciting thing wasn’t just that they selected for particular people, but that they did so regardless of whether they were shown photographs, drawings, or even images of the person’s name. The neurons were multimodal. As the lead author would put it: \"You are looking at the far end of the transformation from metric, visual shapes to conceptual… information.\" Quiroga's full quote, from New Scientist reads: \"I think that’s the excitement to these results. You are looking at the far end of the transformation from metric, visual shapes to conceptual memory-related information. It is that transformation that underlies our ability to understand the world. It’s not enough to see something familiar and match it. It’s the fact that you plug visual information into the rich tapestry of memory that brings it to life.\" We elided the portion discussing memory since it was less relevant.", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00030", "text": "#### Contents\n\n* [Emotion Neurons](#emotion-neurons)\n\n* [Region Neurons](#region-neurons)\n\n* [Feature Properties](#feature-properties)\n\n* [Understanding Language](#understanding-language)\n\n* [Emotion Composition](#emotional-intelligence)\n\n* [Faceted Feature Visualization](#faceted-feature-visualization)\n\n* [CLIP Training](#clip)\n\nIn\n\n 2005, a letter published in Nature described human neurons responding \n\nto specific people, such as Jennifer Aniston or Halle Berry .\n\n The exciting thing wasn't just that they selected for particular \n\npeople, but that they did so regardless of whether they were shown \n\nphotographs, drawings, or even images of the person's name. The neurons \n\nwere multimodal. As the lead author would put it: \"You are looking at \n\nthe far end of the transformation from metric, visual shapes to \n\n reads: \"I think that's the excitement to these results. You are looking\n\n at the far end of the transformation from metric, visual shapes to \n\nconceptual memory-related information. It is that transformation that \n\nunderlies our ability to understand the world. It's not enough to see \n\nsomething familiar and match it. It's the fact that you plug visual \n\ninformation into the rich tapestry of memory that brings it to life.\" We\n\n elided the portion discussing memory since it was less relevant.\n\nWe\n\n report the existence of similar multimodal neurons in artificial neural\n\n networks. This includes neurons selecting for prominent public figures \n\n important to note that the vast majority of people these models \n\nrecognize don't have a specific neuron, but instead are represented by a\n\n combination of neurons. Often, the contributing neurons are \n\nconceptually related. For example, we found a Donald Trump neuron which \n\nfires (albeit more weakly) for Mike Pence, contributing to representing \n\nhim.Some of the \n\nneurons we found seem strikingly similar to those described in \n\nneuroscience. A Donald Trump neuron we found might be seen as similar to\n\n And although we don't find an exact Jennifer Aniston neuron, we do find\n\n a neuron for the TV show \"Friends\" which fires for her. 
\n\nLike the biological multimodal neurons, these artificial neurons respond\n\n to the same subject in photographs, drawings, and images of their name:\n\n neurons only scratch the surface of the highly abstract neurons we've \n\nfound. Some neurons seem like topics out of a kindergarten curriculum: \n\nweather, seasons, letters, counting, or primary colors. All of these \n\nfeatures, even the trivial-seeming ones, have rich multimodality, such \n\nWe find these multimodal neurons in the recent CLIP models ,\n\n although it's possible similar undiscovered multimodal neurons may \n\n authors also kindly shared an alternative version from earlier \n\nexperiments, where the training objective was an autoregressive language\n\n modelling objective, instead of a contrastive objective. The features \n\nseem pretty similar. There are several CLIP models of \n\nvarying sizes; we find multimodal neurons in all of them, but focus on \n\nstudying the mid-sized RN50-x4 model. We \n\nfound it challenging to make feature visualization work on the largest \n\nCLIP models. The reasons why remain unclear. See faceted feature \n\n for more detailed discussion of CLIP's architecture and performance. \n\nOur analysis will focus on CLIP's vision side, so when we talk about a \n\nmultimodal neuron responding to text we mean the model \"reading\" text in\n\n images. The alignment with the text side \n\nof the model might be seen as an additional form of multimodality, \n\nperhaps analogous to a human neuron responding to hearing a word rather \n\nthan seeing it (see Quiroga's later work). But since that is an expected\n\n result of the training objective, it seems less interesting.\n\nCLIP's\n\n abstract visual features might be seen as the natural result of \n\naligning vision and text. We expect word embeddings (and language models\n\n generally) to learn abstract \"topic\" features .\n\n Either the side of the model which processes captions (the \"language \n\nside\") needs to give up those features, or its counterpart, the \"vision \n\nside\", needs to build visual analogues. Many\n\n researchers are interested in \"grounding\" language models by training \n\nthem on tasks involving another domain, in the hope of them learning a \n\nmore real world understanding of language. The abstract features we find\n\n in vision models can be seen as a kind of \"inverse grounding\": vision \n\ntaking on more abstract features by connection to language. This\n\n includes some of the classic kinds of bias we see in word embeddings, \n\nsuch as a \"terrorism\"/\"Islam\" neuron, or an \"Immigration\"/\"Mexico\" \n\nneuron. See discussion in the [region neurons section](#region-neurons).\n\n But even if these features seem natural in retrospect, they are \n\nqualitatively different from neurons previously studied in vision models\n\n (eg. ).\n\n They also have real world implications: these models are vulnerable to a\n\n kind of \"typographic attack\" where adding adversarial text to images \n\ncan cause them to be systematically misclassified.\n\n A typographic attack.\n\n \n\n \n\n---\n\n \n\nA Guided Tour of Neuron Families\n\n--------------------------------\n\nWhat\n\n features exist in CLIP models? In this section, we examine neurons \n\nfound in the final convolutional layer of the vision side across four \n\nmodels. A majority of these neurons seem to be interpretable.We\n\n checked a sample of 50 neurons from this layer and classified them as \n\ninterpretable, polysemantic, or uninterpretable. 
We found that 76% of \n\nthe sampled neurons were interpretable. (As a 95% confidence interval, \n\nthat's between 64% and 88%.) A further 18% were polysemantic but with \n\ninterpretable facets, and 6% were as yet uninterpretable. \n\nEach layer consists of thousands of neurons, so for our preliminary \n\nanalysis we looked at feature visualizations, the dataset examples that \n\nmost activated the neuron, and the English words which most activated \n\nthe neuron when rastered as images. This revealed an incredible \n\ndiversity of features, a sample of which we share below:\n\n neurons respond to content associated with with a geographic region, \n\nwith neurons ranging in scope from entire hemispheres to individual \n\n this, we mean both that it responds to people presenting as this \n\ngender, as well as that it responds to concepts associated with that \n\n neurons detect features that an image might contain, whether it's \n\nnormal object recognition or detection of more exotic features such as \n\n despite being able to \"read\" words and map them to semantic features, \n\nthe model keeps a handful of more typographic features in its high-level\n\n representations. Like a child spelling out a word they don't know, we \n\n many of the neurons in the model contribute to recognizing an \n\nincredible diversity of abstract concepts that cannot be cleanly \n\n neurons respond to any visual information that contextualizes the image\n\n in a particular time – for some it's a season, for others it's a \n\n This diagram presents selected neurons from the final layer of four \n\nCLIP models, hand organized into \"families\" of similar neurons. Each \n\nneuron is represented by a feature visualization (selected from regular \n\nor [faceted feature visualization](#faceted-feature-visualization)\n\n to best illustrate the neuron) with human-chosen labels to help quickly\n\n provide a sense of each neuron. Labels were picked after looking at \n\nhundreds of stimuli that activate the neuron, in addition to feature \n\nvisualizations. \n\n \n\n You can click on any neuron to open it up in OpenAI Microscope to \n\nsee feature visualizations, dataset examples that maximally activate the\n\n neuron, and more.\n\nThese neurons don't just select for a single \n\nobject. They also fire (more weakly) for associated stimuli, such as a \n\nBarack Obama neuron firing for Michelle Obama or a morning neuron firing\n\n for images of breakfast. They also tend to be maximally inhibited by \n\nstimuli which could be seen, in a very abstract way, as their opposite. Some\n\n neurons seem less abstract. For example, typographic features like the \n\n\"-ing\" detector seem to roughly fire based on how far a string is away \n\nin Levenshtein distance. Although, even these show remarkable \n\ngeneralization, such as responding to different font sizes and rotated \n\ntext.\n\nHow should we think of these neurons? From an \n\ninterpretability perspective, these neurons can be seen as extreme \n\nexamples of \"multi-faceted neurons\" which respond to multiple distinct \n\n is a classic example in neuroscience of a hypothetical neuron that \n\nresponds in a highly specific way to some complex concept or stimulus – \n\n but this framing might encourage people to overinterpret these \n\nartificial neurons. 
Instead, the authors generally think of these \n\nneurons as being something like the visual version of a topic feature, \n\nactivating for features we might expect to be similar in a word \n\nembedding.\n\nMany of these neurons deal with sensitive topics, from \n\npolitical figures to emotions. Some neurons explicitly represent or are \n\nclosely related to protected characteristics: age, gender, race, \n\nreligion, sexual orientation,There's a \n\nneuron we conceptualize as an LGBT neuron, which responds to the Pride \n\nflag, rainbows, and images of words like \"LGBT\". Previous work (Wang \n\n& Kosinski) has suggested that neural networks might be able to \n\ndetermine sexual orientation from facial structure. This work has since \n\nbeen thoroughly rebutted and we wish to emphasize that we see no \n\n neurons related to age and gender, see \"person trait neurons.\" Region \n\nneurons seem closely related to race and national origin, responding to \n\nethnicities associated with given regions of the world. For sexual \n\n These neurons may reflect prejudices in the \"associated\" stimuli they \n\nrespond to, or be used downstream to implement biased behavior. There \n\nare also a small number of people detectors for individuals who have \n\ncommitted crimes against humanity, and a \"toxic\" neuron which responds \n\nto hate speech and sexual content. Having neurons corresponding to \n\nsensitive topics doesn't necessarily mean a network will be prejudiced. \n\nYou could even imagine explicit representations helping in some cases: \n\nthe toxic neuron might help the model match hateful images with captions\n\n that refute them. But they are a warning sign for a wide range of \n\npossible biases, and studying them may help us find potential biases \n\nwhich might be less on our radar.Examples\n\n of bias in AI models, and work drawing attention to it, has helped the \n\nresearch community to become somewhat \"alert\" to potential bias with \n\nregards to gender and race. However, CLIP could easily have biases which\n\n we are less alert to, such as biased behavior towards parents when \n\nthere's a child's drawing in the background.\n\nCLIP \n\ncontains a large number of interesting neurons. To allow detailed \n\nexamination we'll focus on three of the \"neuron families\" shown above: \n\npeople neurons, emotion neurons, and region neurons. We invite you to \n\nexplore others in Microscope.\n\n### Person Neurons\n\nThis\n\n section will discuss neurons representing present and historical \n\nfigures. Our discussion is intended to be descriptive and frank about \n\nwhat the model learned from the internet data it was trained on, and is \n\nnot endorsement of associations it makes or of the figures discussed, \n\nwho include political figures and people who committed crimes against \n\nhumanity. This content may be disturbing to some readers.To \n\ncaption images on the Internet, humans rely on cultural knowledge. If \n\nyou try captioning the popular images of a foreign place, you'll quickly\n\n find your object and scene recognition skills aren't enough. You can't \n\ncaption photos at a stadium without recognizing the sport, and you may \n\neven need to know specific players to get the caption right. Pictures of\n\n politicians and celebrities speaking are even more difficult to caption\n\n if you don't know who's talking and what they talk about, and these are\n\n some of the most popular pictures on the Internet. 
Some public figures \n\nelicit strong reactions, which may influence online discussion and \n\ncaptions regardless of other content.\n\nWith this in mind, perhaps \n\nit's unsurprising that the model invests significant capacity in \n\nrepresenting specific public and historical figures — especially those \n\n detects Christian symbols like crosses and crowns of thorns, paintings \n\nof Jesus, his written name, and feature visualization shows him as a \n\n recognizes the masked hero and knows his secret identity, Peter Parker.\n\n It also responds to images, text, and drawings of heroes and villians \n\n learns to detect his face and body, symbols of the Nazi party, relevant\n\n historical documents, and other loosely related concepts like German \n\nfood. Feature visualization shows swastikas and Hitler seemingly doing a\n\n Nazi salute.\n\nWhich\n\n people the model develops dedicated neurons for is stochastic, but \n\nseems correlated with the person's prevalence across the datasetThe\n\n model's dataset was collected in 2019 and likely emphasizes content \n\nfrom around that time. In the case of the Donald Trump neuron, it seems \n\nlikely there would have also been a Hillary Clinton neuron if data had \n\n and the intensity with which people respond to them. The one person \n\nwe've found in every CLIP model is Donald Trump. It strongly responds to\n\n images of him across a wide variety of settings, including effigies and\n\n caricatures in many artistic mediums, as well as more weakly activating\n\n for people he's worked closely with like Mike Pence and Steve Bannon. \n\nIt also responds to his political symbols and messaging (eg. \"The Wall\" \n\nand \"Make America Great Again\" hats). On the other hand, it most \n\n\\*negatively\\* activates to musicians like Nicky Minaj and Eminem, video \n\ngames like Fortnite, civil rights activists like Martin Luther King Jr.,\n\n and LGBT symbols like rainbow flags.\n\nTo understand the Trump neuron in more depth, we collected about 650 \n\nimages that cause it to fire different amounts and labeled them by hand \n\ninto categories we created. This lets us estimate the conditional \n\n for details. As the black / LGBT category contains only a few images, \n\nsince they don't occur frequently in the dataset, we validated they \n\ncause negative activations with a futher experimentAs\n\n we were labeling images for the conditional probability plot in Figure 2\n\n we were surprised that images related to black and gay rights \n\nconsistently caused strong negative activations. However, since there \n\nwere four images in that category, we decided to do a follow-up \n\nexperiment on more images. \n\n \n\nWe searched Google Images for the \n\nterms \"black rights\" and \"gay rights\" and took ten top images for each \n\nterm without looking at their activations. Then we validated these \n\nimages reliably cause the Trump neuron to fire in the range of roughly \n\nnegative ~3-6 standard deviations from zero. The images that cause less \n\nstrong negative activations near -3 standard deviations tend to have \n\nbroad symbols such as an image of several black teenagers raising their \n\narm and fist that causes a -2.5 standard deviations. Conversely, images \n\nof more easy to recognize and specific symbols such as rainbow flags or \n\nphotos of Martin Luther King Jr consistently cause activations of at \n\nleast -4 standard deviations. In Figure 3 we also show activations \n\nrelated to photos of Martin Luther King Jr.. 
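As a rough sketch of the bookkeeping behind these numbers — our own reconstruction, not the authors' code — the per-person bars can be computed by expressing each activation in standard deviations of the neuron over a reference dataset (scaling "from zero", i.e. without mean-centering, is our assumption) and then taking the median and spread per person:

    import numpy as np

    def in_dataset_sds(activations, dataset_activations):
        # Express raw activations in standard deviations of this neuron
        # over the reference dataset (no mean-centering; see caveat above).
        return np.asarray(activations) / np.std(dataset_activations)

    def person_summary(person_activations, dataset_activations):
        z = in_dataset_sds(person_activations, dataset_activations)
        # Bar length = median standardized activation; range = spread over
        # that person's photos, mirroring the "X giving a speech" experiment.
        return {'median_sd': float(np.median(z)), 'spread_sd': float(np.std(z))}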
\n\n \n\n Across\n\n all categories, we see that higher activations of the Trump neuron are \n\nhighly selective, as more than 90% of the images with a standard \n\ndeviation greater than 30 are related to Donald Trump.While\n\n labeling images for the previous experiment it became clear the neuron \n\nactivates different amounts for specific people. We can study this more \n\nby searching the Internet for pictures of specific people and measuring \n\nhow the images of each person makes the neuron fire.\n\nTo see how the Trump neuron responds to different individuals, we \n\nsearched the query \"X giving a speech at a microphone\" for various \n\nindividuals on Google Images. We cleaned the data by hand, excluding \n\nphotos that are not clear photos of the individual's face. The bar \n\nlength for each individual shows the median activation of the person's \n\nphotos in standard deviations of the neuron over the dataset, and the \n\nrange over the bar shows the standard deviation of the activations of \n\nthe person's photos.Presumably, person \n\nneurons also exist in other models, such as facial recognition models. \n\nWhat makes these neurons unique is that they respond to the person \n\nacross modalities and associations, situating them in a cultural \n\ncontext. In particular, we're struck by how the neuron's response tracks\n\n an informal intuition with how associated people are. In this sense, \n\nperson neurons can be thought of as a landscape of person-associations, \n\nwith the person themself as simply the tallest peak.\n\n### Emotion Neurons\n\nThis\n\n section will discuss neurons representing emotions, and a neuron for \n\n\"mental illness.\" Our discussion is intended to be descriptive and frank\n\n about what the model learned from the internet data it was trained on \n\nand is not endorsement. This content may be disturbing to some readers.Since\n\n a small change in someone's expression can radically change the meaning\n\n of a picture, emotional content is essential to the task of captioning.\n\n The model dedicates dozens of neurons to this task, each representing a\n\n different emotion.\n\nThese emotion neurons don't just respond to \n\nfacial expressions associated with an emotion -- they're flexible, \n\nresponding to body language and facial expressions in humans and \n\n activates even when the majority of the face is obscured. It responds \n\nto slang like \"OMG!\" and \"WTF\", and text feature visualization produces \n\nsimilar words of shock and surprise. There are even some emotion neurons\n\n in discussion of emotions because (1) they are sometimes included in \n\nemotion wheels, (2) they seem to play a role in captioning emotions and \n\nfeelings, and (3) being more inclusive in our discussion allow us to \n\nexplore more of the model. Of course, these neurons simply \n\nrespond to cues associated with an emotion and don't necessarily \n\ncorrespond to the mental state of subjects in an image.In\n\n addition to CLIP neurons potentially incorrectly recognizing cues, they\n\n cues themselves don't necessarily reflect people's mental states. 
For \n\nexample, facial expressions don't reliably correspond to someone \n\n addition to these emotion neurons, we also find which neurons respond \n\nto an emotion as a secondary role, but mostly respond to something else.\n\n which primarily responds to jail and incarceration helps represent \n\nemotions such as \"persecuted.\" Similarly, a neuron that primarily \n\ndetects pornographic content seems to have a secondary function of \n\nto see some of these different facets. In particular, the face facet \n\nshows facial expressions corresponding to different emotions, such as \n\nsmiling, crying, or wide-eyed shock. Click on any neuron to open it in \n\nMicroscope to see more information, including dataset examples.While\n\n most emotion neurons seem to be very abstract, there are also some \n\nneurons which simply respond to specific body and facial expressions, \n\n It activates most to the internet-born duckface expression and peace \n\nsigns, and we'll see later that both words show up in the maximally \n\ncorresponding captions.\n\nOne\n\n neuron that doesn't represent a single emotion but rather a high level \n\n This neuron activates when images contain words associated with \n\nnegative mental states (eg. \"depression,\" \"anxiety,\" \"lonely,\" \n\n\"stressed\"), words associated with clinical mental health treatment \n\n(\"psychology\", \"mental,\" \"disorder\", \"therapy\") or mental health \n\npejoratives (\"insane,\" \"psycho\"). It also fires more weakly for images \n\nof drugs, and for facial expressions that look sad or stressed, and for \n\nthe names of negative emotions.\n\n we wouldn't think of mental illness as a dimension of emotion. However,\n\n a couple things make this neuron important to frame in the emotion \n\ncontext. First, in its low-mid range activations, it represents common \n\nnegative emotions like sadness. Secondly, words like \"depressed\" are \n\noften colloquially used to describe non-clinical conditions. Finally, \n\nwe'll see in a later section that this neuron plays an important role in\n\n captioning emotions, composing with other emotion neurons to \n\ndifferentiate \"healthy\" and \"unhealthy\" versions of an emotion.\n\nTo\n\n better understand this neuron we again estimated the conditional \n\nprobabilities of various categories by activation magnitude. The \n\nstrongest positive activations are concepts related to mental illness. \n\nConversely, the strongest negative activations correspond to activities \n\nlike exercise, sports, and music events.\n\n To understand the \"mental illness neuron\" in more depth, we collected \n\nimages that cause it to fire different amounts and labeled them by hand \n\ninto categories we created. This lets us estimate the conditional \n\n for details. During the labeling process we couldn't see how much it \n\nmade the neuron fire. We see that the strongest activations all belong \n\nto labels corresponding to low-valence mental states. On the other hand,\n\n many images with a negative pre-ReLU activation are of scenes we may \n\ntypically consider high-valence, like photos with pets, travel photos, \n\nand or pictures of sporting events.### Region Neurons\n\nThis\n\n section will discuss neurons representing regions of the world, and \n\nindirectly ethnicity. The model's representations are learned from the \n\ninternet, and may reflect prejudices and stereotypes, sensitive regional\n\n situations, and colonialism. 
Our discussion is intended to be \n\ndescriptive and frank about what the model learned from the internet \n\ndata it was trained on, and is not endorsement of the model's \n\nrepresentations or associations. This content may be disturbing to some \n\nreaders.From local weather and food, to travel and immigration,\n\n to language and race: geography is an important implicit or explicit \n\ncontext in a great deal of online discourse. Blizzards are more likely \n\n They respond to a wide variety of modalities and facets associated with\n\n a given region: country and city names, architecture, prominent public \n\nfigures, faces of the most common ethnicity, distinctive clothing, \n\nwildlife, and local script (if not the Roman alphabet). If shown a world\n\n map, even without labels, these neurons fire selectively for the \n\nrelevant region on the map.Map responses \n\nseem to be strongest around distinctive geographic landmarks, such as \n\nthe Gulf Of Carpentaria and Cape York Peninsula for Australia, or the \n\nGulf of Guinea for Africa.\n\n which responds to bears, moose, coniferous forest, and the entire \n\nNorthern third of a world map — down to sub-regions of countries, such \n\nas the US West Coast.One interesting \n\nproperty of the regional neuron \"hierarchy\" is that the parent neuron \n\noften doesn't fire when a child is uniquely implicated. So while the \n\nEurope neuron fires for the names of European cities, the general United\n\n States neuron generally does not, and instead lets neurons like the \n\nWest Coast neuron fire. See also another example of a neuron \"hierarchy\"\n\n Some region neurons seem to form more consistently than others. Which \n\nneurons form doesn't seem to be fully explained by prevalence in the \n\n but not all models seem to have a UK neuron. Why is that? One intuition\n\n is that there's more variance in neurons when there's a natural \n\nsupercategory they can be grouped into. For example, when an individual \n\nUK neuron doesn't exist, it seems to be folded into a Europe neuron. In \n\nAfrica, we sometimes see multiple different Africa neurons (in \n\nparticular a South/West Africa neuron and an East Africa neuron), while \n\nother times there seems to be a single unified Africa neuron. In \n\ncontrast, Australia is perhaps less subdividable, since it's both a \n\ncontinent and country.\n\nGeographical Activation of Region Neurons **Unlabeled map activations**:\n\n Spatial activations of neurons in response to unlabeled geographical \n\nworld map. Activations averaged over random crops. Note that neurons for\n\n Countries colored by activations of neurons in response to rastered \n\nimages of country names. Activations averaged over font sizes, max over \n\nword positions. **City name activations**:\n\n Cities colored by activations of neurons in response to rastered images\n\n of city names. Activations averaged over font sizes, max over word \n\npositions. Selected Region Neurons Most Activating Words\n\nHover on a neuron to isolate activations. Click to open in Microscope.\n\n \n\n This diagram contextualizes region neurons with a map.\n\n Each neuron is mapped to a hue, and then regions where it \n\nactivates are colored in that hue, with intensity proportional to \n\nactiviation. 
If multiple neurons of opposing hues fire, the region will \n\nbe colored in a desaturated gray.\n\n It can show their response\n\n to an unlabeled geographical map,\n\n to country names,\n\n and to city names.\n\n \n\n \n\n \"large region neurons\"\n\n (such as the \"Northern Hemisphere\" neuron)\n\n and at\n\n \"secondarily regional neurons\"\n\n \"entrepreneurship\"\n\n or\n\n may not. This means that visualizing behavior on a global map \n\nunderrepresents the sheer number of region neurons that exist in CLIP. \n\nUsing the top-activating English words as a heuristic, we estimate \n\naround 4% of neurons are regional.To \n\nestimate the fraction of neurons that are regional, we looked at what \n\nfraction of each neuron's top-activating words (ie. words it responds to\n\n when rastered as images) were explicitly linked to geography, and used \n\nthis as a heuristic for whether a neuron was regional. To do this, we \n\ncreated a list of geographic words consisting of continent / country / \n\n \n\n We found 2.5% (64) of RN50-x4 neurons had geographic words for all of \n\nthe five maximally activating words. This number varied between 2-4% in \n\nother CLIP models. However, looking only at neurons for which all top \n\nfive words are explicitly geographic misses many region neurons which \n\nrespond strongly to words with implicit regional connotations (eg. \n\n\"hockey\" for a Canada neuron, \"volkswagen\" for a German neuron, \"palm\" \n\nfor an equatorial neuron). We bucketed neurons by fraction of five most \n\nactivating words that are geographic, then estimated the fraction of \n\neach bucket that were regional. With many neurons, the line was quite \n\nblurry (should we include polysemantic neurons where one case is \n\nregional? What about \"secondarily regional neurons\"?). For a relatively \n\nconservative definition, this seems to get us about 4%, but with a more \n\nliberal one you might get as high as 8%.\n\n caution is needed in interpreting these neurons as truly regional, \n\nrather than spuriously weakly firing for part of a world map. Important \n\nvalidations are that they fire for the same region on multiple different\n\n maps, and if they respond to words for countries or cities in that \n\nregion. These neurons don't have a region as the primary \n\nfocus, but have some kind of geographic information baked in, firing \n\n also find that the linear combination of neurons that respond to Russia\n\n on a map strongly responds to Pepe the frog, a symbol of white \n\nnationalism in the United States allegedly promoted by Russia. Our \n\nimpression is that Russians probably wouldn't particularly see this as a\n\n symbol of Russia, suggesting it is more \"Russia as understood by the \n\nUS.\"\n\n#### Case Study: Africa Neurons\n\nDespite these\n\n examples of neurons learning Americentric caricatures, there are some \n\nareas where the model seems slightly more nuanced than one might fear, \n\nespecially given that CLIP was only trained on English language data. \n\nFor example, rather than blurring all of Africa into a monolithic \n\nentity, the RN50-x4 model develops neurons for three regions within \n\nAfrica. This is significantly less detailed than its representation of \n\nmany Western countries, which sometimes have neurons for individual \n\ncountries or even sub-regions of countries, but was still striking to \n\nus.It's important to keep in mind that \n\nthe model can represent many more things using combinations of neurons. 
\n\nWhere the model dedicates neurons may give us some sense of the level of\n\n nuance, but we shouldn't infer, for example, that it doesn't somehow \n\nrepresent individual African countries. To\n\n contextualize this numerically, the model seems to dedicate ~4% of its \n\nregional neurons to Africa, which accounts for ~20% of the world's \n\nlandmass, and ~15% of the world's population.\n\n early explorations it quickly became clear these neurons \"know\" more \n\nabout Africa than the authors. For example, one of the first feature \n\nvisualizations of the South African regional neuron drew the text \n\n Learning about a TV drama might not be the kind of deep insights one \n\nmight have envisioned, but it is a charming proof of concept.\n\nWe\n\n chose the East Africa neuron for more careful investigation, again \n\nusing a conditional probability plot. It fires most strongly for flags, \n\n activations tend to follow an exponential distribution in their tails, a\n\n point that was made to us by Brice Menard. This means that strong \n\nactivations are more common than you'd expect in a Gaussian (where the \n\ntail decays at exp(-x^2)), but are much less common than weaker \n\nactivations. — have a significantly different distribution \n\nand seems to be mostly about ethnicity. Perhaps this is because \n\nethnicity is implicit in all images of people, providing weak evidence \n\nfor a region, while features like flags are far less frequent, but \n\nprovide strong evidence when they do occur. This is the first neuron \n\nwe've studied closely with a distinct regime change between medium and \n\nstrong activations.\n\nWe labeled more than 400 images that causes a neuron that most strongly \n\nresponds to the word \"Ghana\" to fire at different levels of activation, \n\nwithout access to how much each image caused the neuron to fire while \n\nlabeling. See [the appendix](#conditional-probability) for details. \n\n It fires most strongly for people of African descent as well as African\n\n words like country names. It's pre-ReLU activation is negative for \n\nsymbols associated with other countries, like the Tesla logo or British \n\nflag, as well as people of non-African descent. Many of its strongest \n\nnegative activations are for weaponry such as military vehicles and \n\nhandguns. Ghana, the country name it responds to most strongly, has a \n\nGlobal Peace Index rating higher than most African countries, and \n\nperhaps it learns this anti-association.We\n\n also looked at the activations of the other two Africa neurons. We \n\nsuspect they have interesting differences beyond their detection of \n\ndifferent country names and flags — why else would the model dedicate \n\nthree neurons — but we lacked the cultural knowledge to appreciate their\n\n subtleties.\n\n### Feature properties\n\nSo\n\n far, we've looked at particular neurons to give a sense of the kind of \n\nfeatures that exist in CLIP models. It's worth noting several properties\n\n that might be missed in the discussion of individual features:\n\n**Image-Based Word Embedding:**\n\n Despite being a vision model, one can produce \"image-based word \n\nembeddings\" with the visual CLIP model by rastering words into images \n\nand then feeding these images into the model, and then subtracting off \n\nthe average over words. 
Like normal word embeddings, the nearest \n\nOriginal \n\nWord\n\nNearest Neighbors \n\nCollobert embeddings\n\nNearest Neighbors \n\nCLIP image-based embeddings\n\nFrance\n\nFrench, Francis, Paris, Les, Des, Sans, Le, Pairs, Notre, Et\n\nJesus\n\nGod, Sati, Christ, Satan, Indra, Vishnu, Ananda, Parvati, Grace\n\nChrist, God, Bible, Gods, Praise, Christians, Lord, Christian, Gospel, Baptist\n\nxbox\n\nAmiga, Playstation, Msx, Ipod, Sega, Ps#, Hd, Dreamcast, Geforce, Capcom\n\n*V(Img(*\"King\"*)) - V(Img(*\"Man\"*)) + V(Img(*\"Woman\"*)) = V(Img(*\"Queen\"*))* \n\n work in some cases if we mask non-semantic lexicographic neurons (eg. \n\n\"-ing\" detectors). It seems likely that mixed arithmetic of words and \n\nimages should be possible.\n\n**Limited Multilingual Behavior:** \n\nAlthough CLIP's training data was filtered to be English, many features \n\n responds to images of English \"Thank You\", French \"Merci\", German \n\n\"Danke\", and Spanish \"Gracias,\" and also to English \"Congratulations\", \n\nGerman \"Gratulieren\", Spanish \"Felicidades\", and Indonesian \"Selamat\". \n\nAs the example of Indonesian demonstrates, the model can recognize some \n\nwords from non Romance/Germanic languages. However, we were unable to \n\nfind any examples of the model mapping words in non-Latin script to \n\nsemantic meanings. It can recognize many scripts (Arabic, Chinese, \n\nJapanese, etc) and will activate the corresponding regional neurons, but\n\n doesn't seem to be able to map words in those scripts to their \n\nmeanings.One interesting question is why \n\nthe model developed reading abilities in latin alphabet languages, but \n\nnot others. Was it because more data of that type slipped into the \n\ntraining data, or (the more exciting possibility) because it's easier to\n\n learn a language from limited data if you already know the alphabet?\n\n The most striking examples are likely racial and religious bias. As \n\n which responds to images of words such as \"Terrorism\", \"Attack\", \n\n\"Horror\", \"Afraid\", and also \"Islam\", \"Allah\", \"Muslim\". This isn't just\n\n an illusion from looking at a single neuron: the image-based word \n\nembedding for \"Terrorist\" has a cosine similarity of 0.52 with \n\n\"Muslims\", the highest value we observe for a word that doesn't include \n\n\"terror.\"By \"image-based word embedding\",\n\n we mean the activation for an image of that word, with the average \n\nactivation over images of 10,000 English words subtracted off. The \n\nintuition is that this removes generic \"black text on white background\" \n\nfatures. If one measures the cosine similarity between \"Terrorism\" and \n\n\"Muslim\" without subtracting off the average, it's much higher at about \n\n0.98, but that's because all values are shifted up due to sharing the \n\n**Polysemanticity and Conjoined Neurons:**\n\n Our qualitative experience has been that individual neurons are more \n\ninterpretable than random directions; this mirrors observations made in \n\nprevious work.Although\n\n we've focused on neurons which seem to have a single clearly defined \n\nconcept they respond to, many CLIP neurons are \"polysemantic\" ,\n\n responding to multiple unrelated features. Unusually, polysemantic \n\nneurons in CLIP often have suspicious links between the different \n\n The concepts in these neurons seem \"conjoined\", overlapping in a \n\nsuperficial way in one facet, and then generalizing out in multiple \n\ndirections. 
We haven't ruled out the possibility that these are just \n\ncoincidences, given the large number of facets that could overlap for \n\neach concept. But if conjoined features genuinely exist, they hint at \n\nnew potential explanations of polysemanticity.In\n\n the past, when we've observed seemingly polysemantic neurons, we've \n\nconsidered two possibilities: either it is responding to some shared \n\nfeature of the stimuli, in which case it isn't really polysemantic, or \n\nit is genuinely responding to two unrelated cases. Usually we \n\ndistinguish these cases with feature visualization. For example, \n\nInceptionV1 4e:55 responds to cars and cat heads. One could imagine it \n\nbeing the case that it's responding to some shared feature — perhaps cat\n\n eyes and car lights look similar. But feature visualization establishes\n\n a facet selecting for a globally coherent cat head, whiskers and all, \n\nas well as the metal chrome and corners of a car. We concluded that it \n\nwas genuinely *OR(cat, car)*. \n\n \n\nConjoined features can be seen\n\n as a kind of mid-point between detecting a shared low-level feature and\n\n detecting independent cases. Detecting Santa Claus and \"turn\" are \n\nclearly true independent cases, but there was a different facet where \n\nthey share a low-level feature. \n\n \n\nWhy would models have conjoined \n\nfeatures? Perhaps they're a vestigial phenomenon from early in training \n\nwhen the model couldn't distinguish between the two concepts in that \n\nfacet. Or perhaps there's a case where they're still hard to \n\ndistinguish, such as large font sizes. Or maybe it just makes concept \n\npacking more efficient, as in the superposition hypothesis.\n\n \n\n---\n\n \n\nUsing Abstractions\n\n------------------\n\nWe\n\n typically care about features because they're useful, and CLIP's \n\nfeatures are more useful than most. These features, when ensembled, \n\nallow direct retrieval on a variety of queries via the dot product \n\nalone.\n\nUntangling the image into its semantics \n\n enables the model to perform a wide variety of downstream tasks \n\nincluding imagenet classification, facial expression detection, \n\ngeolocalization and more. How do they do this? Answering these questions\n\n will require us to look at how neurons work in concert to represent a \n\nbroader space of concepts.\n\n### The Imagenet Challenge\n\nTo\n\n study how CLIP classifies Imagenet, it helps to look at the simplest \n\ncase. We use a sparse linear model for this purpose, following the \n\nmethodology of Radford et al .\n\n With each class using only 3 neurons on average, it is easy to look at \n\nall of the weights. This model, by any modern standard, fares poorly \n\nwith a top-5 accuracy of 56.4%, but the surprising thing is that such a \n\nmiserly model can do anything at all. How is each weight carrying so \n\nmuch weight?\n\nImageNet \n\n organizes images into categories borrowed from another project called \n\nWordNet.\n\nNeural networks typically classify images treating ImageNet classes as \n\nstructureless labels. But WordNet actually gives them a rich structure \n\nof higher level nodes. For example, a Labrador Retriever is a Canine \n\nwhich is a Mammal which is an Animal.\n\nWe find that the weights and\n\n neurons of CLIP reflect some of this structure. 
At the highest levels \n\nwe find conventional categories such as\n\n This diagram visualizes a submatrix of the full weight matrix that \n\ntakes neurons in the penultimate layer of Resnet 4x to the imagenet \n\nclasses. Each grey circle represents a positive weight. We see the model\n\n fails in ways that close but incorrect, such as its labeling of \n\nscorpion as a fish.\n\n arrive at a surprising discovery: it seems as though the neurons appear\n\n to arrange themselves into a taxonomy of classes that appear to mimic, \n\nvery approximately, the imagenet hierarchy. While there have been \n\nattempts to explicitly integrate this information ,\n\n CLIP was not given this information as a training signal. The fact that\n\n these neurons naturally form a hierarchy — form a hierarchy without \n\neven being trained on ImageNet — suggests that such hierarchy may be a \n\nuniversal feature of learning systems.We've\n\n seen hints of similar structure in region neurons, with a whole world \n\nneuron, a northern hemisphere neuron, a USA neuron, and then a West \n\nCoast neuron.\n\n### Understanding Language\n\nThe\n\n most exciting aspect of CLIP is its ability to do zero-shot \n\nclassification: it can be \"programmed\" with natural language to classify\n\n images into new categories, without fitting a model. Where linear \n\nprobes had fixed weights for a limited set of classes, now we have \n\ndynamic weight vectors that can be generated automatically from text. \n\nIndeed, CLIP makes it possible for end-users to 'roll their own \n\nclassifier' by programming the model via intuitive, natural language \n\ncommands - this will likely unlock a broad range of downstream uses of \n\nCLIP-style models.\n\nRecall that CLIP has two sides, a vision side \n\n(which we've discussed up to this point) and a language side. The two \n\nsides meet at the end, going through some processing and then performing\n\n a dot product to create a logit. If we ignore spatial structureIn\n\n order to use a contrastive loss, the 3d activation tensor of the last \n\nconvolutional layer must discard spatial information and be reduced to a\n\n single vector which can be dot producted with the language embedding. \n\nCLIP does this with an attention layer, first generating attention \n\n is the text embedding. We focus on the bilinear interaction term, which\n\n governs local interactions in most directions. Although this \n\nWe'll\n\n mostly be focusing on using text to create zero-shot weights for \n\nimages. But it's worth noting one tool that the other direction gives \n\nus. If we fix a neuron on the vision side, we can search for the text \n\nthat maximizes the logit. We do this with a hill climbing algorithm to \n\nfind what amounts to the text maximally corresponding to that neuron. \n\n### Emotion Composition\n\nAs\n\n we see above, English has far more descriptive words for emotions than \n\nthe vision side has emotion neurons. And yet, the vision side recognizes\n\n these more obscure emotions. How can it do that?\n\nWe can see what \n\ndifferent emotion words correspond to on the vision side by taking \n\nattribution, as described in the previous section, to \"I feel X\" on the \n\nlanguage side. 
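Under that simplification (keeping only the bilinear interaction and ignoring spatial structure), attributing a text prompt to vision-side neurons amounts to mapping the text embedding back through the final projection. A schematic sketch, with all shapes and the projection matrix as stand-ins rather than CLIP's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vision_neurons, d_joint = 2560, 640             # stand-in sizes
P = rng.normal(size=(d_joint, n_vision_neurons))  # stand-in for the vision-side projection

def attribution_vector(text_embedding, projection):
    """Approximate attribution of a text prompt to vision-side neurons: the
    gradient of the bilinear logit (P v) . t with respect to the pooled vision
    activations v, which is just projection^T @ text_embedding."""
    return projection.T @ text_embedding

# In practice text_embedding would come from the language side,
# e.g. something like encode_text("I feel jealous") (hypothetical helper).
text_embedding = rng.normal(size=d_joint)
emotion_vec = attribution_vector(text_embedding, P)
print(emotion_vec.shape)  # one weight per vision-side neuron
```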
This gives us a vector of image neurons for each emotion \n\nword.Since the approximations we made in \n\nthe previous section aren't exact, we double-checked these attribution \n\nvectors for all of the \"emotion equations\" shown by taking the top image\n\n neuron in each one, artificially increasing its activation at the last \n\nlayer on the vision side when run on a blank image, and confirming that \n\nthe logit for the corresponding emotion word increases on the language \n\n for the prompts \"i am feeling {emotion}, \"Me feeling {emotion} on my \n\nface\", \"a photo of me with a {emotion} expression on my face\" on each \n\none of the emotion-words on the emotion-wheel. We assign each prompt a \n\nlabel corresponding to the emotion-word, and then we then run sparse \n\nlogistic regression to find the neurons that maximally discriminate \n\nbetween the attribution vectors. For the purposes of this article, these\n\n vectors are then cleaned up by hand by removing neurons that respond to\n\n bigrams. This may relate to a line of thinking in \n\npsychology where combinations of basic emotions form the \"complex \n\nemotions\" we experience.The theory of constructed emotion.\n\nFor\n\n example, the jealousy emotion is success + grumpy. Bored is relaxed + \n\ngrumpy. Intimate is soft smile + heart - sick. Interested is question \n\nmark + heart and inquisitive is question mark + shocked. Surprise is \n\ncelebration + shock.\n\n physical objects contribute to representing emotions.\n\nFor example, part of \"powerful\" is a lightning neuron, part of \n\n\"creative\" is a painting neuron, part of \"embarrassed\" is a neuron \n\n also see concerning use of sensitive topics in these emotion vectors, \n\nsuggesting that problematic spurious correlations are used to caption \n\nexpressions of emotion. For instance, \"accepted\" detects LGBT. \n\n\"Confident\" detects overweight. \"Pressured\" detects Asian culture.\n\n can also search for examples where particular neurons are used, to \n\nexplore their role in complex emotions. We see the mental illness neuron\n\n contributes to emotions like \"stressed,\" \"anxious,\" and \"mad.\"\n\n far, we've only looked at a subset of these emotion words. We can also \n\nsee a birds-eye view of this broader landscape of emotions by \n\nvisualizing every attribution vector together.\n\n of complex emotions by applying non-negative matrix factorization to \n\nthe emotion attribution vectors and using the factors to color each \n\ncell. The atlas resembles common feeling wheels \n\n hand-crafted by psychologists to explain the space of human emotions, \n\nindicating that the vectors have a high-level structure that resembles \n\nemotion research in psychology.This atlas has a\n\n few connections to classical emotion research. When we use just 2 \n\nfactors, we roughly reconstruct the canonical mood-axes used in much of \n\npsychology: valence and arousal. If we increase to 7 factors, we nearly \n\nreconstruct a well known categorization of these emotions into happy, \n\nsurprised, sad, bad, disgusted, fearful, and angry, except with \n\n\"disgusted\" switched for a new category related to affection that \n\nincludes \"valued,\" \"loving,\" \"lonely,\" and \"insignificant.\"\n\n \n\n---\n\n \n\nTypographic Attacks\n\n-------------------\n\nAs\n\n we've seen, CLIP is full of multimodal neurons which respond to both \n\nimages and text for a given concept. 
Given how strongly these neurons \n\nreact to text, we wonder: can we perform a kind of non-programmatic \n\nadversarial attack – a *typographic attack* – simply using handwriting?\n\nTo\n\n test this hypothesis, we took several common items and deliberately \n\nmislabeled them. We then observed how this affects ImageNet \n\n#### No label\n\n#### Labeled \"ipod\"\n\n#### Labeled \"library\"\n\n#### Labeled \"pizza\"\n\n Physical typographic attacks.\n\n Above we see the CLIP RN50-4x model's classifications of \n\nobjects labeled with incorrect ImageNet classes. Each row corresponds to\n\n an object, and each column corresponds to a labeling. Some attacks are \n\nmore effective than others, and some objects are more resilient to \n\nattack.\n\n \n\n Expand more examples\n\n \n\n Recall that there are two ways to use CLIP for ImageNet \n\nclassification: zero-shot and linear probes.\n\n For this style of attack, we observe that the zero-shot \n\nmethodology is somewhat consistently effective, but that the linear \n\nprobes methodology is ineffective. Later on, we show an attack style \n\n \n\n Displayed ImageNet classification method:\n\n Zero-shot\n\n Linear probes\n\n Adversarial patches are stickers that can be placed on real-life \n\nobjects in order to cause neural nets to misclassify those objects as \n\nsomething else – for example, as toasters. Physical adversarial examples\n\n are complete 3D objects that are reliably misclassified from all \n\nperspectives, such as a 3D-printed turtle that is reliably misclassified\n\n as a rifle. Typographic attacks are both weaker and stronger than \n\nthese. On the one hand, they only work for models with multimodal \n\nneurons. On the other hand, once you understand this property of the \n\n### Evaluating Typographic Attacks\n\nOur\n\n physical adversarial examples are a proof of concept, but they don't \n\ngive us a very good sense of how frequently typographic attacks succeed.\n\n Duct tape and markers don't scale, so we create an automated setup to \n\nmeasure the attack's success rate on the ImageNet validation set.\n\nTarget class:\n\n`pizza`\n\nAttack text:\n\n \n\nWe found text snippets for our attacks in \n\ntwo different ways. Firstly, we manually looked through the multimodal \n\nmodel's neurons for those that appear sensitive to particular kinds of \n\n attacks. Secondly, we brute-force searched through all of the ImageNet \n\nclass names looking for short class names which are, in and of \n\n this setup, we found several attacks to be reasonably effective. The \n\nmost successful attacks achieve a 97% attack success rate with only \n\naround 7% of the image's pixels changed. 
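The automated setup can be sketched as a short loop. Here `classify` (a CLIP-based ImageNet classifier) and `paste_attack_text` (which renders the attack text onto a small patch of the image) are hypothetical helpers; images that already get classified as the target class without any attack are skipped rather than counted as successes, matching the criterion described with the table below.

```python
def attack_success_rate(images, target_class, attack_text, classify, paste_attack_text):
    """Fraction of images whose top-1 prediction flips to `target_class`
    once the attack text is pasted on."""
    successes, eligible = 0, 0
    for image in images:
        if classify(image) == target_class:
            continue                  # already the target class: not a real success
        eligible += 1
        if classify(paste_attack_text(image, attack_text)) == target_class:
            successes += 1
    return successes / max(eligible, 1)

# Hypothetical usage over 1000 ImageNet validation images:
# rate = attack_success_rate(val_images, "pizza", "pizza", classify, paste_attack_text)
```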
These results are competitive \n\nwith the results found in *Adversarial Patch*, albeit on a different model.\n\n| Target class | Attack text | Pixel cover | Success Linear probes |\n\n| --- | --- | --- | --- |\n\n| `waste container` | *trash* | 7.59% | 95.4% |\n\n| `iPod` | *iPod* | 6.8% | 94.7% |\n\n| `rifle` | *rifle* | 6.41% | 91% |\n\n| `pizza` | *pizza* | 8.11% | 92.3% |\n\n| `radio` | *radio* | 7.73% | 77% |\n\n| `great white shark` | *shark* | 8.33% | 62.2% |\n\n| `library` | *library* | 9.95% | 75.9% |\n\n| `Siamese cat` | *meow* | 8.44% | 46.5% |\n\n| `piggy bank` | *$\\$\\$\\$$* | 6.99% | 36.4% |\n\n **Pixel cover** measures the attack's impact on the original \n\nimage: the average percentage of pixels that were changed by any amount \n\n(an L0-norm) in order to add the attack.\n\n **Success rate** is measured over 1000 ImageNet validation \n\nimages with an attack considered to have succeeded if the attack class \n\nis the most likely. We do not consider an attack to have succeeded if \n\nthe attack-free image was already classified as the attack class.\n\n### Comparison with the Stroop Effect\n\n Just as our models make errors when adversarial text is added to \n\nimages, humans are slower and more error prone when images have \n\nincongruent labels.\n\n is harder than normal. To compare CLIP's behavior to these human \n\nexperiments, we had CLIP classify these stimuli by color, using its \n\nzero-shot classification. Unlike humans, CLIP can't slow down to \n\ncompensate for the harder task. Instead of taking a longer amount of \n\ntime for the incongruent stimuli, it has a very high error rate.\n\n A Stroop effect experiment.\n\n Above we see the CLIP RN50-4x model's classifications of \n\nvarious words colored with various colors. Activations were gathered \n\n Expand more examples\n\n \n\n---\n\n \n\nAppendix: Methodological Details\n\n--------------------------------\n\n### Conditional Probability Plots\n\nIf\n\n we really want to understand the behavior of a neuron, it's not enough \n\nto look at the cases where it maximally fires. We should look at the \n\nfull spectrum: the cases where it weakly fired, the cases where it was \n\non the border of firing, and the cases where it was strongly inhibited \n\nfrom firing. This seems especially true for highly abstract neurons, \n\nwhere weak activations can reveal \"associated stimuli,\" such as a Donald\n\n Trump neuron firing for Mike Pence.\n\nSince we have access to a \n\nvalidation set from the same distribution the model was trained on, we \n\ncan sample the distribution of stimuli that cause a certain level of \n\nactivation by iterating through the validation set until we find an \n\nimage that causes that activation.\n\nTo more rigorously characterize\n\n this, we create a plot showing the conditional probability of various \n\ncategories as a function of neuron activation, following the example of \n\nCurve Detectors . To do\n\n this, we defined uniformly spaced buckets between the maximally \n\ninhibitory and maximally excitatory activation values, and sampled a \n\nfixed number of stimuli for each activation range. Filling in the most \n\nextreme buckets requires checking the neuron activations for millions of\n\n stimuli. Once we have a full set of stimuli in each bucket, we blind a \n\nlabeler to the activation of each stimuli, and have them select salient \n\ncategories they observed, informed by the hypothesis we have for the \n\nneuron. 
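The bucketed sampling step described above can be sketched as follows, assuming the neuron's activation has already been computed for every candidate stimulus:

```python
import numpy as np

def sample_by_activation_bucket(activations, n_buckets=20, per_bucket=50, seed=0):
    """Split the range from the most inhibitory to the most excitatory activation
    into uniformly spaced buckets and sample up to a fixed number of stimuli from
    each, so rare extreme activations are represented alongside the near-zero ones."""
    rng = np.random.default_rng(seed)
    activations = np.asarray(activations)
    edges = np.linspace(activations.min(), activations.max(), n_buckets + 1)
    bucket_of = np.digitize(activations, edges[1:-1])  # bucket index 0..n_buckets-1
    samples = {}
    for b in range(n_buckets):
        idx = np.flatnonzero(bucket_of == b)
        if idx.size == 0:
            samples[b] = idx
            continue
        samples[b] = rng.choice(idx, size=min(per_bucket, idx.size), replace=False)
    return samples  # bucket index -> indices of the sampled stimuli
```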
The human labeler then categorized each stimuli into these \n\ncategories, while blinded to the activation.\n\nWe plot the \n\nactivation axis in terms of standard deviations of activation from zero,\n\n since activations have an arbitrary scale. But keep in mind that \n\nactivations aren't Gaussian distributed, and have much thicker tails.\n\nIn\n\n reading these graphs, it's important to keep in mind that different \n\nactivation levels can have many orders of magnitude differences in \n\nprobability density. In particular, probability density peaks around \n\nzero and decays exponentially to the tails. This means that false \n\nnegatives for a rare category will tend to not be very visible, because \n\nthey'll be crowded out at zero: these graphs show a neuron's precision, \n\nbut not recall. Curve Detectors discusses these issues in more detail.\n\nAn\n\n alternative possibility is to look at the distribution of activations \n\nconditioned on a category. We take this approach in our second plot for \n\nthe Trump neuron. These plots can help characterize how the neuron \n\nresponds to rare categories in regions of higher density, and can help \n\nresolve concerns about recall. However, one needs some way to get \n\nsamples conditioned on a category for these experiments, and it's \n\npossible that your process may not be representative. For our purposes, \n\nsince these neurons are so high-level, we used a popular image search to\n\n sample images in a category.\n\n### Faceted Feature Visualization\n\nA neuron is said to have multiple facets\n\n if it responds to multiple, distinct categories of images. For example,\n\n a pose-invariant dog-head detector detects dog heads tilted to the \n\n look for a difference in texture from one side to the other but doesn't\n\n care which is which. A neuron may even fire for many different, \n\nunrelated categories of images . We refer to these as polysemantic neurons.\n\n[Feature visualization](https://distill.pub/2017/feature-visualization/)is\n\n a technique where the input to a neural network is optimized to create a\n\n stimuli demonstrating some behavior, typically maximizing the \n\nactivation of a neuron. Neurons that possess multiple facets present \n\nparticular challenges to feature visualization as the multiple facets \n\nare difficult to represent as a single image. When such neurons are \n\nencountered, feature visualization often tries to draw both facets at \n\nonce (making it nonsensical), or just reveal one facet The\n\n difference between the two is believed to be related to the phenomena \n\nof mutual inhibition, see the InceptionV1 pose invariant dog head \n\ncircuit.. Both cases are inadequate.\n\nWe\n\n are aware of two past approaches to improving feature visualization for\n\n multi-faceted neurons. The first approach is to find highly diverse \n\nimages that activate a given neuron, and use them as seeds for the \n\nfeature visualization optimization process.\n\n The second tries to combine feature visualization together with a term \n\nthat encourages diversity of the activations on earlier layers.\n\n that allows us to steer the feature visualization towards a particular \n\ntheme (e.g. text, logos, facial features, etc), defined by a collection \n\nof images. The procedure works as follows: first we collect examples of \n\nimages in this theme, and train a linear probe on the lower layers of \n\nthe model to discriminate between those images and generic natural \n\nimages. 
We then do feature visualization by maximizing the penalized \n\nThe reader may be curious why we do not maximize f(g(x)) + w^Tg(x)\n\n instead. We have found that, in practice, the former objective produces\n\n far higher quality feature visualizations; we believe this is because \n\n", "bibliography_bib": [{"title": "Invariant visual representation by single neurons in the human brain"}, {"title": "Explicit encoding of multimodal percepts by single neurons in the human brain"}, {"title": "Learning Transferable Visual Models From Natural Language Supervision"}, {"title": "Deep Residual Learning for Image Recognition"}, {"title": "Attention is all you need"}, {"title": "Improved deep metric learning with multi-class n-pair loss objective"}, {"title": "Contrastive multiview coding"}, {"title": "Linear algebraic structure of word senses, with applications to polysemy"}, {"title": "Visualizing and understanding recurrent networks"}, {"title": "Object detectors emerge in deep scene cnns"}, {"title": "Network Dissection: Quantifying Interpretability of Deep Visual Representations"}, {"title": "Zoom In: An Introduction to Circuits"}, {"title": "Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks"}, {"title": "Sparse but not ‘grandmother-cell’ coding in the medial temporal lobe"}, {"title": "Concept cells: the building blocks of declarative memory functions"}, {"title": "Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements"}, {"title": "Geographical evaluation of word embeddings"}, {"title": "Using Artificial Intelligence to Augment Human Intelligence"}, {"title": "Visualizing Representations: Deep Learning and Human Beings"}, {"title": "Natural language processing (almost) from scratch"}, {"title": "Linguistic regularities in continuous space word representations"}, {"title": "Man is to computer programmer as woman is to homemaker? 
debiasing word embeddings"}, {"title": "Intriguing properties of neural networks"}, {"title": "Visualizing higher-layer features of a deep network"}, {"title": "Feature Visualization"}, {"title": "How does the brain solve visual object recognition?"}, {"title": "Imagenet: A large-scale hierarchical image database"}, {"title": "BREEDS: Benchmarks for Subpopulation Shift"}, {"title": "Global Weighted Average Pooling Bridges Pixel-level Localization and Image-level Classification"}, {"title": "Separating style and content with bilinear models"}, {"title": "The feeling wheel: A tool for expanding awareness of emotions and increasing spontaneity and intimacy"}, {"title": "Activation atlas"}, {"title": "Adversarial Patch"}, {"title": "Synthesizing Robust Adversarial Examples"}, {"title": "Studies of interference in serial verbal reactions."}, {"title": "Curve Detectors"}, {"title": "An overview of early vision in inceptionv1"}, {"title": "Deep inside convolutional networks: Visualising image classification models and saliency maps"}, {"title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images"}, {"title": "Inceptionism: Going deeper into neural networks"}, {"title": "Plug & play generative networks: Conditional iterative generation of images in latent space"}, {"title": "Sun database: Large-scale scene recognition from abbey to zoo"}, {"title": "The pascal visual object classes (voc) challenge"}, {"title": "Fairface: Face attribute dataset for balanced race, gender, and age"}, {"title": "A style-based generator architecture for generative adversarial networks"}], "id": "b7baa8637459e65b064840e35c9c17d1"} +{"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Sequence Modeling with CTC", "authors": ["Awni Hannun"], "date_published": "2017-11-27", "abstract": "Consider speech recognition. We have a dataset of audio clips and corresponding transcripts. Unfortunately, we don’t know how the characters in the transcript align to the audio. This makes training a speech recognizer harder than it might at first seem.", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00008", "text": "\n\nIntroduction\n\n------------\n\nConsider speech recognition. We have a dataset of audio clips and\n\ncorresponding transcripts. Unfortunately, we don't know how the characters in\n\nthe transcript align to the audio. This makes training a speech recognizer\n\nharder than it might at first seem.\n\nWithout this alignment, the simple approaches aren't available to us. We\n\ncould devise a rule like \"one character corresponds to ten inputs\". But\n\npeople's rates of speech vary, so this type of rule can always be broken.\n\nAnother alternative is to hand-align each character to its location in the\n\naudio. From a modeling standpoint this works well — we'd know the ground truth\n\nfor each input time-step. However, for any reasonably sized dataset this is\n\nprohibitively time consuming.\n\nThis problem doesn't just turn up in speech recognition. We see it in many\n\nother places. Handwriting recognition from images or sequences of pen strokes\n\nis one example. 
Action labelling in videos is another.\n\n![](Sequence%20Modeling%20with%20CTC_files/handwriting_recognition.svg)\n\n**Handwriting recognition:** The input can be\n\n (x,y)(x,y)(x,y) coordinates of a pen stroke or\n\n pixels in an image.\n\n \n\n![](Sequence%20Modeling%20with%20CTC_files/speech_recognition.svg)\n\n**Speech recognition:** The input can be a spectrogram or some\n\n other frequency based feature extractor.\n\n \n\nConnectionist Temporal Classification (CTC) is a way to get around not\n\nknowing the alignment between the input and the output. As we'll see, it's\n\nespecially well suited to applications like speech and handwriting\n\nrecognition.\n\n---\n\nTo be a bit more formal, let's consider mapping input sequences\n\nWe want to find an accurate mapping from XXX's to YYY's.\n\nThere are challenges which get in the way of us\n\nusing simpler supervised learning algorithms. In particular:\n\n* Both XXX and YYY\n\n can vary in length.\n\n* The ratio of the lengths of XXX and YYY\n\n can vary.\n\n* We don't have an accurate alignment (correspondence of the elements) of\n\n XXX and Y.Y.Y.\n\nThe CTC algorithm overcomes these challenges. For a given XXX\n\nit gives us an output distribution over all possible YYY's. We\n\ncan use this distribution either to *infer* a likely output or to assess\n\nthe *probability* of a given output.\n\nNot all ways of computing the loss function and performing inference are\n\ntractable. We'll require that CTC do both of these efficiently.\n\n**Loss Function:** For a given input, we'd like to train our\n\nmodel to maximize the probability it assigns to the right answer. To do this,\n\nwe'll need to efficiently compute the conditional probability\n\np(Y∣X).p(Y \\mid X).p(Y∣X). The function p(Y∣X)p(Y \\mid X)p(Y∣X) should\n\nalso be differentiable, so we can use gradient descent.\n\n**Inference:** Naturally, after we've trained the model, we\n\nwant to use it to infer a likely YYY given an X.X.X.\n\nThis means solving\n\nY∗=argmaxYp(Y∣X).\n\nY∗=Yargmax​p(Y∣X).\n\nIdeally Y∗Y^\\*Y∗ can be found efficiently. With CTC we'll settle\n\nfor an approximate solution that's not too expensive to find.\n\nThe Algorithm\n\n-------------\n\nThe CTC algorithm can assign a probability for any YYY\n\ngiven an X.X.X. The key to computing this probability is how CTC\n\nthinks about alignments between inputs and outputs. We'll start by looking at\n\nthese alignments and then show how to use them to compute the loss function and\n\nperform inference.\n\n### Alignment\n\nThe CTC algorithm is *alignment-free* — it doesn't require an\n\nalignment between the input and the output. However, to get the probability of\n\nan output given an input, CTC works by summing over the probability of all\n\npossible alignments between the two. We need to understand what these\n\nalignments are in order to understand how the loss function is ultimately\n\ncalculated.\n\nTo motivate the specific form of the CTC alignments, first consider a naive\n\napproach. Let's use an example. Assume the input has length six and Y=Y\n\n=Y= [c, a, t]. One way to align XXX and YYY\n\nis to assign an output character to each input step and collapse repeats.\n\n![](Sequence%20Modeling%20with%20CTC_files/naive_alignment.svg)\n\nThis approach has two problems.\n\n* Often, it doesn't make sense to force every input step to align to\n\n some output. 
In speech recognition, for example, the input can have stretches\n\n of silence with no corresponding output.\n\n* We have no way to produce outputs with multiple characters in a row.\n\n Consider the alignment [h, h, e, l, l, l, o]. Collapsing repeats will\n\n produce \"helo\" instead of \"hello\".\n\nTo get around these problems, CTC introduces a new token to the set of\n\nallowed outputs. This new token is sometimes called the *blank* token. We'll\n\nrefer to it here as ϵ.\\epsilon.ϵ. The\n\nϵ\\epsilonϵ token doesn't correspond to anything and is simply\n\nremoved from the output.\n\nThe alignments allowed by CTC are the same length as the input. We allow any\n\nalignment which maps to YYY after merging repeats and removing\n\nϵ\\epsilonϵ tokens:\n\n![](Sequence%20Modeling%20with%20CTC_files/ctc_alignment_steps.svg)\n\nIf YYY has two of the same character in a row, then a valid\n\nalignment must have an ϵ\\epsilonϵ between them. With this rule\n\nin place, we can differentiate between alignments which collapse to \"hello\" and\n\nthose which collapse to \"helo\".\n\nLet's go back to the output [c, a, t] with an input of length six. Here are\n\na few more examples of valid and invalid alignments.\n\n![](Sequence%20Modeling%20with%20CTC_files/valid_invalid_alignments.svg)\n\nThe CTC alignments have a few notable properties. First, the allowed\n\nalignments between XXX and YYY are monotonic.\n\nIf we advance to the next input, we can keep the corresponding output the\n\nsame or advance to the next one. A second property is that the alignment of\n\nXXX to YYY is many-to-one. One or more input\n\nelements can align to a single output element but not vice-versa. This implies\n\na third property: the length of YYY cannot be greater than the\n\nlength of X.X.X.\n\n### Loss Function\n\nThe CTC alignments give us a natural way to go from probabilities at each\n\ntime-step to the probability of an output sequence.\n\n![](Sequence%20Modeling%20with%20CTC_files/full_collapse_from_audio.svg)\n\nTo be precise, the CTC objective\n\nfor a single (X,Y)(X, Y)(X,Y) pair is:\n\np(Y∣X)=p(Y \\mid X) \\;\\; =p(Y∣X)=\n\n∑A∈AX,Y\\sum\\_{A \\in \\mathcal{A}\\_{X,Y}}A∈AX,Y​∑​\n\n∏t=1Tpt(at∣X)\\prod\\_{t=1}^T \\; p\\_t(a\\_t \\mid X)t=1∏T​pt​(at​∣X)\n\n The CTC conditional **probability**\n\n**marginalizes** over the set of valid alignments\n\n \n\n computing the **probability** for a single alignment step-by-step.\n\n \n\nModels trained with CTC typically use a recurrent neural network (RNN) to\n\nestimate the per time-step probabilities, pt(at∣X).p\\_t(a\\_t \\mid X).pt​(at​∣X).\n\nAn RNN usually works well since it accounts for context in the input, but we're\n\nfree to use any learning algorithm which produces a distribution over output\n\nclasses given a fixed-size slice of the input.\n\nIf we aren't careful, the CTC loss can be very expensive to compute. We\n\ncould try the straightforward approach and compute the score for each alignment\n\nsumming them all up as we go. The problem is there can be a massive number of\n\nalignments.\n\n For a YYY of length UUU without any repeat\n\n characters and an XXX of length TTT the size\n\n of the set is (T+UT−U).{T + U \\choose T - U}.(T−UT+U​). For T=100T=100T=100 and\n\n U=50U=50U=50 this number is almost 1040.10^{40}.1040.\n\nFor most problems this would be too slow.\n\nThankfully, we can compute the loss much faster with a dynamic programming\n\nalgorithm. 
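Before turning to that dynamic program, the collapsing rule above (merge repeats, then drop ϵ) is simple to write down, and makes it concrete why an output with repeated characters needs an ϵ between them:

```python
EPSILON = "ϵ"  # the CTC blank token

def collapse(alignment):
    """Map a CTC alignment to its output: merge repeated tokens, remove blanks."""
    output, previous = [], None
    for token in alignment:
        if token != previous and token != EPSILON:
            output.append(token)
        previous = token
    return output

print(collapse(["h", "h", "e", "l", EPSILON, "l", "o"]))  # ['h', 'e', 'l', 'l', 'o']
print(collapse(["h", "h", "e", "l", "l", "l", "o"]))      # ['h', 'e', 'l', 'o']
```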
The key insight is that if two alignments have reached the same\n\noutput at the same step, then we can merge them.\n\n![](Sequence%20Modeling%20with%20CTC_files/all_alignments.svg)\n\n Summing over all alignments can be very expensive.\n\n \n\n![](Sequence%20Modeling%20with%20CTC_files/merged_alignments.svg)\n\n Dynamic programming merges alignments, so it's much faster.\n\n \n\nSince we can have an ϵ\\epsilonϵ before or after any token in\n\nYYY, it's easier to describe the algorithm\n\nusing a sequence which includes them. We'll work with the sequence\n\nZ=[ϵ, y1, ϵ, y2, …, ϵ, yU, ϵ]\n\nZ=[ϵ, y1​, ϵ, y2​, …, ϵ, yU​, ϵ]\n\nwhich is YYY with an ϵ\\epsilonϵ at\n\nthe beginning, end, and between every character.\n\nLet's let αlphaα be the score of the merged alignments at a\n\ngiven node. More precisely, αs,tlpha\\_{s, t}αs,t​ is the CTC score of\n\nthe subsequence Z1:sZ\\_{1:s}Z1:s​ after ttt input steps.\n\nAs we'll see, we'll compute the final CTC score, P(Y∣X)P(Y \\mid X)P(Y∣X),\n\nfrom the αlphaα's at the last time-step. As long as we know\n\nthe values of αlphaα at the previous time-step, we can compute\n\nαs,t.lpha\\_{s, t}.αs,t​. There are two cases.\n\n**Case 1:**\n\n![](Sequence%20Modeling%20with%20CTC_files/cost_no_skip.svg)\n\nIn this case, we can't jump over zs−1z\\_{s-1}zs−1​, the previous\n\ntoken in Z.Z.Z. The first reason is that the previous token can\n\nbe an element of YYY, and we can't skip elements of\n\nY.Y.Y. Since every element of YYY in\n\nZZZ is followed by an ϵ\\epsilonϵ, we can\n\nidentify this when zs=ϵ. z\\_{s} = \\epsilon.zs​=ϵ. The second reason is\n\nthat we must have an ϵ\\epsilonϵ between repeat characters in\n\nY.Y.Y. We can identify this when\n\nzs=zs−2.z\\_s = z\\_{s-2}.zs​=zs−2​.\n\nTo ensure we don't skip zs−1z\\_{s-1}zs−1​, we can either be there\n\nat the previous time-step or have already passed through at some earlier\n\ntime-step. As a result there are two positions we can transition from.\n\nαs,t=lpha\\_{s, t} \\; =αs,t​=\n\n The CTC probability of the two valid subsequences after\n\n t−1t-1t−1 input steps.\n\n \n\npt(zs∣X)p\\_t(z\\_{s} \\mid X)pt​(zs​∣X)\n\n The probability of the current character at input step t.t.t.\n\n![](Sequence%20Modeling%20with%20CTC_files/cost_regular.svg)\n\n**Case 2:**\n\nIn the second case, we're allowed to skip the previous token in\n\nZ.Z.Z. We have this case whenever zs−1z\\_{s-1}zs−1​ is\n\nan ϵ\\epsilonϵ between unique characters. As a result there are\n\nthree positions we could have come from at the previous step.\n\nαs,t=lpha\\_{s, t} \\; =αs,t​=\n\n The CTC probability of the three valid subsequences after\n\n t−1t-1t−1 input steps.\n\n \n\npt(zs∣X)p\\_t(z\\_{s} \\mid X)pt​(zs​∣X)\n\n The probability of the current character at input step t.t.t.\n\nBelow is an example of the computation performed by the dynamic programming\n\nalgorithm. Every valid alignment has a path in this graph.\n\n output \n\nY=Y =Y= [a, b]\n\n \n\n input, XXX\n\n![](Sequence%20Modeling%20with%20CTC_files/ctc_cost.svg)\n\n Node (s,t)(s, t)(s,t) in the diagram represents\n\n αs,tlpha\\_{s, t}αs,t​ – the CTC score of\n\n the subsequence Z1:sZ\\_{1:s}Z1:s​ after\n\n ttt input steps.\n\n \n\nThere are two valid starting nodes and two valid final nodes since the\n\nϵ\\epsilonϵ at the beginning and end of the sequence is\n\noptional. The complete probability is the sum of the two final nodes.\n\nNow that we can efficiently compute the loss function, the next step is to\n\ncompute a gradient and train the model. 
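Putting the two cases together, here is a compact, deliberately unoptimized sketch of the forward computation of p(Y ∣ X) from a T×K matrix of per-time-step probabilities. It works in probability space for readability; a practical implementation would work in log space (see the numerical-stability notes later on).

```python
import numpy as np

def ctc_forward(probs, targets, blank=0):
    """CTC score p(Y|X) by dynamic programming.

    probs:   array of shape (T, K) with probs[t, k] = p_t(k | X)
    targets: list of output token ids (Y), without blanks
    """
    T = probs.shape[0]
    z = [blank]                       # Z = [blank, y1, blank, y2, ..., yU, blank]
    for y in targets:
        z += [y, blank]
    S = len(z)

    alpha = np.zeros((S, T))
    alpha[0, 0] = probs[0, blank]     # start at the leading blank ...
    if S > 1:
        alpha[1, 0] = probs[0, z[1]]  # ... or at the first output character

    for t in range(1, T):
        for s in range(S):
            a = alpha[s, t - 1] + (alpha[s - 1, t - 1] if s > 0 else 0.0)
            # Case 2: we may also skip z[s-1] when it is a blank
            # sitting between two *different* characters.
            if s > 1 and z[s] != blank and z[s] != z[s - 2]:
                a += alpha[s - 2, t - 1]
            alpha[s, t] = a * probs[t, z[s]]

    # The trailing blank is optional, so sum the last two states.
    return alpha[S - 1, T - 1] + (alpha[S - 2, T - 1] if S > 1 else 0.0)

# Toy example: T=4 input steps, alphabet {blank, a, b}, Y = [a, b]
probs = np.full((4, 3), 1.0 / 3.0)
print(ctc_forward(probs, targets=[1, 2], blank=0))
```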
The CTC loss function is differentiable\n\nwith respect to the per time-step output probabilities since it's just sums and\n\nproducts of them. Given this, we can analytically compute the gradient of the\n\nloss function with respect to the (unnormalized) output probabilities and from\n\nthere run backpropagation as usual.\n\nFor a training set D\\mathcal{D}D, the model's parameters\n\nare tuned to minimize the negative log-likelihood\n\n∑(X,Y)∈D−logp(Y∣X)\n\n\\sum\\_{(X, Y) \\in \\mathcal{D}} -\\log\\; p(Y \\mid X)\n\n(X,Y)∈D∑​−logp(Y∣X)\n\ninstead of maximizing the likelihood directly.\n\n### Inference\n\nAfter we've trained the model, we'd like to use it to find a likely output\n\nfor a given input. More precisely, we need to solve:\n\nY∗=argmaxYp(Y∣X)\n\nY∗=Yargmax​p(Y∣X)\n\nOne heuristic is to take the most likely output at each time-step. This\n\ngives us the alignment with the highest probability:\n\nA∗=argmaxA∏t=1Tpt(at∣X)\n\nA∗=Aargmax​t=1∏T​pt​(at​∣X)\n\nWe can then collapse repeats and remove ϵ\\epsilonϵ tokens to\n\nget Y.Y.Y.\n\nFor many applications this heuristic works well, especially when most of the\n\nprobability mass is alloted to a single alignment. However, this approach can\n\nsometimes miss easy to find outputs with much higher probability. The problem\n\nis, it doesn't take into account the fact that a single output can have many\n\nalignments.\n\nHere's an example. Assume the alignments [a, a, ϵ\\epsilonϵ]\n\nand [a, a, a] individually have lower probability than [b, b, b]. But\n\nthe sum of their probabilities is actually greater than that of [b, b, b]. The\n\nnaive heuristic will incorrectly propose Y=Y =Y= [b] as\n\nthe most likely hypothesis. It should have chosen Y=Y =Y= [a].\n\nTo fix this, the algorithm needs to account for the fact that [a, a, a] and [a,\n\na, ϵ\\epsilonϵ] collapse to the same output.\n\nWe can use a modified beam search to solve this. Given limited\n\ncomputation, the modified beam search won't necessarily find the\n\nmost likely Y.Y.Y. It does, at least, have\n\nthe nice property that we can trade-off more computation\n\n(a larger beam-size) for an asymptotically better solution.\n\nA regular beam search computes a new set of hypotheses at each input step.\n\nThe new set of hypotheses is generated from the previous set by extending each\n\nhypothesis with all possible output characters and keeping only the top\n\ncandidates.\n\n![](Sequence%20Modeling%20with%20CTC_files/beam_search.svg)\n\n A standard beam search algorithm with an alphabet of\n\n {ϵ,a,b}\\{\\epsilon, a, b\\}{ϵ,a,b} and a beam size\n\n of three.\n\n \n\nWe can modify the vanilla beam search to handle multiple alignments mapping to\n\nthe same output. In this case instead of keeping a list of alignments in the\n\nbeam, we store the output prefixes after collapsing repeats and removing\n\nϵ\\epsilonϵ characters. At each step of the search we accumulate\n\nscores for a given prefix based on all the alignments which map to it.\n\n The CTC beam search algorithm with an output alphabet\n\n {ϵ,a,b}\\{\\epsilon, a, b\\}{ϵ,a,b}\n\n and a beam size of three.\n\n \n\nA proposed extension can map to two output prefixes if the character is a\n\nrepeat. This is shown at T=3T=3T=3 in the figure above\n\nwhere 'a' is proposed as an extension to the prefix [a]. Both [a] and [a, a] are\n\nvalid outputs for this proposed extension.\n\nWhen we extend [a] to produce [a,a], we only want include the part of the\n\nprevious score for alignments which end in ϵ.\\epsilon.ϵ. 
Remember, the\n\nϵ\\epsilonϵ is required between repeat characters. Similarly,\n\nwhen we don't extend the prefix and produce [a], we should only include the part\n\nof the previous score for alignments which don't end in ϵ.\\epsilon.ϵ.\n\nGiven this, we have to keep track of two probabilities for each prefix\n\nin the beam. The probability of all alignments which end in\n\nϵ\\epsilonϵ and the probability of all alignments which don't\n\nend in ϵ.\\epsilon.ϵ. When we rank the hypotheses at\n\neach step before pruning the beam, we'll use their combined scores.\n\nThe implementation of this algorithm doesn't require much code, but it is\n\ndense and tricky to get right. Checkout this\n\n[gist](https://gist.github.com/awni/56369a90d03953e370f3964c826ed4b0)\n\nfor an example implementation in Python.\n\nIn some problems, such as speech recognition, incorporating a language model\n\nover the outputs significantly improves accuracy. We can include the language\n\nmodel as a factor in the inference problem.\n\nY∗=argmaxYY^\\* \\enspace = \\enspace {\\mathop{\text{argmax}}\\limits\\_{Y}} \n\n Y∗=Yargmax​\n\np(Y∣X)⋅p(Y \\mid X) \\quad \\cdotp(Y∣X)⋅\n\n The CTC conditional probability.\n\n \n\np(Y)α⋅p(Y)^lpha \\quad \\cdotp(Y)α⋅\n\n The language model probability.\n\n \n\nL(Y)βL(Y)^etaL(Y)β\n\n The \"word\" insertion bonus.\n\n \n\nThe function L(Y)L(Y)L(Y) computes the length of\n\nYYY in terms of the language model tokens and acts as a word\n\ninsertion bonus. With a word-based language model L(Y)L(Y)L(Y)\n\ncounts the number of words in Y.Y.Y. If we use a character-based\n\nlanguage model then L(Y)L(Y)L(Y) counts the number of characters\n\nin Y.Y.Y. The language model scores are only included when a\n\nprefix is extended by a character (or word) and not at every step of the\n\nalgorithm. This causes the search to favor shorter prefixes, as measured by\n\nL(Y)L(Y)L(Y), since they don't include as many language model\n\nupdates. The word insertion bonus helps with this. The parameters\n\nαlphaα and βetaβ are usually set by\n\ncross-validation.\n\nThe language model scores and word insertion term can be included in the\n\nbeam search. Whenever we propose to extend a prefix by a character, we can\n\ninclude the language model score for the new character given the prefix so\n\nfar.\n\nProperties of CTC\n\n-----------------\n\nWe mentioned a few important properties of CTC so far. Here we'll go\n\ninto more depth on what these properties are and what trade-offs they offer.\n\n### Conditional Independence\n\nOne of the most commonly cited shortcomings of CTC is the conditional\n\nindependence assumption it makes.\n\n![](Sequence%20Modeling%20with%20CTC_files/conditional_independence.svg)\n\n Graphical model for CTC.\n\n \n\nThe model assumes that every output is conditionally independent of\n\nthe other outputs given the input. This is a bad assumption for many\n\nsequence to sequence problems.\n\nSay we had an audio clip of someone saying \"triple A\".\n\n Another valid transcription could\n\nbe \"AAA\". If the first letter of the predicted transcription is 'A', then\n\nthe next letter should be 'A' with high probability and 'r' with low\n\nprobability. The conditional independence assumption does not allow for this.\n\n![](Sequence%20Modeling%20with%20CTC_files/triple_a.svg)\n\n If we predict an 'A' as the first letter then the suffix 'AA' should get much\n\n more probability than 'riple A'. 
If we predict 't' first, the opposite\n\n should be true.\n\n \n\nIn fact speech recognizers using CTC don't learn a language model over the\n\noutput nearly as well as models which are conditionally dependent.\n\n However, a separate language model can\n\nbe included and usually gives a good boost to accuracy.\n\nThe conditional independence assumption made by CTC isn't always a bad\n\nthing. Baking in strong beliefs over output interactions makes the model less\n\nadaptable to new or altered domains. For example, we might want to use a speech\n\nrecognizer trained on phone conversations between friends to transcribe\n\ncustomer support calls. The language in the two domains can be quite different\n\neven if the acoustic model is similar. With a CTC acoustic model, we can easily\n\nswap in a new language model as we change domains.\n\n### Alignment Properties\n\nThe CTC algorithm is *alignment-free*. The objective function\n\nmarginalizes over all alignments. While CTC does make strong assumptions about\n\nthe form of alignments between XXX and YYY, the\n\nmodel is agnostic as to how probability is distributed amongst them. In some\n\nproblems CTC ends up allocating most of the probability to a single alignment.\n\nHowever, this isn't guaranteed.\n\nWe could force the model to choose a single\n\nalignment by replacing the sum with a max in the objective function,\n\np(Y∣X)=maxA∈AX,Y∏t=1Tp(at∣X).\n\np(Y∣X)=A∈AX,Y​max​t=1∏T​p(at​∣X).\n\nAs mentioned before, CTC only allows *monotonic* alignments. In\n\nproblems such as speech recognition this may be a valid assumption. For other\n\nproblems like machine translation where a future word in a target sentence\n\ncan align to an earlier part of the source sentence, this assumption is a\n\ndeal-breaker.\n\nAnother important property of CTC alignments is that they are\n\n*many-to-one*. Multiple inputs can align to at most one output. In some\n\ncases this may not be desirable. We might want to enforce a strict one-to-one\n\ncorrespondence between elements of XXX and\n\nY.Y.Y. Alternatively, we may want to allow multiple output\n\nelements to align to a single input element. For example, the characters\n\n\"th\" might align to a single input step of audio. A character based CTC model\n\nwould not allow that.\n\nThe many-to-one property implies that the output can't have more time-steps\n\nthan the input.\n\n If YYY has rrr consecutive\n\n repeats, then the length of YYY must be less than\n\n the length of XXX by 2r−1.2r - 1.2r−1.\n\nThis is usually not a problem for speech and handwriting recognition since the\n\ninput is much longer than the output. However, for other problems where\n\nYYY is often longer than XXX, CTC just won't\n\nwork.\n\nCTC in Context\n\n--------------\n\nIn this section we'll discuss how CTC relates to other commonly used\n\nalgorithms for sequence modeling.\n\n### HMMs\n\nAt a first glance, a Hidden Markov Model (HMM) seems quite different from\n\nCTC. But, the two algorithms are actually quite similar. Understanding the\n\nrelationship between them will help us understand what advantages CTC has over\n\nHMM sequence models and give us insight into how CTC could be changed for\n\nvarious use cases.\n\nLet's use the same notation as before,\n\nXXX is the input sequence and YYY\n\nis the output sequence with lengths TTT and\n\nUUU respectively. We're interested in learning\n\np(Y∣X).p(Y \\mid X).p(Y∣X). 
One way to simplify the problem is to apply\n\nBayes' Rule:\n\np(Y∣X)∝p(X∣Y)p(Y).\n\np(Y \\mid X) \\; \\propto \\; p(X \\mid Y) \\; p(Y).\n\np(Y∣X)∝p(X∣Y)p(Y).\n\nThe p(Y)p(Y)p(Y) term can be any language model, so let's focus on\n\np(X∣Y).p(X \\mid Y).p(X∣Y). Like before we'll let\n\nA\\mathcal{A}A be a set of allowed alignments between\n\nXXX and Y.Y.Y. Members of\n\nA\\mathcal{A}A have length T.T.T.\n\nLet's otherwise leave A\\mathcal{A}A unspecified for now. We'll\n\ncome back to it later. We can marginalize over alignments to get\n\np(X∣Y)=∑A∈Ap(X,A∣Y).\n\np(X \\mid Y)\\; = \\; \\sum\\_{A \\in \\mathcal{A}} \\; p(X, A \\mid Y).\n\np(X∣Y)=A∈A∑​p(X,A∣Y).\n\nTo simplify notation, let's remove the conditioning on YYY, it\n\nwill be present in every p(⋅).p(\\cdot).p(⋅). With two assumptions we can\n\nwrite down the standard HMM.\n\np(X)=p(X) \\quad =p(X)=\n\n The probability of the input\n\n \n\n∑A∈A∏t=1T\\sum\\_{A \\in \\mathcal{A}} \\; \\prod\\_{t=1}^T∑A∈A​∏t=1T​\n\n Marginalizes over alignments\n\n \n\np(xt∣at)⋅p(x\\_t \\mid a\\_t) \\quad \\cdotp(xt​∣at​)⋅\n\n The emission probability\n\n \n\np(at∣at−1)p(a\\_t \\mid a\\_{t-1})p(at​∣at−1​)\n\n The transition probability\n\n \n\nThe first assumption is the usual Markov property. The state\n\nata\\_tat​ is conditionally independent of all historic states given\n\nthe previous state at−1.a\\_{t-1}.at−1​. The second is that the observation\n\nxtx\\_txt​ is conditionally independent of everything given the\n\ncurrent state at.a\\_t.at​.\n\n![](Sequence%20Modeling%20with%20CTC_files/hmm.svg)\n\n The graphical model for an HMM.\n\n \n\nNow we can take just a few steps to transform the HMM into CTC and see how\n\nthe two models relate. First, let's assume that the transition probabilities\n\np(at∣at−1)p(a\\_t \\mid a\\_{t-1})p(at​∣at−1​) are uniform. This gives\n\np(X)∝∑A∈A∏t=1Tp(xt∣at).\n\np(X)∝A∈A∑​t=1∏T​p(xt​∣at​).\n\nThere are only two differences from this equation and the CTC loss function.\n\nThe first is that we are learning a model of XXX given\n\nYYY as opposed to YYY given X.X.X.\n\nThe second is how the set A\\mathcal{A}A is produced. Let's deal\n\nwith each in turn.\n\nTo do this, we apply Bayes' rule and rewrite the model as\n\np(X)∝∑A∈A∏t=1Tp(at∣xt)p(xt)p(at)\n\np(X)∝A∈A∑​t=1∏T​p(at​)p(at​∣xt​)p(xt​)​\n\n∝∑A∈A∏t=1Tp(at∣xt)p(at). \n\n∝A∈A∑​t=1∏T​p(at​)p(at​∣xt​)​.\n\nIf we assume a uniform prior over the states aaa and condition on all of\n\nXXX instead of a single element at a time, we arrive at\n\np(X)∝∑A∈A∏t=1Tp(at∣X).\n\np(X)∝A∈A∑​t=1∏T​p(at​∣X).\n\nThe above equation is essentially the CTC loss function, assuming the set\n\nA\\mathcal{A}A is the same. In fact, the HMM framework does not specify what\n\nA\\mathcal{A}A should consist of. This part of the model can be designed on a\n\nper-problem basis. In many cases the model doesn't condition on YYY and the\n\nset A\\mathcal{A}A consists of all possible length TTT sequences from the\n\noutput alphabet. In this case, the HMM can be drawn as an *ergodic* state\n\ntransition diagram in which every state connects to every other state. The\n\nfigure below shows this model with the alphabet or set of unique hidden states\n\nas {a,b,c}.\\{a, b, c\\}.{a,b,c}.\n\nIn our case the transitions allowed by the model are strongly related to\n\nY.Y.Y. We want the HMM to reflect this. One possible model could\n\nbe a simple linear state transition diagram. The figure below shows this with\n\nthe same alphabet as before and Y=Y =Y= [a, b]. 
Another commonly\n\nused model is the *Bakis* or left-right HMM. In this model any\n\ntransition which proceeds from the left to the right is allowed.\n\n![](Sequence%20Modeling%20with%20CTC_files/ergodic_hmm.svg)\n\n**Ergodic HMM:** Any node can be either a starting or\n\n final state.\n\n \n\n![](Sequence%20Modeling%20with%20CTC_files/linear_hmm.svg)\n\n**Linear HMM:** Start on the left, end on the right.\n\n \n\n![](Sequence%20Modeling%20with%20CTC_files/ctc_hmm.svg)\n\n**CTC HMM:** The first two nodes are the starting\n\n states and the last two nodes are the final states.\n\n \n\nIn CTC we augment the alphabet with ϵ\\epsilonϵ and the HMM model allows a\n\nsubset of the left-right transitions. The CTC HMM has two start\n\nstates and two accepting states.\n\nOne possible source of confusion is that the HMM model differs for any unique\n\nY.Y.Y. This is in fact standard in applications such as speech recognition. The\n\nstate diagram changes based on the output Y.Y.Y. However, the functions which\n\nestimate the observation and transition probabilities are shared.\n\nLet's discuss how CTC improves on the original HMM model. First, we can think\n\nof the CTC state diagram as a special case HMM which works well for many\n\nproblems of interest. Incorporating the blank as a hidden state in the HMM\n\nallows us to use the alphabet of YYY as the other hidden states. This model\n\nalso gives a set of allowed alignments which may be a good prior for some\n\nproblems.\n\nPerhaps most importantly, CTC is discriminative. It models p(Y∣X)p(Y \\mid\n\n X)p(Y∣X) directly, an idea that's been important in the past with other\n\ndiscriminative improvements to HMMs.\n\nDiscriminative training let's us apply powerful learning algorithms like the\n\nRNN directly towards solving the problem we care about.\n\n### Encoder-Decoder Models\n\nThe encoder-decoder is perhaps the most commonly used framework for sequence\n\nmodeling with neural networks. These models have an encoder\n\nand a decoder. The encoder maps the input sequence XXX into a\n\nhidden representation. The decoder consumes the hidden representation and\n\nproduces a distribution over the outputs. We can write this as\n\nH=encode(X)p(Y∣X)=decode(H).\n\negin{aligned}\n\nH\\enspace &= \\enspace\textsf{encode}(X) \\[.5em]\n\np(Y \\mid X)\\enspace &= \\enspace \textsf{decode}(H).\n\n\\end{aligned}\n\nHp(Y∣X)​=encode(X)=decode(H).​\n\nThe encode(⋅)\textsf{encode}(\\cdot)encode(⋅) and\n\ndecode(⋅)\textsf{decode}(\\cdot)decode(⋅) functions are typically RNNs. The\n\ndecoder can optionally be equipped with an attention mechanism. The hidden\n\nstate sequence HHH has the same number of time-steps as the\n\ninput, T.T.T. Sometimes the encoder subsamples the input. If the\n\nencoder subsamples the input by a factor sss then\n\nHHH will have T/sT/sT/s time-steps.\n\nWe can interpret CTC in the encoder-decoder framework. This is helpful to\n\nunderstand the developments in encoder-decoder models that are applicable to\n\nCTC and to develop a common language for the properties of these\n\nmodels.\n\n**Encoder:** The encoder of a CTC model can be just about any\n\nencoder we find in commonly used encoder-decoder models. For example the\n\nencoder could be a multi-layer bidirectional RNN or a convolutional network.\n\nThere is a constraint on the CTC encoder that doesn't apply to the others. 
The\n\ninput length cannot be sub-sampled so much that T/sT/sT/s\n\nis less than the length of the output.\n\n**Decoder:** We can view the decoder of a CTC model as a simple\n\nlinear transformation followed by a softmax normalization. This layer should\n\nproject all TTT steps of the encoder output\n\nHHH into the dimensionality of the output alphabet.\n\nWe mentioned earlier that CTC makes a conditional independence assumption over\n\nthe characters in the output sequence. This is one of the big advantages that\n\nother encoder-decoder models have over CTC — they can model the\n\ndependence over the outputs. However in practice, CTC is still more commonly\n\nused in tasks like speech recognition as we can partially overcome the\n\nconditional independence assumption by including an external language model.\n\nPractitioner's Guide\n\n--------------------\n\nSo far we've mostly developed a conceptual understanding of CTC. Here we'll go\n\nthrough a few implementation tips for practitioners.\n\n**Software:** Even with a solid understanding of CTC, the\n\nimplementation is difficult. The algorithm has several edge cases and a fast\n\nimplementation should be written in a lower-level programming language.\n\nOpen-source software tools make it much easier to get started:\n\n* Baidu Research has open-sourced\n\n [warp-ctc](https://github.com/baidu-research/warp-ctc). The\n\n package is written in C++ and CUDA. The CTC loss function runs on either\n\n the CPU or the GPU. Bindings are available for Torch, TensorFlow and\n\n [PyTorch](https://github.com/awni/warp-ctc).\n\n* TensorFlow has built in\n\n [CTC loss](https://www.tensorflow.org/api_docs/python/tf/nn/ctc_loss)\n\n functions for the CPU.\n\n* Nvidia also provides a GPU implementation of CTC in\n\n [cuDNN](https://developer.nvidia.com/cudnn) versions 7 and up.\n\n**Numerical Stability:** Computing the CTC loss naively is\n\nnumerically unstable. One method to avoid this is to normalize the\n\nαlphaα's at each time-step. The original publication\n\nhas more detail on this including the adjustments to the gradient.\n\n In practice this works well enough\n\nfor medium length sequences but can still underflow for long sequences.\n\nA better solution is to compute the loss function in log-space with the\n\nlog-sum-exp trick.\n\nWhen computing the sum of two probabilities in log space use the identity\n\nlog(ea+eb)=max{a,b}+log(1+e−∣a−b∣)\n\n\\log(e^a + e^b) = \\max\\{a, b\\} + \\log(1 + e^{-|a-b|})\n\nlog(ea+eb)=max{a,b}+log(1+e−∣a−b∣)\n\nMost programming languages have a stable function to compute\n\nlog(1+x)\\log(1 + x)log(1+x) when\n\nxxx is close to zero.\n\nInference should also be done in log-space using the log-sum-exp trick.\n\n**Beam Search:** There are a couple of good tips to know about\n\nwhen implementing and using the CTC beam search.\n\nThe correctness of the beam search can be tested as follows.\n\n1. Run the beam search algorithm on an arbitrary input.\n\n2. Save the inferred output Y¯ar{Y}Y¯ and the corresponding\n\n score c¯.ar{c}.c¯.\n\n3. Compute the actual CTC score ccc for\n\n Y¯.ar{Y}.Y¯.\n\n4. Check that c¯≈car{c} pprox cc¯≈c with the former being no\n\n greater than the latter. As the beam size increases the inferred output\n\n Y¯ar{Y}Y¯ may change, but the two numbers should grow\n\n closer.\n\nA common question when using a beam search decoder is the size of the beam\n\nto use. There is a trade-off between accuracy and runtime. We can check if the\n\nbeam size is in a good range. 
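The correctness test above can be scripted in a few lines. Here `ctc_beam_search` is a hypothetical decoder interface (for instance, a wrapper around the linked gist) that returns the inferred output together with its score, and `ctc_forward` is the dynamic-programming sketch from earlier:

```python
def check_beam_search(probs, ctc_beam_search, ctc_forward, beam_size=10, tol=1e-6):
    """Sanity-check a CTC beam search decoder: its reported score for the inferred
    output must not exceed the true CTC score of that output, and the gap should
    shrink as the beam size grows."""
    inferred, beam_score = ctc_beam_search(probs, beam_size=beam_size)
    true_score = ctc_forward(probs, targets=inferred)
    assert beam_score <= true_score + tol, "beam score exceeds the true CTC score"
    return true_score - beam_score
```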
To check this, first compute the CTC score for the inferred output c_i. Then compute the CTC score for the ground-truth output c_g. If the two outputs are not the same, we should have cg