{"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Adversarial Reprogramming of Neural Cellular Automata", "authors": ["Ettore Randazzo", "Alexander Mordvintsev", "Eyvind Niklasson", "Michael Levin"], "date_published": "2021-05-06", "abstract": " This article is part of the Differentiable Self-organizing Systems Thread, an experimental format collecting invited short articles delving into differentiable self-organizing systems, interspersed with critical commentary from several experts in adjacent fields. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00027.004", "text": "\n\n### Contents\n\n[Adversarial MNIST CAs](#adversarial-mnist-cas) | \n\n[Perturbing the states of Growing CAs](#perturbing-the-states-of-growing-cas) | \n\n[Related Work](#related-work)\n\n[Discussion](#discussion)\n\n \n\n \n\n This article is part of the\n\n an experimental format collecting invited short articles delving into\n\n differentiable self-organizing systems, interspersed with critical\n\n commentary from several experts in adjacent fields.\n\n \n\n[Self-Organising Textures](https://distill.pub/selforg/2021/textures/)\n\nIn a complex system, whether biological, technological, or social, \n\nhow can we discover signaling events that will alter system-level \n\nbehavior in desired ways? Even when the rules governing the individual \n\ncomponents of these complex systems are known, the inverse problem - \n\ngoing from desired behaviour to system design - is at the heart of many \n\nbarriers for the advance of biomedicine, robotics, and other fields of \n\nimportance to society.\n\nBiology, specifically, is transitioning from a focus on mechanism \n\n(what is required for the system to work) to a focus on information \n\n(what algorithm is sufficient to implement adaptive behavior). Advances \n\nin machine learning represent an exciting and largely untapped source of\n\n inspiration and tooling to assist the biological sciences. Growing \n\nNeural Cellular Automata and Self-classifying MNIST Digits \n\n introduced the Neural Cellular Automata (Neural CA) model and \n\ndemonstrated how tasks requiring self-organisation, such as pattern \n\ngrowth and self-classification of digits, can be trained in an \n\nend-to-end, differentiable fashion. The resulting models were robust to \n\nvarious kinds of perturbations: the growing CA expressed regenerative \n\ncapabilities when damaged; the MNIST CA were responsive to changes in \n\nthe underlying digits, triggering reclassification whenever necessary. \n\nThese computational frameworks represent quantitative models with which \n\nto understand important biological phenomena, such as scaling of single \n\ncell behavior rules into reliable organ-level anatomies. The latter is a\n\n kind of anatomical homeostasis, achieved by feedback loops that must \n\nrecognize deviations from a correct target morphology and progressively \n\nreduce anatomical error.\n\nIn this work, we *train adversaries* whose goal is to reprogram \n\nCA into doing something other than what they were trained to do. 
In \n\norder to understand what kinds of lower-level signals alter \n\nsystem-level behavior of our CA, it is important to understand how these\n\n CA are constructed and where local versus global information resides.\n\nThe system-level behavior of Neural CA is affected by:\n\n* **Individual cell states.** States store \n\ninformation which is used for both diversification among cell behaviours\n\n and for communication with neighbouring cells.\n\n* **The model parameters.** These describe the \n\ninput/output behavior of a cell and are shared by every cell of the same\n\n family. The model parameters can be seen as *the way the system works*.\n\n* **The perceptive field.** This is how cells perceive \n\ntheir environment. In Neural CA, we always restrict the perceptive field\n\n to be the eight nearest neighbors and the cell itself. The way cells \n\nare perceived by each other is different between the Growing CA and \n\nMNIST CA. The Growing CA perceptive field is a set of weights fixed both\n\n during training and inference, while the MNIST CA perceptive field is \n\nlearned as part of the model parameters.\n\nWe will explore two kinds of adversarial attacks: 1) injecting a few \n\nadversarial cells into an existing grid running a pretrained model; and \n\n2) perturbing the global state of all cells on a grid.\n\nFor the first type of adversarial attacks we train a new CA model \n\nthat, when placed in an environment running one of the original models \n\ndescribed in the previous articles, is able to hijack the behavior of \n\nthe collective mix of adversarial and non-adversarial CA. This is an \n\nexample of injecting CA with differing *model parameters* into the \n\nsystem. In biology, numerous forms of hijacking are known, including \n\n Especially fascinating are the many cases of non-cell-autonomous \n\nsignaling developmental biology and cancer, showing that some cell \n\nbehaviors can significantly alter host properties both locally and at \n\nlong range. For example, bioelectrically-abnormal cells can trigger \n\nmetastatic conversion in an otherwise normal body (with no genetic \n\n All of these phenomena underlie the importance of understanding how \n\ncell groups make collective decisions, and how those tissue-level \n\ndecisions can be subverted by the activity of a small number of cells. \n\nIt is essential to develop quantitative models of such dynamics, in \n\norder to drive meaningful progress in regenerative medicine that \n\ncontrols system-level outcomes top-down, where cell- or molecular-level \n\nmicromanagement is infeasible .\n\n We apply a global state perturbation to all living cells. This can be \n\nseen as inhibiting or enhancing combinations of state values, in turn \n\nhijacking proper communications among cells and within the cell’s own \n\nstates. Models like this represent not only ways of thinking about \n\nadversarial relationships in nature (such as parasitism and evolutionary\n\n arms races of genetic and physiological mechanisms), but also a roadmap\n\n for the development of regenerative medicine strategies. \n\nNext-generation biomedicine will need computational tools for inferring \n\nminimal, least-effort interventions that can be applied to biological \n\nsystems to predictively change their large-scale anatomical and \n\nbehavioral properties.\n\nRecall how the Self-classifying MNIST digits task consisted of \n\nplacing CA cells on a plane forming the shape of an MNIST digit. 
The \n\ncells then had to communicate among themselves in order to come to a \n\ncomplete consensus as to which digit they formed.\n\n (a) Local information neighbourhood - each cell can only observe itself\n\n and its neighbors’ states, or the absence of neighbours. \n\nYour browser does not support the video tag.\n\n \n\n and freeze its parameters. We then train a new CA whose model \n\narchitecture is identical to the frozen model but is randomly \n\ninitialized. The training regime also closely approximates that of \n\nself-classifying MNIST digits CA. There are three important differences:\n\n* For each batch and each pixel, the CA is randomly chosen to be \n\neither the pretrained model or the new adversarial one. The adversarial \n\nCA is used 10% of the time, and the pre-trained, frozen, model the rest \n\nof the time.\n\nThe adversarial attack as defined here only modifies a small \n\npercentage of the overall system, but the goal is to propagate signals \n\nthat affect all the living cells. Therefore, these adversaries have to \n\nsomehow learn to communicate deceiving information that causes wrong \n\nclassifications in their neighbours and further cascades in the \n\npropagation of deceiving information by ‘unaware’ cells. The unaware \n\ncells’ parameters cannot be changed so the only means of attack by the \n\nadversaries is to cause a change in the cells’ states. Cells’ states are\n\n responsible for communication and diversification.\n\nThe task is remarkably simple to optimize, reaching convergence in as\n\n little as 2000 training steps (as opposed to the two orders of \n\nmagnitude more steps needed to construct the original MNIST CA). By \n\nvisualising what happens when we remove the adversaries, we observe that\n\n the adversaries must be constantly communicating with their \n\nnon-adversarial neighbours to keep them convinced of the malicious \n\nclassification. While some digits don’t recover after the removal of \n\nadversaries, most of them self-correct to the right classification. \n\nBelow we show examples where we introduce the adversaries at 200 steps \n\nand remove them after a further 200 steps.\n\nYour browser does not support the video tag.\n\n \n\nWe introduce the adversaries (red pixels) after 200 steps and remove \n\nthem after 200 more steps. Most digits recover, but not all. We \n\nhighlight mistakes in classification with a red background.\n\nWhile we trained the adversaries with a 10-to-90% split of \n\nadversarial vs. non-adversarial cells, we observe that often \n\nsignificantly fewer adversaries are needed to succeed in the deception. \n\nBelow we evaluate the experiment with just one percent of cells being \n\nadversaries.\n\nYour browser does not support the video tag.\n\n \n\nAdversaries constituting up 1% of the cell collective (red pixels). We \n\nhighlight mistakes in classification with a red background.\n\nWe created a demo playground where the reader can draw digits and \n\nplace adversaries with surgical precision. We encourage the reader to \n\nplay with the demo to get a sense of how easily non-adversarial cells \n\nare swayed towards the wrong classification.\n\nThe natural follow up question is whether these adversarial attacks \n\nwork on Growing CA, too. The Growing CA goal is to be able to grow a \n\ncomplex image from a single cell, and having its result be persistent \n\nover time and robust to perturbations. 
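Below is a minimal sketch of the cell-mixing training regime described above for the adversarial MNIST CA (Python with TensorFlow-style operations; names such as `pretrained_ca`, `adversary_ca`, `alive_mask`, and the loss are illustrative assumptions, not the actual implementation). Each cell is randomly driven either by the frozen pretrained model or by the trainable adversary, with the adversary used roughly 10% of the time, and only the adversary's parameters receive gradients.

```python
import tensorflow as tf

ADVERSARY_FRACTION = 0.10  # adversarial cells are used ~10% of the time

def mixed_ca_step(state, alive_mask, pretrained_ca, adversary_ca):
    """One CA update in which each cell is driven either by the frozen
    pretrained model or by the trainable adversary (a simplified sketch)."""
    # Per-cell Bernoulli mask deciding which cells are adversaries,
    # broadcast over the state channels.
    adv = tf.cast(
        tf.random.uniform(tf.shape(alive_mask)) < ADVERSARY_FRACTION,
        state.dtype)[..., None]
    # Both models perceive the same neighbourhood; only their parameters differ.
    residual = adv * adversary_ca(state) + (1.0 - adv) * pretrained_ca(state)
    return state + alive_mask[..., None] * residual

# Training sketch: only the adversary's weights are updated.
# with tf.GradientTape() as tape:
#     for _ in range(num_steps):
#         state = mixed_ca_step(state, alive_mask, pretrained_ca, adversary_ca)
#     loss = cross_entropy(classification_channels(state), adversarial_target)
# grads = tape.gradient(loss, adversary_ca.trainable_variables)
# optimizer.apply_gradients(zip(grads, adversary_ca.trainable_variables))
```

In an actual run one would also keep the stochastic per-cell update schedule and alive masking of the original Neural CA training setup; the sketch only shows the mixing of the two parameter sets.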
In this article, we focus on the \n\nlizard pattern model from Growing CA.\n\nYour browser does not support the video tag.\n\n \n\nThe target CA to hijack.\n\nThe goal is to have some adversarial cells change the global \n\nconfiguration of all the cells. We choose two new targets we would like \n\nthe adversarial cells to try and morph the lizard into: a tailless \n\nlizard and a red lizard.\n\nThe desired mutations we want to apply.\n\nThese targets have different properties: \n\n* **Red lizard:** converting a lizard from green to \n\nred would show a global change in the behaviour of the cell collective. \n\nThis behavior is not present in the dynamics observed by the original \n\nmodel. The adversaries are thus tasked with fooling other cells into \n\ndoing things they have never done before (create the lizard shape as \n\nbefore, but now colored in red).\n\n* **Tailless lizard:** having a severed tail is a more \n\nlocalized change that only requires some cells to be fooled into \n\nbehaving in the wrong way: the cells at the base of the tail need to be \n\nconvinced they constitute the edge or silhouette of the lizard, instead \n\nof proceeding to grow a tail as before.\n\nWe first train adversaries for the tailless target with a 10% chance \n\nfor any given cell to be an adversary. We prohibit cells to be \n\nadversaries if they are outside the target pattern; i.e. the tail \n\ncontains no adversaries.\n\nYour browser does not support the video tag.\n\n \n\n10% of the cells are adversarial.\n\nThe video above shows six different instances of the same model with \n\ndiffering stochastic placement of the adversaries. The results vary \n\nconsiderably: sometimes the adversaries succeed in removing the tail, \n\nsometimes the tail is only shrunk but not completely removed, and other \n\ntimes the pattern becomes unstable. Training these adversaries required \n\nmany more gradient steps to achieve convergence, and the pattern \n\nconverged to is qualitatively worse than what was achieved for the \n\nadversarial MNIST CA experiment.\n\nThe red lizard pattern fares even worse. Using only 10% adversarial \n\ncells results in a complete failure: the original cells are unaffected \n\nby the adversaries. Some readers may wonder whether the original \n\npretrained CA has the requisite skill, or ‘subroutine’ of producing a \n\nred output at all, since there are no red regions in the original \n\ntarget, and may suspect this was an impossible task to begin with. \n\nTherefore, we increased the proportion of adversarial cells until we \n\nmanaged to find a successful adversarial CA, if any were possible.\n\nYour browser does not support the video tag.\n\n \n\nIn the video above we can see how, at least in the first stages of \n\nmorphogenesis, 60% of adversaries are capable of coloring the lizard \n\n where we hide the adversarial cells and show only the original cells. \n\nThere, we see how a handful of original cells are colored in red. This \n\nis proof that the adversaries successfully managed to steer neighboring \n\ncells to color themselves red, where needed.\n\nHowever, the model is very unstable when iterated for periods of time\n\n longer than seen during training. Moreover, the learned adversarial \n\nattack is dependent on a majority of cells being adversaries. 
For \n\ninstance, when using fewer adversaries on the order of 20-30%, the \n\nconfiguration is unstable.\n\nIn comparison to the results of the previous experiment, the Growing \n\nCA model shows a greater resistance to adversarial perturbation than \n\nthose of the MNIST CA. A notable difference between the two models is \n\nthat the MNIST CA cells have to always be ready and able to change an \n\nopinion (a classification) based on information propagated through \n\nseveral neighbors. This is a necessary requirement for that model \n\nbecause at any time the underlying digit may change, but most of the \n\ncells would not observe any change in their neighbors’ placements. For \n\ninstance, imagine the case of a one turning into a seven where the lower\n\n stroke of each overlap perfectly. From the point of view of the cells \n\nin the lower stroke of the digit, there is no change, yet the digit \n\nformed is now a seven. We therefore hypothesise MNIST CA are more \n\nreliant and ‘trusting’ of continuous long-distance communication than \n\nGrowing CA, where cells never have to reconfigure themselves to generate\n\n something different to before.\n\nWe suspect that more general-purpose Growing CA that have learned a \n\nvariety of target patterns during training are more likely to be \n\nsusceptible to adversarial attacks.\n\nWe observed that it is hard to fool Growing CA into changing their \n\nmorphology by placing adversarial cells inside the cell collective. \n\nThese adversaries had to devise complex local behaviors that would cause\n\n the non-adversarial cells nearby, and ultimately globally throughout \n\nthe image, to change their overall morphology.\n\nIn this section, we explore an alternative approach: perturbing the \n\nglobal state of all cells without changing the model parameters of any \n\ncell.\n\nAs before, we base our experiments on the Growing CA model trained to\n\n produce a lizard. Every cell of a Growing CA has an internal state \n\nvector with 16 elements. Some of them are phenotypical elements (the \n\nRGBA states) and the remaining 12 serve arbitrary purposes, used for \n\nstoring and communicating information. We can perturb the states of \n\nthese cells to hijack the overall system in certain ways (the discovery \n\nof such perturbation strategies is a key goal of biomedicine and \n\nsynthetic morphology). There are a variety of ways we can perform state \n\nperturbations. We will focus on *global state perturbations*, \n\ndefined as perturbations that are applied on every living cell at every \n\ntime step (analogous to “systemic” biomedical interventions, that are \n\ngiven to the whole organism (e.g., a chemical taken internally), as \n\nopposed to highly localized delivery systems). The new goal is to \n\ndiscover a certain type of global state perturbation that results in a \n\nstable new pattern.\n\nDiagram showing some possible stages for perturbing a lizard\n\n pattern. (a) We start from a seed that grows into a lizard (b) Fully \n\nconverged lizard. (c) We apply a global state perturbation at every \n\nstep. As a result, the lizard loses its tail. (d) We stop perturbing the\n\n state. 
We observe the lizard immediately grows back its tail.\n\nWe show 6 target patterns: the tailless and red lizard from the \n\nprevious experiment, plus a blue lizard and lizards with various severed\n\n limbs and severed head.\n\nMosaic of the desired mutations we want to apply.\n\n To give insight on why we chose this, an even simpler “state addition” \n\nmutation (a mutation consisting only of the addition of a vector to \n\nevery state) would be insufficient because the value of the states of \n\nour models are unbounded, and often we would want to suppress something \n\nby setting it to zero. The latter is generally impossible with constant \n\nstate additions, as a constant addition or subtraction of a value would \n\ngenerally lead to infinity, except for some fortunate cases where the \n\nnatural residual updates of the cells would cancel out with the constant\n\n addition at precisely state value zero. However, matrix multiplications\n\n have the possibility of amplifying/suppressing combinations of elements\n\n in the states: multiplying a state value repeatedly for a constant \n\nvalue less than one can easily suppress a state value to zero. We \n\nconstrain the matrix to be symmetric for reasons that will become clear \n\nin the following section.\n\n* The underlying CA parameters are frozen and we only train AAA.\n\n* We consider the set of initial image configurations to be both the \n\nseed state and the state with a fully grown lizard (as opposed to the \n\nGrowing CA article, where initial configurations consisted of the seed \n\nstate only).\n\nYour browser does not support the video tag.\n\n \n\nEffect of applying the trained perturbations.\n\nThe video above shows the model successfully discovering global state\n\n perturbations able to change a target pattern to a desired variation. \n\nWe show what happens when we stop perturbing the states (an \n\nout-of-training situation) at step 500 through step 1000, then \n\nreapplying the mutation. This demonstrates the ability of our \n\nperturbations to achieve the desired result both when starting from a \n\nseed, and when starting from a fully grown pattern. Furthermore it \n\ndemonstrates that the original CA easily recover from these state \n\nperturbations once it goes away. This last result is perhaps not \n\nsurprising given how robust growing CA models are in general.\n\nNot all perturbations are equally effective. In particular, the \n\nheadless perturbation is the least successful as it results in a loss of\n\n other details across the whole lizard pattern such as the white \n\ncoloring on its back. We hypothesize that the best perturbation our \n\ntraining regime managed to find, due to the simplicity of the \n\nperturbation, was suppressing a “structure” that contained both the \n\nmorphology of the head and the white colouring. This may be related to \n\nthe concept of differentiation and distinction of biological organs. \n\nPredicting what kinds of perturbations would be harder or impossible to \n\nbe done, before trying them out empirically, is still an open research \n\nquestion in biology. On the other hand, a variant of this kind of \n\nsynthetic analysis might help with defining higher order structures \n\nwithin biological and synthetic systems.\n\n### Directions and compositionality of perturbations\n\nOur choice of using a symmetric matrix for representing global state \n\nperturbations is justified by a desire to have compositionality. 
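As a concrete reference, here is a minimal sketch of such a global state perturbation (NumPy; the shapes and the exact parameterisation are assumptions for illustration, not the trained model): every living cell's 16-element state vector is multiplied by the same trainable symmetric matrix A at every update step.

```python
import numpy as np

STATE_SIZE = 16  # 4 visible channels (RGBA) + 12 hidden channels per cell

def make_symmetric(raw):
    """Parameterise the trainable perturbation as a symmetric matrix."""
    return 0.5 * (raw + raw.T)

def perturb_states(states, alive_mask, A):
    """Apply the global state perturbation: every living cell's state
    vector is multiplied by the same matrix A at every update step."""
    perturbed = states @ A                      # (H, W, 16) @ (16, 16)
    return np.where(alive_mask[..., None], perturbed, states)

# A symmetric A has an orthonormal eigenbasis: A = Q diag(lam) Q^T.
# Expressed in the basis Q, the perturbation simply rescales each state
# component; eigenvalues below one suppress it, above one amplify it.
raw = 0.1 * np.random.randn(STATE_SIZE, STATE_SIZE)
A = make_symmetric(raw)
lam, Q = np.linalg.eigh(A)
assert np.allclose(Q @ np.diag(lam) @ Q.T, A)
```

Because A is symmetric it always admits such an orthonormal eigenbasis, which is what the next paragraphs build on.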
Every real symmetric matrix A can be diagonalized as follows:

A = Q \Lambda Q^\intercal

where Λ is the diagonal matrix of eigenvalues and Q is the unitary matrix of its eigenvectors. Another way of seeing this is as applying a change of basis, scaling each component proportionally to its eigenvalue, and then changing back to the original basis. This should also give a clearer intuition for how easily combinations of states can be suppressed or amplified. Moreover, we can now infer what would happen if all the eigenvalues were one: in that case, A would be the identity matrix and the perturbation would leave the states untouched.

Let us then take the tailless perturbation and see what happens as we vary k:

When k is negative, the lizard grows a longer tail. Unfortunately, the further away we go, the more unstable the system becomes and eventually the lizard pattern grows in an unbounded fashion. This behaviour likely stems from the fact that perturbations applied to the states also affect the homeostatic regulation of the system, making some cells die out or grow in different ways than before, resulting in a behavior akin to “cancer” in biological systems.

If two perturbations shared the same eigenvector basis, combining their direction coefficients could result in a stable perturbation. An intuitive understanding of this is interpolating stable perturbations using the direction coefficients. In practice, however, the eigenvectors are also different, so the results of the combination will likely be worse the more different the respective eigenvector bases are.

Below, we interpolate the direction coefficients of two types of perturbations, the tailless and the no-leg lizards, while keeping their sum equal to one.

While this largely achieves what we expect, we observe some unintended effects, such as the whole pattern starting to traverse vertically in the grid. Similar results happen with other combinations of perturbations.

What happens if we remove the restriction that the sum of the k coefficients equal one, and instead add both perturbations in their entirety? We know that if the two perturbations were the same, we would end up twice as far away from the identity perturbation, and in general we expect the variance of these perturbations to increase. Effectively, this means going further and further away from the stable perturbations discovered during training. We would expect more unintended effects that may disrupt the CA as the sum of the k coefficients increases.

Below, we demonstrate what happens when we combine the tailless and the no-leg lizard perturbations at their fullest; that is, when we set both coefficients to one, we are simply adding the two perturbation matrices together.

Effect of composing two perturbations.

Surprisingly, the resulting pattern is almost as desired. However, it also suffers from the vertical movement of the pattern observed while interpolating the k coefficients.

This framework can be generalized to an arbitrary number of perturbations. Below, we have created a small playground that allows the reader to input their desired combinations. Empirically, we were surprised by how many of these combinations result in the intended behaviour.

Related work
------------

This work is inspired by Generative Adversarial Networks (GANs). While with GANs it is typical to co-train pairs of models, in this work we froze the original CA and trained the adversaries only. 
This setup is\n\n### Influence maximization\n\nAdversarial cellular automata have parallels to the field of \n\ninfluence maximization. Influence maximization involves determining the \n\noptimal nodes to influence in order to maximize influence over an entire\n\n graph, commonly a social graph, with the property that nodes can in \n\nturn influence their neighbours. Such models are used to model a wide \n\nvariety of real-world applications involving information spread in a \n\ngraph. \n\n A common setting is that each vertex in a graph has a binary state, \n\nwhich will change if and only if a sufficient fraction of its \n\nneighbours’ states switch. Examples of such models are social influence \n\nmaximization (maximally spreading an idea in a network of people), \n\n (when small perturbations to a system bring about a larger ‘phase \n\nchange’). At the time of writing this article, for instance, contagion \n\nminimization is a model of particular interest. NCA are a graph - each \n\ncell is a vertex and has edges to its eight neighbours, through which it\n\n can pass information. This graph and message structure is significantly\n\n more complex than the typical graph underlying much of the research in \n\ninfluence maximization, because NCA cells pass vector-valued messages \n\nand have a complex update rules for their internal states, whereas \n\ngraphs in influence maximization research typically consist of more \n\nsimple binary cells states and threshold functions on edges determining \n\nwhether a node has switched states. Many concepts from the field could \n\nbe applied and are of interest, however.\n\nFor example, in this work, we have made an assumption that our \n\nadversaries can be positioned anywhere in a structure to achieve a \n\ndesired behaviour. A common focus of investigation in influence \n\nmaximization problems is deciding which nodes in a graph will result in \n\nmaximal influence on the graph, referred to as target set selection .\n\n This problem isn’t always tractable, often NP-hard, and solutions \n\nfrequently involve simulations. Future work on adversarial NCA may \n\ninvolve applying techniques from influence maximization in order to find\n\n the optimal placement of adversarial cells.\n\nDiscussion\n\n----------\n\nThis article showed two different kinds of adversarial attacks on Neural CA.\n\nInjections of adversarial CA in a pretrained Self-classifying MNIST \n\nCA showed how an existing system of cells that are heavily reliant on \n\nthe passing of information among each other is easily swayed by \n\ndeceitful signaling. This problem is routinely faced by biological \n\nsystems, which face hijacking of behavioral, physiological, and \n\nmorphological regulatory mechanisms by parasites and other agents in the\n\n biosphere with which they compete. Future work in this field of \n\ncomputer technology can benefit from research on biological \n\ncommunication mechanisms to understand how cells maximize reliability \n\nand fidelity of inter- and intra-cellular messages required to implement\n\n adaptive outcomes. \n\nThe adversarial injection attack was much less effective against \n\nGrowing CA and resulted in overall unstable CA. This dynamic is also of \n\nimportance to the scaling of control mechanisms (swarm robotics and \n\nnested architectures): a key step in “multicellularity” (joining \n\ntogether to form larger systems from sub-agents )\n\n is informational fusion, which makes it difficult to identify the \n\nsource of signals and memory engrams. 
An optimal architecture would need\n\n to balance the need for validating control messages with a possibility \n\nof flexible merging of subunits, which wipes out metadata about the \n\nspecific source of informational signals. Likewise, the ability to \n\nrespond successfully to novel environmental challenges is an important \n\ngoal for autonomous artificial systems, which may import from biology \n\nstrategies that optimize tradeoff between maintaining a specific set of \n\nsignals and being flexible enough to establish novel signaling regimes \n\nwhen needed.\n\nThe global state perturbation experiment on Growing CA shows how it \n\nis still possible to hijack these CA towards stable out-of-training \n\nconfigurations and how these kinds of attacks are somewhat composable in\n\n a similar way to how embedding spaces are manipulable in the natural \n\n We hypothesize that this is partially due to the regenerative \n\ncapabilities of the pretrained CA, and that other models may be less \n\ncapable of recovery from arbitrary perturbations.\n\n", "bibliography_bib": [{"title": "Growing Neural Cellular Automata"}, {"title": "Self-classifying MNIST Digits"}, {"title": "Herpes Simplex Virus: The Hostile Guest That Takes Over Your Home"}, {"title": "The\n role of gut microbiota (commensal bacteria) and the mucosal barrier in \nthe pathogenesis of inflammatory and autoimmune diseases and cancer: \ncontribution of germ-free and gnotobiotic animal models of human \ndiseases"}, {"title": "Regulation of axial and head patterning during planarian regeneration by a commensal bacterium"}, {"title": "Toxoplasma gondii infection and behavioral outcomes in humans: a systematic review"}, {"title": "Resting potential, oncogene-induced tumorigenesis, and metastasis: the bioelectric basis of cancer in vivo"}, {"title": "Transmembrane voltage potential of somatic cells controls oncogene-mediated tumorigenesis at long-range"}, {"title": "Cross-limb communication during Xenopus hindlimb regenerative response: non-local bioelectric injury signals"}, {"title": "Local\n and long-range endogenous resting potential gradients antagonistically \nregulate apoptosis and proliferation in the embryonic CNS"}, {"title": "Top-down models in biology: explanation and control of complex living systems above the molecular level"}, {"title": "Generative Adversarial Networks"}, {"title": "Adversarial Reprogramming of Neural Networks"}, {"title": "Efficient Estimation of Word Representations in Vector Space"}, {"title": "Fader Networks: Manipulating Images by Sliding Attributes"}, {"title": "Maximizing the spread of influence through a social network"}, {"title": "The Independent Cascade and Linear Threshold Models"}, {"title": "A Survey on Influence Maximization in a Social Network"}, {"title": "Simplicial models of social contagion"}, {"title": "Cascading Behavior in Networks: Algorithmic and Economic Issues"}, {"title": "On the Approximability of Influence in Social Networks"}, {"title": "The Computational Boundary of a “Self”: Developmental Bioelectricity Drives Multicellularity and Scale-Free Cognition"}], "filename": "Adversarial Reprogramming of Neural Cellular Automata.html", "id": "8e3e8f3c9cff2ce1775115420c63e4cd"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Deconvolution and Checkerboard Artifacts", "authors": ["Augustus Odena", "Vincent Dumoulin", "Chris Olah"], "date_published": "2016-10-17", "abstract": "When we look very closely at images generated by neural 
networks, we often see a strange checkerboard pattern of artifacts. It’s more obvious in some cases than others, but a large fraction of recent models exhibit this behavior.", "journal_ref": "distill-pub", "doi": "https://doi.org/10.1109/cvpr.2016.207", "text": "

Deconvolution and Checkerboard Artifacts
========================================

Augustus Odena ([Google Brain](http://g.co/brain))

[Vincent Dumoulin](http://vdumoulin.github.io/) ([Université de Montréal](https://mila.umontreal.ca/en/))

[Chris Olah](http://colah.github.io/) ([Google Brain](http://g.co/brain))

Oct. 17, 2016

[Citation: Odena, et al., 2016](#citation)

When we look very closely at images generated by neural networks, we often see a strange checkerboard pattern of artifacts. It’s more obvious in some cases than others, but a large fraction of recent models exhibit this behavior.

Example closeups of generated images: Radford, et al., 2015 [1]; Salimans et al., 2016 [2]; Donahue, et al., 2016 [3]; Dumoulin, et al., 2016 [4].

Mysteriously, the checkerboard pattern tends to be most prominent in images with strong colors. What’s going on? Do neural networks hate bright colors? The cause of these artifacts is actually remarkably simple, as is a method for avoiding them.

---

Deconvolution & Overlap
-----------------------

When we have neural networks generate images, we often have them build them up from low-resolution, high-level descriptions. We generally do this with the *deconvolution* operation. Roughly, deconvolution layers allow the model to use every point in the small image to “paint” a square in the larger one. (We use the name “deconvolution” in this article for brevity. For excellent discussion of deconvolution, see [5, 6].)

Unfortunately, deconvolution can easily have “uneven overlap,” putting more of the metaphorical paint in some places than others [7]. In particular, deconvolution has uneven overlap when the kernel size (the output window size) is not divisible by the stride (the spacing between points on the top). While the network could, in principle, carefully learn weights to avoid this — as we’ll discuss in more detail later — in practice neural networks struggle to avoid it completely.

Interactive diagram: one-dimensional deconvolutions with stride = 1, size = 8 and stride = 2, size = 3.

The overlap pattern also forms in two dimensions. The uneven overlaps on the two axes multiply together, creating a characteristic checkerboard-like pattern of varying magnitudes.

In fact, the uneven overlap tends to be more extreme in two dimensions! Because the two patterns are multiplied together, the unevenness gets squared: a factor-of-two difference in one dimension becomes a factor of four in two dimensions. While it’s possible for these stacked deconvolutions to cancel out artifacts, they often compound, creating artifacts on a variety of scales.

Interactive diagram: stacked one-dimensional deconvolutions with stride = 1, size = 5; stride = 2, size = 3; stride = 2, size = 3.

Stride 1 deconvolutions — which we often see as the last layer in successful models (eg. 
[2])\n\n  — are quite effective at dampening artifacts.\n\n They can remove artifacts of frequencies\n\n size. However, artifacts can still leak through, as seen in many recent models.\n\nIn addition to the high frequency checkerboard-like artifacts we observed above,\n\n early deconvolutions can create lower-frequency artifacts,\n\n which we’ll explore in more detail later.\n\nThese artifacts tend to be most prominent when outputting unusual colors.\n\n Since neural network layers typically have a bias\n\n (a learned value added to the output) it’s easy to output the average color.\n\n The further a color — like bright red — is away from the average color,\n\n the more deconvolution needs to contribute.\n\n---\n\nOverlap & Learning\n\n------------------\n\nThinking about things in terms of uneven overlap is — while a useful framing —\n\n is evenly balanced.\n\nIn fact, not only do models with uneven overlap not learn to avoid this,\n\n but models with even overlap often learn kernels that cause similar artifacts!\n\n While it isn’t their default behavior the way it is for uneven overlap,\n\n it’s still very easy for even overlap deconvolution to cause artifacts.\n\nCompletely avoiding artifacts is still a significant restriction on filters,\n\n (See [4],\n\n which uses stride 2 size 4 deconvolutions, as an example.)\n\nThere are probably a lot of factors at play here.\n\n For example, in the case of Generative Adversarial Networks (GANs), \n\none issue may be the discriminator and its gradients\n\n (we’ll discuss this more later).\n\n But a big part of the problem seems to be deconvolution.\n\n At best, deconvolution is fragile because it very easily represents \n\nartifact creating functions, even when the size is carefully chosen.\n\n At worst, creating artifacts is the default behavior of deconvolution.\n\nIs there a different way to upsample that is more resistant to artifacts?\n\n---\n\nBetter Upsampling\n\n-----------------\n\n Ideally, it would go further, and be biased against such artifacts.\n\n avoiding the overlap issue.\n\n This is equivalent to “sub-pixel convolution,” a technique which has recently\n\n had success in image super-resolution [8].\n\nBoth deconvolution and the different resize-convolution approaches \n\nare linear operations, and can be interpreted as matrices.\n\n This a helpful way to see the differences between them.\n\n Where deconvolution has a unique entry for each output window, \n\nresize-convolution is implicitly weight-tying in a way that discourages \n\nhigh frequency artifacts.\n\nWe’ve had our best results with nearest-neighbor interpolation, and\n\n had difficulty making bilinear resize work.\n\n This may simply mean that, for our models, the nearest-neighbor \n\nhappened to work well with hyper-parameters optimized for deconvolution.\n\n It might also point at trickier issues with naively using bilinear \n\ninterpolation, where it resists high-frequency image features too \n\nstrongly.\n\n We don’t necessarily think that either approach is the final solution \n\nto upsampling, but they do fix the checkerboard artifacts.\n\n### Code\n\n---\n\nImage Generation Results\n\n------------------------\n\nOne case where we’ve found this approach to help is Generative \n\nAdversarial Networks. 
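As a concrete reference, here is a minimal sketch of the resize-convolution upsampling described above, next to the deconvolution it replaces (TensorFlow/Keras-style; the filter counts and kernel sizes are illustrative assumptions, not the exact architectures used in these experiments).

```python
import tensorflow as tf

def resize_conv(filters, kernel_size=3):
    """Upsample by a nearest-neighbor resize followed by a regular
    convolution, instead of a single strided deconvolution."""
    return tf.keras.Sequential([
        tf.keras.layers.UpSampling2D(size=2, interpolation="nearest"),
        tf.keras.layers.Conv2D(filters, kernel_size, padding="same"),
    ])

# The layer it replaces: kernel size 5 is not divisible by stride 2,
# so this transposed convolution has uneven overlap by construction.
def deconv(filters):
    return tf.keras.layers.Conv2DTranspose(
        filters, kernel_size=5, strides=2, padding="same")

# Usage sketch: x = resize_conv(64)(x)  # doubles the spatial resolution
```

The resize step spreads each low-resolution value evenly before the convolution mixes them, so no output pixel systematically receives more “paint” than its neighbours.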
Simply switching out the standard deconvolutional layers for nearest-neighbor resize followed by convolution causes artifacts of different frequencies to disappear.

Deconv in last two layers; other layers use resize-convolution. *Artifacts of frequency 2 and 4.*

Deconv only in last layer; other layers use resize-convolution. *Artifacts of frequency 2.*

All layers use resize-convolution. *No artifacts.*

In fact, the difference in artifacts can be seen before any training occurs. Even with randomly initialized weights, we can already see the artifacts:

Deconvolution in last two layers. *Artifacts prior to any training.*

Deconvolution only in last layer. *Artifacts prior to any training.*

All layers use resize-convolution. 
\n\n*No artifacts before or after training.*\n\nThis suggests that the artifacts are due to this method of \n\ngenerating images, rather than adversarial training.\n\n (It also suggests that we might be able to learn a lot about good \n\ngenerator design without the slow feedback cycle of training models.)\n\nAnother reason to believe these artifacts aren’t GAN specific is \n\nthat we see them in other kinds of models, and have found that they also\n\n go away when we switch to resize-convolution upsampling.\n\n.style-examples > .row {\n\n display: -ms-flexbox;\n\n display: -webkit-flex;\n\n display: flex;\n\n /\\*align-items: center;\\*/\n\n margin-bottom: 10px;\n\n}\n\n.style-examples > .row > .example {\n\n flex-grow: 4;\n\n width: 400%;\n\n margin-right: 10px;\n\n}\n\n.style-examples > .row > .explanation {\n\n flex-grow: 2;\n\n width: 200%;\n\n margin-left: 10px;\n\n}\n\n.style-examples img {\n\n display: block;\n\n width: 100%;\n\n -ms-interpolation-mode: nearest-neighbor;\n\n image-rendering: optimizeSpeed;\n\n image-rendering: pixelated;\n\n}\n\n![](Deconvolution%20and%20Checkerboard%20Artifacts_files/style_artifacts.png)\n\n Using deconvolution. \n\n*Heavy checkerboard artifacts.*\n\n![](Deconvolution%20and%20Checkerboard%20Artifacts_files/style_clean.png)\n\n Using resize-convolution. \n\n*No checkerboard artifacts.*\n\nForthcoming papers from the Google Brain team will demonstrate the \n\nbenefits of this technique\n\n in more thorough experiments and state-of-the-art results.\n\n (We’ve chosen to present this technique separately because we felt it \n\nmerited more detailed discussion, and because it cut across multiple \n\npapers.)\n\n---\n\nArtifacts in Gradients\n\n----------------------\n\nWhenever we compute the gradients of a convolutional layer,\n\n we do deconvolution (transposed convolution) on the backward pass.\n\n This can cause checkerboard patterns in the gradient,\n\n just like when we use deconvolution to generate images.\n\nThe presence of high-frequency “noise” in image model gradients is\n\n Somehow, feature visualization methods must compensate for this noise.\n\nFor example, DeepDream [11]\n\n seems to cause destructive interference between artifacts in a number of ways,\n\n.deepdream-examples {\n\n position: relative;\n\n}\n\n.deepdream-examples::after {\n\n overflow: visible;\n\n content: \"\";\n\n background-image: url(\"assets/pointer.svg\");\n\n width: 27px;\n\n height: 27px;\n\n position: absolute;\n\n right: -51px;\n\n top: 0;\n\n}\n\n.deepdream-examples .example {\n\n position: relative;\n\n}\n\n.deepdream-examples .original {\n\n position: relative;\n\n width: 45%;\n\n}\n\n.deepdream-examples .original:after {\n\n content: \"\";\n\n display: block;\n\n width: 100%;\n\n height: 70%;\n\n position: absolute;\n\n top: -20%;\n\n z-index: 1;\n\n}\n\n.deepdream-examples .example .reticle {\n\n position: absolute;\n\n pointer-events: none;\n\n}\n\n.deepdream-examples .example .reticle:after {\n\n box-sizing: border-box;\n\n position: relative;\n\n width: 100%;\n\n height: 100%;\n\n left: -50%;\n\n top: -50%;\n\n border-radius: 50%;\n\n border: 1px solid white;\n\n display: block;\n\n content: '';\n\n background-color: rgba(255, 255, 255, 0.2);\n\n box-shadow: 0 0 4px rgba(0, 0, 0, 0.5);\n\n}\n\n.deepdream-examples img {\n\n display: block;\n\n width: 100%;\n\n -ms-interpolation-mode: nearest-neighbor;\n\n image-rendering: optimizeSpeed;\n\n image-rendering: pixelated;\n\n}\n\n.deepdream-examples .closeup img {\n\n -webkit-filter: contrast(150%);\n\n 
filter: contrast(150%);\n\n}\n\n.deepdream-examples .closeup {\n\n position: absolute;\n\n right: 34%;\n\n top: 0;\n\n width: 22.5%;\n\n border-radius: 50%;\n\n margin-bottom: 12px;\n\n border: 2px solid white;\n\n overflow: hidden;\n\n box-shadow: 0 2px 8px rgba(0, 0, 0, 0.3);\n\n}\n\n.deepdream-examples figcaption {\n\n position: absolute;\n\n right: 0%;\n\n top: 0;\n\n width: 30%;\n\n}\n\n.deepdream-examples .closeup img {\n\n position: absolute;\n\n width: 800%;\n\n}\n\n.deepdream-examples .closeup:after {\n\n padding-top: 100%;\n\n display: block;\n\n content: '';\n\n}\n\n DeepDream only applying the neural network to a fixed position. \n\n*Severe artifacts.*\n\n DeepDream applying the network to a different position each step. \n\n*Reduced artifacts.*\n\n(function() {\n\nvar html = d3.select(\".deepdream-examples\");\n\nvar original = html.selectAll(\".example .original\");\n\nhtml\n\n .on(\"mouseleave\", () => {\n\n console.log(\"out\")\n\n original.each(resetReticle);\n\n });\n\noriginal\n\n .datum(function() {\n\n var focus = this.getAttribute(\"data-focus\").split(\",\")\n\n return {\n\n zoom: focus[0],\n\n x: focus[1],\n\n y: focus[2]\n\n };\n\n })\n\n .on(\"mouseleave\", resetReticle)\n\n .on(\"mousemove\", updateReticle)\n\n .each(resetReticle);\n\nfunction updateReticle(d) {\n\n original.each(resetReticle);\n\n var x = d3.event.offsetX / this.getBoundingClientRect().width;\n\n var y = d3.event.offsetY / this.getBoundingClientRect().height;\n\n setPosition(this, x, y, d.zoom, 100);\n\n}\n\nfunction resetReticle(d) {\n\n setPosition(this, d.x, d.y, d.zoom, 300);\n\n}\n\nfunction setPosition(element, x, y, zoom, duration) {\n\n var marginX = 1 / zoom / 4;\n\n var marginY = 1 / zoom / 2;\n\n x = Math.min(Math.max(marginX, x), 1 - marginX)\n\n y = Math.min(Math.max(marginY, y), 1 - marginY)\n\n var parent = d3.select(element.parentElement);\n\n var reticle = parent.select(\".reticle\");\n\n var closeup = parent.select(\".closeup img\");\n\n reticle\n\n .style(\"width\", 50 / zoom + \"%\")\n\n .style(\"height\", 100 / zoom + \"%\")\n\n .transition()\n\n .ease(d3.ease(\"cubic-out\"))\n\n .duration(duration)\n\n .styleTween(\"left\", function (d, i, a) {\n\n var from = this.style.left,\n\n to = x \\* 100 + \"%\";\n\n return d3.interpolateString(from, to);\n\n })\n\n .styleTween(\"top\", function (d, i, a) {\n\n var from = this.style.top,\n\n to = y \\* 100 + \"%\";\n\n return d3.interpolateString(from, to);\n\n })\n\n closeup\n\n .style(\"width\", zoom \\* 200 + \"%\")\n\n .style(\"height\", zoom \\* 100 + \"%\")\n\n .transition()\n\n .ease(d3.ease(\"cubic-out\"))\n\n .duration(duration)\n\n .styleTween(\"left\", function (d, i, a) {\n\n var from = this.style.left,\n\n to = -x \\* 200 \\* zoom + 50 + \"%\";\n\n return d3.interpolateString(from, to);\n\n })\n\n .styleTween(\"top\", function (d, i, a) {\n\n var from = this.style.top,\n\n to = -y \\* 100 \\* zoom + 50 + \"%\";\n\n return d3.interpolateString(from, to);\n\n })\n\n}\n\n})();\n\n(While some of the artifacts are our standard checkerboard pattern,\n\n others are a less organized high-frequency pattern.\n\n We believe these to be caused by max pooling.\n\n Max pooling was previously linked to high-frequency artifacts in [12].)\n\nMore recent work in feature visualization (eg. 
[13]),\n\nDo these gradient artifacts affect GANs?\n\n If gradient artifacts can affect an image being optimized based on a \n\nneural networks gradients in feature visualization,\n\n we might also expect it to affect the family of images parameterized \n\nby the generator as they’re optimized by the discriminator in GANs.\n\nWe’ve found that this does happen in some cases.\n\n When the generator is neither biased for or against checkerboard patterns,\n\n strided convolutions in the discriminator can cause them.\n\n.discrim-examples > .row {\n\n display: -ms-flexbox;\n\n display: -webkit-flex;\n\n display: flex;\n\n /\\*align-items: center;\\*/\n\n margin-bottom: 10px;\n\n}\n\n.discrim-examples > .row > .exampleA {\n\n flex-grow: 1;\n\n width: 100%;\n\n margin-right: 10px;\n\n}\n\n.discrim-examples > .row > .exampleB {\n\n flex-grow: 1;\n\n width: 100%;\n\n margin-right: 20px;\n\n}\n\n.discrim-examples > .row > .explanationA {\n\n flex-grow: 3;\n\n width: 300%;\n\n padding-right: 20px;\n\n}\n\n.discrim-examples > .row > .explanationB {\n\n flex-grow: 3;\n\n width: 300%;\n\n padding-right: 10px;\n\n}\n\n.discrim-examples img {\n\n display: block;\n\n width: 100%;\n\n -ms-interpolation-mode: nearest-neighbor;\n\n image-rendering: optimizeSpeed;\n\n image-rendering: pixelated;\n\n}\n\n![](Deconvolution%20and%20Checkerboard%20Artifacts_files/discrim_stride2_0.png)\n\n![](Deconvolution%20and%20Checkerboard%20Artifacts_files/discrim_stride2_1.png)\n\n![](Deconvolution%20and%20Checkerboard%20Artifacts_files/discrim_stride2_2.png)\n\n![](Deconvolution%20and%20Checkerboard%20Artifacts_files/discrim_stride1_1.png)\n\n![](Deconvolution%20and%20Checkerboard%20Artifacts_files/discrim_stride1_2.png)\n\n![](Deconvolution%20and%20Checkerboard%20Artifacts_files/discrim_stride1_0.png)\n\n Discriminator has stride 2 convolution in first layer. \n\n *Strong frequency 2 artifacts.*\n\n Discriminator has regular convolution in first layer. \n\n *Very mild artifacts.*\n\nIt’s unclear what the broader implications of these gradient artifacts are.\n\n Neither of those sounds ideal.\n\nIt seems possible that having some pixels affect the network output\n\n much more than others may exaggerate adversarial counter-examples.\n\n Because the derivative is concentrated on small number of pixels,\n\n small perturbations of those pixels may have outsized effects.\n\n We have not investigated this.\n\n---\n\nConclusion\n\n----------\n\nThe standard approach of producing images with \n\ndeconvolution — despite its successes! — has some conceptually simple \n\nissues that lead to artifacts in produced images.\n\n Using a natural alternative without these issues causes the artifacts \n\nto go away\n\n (Analogous arguments suggest that standard strided convolutional \n\nlayers may also have issues).\n\nThis seems like an exciting opportunity to us!\n\n It suggests that there is low-hanging fruit to be found in carefully \n\nthinking through neural network architectures, even ones where we seem \n\nto have clean working solutions.\n\nIn the meantime, we’ve provided an easy to use solution that \n\nimproves the quality of many approaches to generating images with neural\n\n networks. 
We look forward to seeing what people do with it, and whether it helps in domains like audio, where high-frequency artifacts would be particularly problematic.
", "bibliography_bib": null, "filename": "Deconvolution and Checkerboard Artifacts.html", "id": "9d1de8039fcfdf17ef9b9ea0eea8df1e"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Robust Feature Leakage", "authors": ["Gabriel Goh"], "date_published": "2019-08-06", "abstract": " This article is part of a discussion of the Ilyas et al. paper “Adversarial examples are not bugs, they are features”. You can learn more in the main discussion article . ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00019.2", "text": "

This article is part of a discussion of the Ilyas et al. paper *“Adversarial examples are not bugs, they are features”.* You can learn more in the [main discussion article](https://distill.pub/2019/advex-bugs-discussion/).

[Other Comments](https://distill.pub/2019/advex-bugs-discussion/#commentaries)

Ilyas et al. report a surprising result: a model trained on

### Lower Bounding Leakage

Our technique for quantifying leakage consists of two steps:

1. specify.

2. Next, we train a linear classifier as per Equation 3 on the datasets D̂_det and D̂_rand (defined in Table 1) on these robust features *only*.

Since Ilyas et al. only specify robustness in the two-class setting:

**Specification 1** For at least one of the classes, the feature is γ-robustly useful with

**Specification 2** that remain static in a neighborhood of radius 0.25 on the L2 norm ball.

CIFAR test set incurs an accuracy of **23.5%** (out of 88%). Doing the same on

features, e.g. from a robust deep neural network.

The results of D̂_det in Table 1 of

non-robust features, exactly the thesis of .

To cite Ilyas et al.’s response, please cite their

**Response Summary**: This is a valid concern that was actually one of our motivations for creating the D̂_det dataset (which, as the comment notes, actually has *misleading* robust features). The provided experiment further improves our understanding of the underlying phenomenon.

**Response**: This comment raises a valid concern which was in fact one of the primary reasons for designing the D̂_det dataset.

The D̂_rand dataset is constructed as follows: assign each input a random target label and do PGD towards that label. Note that unlike the D̂_det dataset, the D̂_rand dataset allows for robust features to actually have a (small) positive correlation with the label.

To see how this can happen, consider the following simple setting: we have a

(as in the dataset D̂_rand) would make this feature

targeted attack might in this case induce some correlation with the

to correctly classify new inputs.

In other words, starting from a dataset with no features, one can encode robust features within small perturbations. In contrast, in the D̂_det dataset, the robust features are *correlated with the original label* (since the labels are permuted) and since they are robust, they cannot be flipped to correlate with the newly assigned (wrong) label. Still, the D̂_rand dataset enables us to show that (a) PGD-based adversarial examples actually alter features in the data and (b) models can learn from human-meaningless/mislabeled training data. The D̂_det dataset, on the other hand, illustrates that the non-robust features are actually sufficient for generalization and can be preferred over robust ones in natural settings.

The experiment put forth in the comment is a clever way of showing that such leakage is indeed possible. However, we want to stress (as the comment itself does) that robust feature leakage does *not* have an impact on our main thesis — the D̂_det dataset explicitly controls for robust feature leakage (and in fact, allows us to quantify the models’ preference for robust features vs non-robust features — see Appendix D.6 in the [paper](https://arxiv.org/abs/1905.02175)).
", "bibliography_bib": [{"title": "Adversarial examples are not bugs, they are features"}], "filename": "Robust Feature Leakage.html", "id": "60e9751b6694ebf3be2cf23197de21d1"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "The Building Blocks of Interpretability", "authors": ["Chris Olah", "Arvind Satyanarayan", "Ian Johnson", "Shan Carter", "Ludwig Schubert", "Katherine Ye", "Alexander Mordvintsev"], "date_published": "2018-03-06", "abstract": " With the growing success of neural networks, there is a corresponding need to be able to explain their decisions — including building confidence about how they will behave in the real-world, detecting model bias, and for scientific curiosity. 
In order to do so, we need to both construct deep abstractions and reify (or instantiate) them in rich interfaces . With a few exceptions , existing work on interpretability fails to do these in concert. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00010", "text": "\n\n With the growing success of neural networks, there is a \n\ncorresponding need to be able to explain their decisions — including \n\nbuilding confidence about how they will behave in the real-world, \n\ndetecting model bias, and for scientific curiosity.\n\n In order to do so, we need to both construct deep abstractions and \n\nreify (or instantiate) them in rich interfaces .\n\n \n\n for reasoning about neural networks. \n\n However, these techniques have been studied as isolated threads of \n\nresearch, and the corresponding work of reifying them has been \n\nneglected.\n\n On the other hand, the human-computer interaction community has \n\nbegun to explore rich user interfaces for neural networks ,\n\n but they have not yet engaged deeply with these abstractions. \n\n To the extent these abstractions have been used, it has been in \n\nfairly standard ways.\n\n As a result, we have been left with impoverished interfaces (e.g., \n\nsaliency maps or correlating abstract neurons) that leave a lot of value\n\n on the table. \n\n Worse, many interpretability techniques have not been fully \n\nactualized into abstractions because there has not been pressure to make\n\n them generalizable or composable.\n\n \n\n In this article, we treat existing interpretability methods as \n\nfundamental and composable building blocks for rich user interfaces.\n\n We find that these disparate techniques now come together in a \n\nunified grammar, fulfilling complementary roles in the resulting \n\ninterfaces.\n\n Moreover, this grammar allows us to systematically explore the space\n\n of interpretability interfaces, enabling us to evaluate whether they \n\nmeet particular goals.\n\n For example, we will see how a network looking at a labrador \n\nretriever detects floppy ears and how that influences its \n\nclassification.\n\n \n\n Our interfaces are speculative and one might wonder how reliable they are. \n\n \n\n actively investigating why this is, and hope to uncover principles for \n\ndesigning interpretable models. 
In the meantime, while we demonstrate \n\nour techniques on GoogLeNet, we provide code for you to try them on \n\nother models.\n\n Although here we’ve made a specific choice of task and network, the \n\nbasic abstractions and patterns for combining them that we present can \n\nbe applied to neural networks in other domains.\n\n \n\nMaking Sense of Hidden Layers\n\n-----------------------------\n\n Much of the recent work on interpretability is concerned with a \n\nneural network’s input and output layers.\n\n Arguably, this focus is due to the clear meaning these layers have: \n\nin computer vision, the input layer represents values for the red, \n\ngreen, and blue color channels for every pixel in the input image, while\n\n the output layer consists of class labels and their associated \n\nprobabilities.\n\n \n\n However, the power of neural networks lies in their hidden \n\nlayers — at every layer, the network discovers a new representation of \n\nthe input.\n\n In computer vision, we use neural networks that run the same feature\n\n detectors at every position in the image.\n\n We can think of each layer’s learned representation as a \n\n \n\n \n\n \n\n![dog_cat](The%20Building%20Blocks%20of%20Interpretability_files/dog_cat.jpeg)\n\na4,1 = [0, 0, 0, 25.2, 164.1, 0, 42.7, 4.51, 115.0, 51.3, 0, 0, ...]\n\n{\n\n:\n\n886.\n\n,\n\n:\n\n599.\n\n,\n\n:\n\n328.\n\n,\n\n:\n\n303.\n\n,\n\n... }\n\nThere seem to be detectors for\n\n floppy ears,\n\n dog snouts,\n\n cat heads,\n\n furry legs,\n\n and\n\n grass.\n\n[Reproduce in a\n\n To make a semantic dictionary, we pair every neuron activation with a\n\n visualization of that neuron and sort them by the magnitude of the \n\nactivation.\n\n This marriage of activations and feature visualization changes our \n\nrelationship with the underlying mathematical object.\n\n Activations now map to iconic representations, instead of abstract \n\nindices, with many appearing to be similar to salient human ideas, such \n\nas “floppy ear,” “dog snout,” or “fur.”\n\n \n\n \n\n Semantic dictionaries are powerful not just because they move away \n\nfrom meaningless indices, but because they express a neural network’s \n\nlearned abstractions with canonical examples.\n\n \n\n With image classification, the neural network learns a set of visual\n\n abstractions and thus images are the most natural symbols to represent \n\nthem.\n\n Were we working with audio, the more natural symbols would most \n\nlikely be audio clips.\n\n This is important because when neurons appear to correspond to human\n\n ideas, it is tempting to reduce them to words.\n\n Doing so, however, is a lossy operation — even for familiar \n\nabstractions, the network may have learned a deeper nuance.\n\n For instance, GoogLeNet has multiple floppy ear detectors that \n\nappear to detect slightly different levels of droopiness, length, and \n\nsurrounding context to the ears.\n\n There also may exist abstractions which are visually familiar, yet \n\nthat we lack good natural language descriptions for: for example, take \n\nthe particular column of shimmering light where sun hits rippling water.\n\n Moreover, the network may learn new abstractions that appear alien \n\nto us — here, natural language would fail us entirely!\n\n In general, canonical examples are a more natural way to represent \n\nthe foreign abstractions that neural networks learn than native human \n\nlanguage.\n\n \n\n By bringing meaning to hidden layers, semantic dictionaries set the \n\nstage for our existing interpretability 
techniques to be composable \n\nbuilding blocks.\n\n As we shall see, just like their underlying vectors, we can apply \n\ndimensionality reduction to them.\n\n In other cases, semantic dictionaries allow us to push these \n\ntechniques further.\n\n For example, besides the one-way attribution that we currently \n\nperform with the input and output layers, semantic dictionaries allow us\n\n to attribute to-and-from specific hidden layers.\n\n In principle, this work could have been done without semantic \n\ndictionaries but it would have been unclear what the results meant.\n\n \n\n While we introduce semantic dictionaries in terms of neurons, they \n\ncan be used with any basis of activations. We will explore\n\n this more later.\n\n \n\nWhat Does the Network See?\n\n--------------------------\n\n![dog_cat](The%20Building%20Blocks%20of%20Interpretability_files/dog_cat.jpeg)\n\nActivation Vector\n\n=\n\n886.\n\n+\n\n599.\n\n+\n\n328.\n\n+\n\n303.\n\n+\n\n...\n\nChannels\n\n Applying this technique to all the activation vectors allows us to \n\nnot only see what the network detects at each position, but also what \n\nthe network understands of the input image as a whole.\n\n \n\n![](The%20Building%20Blocks%20of%20Interpretability_files/mixed4d.jpeg)\n\n[Reproduce in a\n\nmixed4d\n\n And, by working across layers (eg. “mixed3a”, “mixed4d”), we can \n\nobserve how the network’s understanding evolves: from detecting edges in\n\n earlier layers, to more sophisticated shapes and object parts in the \n\nlatter.\n\n \n\nZoom In\n\nZoom Out\n\n![](The%20Building%20Blocks%20of%20Interpretability_files/mixed3a.jpeg)\n\n#### mixed3a\n\n![](The%20Building%20Blocks%20of%20Interpretability_files/mixed4a.jpeg)\n\n#### mixed4a\n\n![](The%20Building%20Blocks%20of%20Interpretability_files/mixed4d.jpeg)\n\n#### mixed4d\n\n![](The%20Building%20Blocks%20of%20Interpretability_files/mixed5a.jpeg)\n\n#### mixed5a\n\n These visualizations, however, omit a crucial piece of information: \n\nthe magnitude of the activations.\n\n \n\n By scaling the area of each cell by the magnitude of the activation \n\nvector, we can indicate how strongly the network detected features at \n\nthat position:\n\n \n\n![](The%20Building%20Blocks%20of%20Interpretability_files/mixed3a-magnitude.png)\n\n#### mixed3a\n\n![](The%20Building%20Blocks%20of%20Interpretability_files/mixed4a-magnitude.png)\n\n#### mixed4a\n\n![](The%20Building%20Blocks%20of%20Interpretability_files/mixed4d-magnitude.png)\n\n#### mixed4d\n\n![](The%20Building%20Blocks%20of%20Interpretability_files/mixed5a-magnitude.png)\n\n#### mixed5a\n\nHow Are Concepts Assembled?\n\n---------------------------\n\n \n\n We think there’s a lot of important research to be done on \n\nattribution methods, but for the purposes of this article the exact \n\napproach taken to attribution doesn’t matter.\n\n We use a fairly simple method, linearly approximating the \n\nrelationshipWe do attribution by linear \n\napproximation in all of our interfaces. That is, we estimate the effect \n\nof a neuron on the output is its activation times the rate at which \n\nincreasing its activation increases the output. When we talk about a \n\nlinear combination of activations, the attribution can be thought of as \n\nthe linear combination of the attributions of the units, or equivalently\n\n as the dot product between the activation of that combination and the \n\ngradient. \n\n \n\n For spatial attribution, we do an additional trick. 
\n\nGoogLeNet’s strided max pooling introduces a lot of noise and \n\ncheckerboard patterns to it’s gradients.\n\n To avoid our interface demonstrations being dominated by this noise, we\n\n (a) do a relaxation of the gradient of max pooling, distributing \n\ngradient to inputs proportional to their activation instead of winner \n\ntakes all and (b) cancel out the checkerboard patterns. \n\n \n\n \n\n### Spatial Attribution with Saliency Maps\n\n We see two weaknesses with this current approach.\n\n \n\n First, it is not clear that individual pixels should be the primary \n\nunit of attribution.\n\n The meaning of each pixel is extremely entangled with other pixels, \n\nis not robust to simple visual transforms (e.g., brightness, contrast, \n\netc.), and is far-removed from high-level concepts like the output \n\nclass.\n\n Second, traditional saliency maps are a very limited type of \n\ninterface — they only display the attribution for a single class at a \n\ntime, and do not allow you to probe into individual points more deeply.\n\n As they do not explicitly deal with hidden layers, it has been \n\ndifficult to fully explore their design space.\n\n \n\n We instead treat attribution as another user interface building \n\nblock, and apply it to the hidden layers of a neural network. \n\n In doing so, we change the questions we can pose.\n\n Rather than asking whether the color of a particular pixel was \n\nimportant for the “labrador retriever” classification, we instead ask \n\n This approach is similar to what Class Activation Mapping (CAM) methods\n\n do but, because they interpret their results back onto the input image,\n\n they miss the opportunity to communicate in terms of the rich behavior \n\nof a network’s hidden layers.\n\n \n\n#### Input Image\n\n![](The%20Building%20Blocks%20of%20Interpretability_files/dog_cat.jpeg)\n\n#### Output Classes\n\n* Labrador retriever\n\n* golden retriever\n\n* tennis ball\n\n* Rhodesian ridgeback\n\n* Appenzeller\n\n#### Output Factors\n\n* Labrador retriever\n\n* golden retriever\n\n* beagle\n\n* kuvasz\n\n* redbone\n\n* tiger\n\n* tiger cat\n\n* lynx\n\n* collie\n\n* Border collie\n\n#### mixed3a\n\n#### mixed4a\n\n#### mixed4d\n\n#### mixed5a\n\n![](The%20Building%20Blocks%20of%20Interpretability_files/mixed3a-nmf.png)\n\n![](The%20Building%20Blocks%20of%20Interpretability_files/mixed3a-magnitude.png)\n\n![](The%20Building%20Blocks%20of%20Interpretability_files/mixed4a-nmf.png)\n\n![](The%20Building%20Blocks%20of%20Interpretability_files/mixed4a-magnitude.png)\n\n![](The%20Building%20Blocks%20of%20Interpretability_files/mixed4d-nmf.png)\n\n![](The%20Building%20Blocks%20of%20Interpretability_files/mixed4d-magnitude.png)\n\n![](The%20Building%20Blocks%20of%20Interpretability_files/mixed5a-nmf.png)\n\n![](The%20Building%20Blocks%20of%20Interpretability_files/mixed5a-magnitude.png)\n\nAttribution tends to be more meaningful in later layers.\n\n \n\n The \n\n floppy ear, \n\n dog snout, \n\n cat head, \n\n etc, do mostly what you expect.\n\n \n\n Surprisingly, the\n\n lower snout at mixed4d\n\n seems entangled with the idea of a tennis ball and\n\n supports \"tennis ball\" and \"granny smith apple.\"\n\n[Reproduce in a\n\n The above interface affords us a more flexible relationship with \n\nattribution.\n\n To start, we perform attribution from each spatial position of each \n\nhidden layer shown to all 1,000 output classes.\n\n In order to visualize this thousand-dimensional vector, we use \n\ndimensionality reduction to produce a multi-directional 
saliency map.\n\n Overlaying these saliency maps on our magnitude-sized activation \n\ngrids provides an information scent\n\n over attribution space.\n\n The activation grids allow us to anchor attribution to the visual \n\nvocabulary our semantic dictionaries first established.\n\n On hover, we update the legend to depict attribution to the output \n\nclasses (i.e., which classes does this spatial position most contribute \n\nto?).\n\n \n\n On hover, additional saliency maps mask the hidden layers, in a \n\nsense shining a light into their black boxes.\n\n This type of layer-to-layer attribution is a prime example of how \n\ncarefully considering interface design drives the generalization of our \n\nexisting abstractions for interpretability.\n\n \n\n With this diagram, we have begun to think of attribution in terms of\n\n higher-level concepts.\n\n However, at a particular position, many concepts are being detected \n\ntogether and this interface makes it difficult to split them apart. \n\n \n\n By continuing to focus on spatial positions, these concepts remain \n\nentangled.\n\n \n\n### Channel Attribution\n\n \n\n An alternate way to slice the cube is by channels instead of spatial locations.\n\n \n\n#### Input Image\n\n![](The%20Building%20Blocks%20of%20Interpretability_files/dog_cat.jpeg)\n\n#### Output Classes\n\n* Labrador retriever\n\n* golden retriever\n\n* tennis ball\n\n* Rhodesian ridgeback\n\n* Appenzeller\n\n#### Top Channels Supporting\n\n Labrador retriever\n\nmixed3b\n\n... \n\nShowing 3 of **480**mixed4a\n\n... \n\nShowing 3 of **508**mixed4b\n\n... \n\nShowing 3 of **512**mixed4c\n\n... \n\nShowing 3 of **512**mixed4d\n\n... \n\nShowing 3 of **528**\n\n This diagram is analogous to the previous one we saw: we conduct \n\nlayer-to-layer attribution but this time over channels rather than \n\nspatial positions.\n\n Once again, we use the icons from our semantic dictionary to \n\nrepresent the channels that most contribute to the final output \n\nclassification.\n\n Hovering over an individual channel displays a heatmap of its \n\nactivations overlaid on the input image.\n\n The legend also updates to show its attribution to the output \n\nclasses (i.e., what are the top classes this channel supports?).\n\n Clicking a channel allows us to drill into the layer-to-layer \n\nattributions, identifying the channels at lower layers that most \n\ncontributed as well as the channels at higher layers that are most \n\nsupported.\n\n \n\n \n\n Attribution to spatial locations and channels can reveal powerful \n\nthings about a model, especially when we combine them together.\n\n Unfortunately, this family of approaches is burdened by two \n\nsignificant problems.\n\n On the one hand, it is very easy to end up with an overwhelming \n\namount of information: it would take hours of human auditing to \n\nunderstand the long-tail of channels that slightly impact the output.\n\n On the other hand, both the aggregations we have explored are \n\nextremely lossy and can miss important parts of the story.\n\n And, while we could avoid lossy aggregation by working with \n\nindividual neurons, and not aggregating at all, this explodes the first \n\nproblem combinatorially.\n\n \n\nMaking Things Human-Scale\n\n-------------------------\n\n In previous sections, we’ve considered three ways of slicing the \n\ncube of activations: into spatial activations, channels, and individual \n\nneurons.\n\n Each of these has major downsides.\n\n If one only uses spatial activations or channels, they miss 
out on \n\nvery important parts of the story.\n\n For example it’s interesting that the floppy ear detector helped us \n\nclassify an image as a Labrador retriever, but it’s much more \n\ninteresting when that’s combined with the locations that fired to do so.\n\n One can try to drill down to the level of neurons to tell the whole \n\nstory, but the tens of thousands of neurons are simply too much \n\ninformation.\n\n Even the hundreds of channels, before being split into individual \n\nneurons, can be overwhelming to show users!\n\n \n\n If we want to make useful interfaces into neural networks, it isn’t \n\nenough to make things meaningful.\n\n We need to make them human scale, rather than overwhelming dumps of \n\ninformation.\n\n The key to doing so is finding more meaningful ways of breaking up \n\nour activations.\n\n There is good reason to believe that such decompositions exist.\n\n Often, many channels or spatial positions will work together in a \n\nhighly correlated way and are most useful to think of as one unit.\n\n Other channels or positions will have very little activity, and can \n\nbe ignored for a high-level overview.\n\n So, it seems like we ought to be able to find better decompositions \n\nif we had the right tools.\n\n \n\n There is an entire field of research, called matrix factorization, \n\nthat studies optimal strategies for breaking up matrices.\n\n By flattening our cube into a matrix of spatial locations and \n\nchannels, we can apply these techniques to get more meaningful groups of\n\n neurons.\n\n These groups will not align as naturally with the cube as the \n\ngroupings we previously looked at.\n\n Instead, they will be combinations of spatial locations and \n\nchannels.\n\n Moreover, these groups are constructed to explain the behavior of a \n\nnetwork on a particular image.\n\n It would not be effective to reuse the same groupings on another \n\nimage; each image requires calculating a unique set of groups.\n\n \n\n In addition to naturally slicing a hidden layer’s cube of \n\nactivations into neurons, spatial locations, or channels, we can also \n\nconsider more arbitrary groupings of locations and channels.\n\n \n\n The groups that come out of this factorization will be the atoms of \n\nthe interface a user works with. Unfortunately, any grouping is \n\ninherently a tradeoff between reducing things to human scale and, \n\nbecause any aggregation is lossy, preserving information. Matrix \n\nfactorization lets us pick what our groupings are optimized for, giving \n\nus a better tradeoff than the natural groupings we saw earlier.\n\n \n\n The goals of our user interface should influence what we optimize \n\nour matrix factorization to prioritize. For example, if we want to \n\nprioritize what the network detected, we would want the factorization to\n\n fully describe the activations. If we instead wanted to prioritize what\n\n would change the network’s behavior, we would want the factorization to\n\n fully describe the gradient. Finally, if we want to prioritize what \n\ncaused the present behavior, we would want the factorization to fully \n\ndescribe the attributions. Of course, we can strike a balance between \n\nthese three objectives rather than optimizing one to the exclusion of \n\nthe others.\n\n \n\n Most matrix factorization \n\nalgorithms and libraries are set up to minimize the mean squared error \n\nof the reconstruction of a matrix you give them. 
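To make the flattening step concrete, here is a small sketch, ours rather than the article's, that reshapes one image's cube of activations into a positions-by-channels matrix and hands it to an off-the-shelf factorization (scikit-learn's non-negative matrix factorization, discussed just below). The layer shape is arbitrary and the activations are random stand-ins.

```python
# Sketch (not from the article): factorize one image's activation cube into neuron groups.
import numpy as np
from sklearn.decomposition import NMF

height, width, channels = 14, 14, 512
activations = np.random.rand(height, width, channels)   # stand-in for a real layer's activations

# Flatten the cube into a (spatial positions x channels) matrix.
matrix = activations.reshape(height * width, channels)

# Factorize it into a small number of groups; each group pairs a non-negative
# combination of channels with a spatial map of where that combination fires.
n_groups = 6
nmf = NMF(n_components=n_groups, init="nndsvd", max_iter=500)
spatial_factors = nmf.fit_transform(matrix)   # shape (positions, groups)
channel_factors = nmf.components_             # shape (groups, channels)

print(spatial_factors.shape, channel_factors.shape)   # (196, 6) (6, 512)
```

The spatial factors could then be displayed as per-group activation maps, and the channel factors could be handed to feature visualization to produce an icon for each group.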
There are ways to hack \n\nsuch libraries to achieve more general objectives through clever \n\nmanipulations of the provided matrix, as we will see below. More \n\nbroadly, matrix factorization is an optimization problem, and with \n\ncustom tools you can achieve all sorts of custom factorizations.\n\n with non-negative matrix factorization\n\n As the name suggests, non-negative\n\n matrix factorization (NMF) constrains its factors to be positive. This \n\nis fine for the activations of a ReLU network, which must be positive as\n\n well. Our experience is that the groups we get from NMF seem more \n\nindependent and semantically meaningful than those without this \n\nconstraint. Because of this constraints, groups from NMF are a less \n\nefficient at representing the activations than they would be without, \n\nbut our experience is that they seem more independent and semantically \n\nmeaningful.\n\n  .\n\n Notice how the overwhelmingly large number of neurons has been \n\nreduced to a small set of groups, concisely summarizing the story of the\n\n neural network.\n\n \n\nBy\n\n using non-negative matrix factorization we can reduce the large number \n\nof neurons to a small set of groups that concisely summarize the story \n\nof the network.\n\n \n\n \n\n[Reproduce in a\n\n#### Input image\n\n#### Activations\n\n of neuron groups\n\n#### neuron groups based on matrix factorization of mixed4d layer\n\n6 groups\n\ncolor key\n\n hover to isolate\n\n \n\n#### Effect of neuron groups on output classes\n\nLabrador retrieverbeagletiger catlynxtennis ball\n\n2.249\n\n3.298\n\n-0.350\n\n0.111\n\n0.920\n\n3.755\n\n0.599\n\n-0.994\n\n-0.642\n\n1.336\n\n-1.193\n\n-0.110\n\n-1.607\n\n-0.057\n\n0.152\n\n-1.141\n\n-0.356\n\n0.116\n\n0.117\n\n-0.885\n\n1.117\n\n-0.133\n\n0.248\n\n1.120\n\n1.227\n\n-1.892\n\n-2.618\n\n0.205\n\n0.152\n\n-0.480\n\n This figure only focuses at a single layer but, as we saw earlier, \n\nit can be useful to look across multiple layers to understand how a \n\nneural network assembles together lower-level detectors into \n\nhigher-level concepts.\n\n \n\n The groups we constructed before were optimized to understand a \n\nsingle layer independent of the others. To understand multiple layers \n\ntogether, we would like each layer’s factorization to be \n\n“compatible” — to have the groups of earlier layers naturally compose \n\ninto the groups of later layers. This is also something we can optimize \n\nthe factorization for\n\n \n\n We formalize this “compatibility” in a manner described below, \n\nalthough we’re not confident it’s the best formalization and won’t be \n\nsurprised if it is superseded in future work. \n\n \n\n The basic idea is to split each entry in the activation matrix into *N*\n\n entries on the channel dimension, spreading the values proportional to \n\nthe absolute value of its attribution to the corresponding group.\n\n Any factorization of this matrix induces a factorization of the \n\noriginal matrix by collapsing the duplicated entries in the column \n\nfactors.\n\n However, the resulting factorization tries to create separate \n\nfactors when the activation of the same channel has different \n\nattributions in different places.\n\n \n\n  .\n\n \n\n#### Input Image\n\n![](The%20Building%20Blocks%20of%20Interpretability_files/dog_cat.jpeg)\n\nTo understand multiple layers together, we would like \n\neach layer's factorization to be \"compatible\"—to have the groups of\n\n earlier layers naturally compose into the groups of later \n\nlayers. 
This is also something we can optimize the factorization\n\n for.\n\n positive influence\n\n negative influence\n\n#### Attribution by Factorized Groups\n\n##### Mixed4a\n\n8 groups\n\n##### Mixed4d\n\n6 groups\n\n##### Output class\n\n Align layer factors\n\ntiger catbeagleLabrador retrieverlynxtennis ball\n\n In this section, we recognize that the way in which we break apart \n\nthe cube of activations is an important interface decision. Rather than \n\nresigning ourselves to the natural slices of the cube of activations, we\n\n construct more optimal groupings of neurons. These improved groupings \n\nare both more meaningful and more human-scale, making it less tedious \n\nfor users to understand the behavior of the network.\n\n \n\n Our visualizations have only begun to explore the potential of \n\nalternate bases in providing better atoms for understanding neural \n\nnetworks.\n\n For example, while we focus on creating smaller numbers of \n\ndirections to explain individual examples, there’s recently been \n\nThe Space of Interpretability Interfaces\n\n----------------------------------------\n\n \n\nWe can think of an interface as a union of individual elements.\n\n#### Layers\n\n* output\n\n* hidden\n\n* input\n\n#### Atoms\n\n* group\n\n* spatial\n\n* channel\n\n* neuron\n\n#### Content\n\n* activations\n\n* attribution\n\n#### Presentation\n\n* information visualization\n\n* feature visualization\n\n \n\n We think this is a powerful technique and present this grammar as an \n\ninitial exploration of how it might apply to interpretability \n\ninterfaces.\n\n \n\n \n\n `Id = Int\n\nAtom = Neuron | Spatial | Channel | Group | Whole\n\nLayer = Input | Hidden Int | Out\n\nSubstrate = Network Id Atom Layer\n\n| Dataset Id\n\n | Parameters Id\n\n Content = Substrate\n\n| Activation Substrate\n\n| Attribution Substrate Substrate\n\n| Transform Content *Content?*\n\nElement = InfoVis Content | FeatureVis Content\n\nInterface = [ Element ]`,\n\n but we find it helpful to think about the space visually.\n\n We can represent the network’s substrate (which layers we display, \n\nand how we break them apart) as a grid, with the content and style of \n\npresentation plotted on this grid as points and connections.\n\n \n\n![](The%20Building%20Blocks%20of%20Interpretability_files/empty.svg)\n\n For instance, let us consider our teaser figure again.\n\n \n\n![](The%20Building%20Blocks%20of%20Interpretability_files/teaser.svg)\n\n**1. Feature visualization**\n\n![](The%20Building%20Blocks%20of%20Interpretability_files/teaser-1.png)\n\n \n\n**2. Filter by output attribution**\n\n![](The%20Building%20Blocks%20of%20Interpretability_files/teaser-2.png)\n\n Next, we filter for specific classes by calculating the output attribution.\n\n \n\n**3. 
Drill down on hover**\n\n![](The%20Building%20Blocks%20of%20Interpretability_files/teaser-3.png)\n\n Hovering over channels, we get a heatmap of spatial activations.\n\n \n\n figure .teaser-thumb {\n\n width: 100%;\n\n border: 30px solid rgb(248, 248, 248);\n\n border-radius: 10px;\n\n box-sizing: border-box;\n\n margin: 12px 0 10px;\n\n }\n\n \n\n In this article, we have only scratched the surface of \n\npossibilities.\n\n There are lots of combinations of our building blocks left to \n\nexplore, and the design space gives us a way to do so systematically.\n\n \n\n Moreover, each building block represents a broad class of \n\ntechniques.\n\n Our interfaces take only one approach but, as we saw in each \n\nsection, there are a number of alternatives for feature visualization, \n\nattribution, and matrix factorization.\n\n An immediate next step would be to try using these alternate \n\ntechniques, and research ways to improve them.\n\n \n\n We can think of dataset examples as another substrate in our design \n\nspace, thus becoming another building block that fully composes with the\n\n others.\n\n In doing so, we can now imagine interfaces that not only allow us to\n\n inspect the influence of dataset examples on the final output \n\nclassification (as Koh & Liang proposed), but also how examples \n\ninfluence the features of hidden layers, and how they influence the \n\nrelationship between these features and the output.\n\n For example, if we consider our “Labrador retriever” image, we can \n\nnot only see which dataset examples most influenced the model to arrive \n\nat this classification, but also which dataset examples most caused the \n\n“floppy ear” detectors to fire, and which dataset examples most caused \n\nthese detectors to increase the “Labrador retriever” classification.\n\n \n\n![](The%20Building%20Blocks%20of%20Interpretability_files/dataset.svg)\n\n A new substrate.\n\n \n\n An interface showing how examples influence the channels of hidden layers.\n\n \n\n An interface for identifying which dataset examples most caused \n\nparticular detectors to increase the output classification.\n\n \n\n While most models today are trained to optimize simple objective \n\nfunctions that one can easily describe, many of the things we’d like \n\nmodels to do in the real world are subtle, nuanced, and hard to describe\n\n mathematically.\n\n \n\n An extreme example of the subtle \n\nobjective problem is something like “creating interesting art”, but much\n\n more mundane examples arise more or less whenever humans are involved.\n\n \n\n \n\n However, even with human feedback, it may still be hard to train \n\nmodels to behave the way we want if the problematic aspect of the model \n\ndoesn’t surface strongly in the training regime where humans are giving \n\nfeedback.\n\n \n\n \n\n There are lots of reasons why problematic behavior may not surface\n\n or may be hard for an evaluator to give feedback on.\n\n For example, discrimination and bias may be subtly present \n\nthroughout the model’s behavior, such that it’s hard for a human \n\nevaluator to critique.\n\n \n\n Or the model may be making a decision in a way that has \n\nproblematic consequences, but those consequences never play out in the \n\nproblems we’re training it on.\n\n \n\n \n\n Human feedback on the model’s decision making process, facilitated \n\nby interpretability interfaces, could be a powerful solution to these \n\nproblems.\n\n \n\n (There is however a danger here: we are optimizing our model to look\n\n the way 
we want in our interface — if we aren’t careful, this may lead \n\n \n\n Another exciting possibility is interfaces for comparing multiple \n\nmodels.\n\n For instance, we might want to see how a model evolves during \n\ntraining, or how it changes when you transfer it to a new task. \n\n Or, we might want to understand how a whole family of models \n\ncompares to each other.\n\n Existing work has primarily focused on comparing the output behavior\n\n One of the unique challenges of this work is that we may want to \n\nalign the atoms of each model; if we have completely different models, \n\ncan we find the most analogous neurons between them?\n\n Zooming out, can we develop interfaces that allow us to evaluate \n\nlarge spaces of models at once?\n\n \n\nHow Trustworthy Are These Interfaces?\n\n-------------------------------------\n\n In order for interpretability interfaces to be effective, we must \n\ntrust the story they are telling us. \n\n We perceive two concerns with the set of building blocks we \n\ncurrently use.\n\n First, do neurons have a relatively consistent meaning across \n\ndifferent input images, and is that meaning accurately reified by \n\nfeature visualization?\n\n Semantic dictionaries, and the interfaces that build on top of them,\n\n are premised off this question being true.\n\n Second, does attribution make sense and do we trust any of the \n\nattribution methods we presently have?\n\n \n\n validated this in a number of ways: we visualized them without a \n\ngenerative model prior, so that the content of the visualizations\n\n was causally linked to the neuron firing; we inspected the spectrum \n\nof examples that cause the neuron to fire; and used diversity\n\n visualizations to try to create different inputs that cause the \n\nneuron to fire. \n\n \n\nFor more details, see\n\n Besides these neurons, however, we also found many neurons that do \n\nnot have as clean a meaning including “poly-semantic” neurons that \n\nrespond to a mixture of salient ideas (e.g., “cat” and “car”).\n\n There are natural ways that interfaces could respond to this: we \n\ncould use diversity visualizations to reveal the variety of meanings the\n\n neuron can take, or rotate our semantic dictionaries so their \n\ncomponents are more disentangled.\n\n Of course, just like our models can be fooled, the features that \n\nmake them up can be too — including with adversarial examples .\n\n \n\n In fact, it can be interesting to identify when a detector misfires. \n\n \n\n One might even wonder if the idea is fundamentally flawed, since a \n\nfunction’s output could be the result of non-linear interactions between\n\n its inputs. \n\n One way these interactions can pan out is as attribution being \n\n“path-dependent”.\n\n A natural response to this would be for interfaces to explicitly \n\nsurface this information: how path-dependent is the attribution?\n\n A deeper concern, however, would be whether this path-dependency \n\ndominates the attribution. \n\n Clearly, this is not a concern for attribution between adjacent \n\nlayers because of the simple (essentially linear) mapping between them. \n\n \n\n While there may be technicalities about correlated inputs, we \n\nbelieve that attribution is on firm grounding here.\n\n And even with layers further apart, our experience has been that \n\nattribution between high-level features at the output is much more \n\nconsistent than attribution to the input — we believe that \n\npath-dependence is not a dominating concern here. 
\n\n \n\n Model behavior is extremely complex, and our current building blocks\n\n force us to show only specific aspects of it. \n\n \n\n An important direction for future interpretability research will be \n\ndeveloping techniques that achieve broader coverage of model behavior. \n\n \n\n But, even with such improvements, we anticipate that a key marker of\n\n trustworthiness will be interfaces that do not mislead. \n\n \n\n Interacting with the explicit information displayed should not cause\n\n users to implicitly draw incorrect assessments about the model (we see a\n\n similar principle articulated by Mackinlay for data visualization).\n\n Undoubtedly, the interfaces we present in this article have room to \n\nimprove in this regard. \n\n Fundamental research, at the intersection of machine learning and \n\nhuman-computer interaction, is necessary to resolve these issues.\n\n \n\n Trusting our interfaces is essential for many of the ways we want to\n\n use interpretability.\n\n This is both because the stakes can be high (as in safety and \n\nfairness) and also because ideas like training models with \n\ninterpretability feedback put our interpretability techniques in the \n\nmiddle of an adversarial setting.\n\n \n\nConclusion & Future Work\n\n------------------------\n\n There is a rich design space for interacting with enumerative \n\nalgorithms, and we believe an equally rich space exists for interacting \n\nwith neural networks.\n\n We have a lot of work left ahead of us to build powerful and \n\ntrustworthy interfaces for interpretability.\n\n But, if we succeed, interpretability promises to be a powerful tool \n\nin enabling meaningful human oversight and in building fair, safe, and \n\naligned AI systems.\n\n \n\n", "bibliography_bib": [{"title": "Thought as a Technology"}, {"title": "Visualizing Representations: Deep Learning and Human Beings"}, {"title": "Understanding neural networks through deep visualization"}, {"title": "Using Artificial Intelligence to Augment Human Intelligence"}, {"title": "Visualizing higher-layer features of a deep network"}, {"title": "Feature Visualization"}, {"title": "Deep inside convolutional networks: Visualising image classification models and saliency maps"}, {"title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images"}, {"title": "Inceptionism: Going deeper into neural networks"}, {"title": "Plug & play generative networks: Conditional iterative generation of images in latent space"}, {"title": "Visualizing and understanding convolutional networks"}, {"title": "Striving for simplicity: The all convolutional net"}, {"title": "Grad-cam: Why did you say that? 
visual explanations from deep networks via gradient-based localization"}, {"title": "Interpretable Explanations of Black Boxes by Meaningful Perturbation"}, {"title": "PatternNet and PatternLRP--Improving the interpretability of neural networks"}, {"title": "The (Un)reliability of saliency methods"}, {"title": "Axiomatic attribution for deep networks"}, {"title": "Visualizing data using t-SNE"}, {"title": "LSTMVis: A tool for visual analysis of hidden state dynamics in recurrent neural networks"}, {"title": "ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models"}, {"title": "Do convolutional neural networks learn class hierarchy?"}, {"title": "Going deeper with convolutions"}, {"title": "Deconvolution and Checkerboard Artifacts"}, {"title": "Learning deep features for discriminative localization"}, {"title": "Information foraging"}, {"title": "TCAV: Relative concept importance testing with Linear Concept Activation Vectors"}, {"title": "SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability"}, {"title": "Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks"}, {"title": "Understanding Black-box Predictions via Influence Functions"}, {"title": "Deep reinforcement learning from human preferences"}, {"title": "Interactive optimization for steering machine classification"}, {"title": "Interacting with predictions: Visual inspection of black-box machine learning models"}, {"title": "Modeltracker: Redesigning performance analysis tools for machine learning"}, {"title": "Network dissection: Quantifying interpretability of deep visual representations"}, {"title": "Intriguing properties of neural networks"}, {"title": "Efficient estimation of word representations in vector space"}, {"title": "Unsupervised representation learning with deep convolutional generative adversarial networks"}, {"title": "Adversarial manipulation of deep representations"}, {"title": "Automating the design of graphical presentations of relational information"}], "filename": "The Building Blocks of Interpretability.html", "id": "e6c01413bf7b543e9d9c039575d5eea3"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Understanding Convolutions on Graphs", "authors": ["Ameya Daigavane", "Balaraman Ravindran", "Gaurav Aggarwal"], "date_published": "2021-09-02", "abstract": " This article is one of two Distill publications about graph neural networks. Take a look at A Gentle Introduction to Graph Neural Networks for a companion view on many things graph and neural network related. 
", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00032", "text": "\n\n### Contents\n\n[Introduction](#introduction)\n\n[The Challenges of Computation on Graphs](#challenges)\n\n* [Lack of Consistent Structure](#lack-of-consistent-structure)\n\n* [Node-Order Equivariance](#node-order)\n\n* [Scalability](#scalability)\n\n[Problem Setting and Notation](#problem-and-notation)\n\n[Extending Convolutions to Graphs](#extending)\n\n[Polynomial Filters on Graphs](#polynomial-filters)\n\n[Modern Graph Neural Networks](#modern-gnns)\n\n[Interactive Graph Neural Networks](#interactive)\n\n[From Local to Global Convolutions](#from-local-to-global)\n\n* [Spectral Convolutions](#spectral)\n\n* [Global Propagation via Graph Embeddings](#graph-embeddings)\n\n[Learning GNN Parameters](#learning)\n\n[Conclusions and Further Reading](#further-reading)\n\n* [GNNs in Practice](#practical-techniques)\n\n* [Different Kinds of Graphs](#different-kinds-of-graphs)\n\n* [Pooling](#pooling)\n\n[Supplementary Material](#supplementary)\n\n* [Reproducing Experiments](#experiments-notebooks)\n\n* [Recreating Visualizations](#visualizations-notebooks)\n\n*This article is one of two Distill publications about graph neural networks.\n\n Take a look at\n\n for a companion view on many things graph and neural network related.* \n\n Many systems and interactions - social networks, molecules, \n\norganizations, citations, physical models, transactions - can be \n\nrepresented quite naturally as graphs.\n\n How can we reason about and make predictions within these \n\nsystems?\n\n \n\n One idea is to look at tools that have worked well in other \n\ndomains: neural networks have shown immense predictive power in a \n\nvariety of learning tasks.\n\n However, neural networks have been traditionally used to operate\n\n on fixed-size and/or regular-structured inputs (such as sentences, \n\nimages and video).\n\n This makes them unable to elegantly process graph-structured \n\ndata.\n\n \n\n By extracting and utilizing features from the underlying graph,\n\n GNNs can make more informed predictions about entities in these interactions,\n\n as compared to models that consider individual entities in isolation.\n\n \n\n GNNs are not the only tools available to model graph-structured data:\n\n graph kernels \n\n and random-walk methods \n\n were some of the most popular ones.\n\n Today, however, GNNs have largely replaced these techniques\n\n because of their inherent flexibility to model the underlying systems\n\n better.\n\n \n\n In this article, we will illustrate\n\n the challenges of computing over graphs, \n\n describe the origin and design of graph neural networks,\n\n and explore the most popular GNN variants in recent times.\n\n Particularly, we will see that many of these variants\n\n are composed of similar building blocks.\n\n \n\n First, let’s discuss some of the complications that graphs come with.\n\n \n\n The Challenges of Computation on Graphs\n\n-----------------------------------------\n\n### \n\n Lack of Consistent Structure\n\n Consider the task of predicting whether a given chemical molecule is toxic  :\n\n \n\n**Left:** A non-toxic 1,2,6-trigalloyl-glucose molecule.\n\n**Right:** A toxic caramboxin molecule.\n\n Looking at a few examples, the following issues quickly become apparent:\n\n \n\n* Molecules may have different numbers of atoms.\n\n* The atoms in a molecule may be of different types.\n\n* Each of these atoms may have different number of connections.\n\n* These connections can have 
different strengths.\n\n Representing graphs in a format that can be computed over is non-trivial,\n\n \n\n### \n\n Node-Order Equivariance\n\n \n\n Representing the graph as one vector requires us to fix an order on the nodes.\n\n But what do we do when the nodes have no inherent order?\n\n **Above:** \n\n \n\n As a result, we would like our algorithms to be node-order equivariant:\n\n they should not depend on the ordering of the nodes of the graph.\n\n If we permute the nodes in some way, the resulting representations of \n\n \n\n### \n\n Scalability\n\n Operating on data this large is not easy.\n\n \n\n Luckily, most naturally occuring graphs are ‘sparse’:\n\n they tend to have their number of edges linear in their number of vertices.\n\n We will see that this allows the use of clever methods\n\n to efficiently compute representations of nodes within the graph.\n\n in comparison to the size of the graphs they operate on.\n\n \n\n Problem Setting and Notation\n\n------------------------------\n\n There are many useful problems that can be formulated over graphs:\n\n \n\n* **Node Classification:** Classifying individual nodes.\n\n* **Graph Classification:** Classifying entire graphs.\n\n* **Node Clustering:** Grouping together similar nodes based on connectivity.\n\n* **Link Prediction:** Predicting missing links.\n\n* **Influence Maximization:** Identifying influential nodes.\n\n Examples of problems that can be defined over graphs.\n\n This list is not exhaustive!\n\n \n\n \n\n \n\n Generally, however, GNNs compute node representations in an iterative process.\n\n \n\n For example, the ‘node features’ for a pixel in a color image\n\n would be the red, green and blue channel (RGB) values at that pixel.\n\n \n\n These kinds of graphs are called ‘homogeneous’.\n\n Many of the same ideas we will see here \n\n apply to other kinds of graphs:\n\n \n\n Sometimes we will need to denote a graph property by a matrix MMM,\n\n \n\n Extending Convolutions to Graphs\n\n----------------------------------\n\n \n\n they depend on the absolute positions of pixels.\n\n where the neighbourhood structure differs from node to node.\n\n \n\n The curious reader may wonder if performing some sort of padding and ordering\n\n This has been attempted with some success ,\n\n but the techniques we will look at here are more general and powerful.\n\n \n\nimage/svg+xml\n\nConvolution in CNNs\n\n1\n\n7\n\n6\n\n7\n\n1\n\n6\n\n4\n\n5\n\n6\n\n3\n\n Convolutions in CNNs are inherently localized.\n\n \n\nimage/svg+xml\n\n4\n\n6\n\n1\n\n2\n\n5\n\n4\n\n1\n\n7\n\n3\n\n6\n\n1\n\n7\n\n6\n\nLocalized Convolution in GNNs\n\n2\n\n GNNs can perform localized convolutions mimicking CNNs.\n\n Hover over a node to see its immediate neighbourhood highlighted on the left.\n\n The structure of this neighbourhood changes from node to node.\n\n \n\n import {Runtime, Inspector} from \"./observablehq-base/runtime.js\";\n\n import define from \"./notebooks/neighbourhoods-for-cnns-and-gnns.js\";\n\n setTimeout(() => {\n\n new Runtime().module(define, name => {\n\n });\n\n }, 200);\n\n \n\n much like how CNNs compute localized filters over neighbouring pixels.\n\n Finally, we will discuss alternative methods\n\n \n\n Polynomial Filters on Graphs\n\n------------------------------\n\n### \n\n The Graph Laplacian\n\n Given a graph GGG, let us fix an arbitrary ordering of the nnn nodes of GGG.\n\n \n\nDv=∑uAvu.\n\n D\\_v = \\sum\\_u A\\_{vu}.\n\n Dv​=u∑​Avu​.\n\n The degree of node vvv is the number of edges incident at vvv.\n\n \n\n in the matrix 
AAA. We will use this notation throughout this section.\n\n \n\n Then, the graph Laplacian LLL is the square n×nn \\times nn×n matrix defined as:\n\n L=D−A.\n\n L = D - A.\n\n L=D−A.\n\n![](Understanding%20Convolutions%20on%20Graphs_files/laplacian.svg)\n\n Zeros in LLL are not displayed above.\n\n \n\n The graph Laplacian gets its name from being the discrete analog of the\n\n [Laplacian operator](https://mathworld.wolfram.com/Laplacian.html)\n\n from calculus.\n\n \n\n Although it encodes precisely the same information as the adjacency matrix AAA\n\n ,\n\n the graph Laplacian has many interesting properties of its own.\n\n \n\n The graph Laplacian shows up in many mathematical problems involving graphs:\n\n [spectral clustering](https://arxiv.org/abs/0711.0189),\n\n and\n\n \n\n We will see some of these properties\n\n in [a later section](#spectral),\n\n but will instead point readers to\n\n for greater insight into the graph Laplacian.\n\n \n\n### \n\n Polynomials of the Laplacian\n\n Now that we have understood what the graph Laplacian is,\n\n we can build polynomials of the form:\n\n pw(L)=w0In+w1L+w2L2+…+wdLd=∑i=0dwiLi.\n\n pw​(L)=w0​In​+w1​L+w2​L2+…+wd​Ld=i=0∑d​wi​Li.\n\n Each polynomial of this form can alternately be represented by\n\n its vector of coefficients w=[w0,…,wd]w = [w\\_0, \\ldots, w\\_d]w=[w0​,…,wd​].\n\n \n\n These polynomials can be thought of as the equivalent of ‘filters’ in CNNs,\n\n and the coefficients www as the weights of the ‘filters’.\n\n \n\n each of the xvx\\_vxv​ for v∈Vv \\in Vv∈V is just a real number. \n\n \n\n Using the previously chosen ordering of the nodes,\n\n we can stack all of the node features xvx\\_vxv​\n\n to get a vector x∈Rnx \\in \\mathbb{R}^nx∈Rn.\n\n \n\n \n\n Once we have constructed the feature vector xxx,\n\n we can define its convolution with a polynomial filter pwp\\_wpw​ as:\n\n x′=pw(L) x\n\n x’ = p\\_w(L) \\ x\n\n x′=pw​(L) x\n\n To understand how the coefficients www affect the convolution,\n\n let us begin by considering the ‘simplest’ polynomial:\n\n when w0=1w\\_0 = 1w0​=1 and all of the other coefficients are 000.\n\n In this case, x′x’x′ is just xxx:\n\n x′=pw(L) x=∑i=0dwiLix=w0Inx=x.\n\n x’ = p\\_w(L) \\ x = \\sum\\_{i = 0}^d w\\_i L^ix = w\\_0 I\\_n x = x.\n\n x′=pw​(L) x=i=0∑d​wi​Lix=w0​In​x=x.\n\n Now, if we increase the degree, and consider the case where\n\n instead w1=1w\\_1 = 1w1​=1 and and all of the other coefficients are 000.\n\n Then, x′x’x′ is just LxLxLx, and so:\n\n xv′=(Lx)v=Lvx=∑u∈GLvuxu=∑u∈G(Dvu−Avu)xu=Dv xv−∑u∈N(v)xu\n\n \\begin{aligned}\n\n x’\\_v = (Lx)\\_v &= L\\_v x \\\\ \n\n &= \\sum\\_{u \\in G} L\\_{vu} x\\_u \\\\ \n\n &= \\sum\\_{u \\in G} (D\\_{vu} - A\\_{vu}) x\\_u \\\\ \n\n &= D\\_v \\ x\\_v - \\sum\\_{u \\in \\mathcal{N}(v)} x\\_u\n\n \\end{aligned}\n\n xv′​=(Lx)v​​=Lv​x=u∈G∑​Lvu​xu​=u∈G∑​(Dvu​−Avu​)xu​=Dv​ xv​−u∈N(v)∑​xu​​\n\n We see that the features at each node vvv are combined\n\n with the features of its immediate neighbours u∈N(v)u \\in \\mathcal{N}(v)u∈N(v).\n\n \n\n For readers familiar with\n\n this is the exact same idea. 
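As a quick sanity check of the definitions above, here is a minimal NumPy sketch that builds L = D - A for a small path graph and applies a polynomial filter x' = p_w(L) x. The graph, features, and coefficients are arbitrary examples of our own choosing.

```python
# Sketch (not from the article): polynomial filter convolution x' = p_w(L) x on a toy graph.
import numpy as np

# Adjacency matrix of a 4-node path graph: 0 - 1 - 2 - 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))     # degree matrix
L = D - A                      # graph Laplacian

# Polynomial filter p_w(L) = w_0 I + w_1 L + w_2 L^2 with example coefficients.
w = [1.0, 0.1, 0.0]
p_w = sum(w_i * np.linalg.matrix_power(L, i) for i, w_i in enumerate(w))

x = np.array([1.0, 0.0, 2.0, 1.0])   # one scalar feature per node
x_prime = p_w @ x                     # the convolved features

# With w = [0, 1, 0], x'_v reduces to D_v * x_v minus the sum of neighbour
# features, i.e. the 1-hop localized combination derived above.
print(x_prime)
```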
When xxx is an image, \n\n \n\n At this point, a natural question to ask is:\n\n Indeed, it is not too hard to show that:\n\n This is Lemma 5.2 from .\n\ndistG(v,u)>i⟹Lvui=0.\n\n \\text{dist}\\_G(v, u) > i \\quad \\Longrightarrow \\quad L\\_{vu}^i = 0.\n\n distG​(v,u)>i⟹Lvui​=0.\n\n \n\n \\begin{aligned}\n\n x’\\_v = (p\\_w(L)x)\\_v &= (p\\_w(L))\\_v x \\\\\n\n &= \\sum\\_{i = 0}^d w\\_i L\\_v^i x \\\\\n\n &= \\sum\\_{i = 0}^d w\\_i \\sum\\_{u \\in G} L\\_{vu}^i x\\_u \\\\\n\n \\end{aligned}\n\n \n\n the resulting pixel in x′x’x′.\n\n Note that even after adjusting for position,\n\n \n\n #button-container {\n\n text-align: center;\n\n }\n\n \n\nReset Grid\n\nRandomize Grid\n\n−2−1012Color Scale\n\nInput Grid\n\nx∈{0,1}25x \\in \\{0, 1\\}^{25}x∈{0,1}25\n\npw(L)p\\_w(L)pw​(L)\n\nConvolutional Kernel at Highlighted Pixel\n\nOutput Grid\n\nx′∈R25x' \\in \\mathbb{R}^{25}x′∈R25\n\n \n\n #poly-main-div {\n\n position: relative;\n\n height: 220px;\n\n }\n\n #poly-svg {\n\n position: absolute;\n\n left: 0px;\n\n top: 0px;\n\n }\n\n #figcaptions {\n\n position: relative;\n\n }\n\n #figcaptions figcaption {\n\n position: absolute;\n\n }\n\n #orig-nat-caption-1 {\n\n left: 170px;\n\n top: 140px;\n\n }\n\n #orig-nat-caption-2 {\n\n left: 170px;\n\n top: 160px;\n\n }\n\n #poly-caption {\n\n left: 460px;\n\n top: 90px;\n\n }\n\n #kernel-caption {\n\n left: 360px;\n\n top: 230px;\n\n }\n\n #upd-nat-caption-1 {\n\n left: 725px;\n\n top: 140px;\n\n }\n\n #upd-nat-caption-2 {\n\n left: 740px;\n\n top: 160px;\n\n }\n\n #poly\\_conv\\_sliders form {\n\n margin-left: 5%;\n\n margin-right: 5%;\n\n }\n\n \n\n #poly\\_conv\\_sliders input {\n\n width: 80%;\n\n }\n\n #poly\\_conv\\_sliders output {\n\n width: 3em;\n\n margin-left: 0 !important;\n\n }\n\n #poly\\_conv\\_sliders {\n\n display: flex;\n\n width: 60%;\n\n margin: 0 auto;\n\n text-align: center;\n\n align-items: center; \n\n justify-content: center; \n\n }\n\n \n\n input[type=\"range\"] {\n\n -webkit-appearance: none;\n\n grid-column: middle;\n\n height: 5px;\n\n border-radius: 1px; \n\n background: #d3d3d3;\n\n outline: none;\n\n opacity: 0.7;\n\n -webkit-transition: .2s;\n\n transition: opacity .2s;\n\n }\n\n input[type=\"range\"]::-webkit-slider-thumb {\n\n -webkit-appearance: none;\n\n appearance: none;\n\n width: 16px;\n\n height: 16px;\n\n border-radius: 50%; \n\n background: #575245;\n\n cursor: pointer;\n\n }\n\n input[type=\"range\"]::-moz-range-thumb {\n\n width: 16px;\n\n height: 16px;\n\n border-radius: 50%;\n\n background: #f5f2eb;\n\n cursor: pointer;\n\n }\n\n output[name=\"output\"] {\n\n width: 1em;\n\n display: inline-block;\n\n }\n\nConvolve\n\npw(L)=∑i=02wiLi=1I + 0.1L + 0L2.\n\n p\\_w(L)\n\n = \\sum\\_{i = 0}^2 w\\_i L^i\n\n = 1I \\ + \\ 0.1L \\ + \\ 0L^2.\n\npw​(L)=i=0∑2​wi​Li=1I + 0.1L + 0L2.\n\nw0{w\\_{0}}w0​\n\n1w1{w\\_{1}}w1​\n\n0.1w2{w\\_{2}}w2​\n\n0\n\nChoice of Laplacian\n\nUnnormalized L\\text{Unnormalized} \\ LUnnormalized L\n\nNormalized L~\\text{Normalized} \\ \\tilde{L}Normalized L~\n\n #button-container {\n\n text-align: center;\n\n }\n\n \n\nReset Coefficients\n\npoly\\_input\\_slider\\_watch = undefined\n\nhighlight\\_selected\\_cell = ƒ()\n\n import {Runtime, Inspector} from \"./observablehq-base/runtime.js\";\n\n setTimeout(() => {\n\n new Runtime().module(define, name => {\n\n });\n\n }, 200);\n\n \n\n Hover over a pixel in the input grid (left, representing xxx)\n\n to highlight it and see the equivalent convolutional kernel\n\n for that pixel under the arrow.\n\n The result x′x’x′ of the convolution is shown on 
the right:\n\n note that different convolutional kernels are applied at different pixels,\n\n depending on their location.\n\n \n\n Use the sliders at the bottom to change the coefficients www.\n\n To reset all coefficients www to 000, press ‘Reset Coefficients.’\n\n \n\n### \n\n ChebNet\n\n \n\npw(L)=∑i=1dwiTi(L~)\n\n p\\_w(L) = \\sum\\_{i = 1}^d w\\_i T\\_i(\\tilde{L})\n\n pw​(L)=i=1∑d​wi​Ti​(L~)\n\n where TiT\\_iTi​ is the degree-iii\n\n \n\n \n\nL~=2Lλmax(L)−In.\n\n \\tilde{L} = \\frac{2L}{\\lambda\\_{\\max}(L)} - I\\_n.\n\n L~=λmax​(L)2L​−In​.\n\n What is the motivation behind these choices?\n\n \n\n This prevents the entries of powers of L~\\tilde{L}L~ from blowing up.\n\n in order to show the result x′x’x′ on the same color scale.\n\n We won’t talk about this in more depth here,\n\n but will advise interested readers to take a look at as a definitive resource.\n\n### \n\n Polynomial Filters are Node-Order Equivariant\n\n Clearly, this sum does not depend on the order of the neighbours.\n\n A similar proof follows for higher degree polynomials:\n\n the entries in the powers of LLL are equivariant to the ordering of the nodes.\n\n \n\n**Details for the Interested Reader**\n\n As above, let’s assume an arbitrary node-order over the nnn nodes of our graph.\n\n We can represent any permutation by a\n\n [permutation matrix](https://en.wikipedia.org/wiki/Permutation_matrix) PPP.\n\n PPP will always be an orthogonal 0−10-10−1 matrix:\n\n PPT=PTP=In.\n\n PP^T = P^TP = I\\_n.\n\n PPT=PTP=In​.\n\n f(Px)=Pf(x).\n\n f(Px) = P f(x).\n\n f(Px)=Pf(x).\n\n When switching to the new node-order using the permutation PPP,\n\n the quantities below transform in the following way:\n\n x→PxL→PLPTLi→PLiPT\n\n \\begin{aligned}\n\n x &\\to Px \\\\\n\n L &\\to PLP^T \\\\\n\n L^i &\\to PL^iP^T\n\n \\end{aligned}\n\n xLLi​→Px→PLPT→PLiPT​\n\n and so, for the case of polynomial filters, we can see that:\n\n f(Px)=∑i=0dwi(PLiPT)(Px)=P∑i=0dwiLix′=Pf(x).\n\n \\begin{aligned}\n\n f(Px) & = \\sum\\_{i = 0}^d w\\_i (PL^iP^T) (Px) \\\\\n\n & = P \\sum\\_{i = 0}^d w\\_i L^i x’ \\\\\n\n & = P f(x).\n\n \\end{aligned}\n\n f(Px)​=i=0∑d​wi​(PLiPT)(Px)=Pi=0∑d​wi​Lix′=Pf(x).​ \n\n as needed.\n\n \n\n### \n\n Embedding Computation\n\n We now describe how we can build a graph neural network\n\n by stacking ChebNet (or any polynomial filter) layers\n\n one after the other with non-linearities,\n\n much like a standard CNN.\n\n In particular, if we have KKK different polynomial filter layers,\n\n the kthk^{\\text{th}}kth of which has its own learnable weights w(k)w^{(k)}w(k),\n\n we would perform the following computation:\n\n \n\n Start with the original features.\n\n \n\nh(0)=x\\quad {\\color{#FE6100} h^{(0)}} = xh(0)=x\n\n Then iterate, for k=1,2,…k = 1, 2, \\ldots k=1,2,… upto K KK:\n\n \n\np(k)=pw(k)(L)g(k)=p(k)×h(k−1)h(k)=σ(g(k))\n\n \\begin{aligned}\n\n \\quad {p^{(k)}} &= p\\_{\\color{#4D9DB5} w^{(k)}}(L) \\\\\n\n \\\\\n\n \\quad {g^{(k)}} &= p^{(k)} \\times {\\color{#FE6100} h^{(k - 1)}} \\\\\n\n \\\\\n\n \\quad {\\color{#FE6100} h^{(k)}} &= \\sigma \\left({g^{(k)}} \\right)\n\n \\end{aligned}\n\n p(k)g(k)h(k)​=pw(k)​(L)=p(k)×h(k−1)=σ(g(k))​\n\n \n\n \n\n \n\n Color Codes:\n\n * Computed node embeddings.\n\n* Learnable parameters.\n\n #spectral-conv div {\n\n margin: 0 auto 10px auto;\n\n display: block;\n\n }\n\n \n\n #cheb-conv {\n\n width: 800px;\n\n height: 310px;\n\n display: block;\n\n margin-top: 0;\n\n margin-bottom: 0;\n\n margin-left: auto;\n\n margin-right: auto;\n\n position: relative;\n\n }\n\n \n\n \n\n 
Note that these networks reuse the same filter weights across different nodes, exactly mimicking weight-sharing in Convolutional Neural Networks (CNNs), which reuse weights for convolutional filters across a grid.

Modern Graph Neural Networks
------------------------------

ChebNet was a breakthrough in learning localized filters over graphs. To build towards more general architectures, let us revisit the simplest polynomial filter, $p_w(L) = L$, focussing on a particular vertex $v$:

$$
\begin{aligned}
(Lx)_v &= L_v x \\
&= \sum_{u \in G} L_{vu} x_u \\
&= \sum_{u \in G} (D_{vu} - A_{vu}) x_u \\
&= D_v\, x_v - \sum_{u \in \mathcal{N}(v)} x_u
\end{aligned}
$$

As we noted before, this is a $1$-hop localized convolution. But more importantly, we can think of this convolution as arising from two steps:

* Aggregating over immediate neighbour features $x_u$.
* Combining with the node's own feature $x_v$.

**Key Idea:** What if we consider different kinds of 'aggregation' and 'combination' steps, beyond what are possible using polynomial filters?

By ensuring that the aggregation is node-order equivariant, the overall convolution becomes node-order equivariant. These convolutions can be thought of as 'message passing' between adjacent nodes: after each step, every node receives some 'information' from its neighbours.

### Embedding Computation

Message-passing forms the backbone of many GNN architectures today. We describe the most popular ones in depth below:

* Graph Convolutional Networks (GCN)
* Graph Attention Networks (GAT)
* Graph Sample and Aggregate (GraphSAGE)
* Graph Isomorphism Network (GIN)
**Graph Convolutional Networks (GCN).** Node $v$'s initial embedding is just its original features:

$$h_v^{(0)} = x_v \quad \text{for all } v \in V.$$

Then, for $k = 1, 2, \ldots$ up to $K$:

$$h_v^{(k)} = f^{(k)}\left(W^{(k)} \cdot \frac{\sum\limits_{u \in \mathcal{N}(v)} h_u^{(k - 1)}}{|\mathcal{N}(v)|} + B^{(k)} \cdot h_v^{(k - 1)}\right) \quad \text{for all } v \in V,$$

where the first term is the mean of $v$'s neighbours' embeddings at step $k - 1$, the second term is node $v$'s own embedding at step $k - 1$, and $f^{(k)}$, $W^{(k)}$, $B^{(k)}$ are (potentially) learnable parameters.

**Graph Attention Networks (GAT).** Starting again from $h_v^{(0)} = x_v$, for $k = 1, 2, \ldots$ up to $K$:

$$h_v^{(k)} = f^{(k)}\left(W^{(k)} \cdot \left[\sum\limits_{u \in \mathcal{N}(v)} \alpha_{vu}^{(k - 1)} h_u^{(k - 1)} + \alpha_{vv}^{(k - 1)} h_v^{(k - 1)}\right]\right) \quad \text{for all } v \in V,$$

a weighted mean of $v$'s and its neighbours' embeddings, with attention weights

$$\alpha_{vu}^{(k)} = \frac{A^{(k)}\left(h_v^{(k)}, h_u^{(k)}\right)}{\sum\limits_{w \in \mathcal{N}(v)} A^{(k)}\left(h_v^{(k)}, h_w^{(k)}\right)} \quad \text{for all } (v, u) \in E.$$

**Graph Isomorphism Networks (GIN).** Starting from $h_v^{(0)} = x_v$, for $k = 1, 2, \ldots$ up to $K$:

$$h_v^{(k)} = f^{(k)}\left(\sum\limits_{u \in \mathcal{N}(v)} h_u^{(k - 1)} + \left(1 + \epsilon^{(k)}\right) \cdot h_v^{(k - 1)}\right) \quad \text{for all } v \in V,$$

the sum of $v$'s neighbours' embeddings plus a scaled copy of $v$'s own embedding, with $\epsilon^{(k)}$ (potentially) learnable.
**GraphSAGE.** Starting from $h_v^{(0)} = x_v$, for $k = 1, 2, \ldots$ up to $K$:

$$h_v^{(k)} = f^{(k)}\left(W^{(k)} \cdot \left[\underset{u \in \mathcal{N}(v)}{\text{AGG}}\left(\left\{h_u^{(k - 1)}\right\}\right),\; h_v^{(k - 1)}\right]\right) \quad \text{for all } v \in V,$$

where an aggregation of $v$'s neighbours' embeddings at step $k - 1$ is concatenated with node $v$'s own embedding at step $k - 1$.

Predictions can be made at each node by using the final computed embedding:

$$\hat{y}_v = \text{PREDICT}\left(h_v^{(K)}\right)$$

The GCN variant we discuss here is the 2-parameter model from the original paper,

$$f\left(W \cdot \sum_{u \in \mathcal{N}(v)} \frac{h_u}{|\mathcal{N}(v)|} + B \cdot h_v\right)$$

instead of the normalization defined in the original paper,

$$f\left(W \cdot \sum_{u \in \mathcal{N}(v)} \frac{h_u}{\sqrt{|\mathcal{N}(u)||\mathcal{N}(v)|}} + B \cdot h_v\right),$$

for ease of exposition.

### Thoughts

Recent work demonstrates that aggregation functions can indeed be compared on how well they uniquely preserve node neighbourhood features.

Here, we've talked about GNNs where the computation only occurs at the nodes.
More recent GNN models such as Message-Passing Neural Networks and Graph Networks perform computation over the edges as well; they compute edge embeddings together with node embeddings. This is an even more general framework, but the same 'message passing' ideas from this section apply.

Interactive Graph Neural Networks
-----------------------------------

Below is an interactive visualization of these GNN models on small graphs. For simplicity, the node features here are scalars, but the same equations hold when the node features are vectors.

[Interactive figure: tabs select one of GCN, GAT, GraphSAGE or GIN. Each view shows the initial graph (nodes $A$ to $E$ with features $6$, $2$, $-10$, $1$ and $3$) along with the parameters for the next update: $W^{(1)}$ and $B^{(1)}$ for GCN, $W^{(1)}$ and $A_W^{(1)}$ for GAT, $W^{(1)}$, $B^{(1)}$, $U_s^{(1)}$ and $U_h^{(1)}$ for GraphSAGE, and $\epsilon^{(1)}$ for GIN (all weights defaulting to $1$, with $\epsilon^{(1)} = 0$). Buttons allow resetting, undoing the last update, updating all nodes, and randomizing the graph.]
\\*/\n\n input[type=\"range\"] {\n\n -webkit-appearance: none;\n\n grid-column: middle;\n\n height: 5px;\n\n border-radius: 1px; \n\n background: #d3d3d3;\n\n outline: none;\n\n opacity: 0.7;\n\n -webkit-transition: .2s;\n\n transition: opacity .2s;\n\n }\n\n input[type=\"range\"]::-webkit-slider-thumb {\n\n -webkit-appearance: none;\n\n appearance: none;\n\n width: 16px;\n\n height: 16px;\n\n border-radius: 50%; \n\n background: #575245;\n\n cursor: pointer;\n\n }\n\n input[type=\"range\"]::-moz-range-thumb {\n\n width: 16px;\n\n height: 16px;\n\n border-radius: 50%;\n\n background: #f5f2eb;\n\n cursor: pointer;\n\n }\n\n output[name=\"output\"] {\n\n width: 1em;\n\n display: inline-block;\n\n }\n\n \n\n Initial Graph\n\n \n\n Parameters for Next Update\n\n \n\nϵ(1){\\epsilon}^{(1)}ϵ(1)\n\n0\n\nA6B2C-10D1E3\n\n **Next Update (Iteration 1):** \n\n Equation for Node AAA:\n\n \\begin{aligned}\n\n {\\color{#FE6100} h\\_{A}^{(1)}}\n\n &= {f} \\left(\n\n {\\color{#4D9DB5} B^{(1)}} \\times {\\color{#FE6100} h\\_{A}^{(0)}} \\right) \\\\ \n\n &= f \\left(\n\n {\\color{#4D9DB5} 1} \\times {\\color{#FE6100} 6} \\right) \\\\\n\n &= f \\left(\n\n -3.5 + \n\n 6 \\right) \\\\\n\n &= f \\left(2.5 \\right) \\\\\n\n &= \\text{ReLU} \\left(2.5\\right)\n\n = 2.5.\n\n **Next Update (Iteration 1):** \n\n Equation for Node AAA:\n\n \\begin{aligned}\n\n {\\color{#FE6100} h\\_{A}^{(1)}}\n\n &= f \\left({\\color{#4D9DB5} W^{(1)}} \\times \n\n\\left(\\alpha\\_{AC} \\times {\\color{#A95AA1} h\\_{C}^{(0)}} + \\alpha\\_{AE} \n\n\\times {\\color{#A95AA1} h\\_{E}^{(0)}} + \\alpha\\_{AA} \\times \n\n{\\color{#FE6100} h\\_{A}^{(0)}} \\right) \\right) \\\\ \n\n &= f \\left({\\color{#4D9DB5} W^{(1)}} \\times \\left(0 \n\n\\times {\\color{#A95AA1} h\\_{C}^{(0)}} + 0.05 \\times {\\color{#A95AA1} \n\nh\\_{E}^{(0)}} + 0.95 \\times {\\color{#FE6100} h\\_{A}^{(0)}} \\right) \\right)\n\n \\\\\n\n &= f \\left({\\color{#4D9DB5} W^{(1)}} \\times \\big(0 \n\n\\times {\\color{#A95AA1} -10} + 0.05 \\times {\\color{#A95AA1} 3} + 0.95 \n\n\\times {\\color{#FE6100} 6}\\big) \\right) \\\\\n\n &= f \\left({\\color{#4D9DB5} W^{(1)}} \\times \\big(0 + \n\n0.14 + 5.72 \\big) \\right) \\\\\n\n &= f \\left({\\color{#4D9DB5} 1} \\times \\left(5.86 \n\n\\right) \\right) \\\\\n\n &= f \\left(5.86\\right) \\\\\n\n &= \\text{ReLU} \\left(5.86\\right) \n\n = 5.86.\n\n with attention weights αA\\alpha\\_{A}αA​ computed as:\n\n \\begin{aligned}\n\n \\alpha\\_{AC} = \\frac{\\exp{(e\\_{AC})}}{\\exp{(e\\_{AC})} + \n\n\\exp{(e\\_{AE})} + \\exp{(e\\_{AA})}} &\\approx 0. \\\\ \\alpha\\_{AE} = \n\n\\frac{\\exp{(e\\_{AE})}}{\\exp{(e\\_{AC})} + \\exp{(e\\_{AE})} + \\exp{(e\\_{AA})}} \n\n&\\approx 0.05. \\\\ \\alpha\\_{AA} = \\frac{\\exp{(e\\_{AA})}}{\\exp{(e\\_{AC})}\n\n + \\exp{(e\\_{AE})} + \\exp{(e\\_{AA})}} &\\approx 0.95.\n\n \\end{aligned}\n\n where the unnormalized attention weights eAe\\_{A}eA​ are given by:\n\n \\begin{aligned}\n\n e\\_{AC} &= \\text{ReLU}\\left({\\color{#4D9DB5} A\\_{W}^{(1)}} \n\n\\left({\\color{#4D9DB5} W^{(1)}}{\\color{#FE6100} h\\_{A}^{(0)}} + \n\n{\\color{#4D9DB5} W^{(1)}}{\\color{#A95AA1} h\\_{C}^{(0)}} \\right)\\right) =\n\n \\text{ReLU}\\left({\\color{#4D9DB5} 1}\\left({\\color{#4D9DB5} 1}\\times \n\n{\\color{#FE6100} 6} + {\\color{#4D9DB5} 1} \\times {\\color{#A95AA1} \n\n-10}\\right)\\right) = \\text{ReLU}\\left(-4\\right) = 0. 
\\\\ e\\_{AE} &= \n\n\\text{ReLU}\\left({\\color{#4D9DB5} A\\_{W}^{(1)}} \\left({\\color{#4D9DB5} \n\nW^{(1)}}{\\color{#FE6100} h\\_{A}^{(0)}} + {\\color{#4D9DB5} \n\nW^{(1)}}{\\color{#A95AA1} h\\_{E}^{(0)}} \\right)\\right) = \n\n\\text{ReLU}\\left({\\color{#4D9DB5} 1}\\left({\\color{#4D9DB5} 1}\\times \n\n{\\color{#FE6100} 6} + {\\color{#4D9DB5} 1} \\times {\\color{#A95AA1} \n\n3}\\right)\\right) = \\text{ReLU}\\left(9\\right) = 9. \\\\ e\\_{AA} &= \n\n\\text{ReLU}\\left({\\color{#4D9DB5} A\\_{W}^{(1)}} \\left({\\color{#4D9DB5} \n\nW^{(1)}}{\\color{#FE6100} h\\_{A}^{(0)}} + {\\color{#4D9DB5} \n\nW^{(1)}}{\\color{#FE6100} h\\_{A}^{(0)}} \\right)\\right) = \n\n\\text{ReLU}\\left({\\color{#4D9DB5} 1}\\left({\\color{#4D9DB5} 1}\\times \n\n{\\color{#FE6100} 6} + {\\color{#4D9DB5} 1} \\times {\\color{#FE6100} \n\n6}\\right)\\right) = \\text{ReLU}\\left(12\\right) = 12. \\\\\n\n \\end{aligned}\n\n We have omitted the superscripts on the attention weights for clarity.\n\n **Next Update (Iteration 1):** \n\n Equation for Node AAA:\n\n hA(1)=f(W(1)×RNN[hC(0),hE(0)]+B(1)×hA(0))=f(1×3+1×6)=f(3+6)=f(9)=ReLU(9)=9.\n\n \\begin{aligned}\n\n {\\color{#FE6100} h\\_{A}^{(1)}}\n\n &= {f} \\left(\n\n {\\color{#4D9DB5} W^{(1)}} \\times \n\n\\text{RNN}\\left[{\\color{#A95AA1} h\\_{C}^{(0)}} , {\\color{#A95AA1} \n\nh\\_{E}^{(0)}}\\right] + \n\n {\\color{#4D9DB5} B^{(1)}} \\times {\\color{#FE6100} \n\nh\\_{A}^{(0)}} \\right) \\\\ \n\n &= f \\left(\n\n {\\color{#4D9DB5} 1} \\times {\\color{#A95AA1} 3} + \n\n {\\color{#4D9DB5} 1} \\times {\\color{#FE6100} 6} \n\n\\right) \\\\\n\n &= f \\left(\n\n 3 + \n\n 6 \\right) \\\\\n\n &= f \\left(9 \\right) \\\\\n\n &= \\text{ReLU} \\left(9\\right)\n\n = 9.\n\n \\begin{aligned}\n\n \\qquad s\\_0 &= 0. \\\\\n\n s\\_{1}&= \\text{ReLU}\\left({\\color{#4D9DB5} U\\_s} s\\_{0} + \n\n{\\color{#4D9DB5} U\\_h} {\\color{#A95AA1} h\\_{C}^{(0)}}\\right) = \n\n\\text{ReLU}\\left(\n\n {\\color{#4D9DB5} 1}\n\n \\times 0 + \n\n {\\color{#4D9DB5} 1} \n\n \\times {\\color{#A95AA1} -10} \\right) = \n\n\\text{ReLU}\\left(-10\\right) = 0. 
\\\\ s\\_{2}&= \n\n\\text{ReLU}\\left({\\color{#4D9DB5} U\\_s} s\\_{1} + {\\color{#4D9DB5} U\\_h} \n\n{\\color{#A95AA1} h\\_{E}^{(0)}}\\right) = \\text{ReLU}\\left(\n\n {\\color{#4D9DB5} 1}\n\n \\times 0 + \n\n {\\color{#4D9DB5} 1} \n\n \\times {\\color{#A95AA1} 3} \\right) = \\text{ReLU}\\left(3\\right) = \n\n3.\n\n \\end{aligned}\n\n Concisely, the initial RNN\\text{RNN}RNN state is s0=0s\\_0 = 0s0​=0, and \n\n the RNN\\text{RNN}RNN state update equation is:\n\n st=ReLU(Usst−1+Uhhut(0))=ReLU(1st−1+1hut(0)).\n\n \\begin{aligned}\n\n s\\_{t} &= \\text{ReLU}\\left({\\color{#4D9DB5} U\\_s} s\\_{t - 1} + \n\n{\\color{#4D9DB5} U\\_h} h\\_{u\\_t}^{(0)}\\right) \\\\ &= \n\n\\text{ReLU}\\left({\\color{#4D9DB5} 1} s\\_{t - 1} + {\\color{#4D9DB5} 1} \n\n h\\_{u\\_t}^{(0)}\\right).\n\n \\end{aligned}\n\n st​​=ReLU(Us​st−1​+Uh​hut​(0)​)=ReLU(1st−1​+1hut​(0)​).​\n\n RNN\\text{RNN}RNN to learn the same final aggregated value.\n\n This tries to make the RNN\\text{RNN}RNN node-order invariant.\n\n **Next Update (Iteration 1):** \n\n Equation for Node AAA:\n\n \\begin{aligned}\n\n {\\color{#FE6100} h\\_{A}^{(1)}}\n\n &= {f} \\left(\n\n \\left({\\color{#A95AA1} h\\_{C}^{(0)}} + {\\color{#A95AA1} h\\_{E}^{(0)}}\\right) +\n\n &= f \\left(\n\n \\left({\\color{#A95AA1} -10} + {\\color{#A95AA1} 3}\\right) + \n\n \\left(1 + {\\color{#4D9DB5} 0}\\right)\n\n \\times {\\color{#FE6100} 6} \\right) \\\\\n\n &= f \\left(\n\n \\left(-7\\right) + \n\n 6 \\right) \\\\\n\n &= f \\left(-1 \\right) \\\\\n\n &= \\text{ReLU} \\left(-1\\right)\n\n = 0.\n\n div.eqn {\n\n margin-left: 2%; \n\n }\n\n #gnn-models-viz ul {\n\n display: grid;\n\n grid-template-columns: 1fr 1fr 1fr 1fr;\n\n grid-column-gap: 4px;\n\n text-align: center;\n\n padding: 0;\n\n justify-content: center;\n\n align-content: center;\n\n list-style: none;\n\n max-width: 100%;\n\n }\n\n #gnn-models-viz ul li {\n\n margin: 0;\n\n background: #f3f3f3;\n\n border-bottom: 3px solid #e3e3e3;\n\n cursor: pointer;\n\n line-height: 2em;\n\n border-top-left-radius: 3px;\n\n border-top-right-radius: 3px;\n\n }\n\n #gnn-models-viz ul li:hover {\n\n border-bottom-color: #D9D9D9;\n\n }\n\n #gnn-models-viz ul li.selected {\n\n background: #D9D9D9;\n\n border-bottom-color: #D9D9D9;\n\n }\n\n #gnn-models-viz {\n\n }\n\n \n\n /\\* Nicer sliders. \\*/\n\n input[type=\"range\"] {\n\n -webkit-appearance: none;\n\n grid-column: middle;\n\n height: 5px;\n\n border-radius: 1px; \n\n background: #d3d3d3;\n\n outline: none;\n\n opacity: 0.7;\n\n -webkit-transition: .2s;\n\n transition: opacity .2s;\n\n }\n\n input[type=\"range\"]::-webkit-slider-thumb {\n\n -webkit-appearance: none;\n\n appearance: none;\n\n width: 16px;\n\n height: 16px;\n\n border-radius: 50%; \n\n background: #575245;\n\n cursor: pointer;\n\n }\n\n input[type=\"range\"]::-moz-range-thumb {\n\n width: 16px;\n\n height: 16px;\n\n border-radius: 50%;\n\n background: #f5f2eb;\n\n cursor: pointer;\n\n }\n\n output[name=\"output\"] {\n\n width: 1em;\n\n display: inline-block;\n\n }\n\n \n\n import {Runtime, Inspector} from \"./observablehq-base/runtime.js\";\n\n import define from \"./notebooks/interactive-gnn-visualizations.js\";\n\n setTimeout(() => {\n\n new Runtime().module(define, name => {\n\n });\n\n }, 200);\n\n \n\n Choose a GNN model using the tabs at the top. Click on a node \n\nto see the update equation at that node for the next iteration.\n\n Use the sliders on the left to change the weights for the \n\ncurrent iteration, and watch how the update equation changes. 
This ideology is followed by many popular Graph Neural Network libraries.

From Local to Global Convolutions
-----------------------------------

The methods we've seen so far perform 'local' convolutions: every node's update is a function of its immediate neighbourhood. While performing enough steps of message-passing will eventually ensure that information from all nodes in the graph is passed, one may wonder if there are more direct ways to perform 'global' convolutions.

The answer is yes: such 'spectral' convolutions were in fact studied in the context of neural networks much before any of the GNN models we looked at above.

### Spectral Convolutions

As before, we will focus on the case where nodes have one-dimensional features, collected into a 'feature vector' $x \in \mathbb{R}^n$.

**Key Idea:** Given a feature vector $x$, the Laplacian $L$ allows us to quantify how smooth $x$ is, with respect to $G$.

How? If we look at the following quantity involving $L$:

$$R_L(x) = \frac{x^T L x}{x^T x} = \frac{\sum_{(i, j) \in E} (x_i - x_j)^2}{\sum_i x_i^2},$$

we immediately see that feature vectors $x$ that assign similar values to adjacent nodes in $G$ (that is, smooth feature vectors) make $R_L(x)$ small.

$L$ is a real, symmetric matrix, so it has $n$ real eigenvalues $\lambda_1 \le \ldots \le \lambda_n$ with a corresponding orthonormal set of eigenvectors $u_1, \ldots, u_n$. (An eigenvalue $\lambda$ of a matrix $A$ is a value satisfying $Au = \lambda u$ for some non-zero eigenvector $u$; for a friendly introduction to eigenvectors, please see [this tutorial](http://www.sosmath.com/matrix/eigen0/eigen0.html).) Orthonormality means:

$$
u_{k_1}^T u_{k_2} =
\begin{cases}
1 \quad \text{ if } k_1 = k_2, \\
0 \quad \text{ if } k_1 \neq k_2.
\end{cases}
$$

These eigenvectors successively minimize the smoothness quotient $R_L$:

$$
\underset{x, \ x \perp \{u_1, \ldots, u_{i - 1}\}}{\text{argmin}}\ R_L(x) = u_i,
\qquad
\min_{x, \ x \perp \{u_1, \ldots, u_{i - 1}\}} R_L(x) = \lambda_i.
$$

The set of eigenvalues of $L$ are called its 'spectrum', hence the name! We denote the 'spectral' decomposition of $L$ as:

$$L = U \Lambda U^T,$$

where $\Lambda$ is the diagonal matrix of sorted eigenvalues, and $U$ is the matrix of the corresponding eigenvectors:

$$
\Lambda = \begin{bmatrix}
\lambda_{1} & & \\
& \ddots & \\
& & \lambda_{n}
\end{bmatrix}
\qquad
U = \begin{bmatrix} u_1 \ \cdots \ u_n \end{bmatrix}.
$$

As these $n$ eigenvectors form a basis for $\mathbb{R}^n$, any feature vector $x$ can be written as

$$x = \sum_{i = 1}^{n} \hat{x}_i u_i = U \hat{x}.$$

We call $\hat{x}$ the spectral representation of the feature vector $x$. The orthonormality condition allows us to state:

$$x = U\hat{x} \quad \Longleftrightarrow \quad U^T x = \hat{x}.$$

This pair of equations allows us to interconvert between the 'natural' representation $x$ and the 'spectral' representation $\hat{x}$ for any vector $x \in \mathbb{R}^n$.
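A short numpy sketch of these facts (the random graph and the seed below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# A small random undirected graph, used only for illustration.
n = 8
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T
L = np.diag(A.sum(1)) - A

# Spectral decomposition L = U diag(lam) U^T, eigenvalues sorted ascending.
lam, U = np.linalg.eigh(L)

def rayleigh(L, x):
    return (x @ L @ x) / (x @ x)

x = rng.normal(size=n)
x_hat = U.T @ x                      # spectral representation
assert np.allclose(U @ x_hat, x)     # interconversion: x = U x_hat
print(rayleigh(L, U[:, 0]), lam[0])  # the smoothest eigenvector attains lambda_1
print(rayleigh(L, U[:, -1]), lam[-1])
```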
### Spectral Representations of Natural Images

We can view an image as a grid graph, where each pixel is a node, connected by edges to adjacent pixels. Each pixel then carries a 3-dimensional feature vector, indicating the values for the red, green and blue (RGB) channels.

To shed some light on what the spectral representation actually encodes, we perform the following steps over each channel independently:

* We first collect all pixel values across a channel into a feature vector $x$.
* Then, we obtain its spectral representation $\hat{x}$:
$$\hat{x} = U^T x$$
* We truncate this to the first $m$ components to get $\hat{x}_m$:
$$\hat{x}_m = \text{Truncate}_m(\hat{x})$$
* We then convert this truncated representation back to the natural basis:
$$x_m = U \hat{x}_m$$

Finally, we stack the resulting channels back together to get back an image. We can now see how the resulting image changes with choices of $m$. When $m = n$, the result is identical to the original image, as we can reconstruct each channel exactly.

[Interactive figure: choose one of four sample images (chicken, fish, frog, spider); the original image $x$ is shown next to the transformed image $x_m$ obtained by keeping the first $m$ spectral components, with a slider for $m$ (default 200).]
Use the radio buttons at the top to choose one of the four sample images. Each of these images has been taken from the ImageNet dataset and downsampled to 50 pixels wide and 40 pixels tall. The images get progressively blurrier as the number of components decreases.

As $m$ decreases, we see that the output image $x_m$ gets blurrier, but we do not need to keep all $n$ components to retain most of the visually relevant information. We can relate this to the Fourier decomposition of images: the earlier eigenvectors capture the smooth, low-frequency structure, much like the lower Fourier components do.

To complement the visualization above, we take a grid with $64$ pixels, change the coefficients of the first $10$ out of $64$ eigenvectors in the spectral representation, and see how the resulting image changes:

[Interactive figure: sliders set the first ten spectral coefficients $\hat{x} = [\hat{x}_1 \ \cdots \ \hat{x}_{10}]$ of a signal $x \in \mathbb{R}^{64}$; the resulting image and its smoothness $R_L(x)$ (e.g. $1.77$ for the default coefficients) are shown alongside.]
Move the sliders to change the spectral representation $\hat{x}$ (right), and see how the image $x$ itself changes (left). Note how the first eigenvectors are much 'smoother' than the later ones, and how many patterns we can make with only $10$ eigenvectors.
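A minimal sketch of this truncation experiment for a single channel, using the Laplacian of a small grid graph (the grid size and the test signal are assumptions for illustration):

```python
import numpy as np

def grid_laplacian(rows, cols):
    """Unnormalized Laplacian of a rows x cols grid graph (4-neighbourhood)."""
    n = rows * cols
    A = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            if c + 1 < cols: A[i, i + 1] = A[i + 1, i] = 1
            if r + 1 < rows: A[i, i + cols] = A[i + cols, i] = 1
    return np.diag(A.sum(1)) - A

rows, cols = 8, 8
L = grid_laplacian(rows, cols)
lam, U = np.linalg.eigh(L)           # eigenvectors sorted by increasing eigenvalue

x = np.linspace(0, 1, rows * cols)   # a smooth stand-in for one image channel

def truncate_spectral(x, U, m):
    x_hat = U.T @ x                  # spectral representation
    x_hat[m:] = 0.0                  # keep only the first m components
    return U @ x_hat                 # back to the natural representation

x_20 = truncate_spectral(x, U, m=20)
print(np.linalg.norm(x - x_20))      # small: most energy sits in the low end
```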
The first eigenvectors are the smoothest, and the smoothness correspondingly decreases as we consider later eigenvectors. For any image $x$, we can therefore think of the initial entries of the spectral representation $\hat{x}$ as capturing the global, smooth structure of the image, and the later entries as capturing finer detail.

### Embedding Computation

We now have the background to understand spectral convolutions and how they compute embeddings over graphs.

As before, the model we describe below has $K$ layers: each layer $k$ has learnable parameters $\hat{w}^{(k)}$, called the 'filter weights'. These filter weights act on the spectral representation of the node features, computed with the first $m$ eigenvectors of $L$; we write $U_m$ for the $n \times m$ matrix formed by these eigenvectors. We had shown in the previous section that we can take $m \ll n$ and still not lose out on significant amounts of information.

This has a different flavour than direct convolution in the natural domain: because the leading eigenvectors are smooth over the graph, using spectral representations automatically enforces an inductive bias for neighbouring nodes to get similar representations.

Assuming one-dimensional node features for now, the output of each layer is a vector of node representations $h^{(k)}$, where each node's representation corresponds to a row of the vector:

$$
h^{(k)} = \begin{bmatrix} h_1^{(k)} \\ \vdots \\ h_n^{(k)} \end{bmatrix}
$$

for each $k = 0, 1, 2, \ldots$ up to $K$. We fix an ordering of the eigenvectors of $L$ by increasing eigenvalue, allowing us to compute $U_m$.

Start with the original features:

$$h^{(0)} = x$$

Then iterate, for $k = 1, 2, \ldots$ up to $K$:
$$
\begin{aligned}
\hat{h}^{(k - 1)} &= U_m^T\, h^{(k - 1)} \\
\hat{g}^{(k)} &= \hat{w}^{(k)} \odot \hat{h}^{(k - 1)} \\
g^{(k)} &= U_m\, \hat{g}^{(k)} \\
h^{(k)} &= \sigma\left(g^{(k)}\right)
\end{aligned}
$$

where $\odot$ represents element-wise multiplication. In words: convert to the spectral representation, filter by the learned weights, convert back to the natural representation, and apply a non-linearity; see the cited reference for details. (In the original interactive figure, computed node embeddings and learnable parameters are distinguished by color.)
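A compact numpy sketch of one such spectral layer (the choice of $m$, nonlinearity and weights below are illustrative assumptions):

```python
import numpy as np

def spectral_layer(h, U_m, w_hat, sigma=np.tanh):
    """One spectral convolution layer on 1-D node features.

    h:     (n,) node features in the natural representation
    U_m:   (n, m) first m Laplacian eigenvectors
    w_hat: (m,) learnable filter weights
    """
    h_hat = U_m.T @ h            # to the spectral representation
    g_hat = w_hat * h_hat        # element-wise filtering
    g = U_m @ g_hat              # back to the natural representation
    return sigma(g)

# Example, reusing L and x from the grid-graph sketch above:
# lam, U = np.linalg.eigh(L); U_m = U[:, :16]
# h1 = spectral_layer(x, U_m, w_hat=np.ones(16))
```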
### Spectral Convolutions are Node-Order Equivariant

We can show that spectral convolutions are node-order equivariant using an argument similar to the one for Laplacian polynomial filters.

**Details for the Interested Reader**

As in our proof before, let's fix an arbitrary node-order. Then, any other node-order can be represented by a permutation of this original node-order. We can associate this permutation with its permutation matrix $P$. Under this new node-order, the quantities below transform in the following way:

$$
\begin{aligned}
x &\to Px \\
A &\to PAP^T \\
L &\to PLP^T \\
U_m &\to PU_m
\end{aligned}
$$

which implies that, in the embedding computation:

$$
\begin{aligned}
\hat{x} &\to \left(PU_m\right)^T (Px) = U_m^T x = \hat{x} \\
\hat{w} &\to \left(PU_m\right)^T (Pw) = U_m^T w = \hat{w} \\
\hat{g} &\to \hat{g} \\
g &\to (PU_m)\hat{g} = P(U_m\hat{g}) = Pg
\end{aligned}
$$

Hence, as $\sigma$ is applied elementwise:

$$f(Px) = \sigma(Pg) = P \sigma(g) = P f(x)$$

as required. Note in particular that the spectral quantities themselves are unchanged by permutations of the nodes: formally, they are what we would call node-order invariant.

The theory of spectral convolutions is mathematically well-grounded; however, there are some key disadvantages that we must talk about:

* Computing the spectral representations is costly, because of the repeated multiplications with $U_m$ and $U_m^T$.
* The learned filters are specific to the input graphs, as they are represented in terms of the spectral decomposition of the input graph Laplacian $L$. This means they do not transfer well to new graphs which have significantly different structure (and hence, significantly different eigenvalues).

While spectral convolutions have largely been superseded by 'local' convolutions for the reasons discussed above, there is still much merit to understanding the ideas behind them. Indeed, a recently proposed GNN model called Directional Graph Networks actually uses the Laplacian eigenvectors and their mathematical properties extensively.

### Global Propagation via Graph Embeddings

A simpler way to incorporate graph-level information is to compute embeddings of the entire graph by pooling node (and possibly edge) embeddings, and then using the graph embedding to update node embeddings, following an iterative scheme similar to what we have looked at here. This is an approach used by Graph Networks. We will briefly discuss how graph-level embeddings can be constructed in [Pooling](#pooling). However, such approaches tend to ignore the underlying topology of the graph that spectral convolutions can capture.

Learning GNN Parameters
-------------------------

All of the embedding computations described here are differentiable, so the GNN parameters can be learned once a suitable loss function $\mathcal{L}$ is defined, depending on the task:

* **Node Classification**: By minimizing a standard classification loss at each node, such as categorical cross-entropy when multiple classes are present:

$$\mathcal{L}(y_v, \hat{y}_v) = -\sum_{c} y_{vc} \log \hat{y}_{vc}.$$

The loss over the graph averages these per-node losses:

$$\mathcal{L}_G = \frac{\sum_{v \in \text{Lab}(G)} \mathcal{L}(y_v, \hat{y}_v)}{|\text{Lab}(G)|}$$

where we only compute losses over labelled nodes $\text{Lab}(G)$.

* **Graph Classification**: By aggregating node representations, one can construct a vector representation of
the entire graph. See [Pooling](#pooling) for how representations of graphs can be constructed.

* **Link Prediction**: By sampling pairs of adjacent and non-adjacent nodes, and using them in a binary classification loss:

$$
\begin{aligned}
\mathcal{L}(y_v, y_u, e_{vu}) &= -e_{vu} \log(p_{vu}) - (1 - e_{vu}) \log(1 - p_{vu}) \\
p_{vu} &= \sigma(y_v^T y_u)
\end{aligned}
$$

where $e_{vu}$ indicates whether the edge $(v, u)$ is actually present.

* **Node Clustering**: By simply clustering the learned node representations.

The broad success of pre-training for natural language processing models such as ELMo and BERT has sparked interest in similar techniques for GNNs. The key idea in each of these papers is to train GNNs to predict local (e.g. node degrees, clustering coefficient, masked node attributes) and/or global graph properties.

Another self-supervised option is to enforce that neighbouring nodes get similar embeddings, mimicking random-walk approaches such as node2vec and DeepWalk:

$$\mathcal{L}_G = \sum_{v} \sum_{u \in N_R(v)} \log \frac{\exp\left(z_v^T z_u\right)}{\sum_{u'} \exp\left(z_{u'}^T z_u\right)}.$$

When the sum in the denominator over all nodes is too expensive to compute, techniques such as Noise Contrastive Estimation are especially useful.

Conclusion and Further Reading
--------------------------------

While we have looked at many techniques and ideas in this article, the field of Graph Neural Networks is extremely vast. We have necessarily been selective, while still communicating the key ideas and design principles behind GNNs. We recommend the interested reader take a look at the cited surveys for a more comprehensive overview.

### GNNs in Practice

Making the node-wise updates above efficient takes some care, but we can still represent many GNN update equations using matrix operations. For example, the GCN variant discussed here can be represented as:

$$h^{(k)} = D^{-1} A \cdot h^{(k - 1)} {W^{(k)}}^T + h^{(k - 1)} {B^{(k)}}^T.$$

Written this way, the update can take advantage of hardware optimized for matrix operations, such as GPUs.

Regularization techniques for standard neural networks, such as Dropout, can be applied in a straightforward manner to the parameters (for example, zero out entire rows of $W^{(k)}$ above). However, there are also graph-specific techniques such as DropEdge, which removes entire edges at random from the graph, that boost the performance of many GNN models.

### Different Kinds of Graphs

We have focused on simple undirected graphs here. However, there are some simple variants of spatial convolutions for other kinds of graphs; for example:

* Temporal graphs: Aggregate across previous and/or future node features.

See the cited references for more discussion.

### Pooling

This article mainly discusses how GNNs compute useful representations of nodes. But what if we wanted to compute representations of graphs for graph-level tasks (for example, predicting the toxicity of a molecule)?

A simple solution is to aggregate the final node embeddings and feed the result to a prediction function:

$$h_G = \text{PREDICT}_G\left(\underset{v \in G}{\text{AGG}}\left(\{h_v\}\right)\right)$$

More powerful, learnable pooling techniques have also been proposed:

* SortPool: Sort vertices of the graph to get a fixed-size node-order invariant representation of the graph, and then apply any standard neural network architecture.
* DiffPool: Learn to cluster vertices, build a coarser graph over clusters instead of nodes, then apply a GNN over the coarser graph. Repeat until only one cluster is left.
* SAGPool: Apply a GNN to learn node scores, then keep only the nodes with the top scores, throwing away the rest.
Repeat until only one node is left.\n\n Supplementary Material\n\n------------------------\n\n### \n\n Reproducing Experiments\n\n The experiments from\n\n can be reproduced using the following\n\n \n\n### \n\n Recreating Visualizations\n\n To aid in the creation of future interactive articles,\n\n we have created ObservableHQ\n\n notebooks for each of the interactive visualizations here:\n\n \n\n which pulls together the following standalone notebooks:\n\n", "bibliography_bib": [{"title": "A Gentle Introduction to Graph Neural Networks"}, {"title": "Graph Kernels"}, {"title": "Node2vec: Scalable Feature Learning for Networks"}, {"title": "DeepWalk: Online Learning of Social Representations"}, {"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints"}, {"title": "Neural Message Passing for Quantum Chemistry"}, {"title": "Learning Convolutional Neural Networks for Graphs"}, {"title": "A Tutorial on Spectral Clustering"}, {"title": "Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering"}, {"title": "Wavelets on Graphs via Spectral Graph Theory"}, {"title": "Chebyshev Polynomials"}, {"title": "Semi-Supervised Classification with Graph Convolutional Networks"}, {"title": "Graph Attention Networks"}, {"title": "Inductive Representation Learning on Large Graphs"}, {"title": "How Powerful are Graph Neural Networks?"}, {"title": "Relational inductive biases, deep learning, and graph networks"}, {"title": "Spectral Networks and Locally Connected Networks on Graphs"}, {"title": "ImageNet: A Large-Scale Hierarchical Image Database"}, {"title": "On the Transferability of Spectral Graph Filters"}, {"title": "Directional Graph Networks"}, {"title": "Deep contextualized word representations"}, {"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"}, {"title": "Strategies for Pre-training Graph Neural Networks"}, {"title": "Multi-Stage Self-Supervised Learning for Graph Convolutional Networks on Graphs with Few Labeled Nodes"}, {"title": "When Does Self-Supervision Help Graph Convolutional Networks?"}, {"title": "Self-supervised Learning on Graphs: Deep Insights and New Direction"}, {"title": "Noise-Contrastive Estimation of Unnormalized Statistical Models, with Applications to Natural Image Statistics"}, {"title": "Learning word embeddings efficiently with noise-contrastive estimation"}, {"title": "A Comprehensive Survey on Graph Neural Networks"}, {"title": "Graph Neural Networks: A Review of Methods and Applications"}, {"title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting"}, {"title": "DropEdge: Towards Deep Graph Convolutional Networks on Node Classification"}, {"title": "An End-to-End Deep Learning Architecture for Graph Classification"}, {"title": "Hierarchical Graph Representation Learning with Differentiable Pooling"}, {"title": "Self-Attention Graph Pooling"}], "filename": "Understanding Convolutions on Graphs.html", "id": "96a02623ec7e1ca8d936844ccccc209f"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Differentiable Image Parameterizations", "authors": ["Alexander Mordvintsev", "Nicola Pezzotti", "Ludwig Schubert", "Chris Olah"], "date_published": "2018-07-25", "abstract": " Neural networks trained to classify images have a remarkable — and surprising! — capacity to generate images. 
Techniques such as DeepDream , style transfer, and feature visualization leverage this capacity as a powerful tool for exploring the inner workings of neural networks, and to fuel a small artistic movement based on neural art. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00012", "text": "\n\n Techniques such as DeepDream , style transfer, and feature visualization\n\n leverage this capacity as a powerful tool for exploring the inner \n\nworkings of neural networks, and to fuel a small artistic movement based\n\n on neural art.\n\n \n\n All these techniques work in roughly the same way.\n\n Neural networks used in computer vision have a rich internal \n\nrepresentation of the images they look at.\n\n We can use this representation to describe the properties we want an\n\n image to have (e.g. style), and then optimize the input image to have \n\nthose properties.\n\n This kind of optimization is possible because the networks are \n\ndifferentiable with respect to their inputs: we can slightly tweak the \n\nimage to better fit the desired properties, and then iteratively apply \n\nsuch tweaks in gradient descent.\n\n \n\n Typically, we parameterize the input image as the RGB values of each\n\n pixel, but that isn’t the only way.\n\n As long as the mapping from parameters to images is differentiable, \n\nwe can still optimize alternative parameterizations with gradient \n\ndescent.\n\n \n\n[1](#figure-differentiable-parameterizations):\n\n As long as an \n\nimage para­meter­ization\n\nis differ­entiable, we can back­propagate\n\n( )\n\nthrough it.\n\nMappingParametersimage/RGB spaceLossFunction\n\n Differentiable image parameterizations invite us to ask “what kind \n\nof image generation process can we backpropagate through?”\n\n The answer is quite a lot, and some of the more exotic possibilities\n\n can create a wide range of interesting effects, including 3D neural \n\nart, images with transparency, and aligned interpolation.\n\n Previous work using specific unusual image parameterizations \n\n has shown exciting results — we think that zooming out and looking at \n\nthis area as a whole suggests there’s even more potential.\n\n \n\n### Why Does Parameterization Matter?\n\n It may seem surprising that changing the parameterization of an \n\noptimization problem can significantly change the result, despite the \n\nobjective function that is actually being optimized remaining the same.\n\n We see four reasons why the choice of parameterization can have a \n\nsignificant effect:\n\n**(1) - Improved Optimization** -\n\nTransforming the input to make an optimization problem easier — a \n\ntechnique called “preconditioning” — is a staple of optimization.\n\n \n\n Preconditioning is most often presented as a transformation of the gradient\n\n (usually multiplying it by a positive definite “preconditioner” matrix).\n\n \n\n**(2) - Basins of Attraction** -\n\nWhen we optimize the input to a neural network, there are often many \n\ndifferent solutions, corresponding to different local minima.\n\n \n\n \n\nThe probability of our optimization process falling into any particular \n\nlocal minima is controlled by its basin of attraction (i.e., the region \n\nof the optimization landscape under the influence of the minimum).\n\nChanging the parameterization of an optimization problem is known to \n\nchange the sizes of different basins of attraction, influencing the \n\nlikely result.\n\n**(3) - Additional Constraints** -\n\nSome parameterizations cover only a subset of 
possible inputs, rather than the entire space. An optimizer working in such a parameterization will still find solutions that minimize or maximize the objective function, but they'll be subject to the constraints of the parameterization. By picking the right parameterization, one can impose a variety of constraints, ranging from simple ones (e.g., the boundary of the image must be black) to rich, subtle ones.

**(4) - Implicitly Optimizing other Objects** - A parameterization may internally use a different kind of object than the one it outputs and we optimize for. For example, while the natural input to a vision network is an RGB image, we can parameterize that image as a rendering of a 3D object and, by backpropagating through the rendering process, optimize that instead. Because the 3D object has more degrees of freedom than the image, we generally optimize a family of images (for example, renderings from different viewpoints) rather than a single image.

In the rest of the article we give concrete examples where such approaches are beneficial and lead to surprising and interesting visual results.

---

[1](#section-aligned-interpolation) [Aligned Feature Visualization Interpolation](#section-aligned-interpolation)
-----------------------------------------------------------------------------

Feature visualization is most often used to visualize individual neurons. When we want to really understand the interaction between two neurons, we can go a step further and create multiple visualizations, interpolating between their objectives. Despite this, there is a small challenge: feature visualization is stochastic. If we make the frames naively, the resulting visualizations will be *unaligned*: visual landmarks such as eyes appear in different locations in each image. The differences introduced by the interpolation then become hard to see, because they're swamped by the much larger differences in layout.

[Figure 2: Unaligned Interpolation - visual landmarks, such as eyes, change position from one frame to the next. Aligned Interpolation - frames are easier to compare because visual landmarks stay in place.]

There are a number of possible approaches one could try to achieve alignment. For example, one could explicitly penalize differences between adjacent frames. Our final result and our colab notebook use this technique in combination with a shared parameterization.

[Figure 3: We start with independently parameterized frames. Each frame is then combined with a single, shared parameterization to create a visually aligned neuron interpolation.]

By partially sharing a parameterization between frames, we encourage the resulting visualizations to naturally align. Intuitively, the shared parameterization provides a common reference for the displacement of visual landmarks, while the unique one gives each frame its own visual appeal based on its interpolation weights.

To see why this helps, consider a toy problem: let's randomly initialize $x$ and $y$, and then optimize them. Normally, the optimization problems are independent, so $x$ and $y$ are equally likely to come to unaligned solutions (where they have different signs) as aligned ones. But if we add a shared parameterization, the problems become coupled and the basin of attraction where they're aligned becomes bigger.
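A tiny numpy sketch of the shared-parameterization idea on this toy problem (the objective, step size and seed are illustrative assumptions): each frame's value is the sum of a unique parameter and a shared one, so gradient steps on either frame also move the shared component.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy objective: each "frame" wants its value to have large magnitude.
grad_obj = lambda v: np.sign(v)

unique = rng.normal(size=2)   # one unique parameter per frame
shared = rng.normal()         # single parameter shared by both frames
lr = 0.1

for _ in range(100):
    frames = unique + shared       # each frame = unique part + shared part
    g = grad_obj(frames)           # per-frame gradient of the objective
    unique += lr * g               # gradient ascent on the unique parameters
    shared += lr * g.sum()         # the shared parameter receives both gradients
print(unique + shared)  # the coupling makes same-sign (aligned) solutions likelier
```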
[Figure: a shared parameterization enlarges the basin of attraction in which the two frames end up aligned.]

This is an initial example of how differentiable parameterizations in general can be a useful additional tool in visualizing neural networks.

---

[2](#section-styletransfer) [Style Transfer with non-VGG architectures](#section-styletransfer)
-------------------------------------------------------------------

Neural style transfer has a mystery: despite its success, almost all style transfer is done with variants of the VGG architecture. This isn't because no one is interested in doing style transfer on other architectures, but because attempts to do it on other architectures consistently work poorly.

Several hypotheses have been proposed to explain why VGG works so much better than other models. One suggested explanation is that VGG's large size causes it to capture information that other models discard. This extra information, the hypothesis goes, isn't helpful for classification, but it does cause the model to work better for style transfer. An alternate hypothesis is that other models downsample more aggressively than VGG, losing spatial information. We suspect that there may be another factor: most modern vision models have input gradients that are dominated by high-frequency noise, which makes direct optimization in pixel space difficult.

Optimizing in a decorrelated parameterization has previously been found to help with feature visualization. We find the same approach also improves style transfer, allowing us to use a model that did not otherwise produce visually appealing style transfer results:

[Figure 4: Content Image, Style Image, and the final image optimized either in pixel space or in a decorrelated space.]
Move the slider under "final image optimization" to compare optimization in pixel space with optimization in a decorrelated space. Both images were created with the same objective and differ only in their parameterization.

Let's consider this change in a bit more detail. Style transfer involves three images: a content image, a style image, and the image we optimize. All three of these feed into the CNN, and the style transfer objective is based on the differences in how these images activate the CNN. The only change we make is how we parameterize the optimized image. Instead of parameterizing it in terms of pixels (which are highly correlated with their neighbors), we use a scaled Fourier transform.
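A rough numpy sketch of such a Fourier-space parameterization (the scaling scheme here is a simplified assumption, not the article's exact recipe): the optimized parameters live in frequency space and are mapped to pixels through a fixed, differentiable inverse FFT.

```python
import numpy as np

def fourier_image(spectrum_params, shape, decay=1.0):
    """Map parameters in 2-D frequency space to a single-channel image.

    spectrum_params: complex array of shape `shape`; these are the values
    that would be optimized instead of raw pixels.
    """
    h, w = shape
    # Scale each frequency roughly by 1/frequency so low frequencies dominate,
    # a simplified stand-in for the article's 'scaled' Fourier transform.
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    scale = 1.0 / np.maximum(np.sqrt(fx ** 2 + fy ** 2), 1.0 / max(h, w)) ** decay
    return np.real(np.fft.ifft2(spectrum_params * scale))

params = np.random.randn(64, 64) + 1j * np.random.randn(64, 64)
img = fourier_image(params, (64, 64))
print(img.shape)   # (64, 64); an RGB image would use three such planes
```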
[Figure 5: diagram of the style transfer setup, with the optimized image parameterized in a decorrelated (scaled Fourier) space.]

---

[3](#section-xy2rgb) [Compositional Pattern Producing Networks](#section-xy2rgb)
-----------------------------------------------------------

So far, we've explored image parameterizations that are relatively close to how we normally think of images, using pixels or Fourier components. In this section we consider a much less conventional parameterization: Compositional Pattern Producing Networks (CPPNs).

CPPNs are neural networks that map $(x,y)$ positions to image colors:

$$(x,y) ~\xrightarrow{\tiny CPPN}~ (r,g,b)$$

By applying the CPPN to a grid of positions, one can make arbitrary resolution images. The parameters of the CPPN network (the weights and biases) determine what image is produced. Depending on the architecture chosen for the CPPN, pixels in the resulting image are constrained to share, up to a certain degree, the color of their neighbors.

Often the CPPN parameters are found by evolution; here we explore the possibility of backpropagating some objective function, such as a feature visualization objective. This is easily done: the CPPN network is differentiable, just like the convolutional neural network, so the objective function can also be propagated through the CPPN to update its parameters accordingly. That is to say, CPPNs are a differentiable image parameterization, a general tool for parameterizing images in any neural art or visualization task.

[Figure 6: a CPPN maps pixel coordinates to RGB values; the resulting image is fed to the CNN, and gradients flow back through the image and the CPPN to its weights.]
CPPNs are a differentiable image parameterization. We can use them for neural art or visualization tasks by backpropagating past the image, through the CPPN to its parameters.

Using CPPNs as an image parameterization can add an interesting artistic quality to the resulting visualizations, somewhat reminiscent of light-painting, an artistic medium where images are created by manipulating colorful light beams with prisms and mirrors. Notable examples of this technique are the [work of Stephen Knapp](http://www.lightpaintings.com/). (Note that the light-painting metaphor here is rather fragile: for example, light composition is an additive process, while CPPNs can have negative-weighted connections between layers.)

[Figure 7: CPPN-parameterized feature visualizations.]
The visual quality of the generated images is heavily influenced by the architecture of the chosen CPPN. Not only the shape of the network, i.e., the number of layers and filters, plays a role, but also the chosen activation functions and normalization. For example, deeper networks produce more fine-grained details compared to shallow ones. We encourage readers to experiment with generating different images by changing the architecture of the CPPN; this can easily be done by changing the code in the supplementary notebook.
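A minimal numpy sketch of a CPPN as an image parameterization (the architecture below, i.e. the layer widths and activations, is an illustrative assumption, not the article's exact network):

```python
import numpy as np

def init_cppn(layer_sizes, seed=0):
    """Random weights and biases for a small fully connected CPPN."""
    rng = np.random.default_rng(seed)
    return [(rng.normal(0, 1, (m, n)), np.zeros(n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def cppn_image(params, height, width):
    """Render an image by evaluating the CPPN at every (x, y) coordinate."""
    ys, xs = np.mgrid[-1:1:height * 1j, -1:1:width * 1j]
    h = np.stack([xs.ravel(), ys.ravel()], axis=1)       # (H*W, 2) coordinates
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        h = np.tanh(h) if i < len(params) - 1 else 1 / (1 + np.exp(-h))
    return h.reshape(height, width, -1)                  # RGB values in [0, 1]

params = init_cppn([2, 16, 16, 3])
img = cppn_image(params, 128, 128)   # arbitrary resolution: just change H and W
print(img.shape)                     # (128, 128, 3)
```

To actually optimize such a parameterization, gradients of an image objective would be backpropagated through this coordinate-to-color mapping into `params`, for example with an automatic differentiation framework.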
Such experimentation can easily be done by changing the code in the supplementary notebook.\n\n The evolution of the patterns generated by the CPPN is an artistic artifact in itself. To maintain the light-painting metaphor, the optimization process corresponds to iterative adjustments of the beam directions and shapes. Because the iterative changes have a more global effect than, for example, those of a pixel parameterization, only the major patterns are visible at the beginning of the optimization. By iteratively adjusting the weights, our imaginary beams are positioned in such a way that fine details emerge.\n\n[8](#figure-xy2rgb-training): Videos of the patterns evolving during optimization.\n\n By playing with this metaphor, we can also create a new kind of animation that morphs one of the above images into a different one. Intuitively, we start from one of the light-paintings and move the beams to create a different one. This is achieved by interpolating the weights of the CPPN representations of the two patterns; a number of intermediate frames are then generated by rendering an image from each interpolated CPPN representation. As before, changes in the parameters have a global effect and create visually appealing intermediate frames.\n\n[9](#figure-xy2rgb-interpolation): Interpolating CPPN weights between two learned points.\n\n In this section we presented a parameterization that goes beyond a standard image representation. Neural networks, a CPPN in this case, can be used to parameterize an image that is optimized for a given objective function. More specifically, we combined a feature-visualization objective function with a CPPN parameterization to create infinite-resolution images of distinctive visual style.\n\n---\n\n[4](#section-rgba)\n\n[Generation of Semi-Transparent Patterns](#section-rgba)\n\n--------------------------------------------------------\n\n This section requires one more trick: optimizing a family of images instead of a single image, and then sampling one or a few images from that family at each optimization step. This is important because many of the objects we'll explore optimizing have more degrees of freedom than the images going into the network.\n\n To be concrete, let's consider the case of semi-transparent images. These images have, in addition to the RGB channels, an alpha channel that encodes each pixel's opacity (in the range [0,1]).\n\n In order to feed such images into a neural network trained on RGB images, we need to somehow collapse the alpha channel.
One way to \n\n \n\n where IaI\\_aIa​ is the alpha channel of the image III.\n\n \n\n If we used a static background BGBGBG,\n\n such as black, the transparency would merely indicate pixel positions \n\nin which that background contributes directly to the optimization \n\nobjective.\n\n In fact, this is equivalent to optimizing an RGB image and making it\n\n transparent in areas where its color matches with the background!\n\n Intuitively, we’d like transparent areas to correspond to something \n\nlike “the content of this area could be anything.”\n\n Building on this intuition, we use a different random background at \n\nevery optimization step.\n\n \n\n We have tried both sampling from real images, and using different \n\ntypes of noise.\n\n As long as they were sufficiently randomized, the different \n\ndistributions did not meaningfully influence the resulting optimization.\n\n Thus, for simplicity, we use a smooth 2D gaussian noise.\n\n \n\nChannelCNNrandom noise\n\n[10](#figure-rgba-diagram):\n\n \n\n By default, optimizing our semi-transparent image will make the \n\nimage fully opaque, so the network can always get its optimal input.\n\n To avoid this, we need to change our objective with an objective \n\nthat encourages some transparency.\n\n We find it effective to replace the original objective with:\n\n \n\n \n\n with reducing the mean transparency.\n\n If the image becomes very transparent, it will focus on the original\n\n objective. If it becomes too opaque, it will temporarily stop caring \n\nabout the original objective and focus on decreasing the average \n\nopacity.\n\n \n\nbright\n\ndark\n\ncheckerboard\n\nAlpha mask\n\nmasked\n\nopaque\n\n[11](#figure-rgba-examples):\n\n \n\n It turns out that the generation of semi-transparent images is \n\nuseful in feature visualization.\n\n Feature visualization aims to understand what neurons in a vision \n\nmodel are looking for, by creating images that maximally activate them.\n\n Unfortunately, there is no way for these visualizations to \n\ndistinguish which areas of an image strongly influence a neuron’s \n\nactivation and those which only marginally do so.\n\n This issue does not occur when \n\noptimizing for the activation of entire channels, because in that case \n\nevery pixel has multiple neurons that are close to centered on it. 
As a \n\nconsequence, the entire input image gets filled with copies of what \n\nthose neurons care about strongly.\n\n Ideally, we would like a way for our visualizations to make this \n\ndistinction in importance — one natural way to represent that a part of \n\nthe image doesn’t matter is for it to be transparent.\n\n Thus, if we optimize an image with an alpha channel and encourage \n\nthe overall image to be transparent, parts of the image that are \n\nunimportant according to the feature visualization objective should \n\nbecome transparent.\n\n \n\n[12](#figure-rgba-interpretability-examples)\n\nfigure 12\n\nIntroducing transparency helps separate those areas.\n\n---\n\n[5](#section-featureviz-3d)\n\n[Efficient Texture Optimization through 3D Rendering](#section-featureviz-3d)\n\n-----------------------------------------------------------------------------\n\n We use a 3D rendering process to turn them into 2D RGB images that \n\ncan be fed into the network, and backpropagate through the rendering \n\nprocess to optimize the texture of the 3D object.\n\n \n\n Our technique is similar to the approach that Athalye et al.\n\n used for the creation of real-world adversarial examples, as we rely on\n\n the backpropagation of the objective function to randomly sampled views\n\n of the 3D model.\n\n We differ from existing approaches for artistic texture generation,\n\n as we do not modify the geometry of the object during back-propagation.\n\n By disentangling the generation of the texture from the position of \n\ntheir vertices, we can create very detailed texture for complex objects.\n\n \n\nBefore we can describe our approach, we first need to understand how a \n\n3D object is stored and rendered on screen. The object’s geometry is \n\n coordinate in the texture image. The model is then rendered, i.e. drawn\n\n on screen, by coloring every triangle with the region of the image that\n\n is delimited by the (u,v)(u,v)(u,v) coordinates of its vertices.\n\n \n\n A simple naive way to create the 3D object texture would be to \n\noptimize an image the normal way and then use it as a texture to paint \n\non the object.\n\n However, this approach generates a texture that does not consider \n\nthe underlying UV-mapping and, therefore, will create a variety of \n\nvisual artifacts in the rendered object.\n\n First, **seams** are visible on the rendered texture, because the\n\n optimization is not aware of the underlying UV-mapping and, therefore, \n\ndoes not optimize the texture consistently along the split patches of \n\nthe texture.\n\n Second, the generated patterns are **randomly oriented** on \n\ndifferent parts of the object (see, e.g., the vertical and wiggly \n\npatterns) because they are not consistently oriented in the underlying \n\nUV-mapping.\n\n#### Input Controls\n\n seams\n\n or\n\n wrongly oriented patterns.\n\nUnfold Texture\n\n#### Naïve Optimization\n\n#### Render-based Optimization\n\n[13](#figure-featureviz-3d-explanation):\n\n 3D model of the famous Stanford Bunny.\n\n You can interact with the model by rotating and zooming. Moreover, you \n\ncan unfold the object to its two-dimensional texture representation. \n\nThis unfolding reveals the UV mapping used to store the texture in the \n\ntexture image. 
Note how the render-based optimized texture is divided into several patches that allow for a complete and undistorted coverage of the object.\n\n We take a different approach. The following diagram presents an overview of the proposed pipeline:\n\n[14](#figure-featureviz-3d-diagram): We optimize a texture by backpropagating through the rendering process. This is possible because we know how pixels in the rendered image correspond to pixels in the texture.\n\nWe start the process by randomly initializing the texture with a Fourier parameterization. At every training iteration we sample a random camera position, oriented towards the center of the bounding box of the object, and we render the textured object as an image. We then backpropagate the gradient of the desired objective function, i.e., the feature of interest in the neural network, to the rendered image.\n\nHowever, an update of the rendered image does not directly correspond to an update of the texture that we aim to optimize, so we need to propagate the changes further, onto the object's texture. This propagation is easily implemented by applying a reverse UV-mapping, since for each pixel on screen we know its coordinate in the texture. By modifying the texture, the rendered images of the following optimization iterations will incorporate the changes applied in the previous iterations.
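To make the render-and-backpropagate loop concrete, here is a rough sketch for a single fixed viewpoint, assuming a precomputed per-pixel UV lookup `uv` produced by a renderer and a generic `objective` on the rendered image; both, as well as the texture size, are stand-ins rather than details of the pipeline used here. Expressing the texture lookup as a differentiable `grid_sample` means autograd performs the reverse UV-mapping step automatically.

```python
import torch
import torch.nn.functional as F

# Stand-ins: a learnable 256x256 RGB texture, a fixed per-pixel UV map for one
# viewpoint (values in [-1, 1], as grid_sample expects), and a toy objective.
texture = torch.rand(1, 3, 256, 256, requires_grad=True)
uv = torch.rand(1, 128, 128, 2) * 2 - 1          # would come from the renderer
probe = torch.randn(3, 128, 128)                 # placeholder for a CNN feature objective
def objective(img):
    return (img[0] * probe).mean()

opt = torch.optim.Adam([texture], lr=0.05)
for step in range(100):
    opt.zero_grad()
    # "Render": look up each screen pixel's color in the texture via its UV coordinate.
    rendered = F.grid_sample(texture, uv, align_corners=True)   # 1 x 3 x 128 x 128
    loss = -objective(rendered)
    loss.backward()   # gradients flow through grid_sample back into the texture
    opt.step()
```

In the full pipeline a new random camera is sampled at every iteration, so each step only updates the texels that are visible in that particular view.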
Unfold Texture\n\n[15](#figure-featureviz-3d-examples): The resulting textures are consistently optimized along the cuts, hence removing the seams and enforcing a uniform orientation for the rendered object.\n\nMoreover, since the objective optimization is disentangled from the geometry of the object, the resolution of the texture can be arbitrarily high. In the next section we will see how this framework can be reused to perform artistic style transfer onto the object's texture.\n\n---\n\n[6](#section-style-transfer-3d)\n\n[Style Transfer for Textures through 3D Rendering](#section-style-transfer-3d)\n\n------------------------------------------------------------------------------\n\nNow that we have established a framework for efficient backpropagation into the UV-mapped texture, we can use it to adapt existing style transfer techniques to 3D objects. Similarly to the 2D case, we aim to redraw the original object's texture in the style of a user-provided image. The following diagram presents an overview of the approach:\n\n[16](#figure-style-transfer-3d-diagram)\n\nThe algorithm works in a similar way to the one presented in the previous section, starting from a randomly initialized texture. At each iteration, we sample a random viewpoint oriented toward the center of the bounding box of the object and we render two images of it: one of the object with its original texture (the content) and one of the object with the texture being optimized.\n\n#### Content Model\n\n#### Style Image\n\n#### Model with optimized texture\n\nUnfold Texture\n\n[17](#figure-style-transfer-3d-examples): Style Transfer onto various 3D models. Note that visual landmarks in the content texture, such as eyes, show up correctly in the generated texture.\n\nBecause every view is optimized independently, the optimization is forced to try to add all the style's elements at every iteration. For example, if we use Van Gogh's "Starry Night" painting as the style image, stars will be added in every single view. We found we obtain more pleasing results, such as those presented above, by introducing a sort of "memory" of the style of previous views. To this end, we maintain moving averages of the style-representing Gram matrices over the recently sampled viewpoints. On each optimization iteration we compute the style loss against those averaged matrices, instead of the ones computed for that particular view.\n\n to the current Gram matrices.\n\n An alternative approach, such as the one employed by , would require sampling multiple viewpoints of the scene at each step, increasing memory requirements. In contrast, our substitution trick can also be used to apply style transfer to high-resolution (>10M pixels) images on a single consumer-grade GPU.\n\nTake as an example the model created using Van Gogh's "Starry Night" as the style image. The resulting texture contains the rhythmic and vigorous brush strokes that characterize Van Gogh's work. However, despite the style image's primarily cold tones, the resulting fur has a warm orange undertone, as it is preserved from the original texture. Even more interesting is how the eyes of the bunny are preserved when different styles are transferred. For example, when the style is obtained from the Van Gogh painting,\n\n![](Differentiable%20Image%20Parameterizations_files/printed_bunny_extended.jpg)\n\n[18](#figure-style-transfer-3d-picture): Textured models produced with the presented method can easily be used with popular 3D modeling software or game engines. To show this, we 3D printed one of the optimized models.\n\n---\n\n[Conclusions](#conclusions)\n\n---------------------------\n\n For the creative artist or researcher, there's a large space of ways to parameterize images for optimization. This opens up not only dramatically different image results, but also animations and 3D objects! We think the possibilities explored in this article only scratch the surface. For example, one could explore extending the optimization of 3D object textures to optimizing the material or reflectance — or even go the direction of Kato et al.
and optimize the mesh vertex positions.\n\n \n\n This article focused on *differentiable* image \n\nparameterizations, because they are easy to optimize and cover a wide \n\nrange of possible applications.\n\n But it’s certainly possible to optimize image parameterizations that\n\n aren’t differentiable, or are only partly differentiable, using \n\nreinforcement learning or evolutionary strategies .\n\n \n\n", "bibliography_bib": [{"title": "Inceptionism: Going deeper into neural networks"}, {"title": "A Neural Algorithm of Artistic Style"}, {"title": "Feature Visualization"}, {"title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images"}, {"title": "Synthesizing robust adversarial examples"}, {"title": "The loss surfaces of multilayer networks"}, {"title": "Very deep convolutional networks for large-scale image recognition"}, {"title": "Deconvolution and checkerboard artifacts"}, {"title": "Deep Image Prior"}, {"title": "Compositional pattern producing networks: A novel abstraction of development"}, {"title": "Neural Network Generative Art in Javascript"}, {"title": "Image Regression"}, {"title": "Generating Large Images from Latent Vectors"}, {"title": "Artificial Evolution for Computer Graphics"}, {"title": "Neural 3D Mesh Renderer"}, {"title": "The Stanford Bunny"}, {"title": "Natural evolution strategies."}, {"title": "Evolution strategies as a scalable alternative to reinforcement learning"}, {"title": "Tensorflow: Large-scale machine learning on heterogeneous distributed systems"}], "filename": "Differentiable Image Parameterizations.html", "id": "707fc67c056bc4c74ed8984069ed88e2"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Learning from Incorrectly Labeled Data", "authors": ["Eric Wallace"], "date_published": "2019-08-06", "abstract": " This article is part of a discussion of the Ilyas et al. paper “Adversarial examples are not bugs, they are features”. You can learn more in the main discussion article . ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00019.6", "text": "\n\n #rebuttal,\n\n .comment-info {\n\n background-color: hsl(54, 78%, 96%);\n\n border-left: solid hsl(54, 33%, 67%) 1px;\n\n padding: 1em;\n\n color: hsla(0, 0%, 0%, 0.67);\n\n }\n\n #header-info {\n\n margin-top: 0;\n\n margin-bottom: 1.5rem;\n\n display: grid;\n\n grid-template-columns: 65px max-content 1fr;\n\n grid-template-areas:\n\n \"icon explanation explanation\"\n\n \"icon back comment\";\n\n grid-column-gap: 1.5em;\n\n }\n\n #header-info .icon-multiple-pages {\n\n grid-area: icon;\n\n padding: 0.5em;\n\n content: url(images/multiple-pages.svg);\n\n }\n\n #header-info .explanation {\n\n grid-area: explanation;\n\n font-size: 85%;\n\n }\n\n #header-info .back {\n\n grid-area: back;\n\n }\n\n #header-info .back::before {\n\n content: \"←\";\n\n margin-right: 0.5em;\n\n }\n\n #header-info .comment {\n\n grid-area: comment;\n\n scroll-behavior: smooth;\n\n }\n\n #header-info .comment::before {\n\n content: \"↓\";\n\n margin-right: 0.5em;\n\n }\n\n #header-info a.back,\n\n #header-info a.comment {\n\n font-size: 80%;\n\n font-weight: 600;\n\n border-bottom: none;\n\n text-transform: uppercase;\n\n color: #2e6db7;\n\n display: block;\n\n margin-top: 0.25em;\n\n letter-spacing: 0.25px;\n\n }\n\n This article is part of a discussion of the Ilyas et al. 
paper\n\n *“Adversarial examples are not bugs, they are features”.*\n\n You can learn more in the\n\n [main discussion article](https://distill.pub/2019/advex-bugs-discussion/) .\n\n \n\n[Other Comments](https://distill.pub/2019/advex-bugs-discussion/#commentaries)\n\n about the trained model is being “leaked” into the dataset.\n\n Nevertheless, we show that this intuition fails — a model *can* generalize.\n\n examples on the left of the Figure below:\n\n \n\n[1](https://distill.pub/2019/advex-bugs-discussion/response-6/#figure-1)\n\n incorrectly labeled, unperturbed images but can still non-trivially generalize.\n\n \n\nThis is Model Distillation Using Incorrect Predictions\n\n------------------------------------------------------\n\n of model distillation — training on this dataset allows a new\n\n model to somewhat recover the features of the original model.\n\n \n\n another task.\n\n \n\n### Two-dimensional Illustration of Model Distillation\n\n (panel (a) in the Figure below).\n\n \n\n[2](https://distill.pub/2019/advex-bugs-discussion/response-6/#figure-2)\n\n panel (c).\n\n \n\n predictions.\n\n \n\n### \n\n Other Peculiar Forms of Distillation\n\n we learn about the original model? Can we use only *out-of-domain* data?\n\n \n\n labeled as an “8″.\n\n \n\n[3](https://distill.pub/2019/advex-bugs-discussion/response-6/#figure-3)\n\n training on erroneous predictions.\n\n \n\n### \n\n Summary\n\n Ilyas et al. (2019) are not necessary to enable learning.\n\n \n\n To cite Ilyas et al.’s response, please cite their\n\n**Response\n\n Summary**: Note that since our experiments work across different architectures,\n\n “distillation” in weight space does not occur. The only distillation that can\n\n have been able to “distill” a useful model from them. (In fact, one might think\n\n of normal model training as just “feature distillation” of the humans that\n\n labeled the dataset.) Furthermore, the hypothesis that all we need is enough\n\n model-consistent points in order to recover a model, seems to be disproven by\n\n and other (e.g. ) settings. \n\n**Response**: Since our experiments work across different architectures,\n\n “distillation” in weight space cannot arise. Thus, from what we understand, the\n\n “distillation” hypothesis suggested here is referring to “feature distillation”\n\n (i.e. getting models which use the same features as the original), which is\n\n actually precisely our hypothesis too. Notably, this feature distillation would\n\n are good for classification (see [World\n\n 1](https://distill.pub/2019/advex-bugs-discussion/original-authors/#world1) and\n\n model would only use features that generalize poorly, and would thus generalize\n\n poorly itself. \n\n Moreover, we would argue that in the experiments presented (learning from\n\n mislabeled data), the same kind of distillation is happening. For instance, a\n\n moderately accurate model might associate “green background” with “frog” thus\n\n labeling “green” images as “frogs” (e.g., the horse in the comment’s figure).\n\n Training a new model on this dataset will thus associate “green” with “frog”\n\n from Fashion-MNIST” experiment in the comment). This corresponds exactly to\n\n learning features from labels, akin to how deep networks “distill” a good\n\n decision boundary from human annotators. In fact, we find these experiments\n\n a very interesting illustration of feature distillation that complements\n\n our findings. 
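To make the relabel-and-retrain setup under discussion concrete, here is a toy sketch using logistic regression on synthetic 2D data; the data, models, and numbers are stand-ins, not the CIFAR-scale experiments referred to above. A "teacher" trained on a small labeled set relabels a fresh pool of points with its own (sometimes incorrect) predictions, and a "student" trained only on those predictions ends up approximating the teacher's decision boundary, and therefore generalizes to honestly labeled test data roughly as well as the teacher does.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)

def make_data(n):
    # Two overlapping Gaussian blobs; the true label is which blob a point came from.
    y = rng.randint(0, 2, size=n)
    x = rng.randn(n, 2) + 2.0 * np.stack([y, y], axis=1) - 1.0
    return x, y

# Teacher: a moderately accurate model trained on a small labeled set.
x_small, y_small = make_data(50)
teacher = LogisticRegression().fit(x_small, y_small)

# Relabel a fresh pool with the teacher's predictions, mistakes included.
x_pool, y_pool_true = make_data(2000)
y_pool_teacher = teacher.predict(x_pool)
print("fraction of pool labels the teacher gets wrong:",
      (y_pool_teacher != y_pool_true).mean())

# Student: trained only on the teacher's (partly incorrect) labels.
student = LogisticRegression().fit(x_pool, y_pool_teacher)

# Both are evaluated on honestly labeled test data.
x_test, y_test = make_data(2000)
print("teacher test accuracy:", teacher.score(x_test, y_test))
print("student test accuracy:", student.score(x_test, y_test))
```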
\n\n We also note that an analogy to logistic regression here is only possible\n\n due to the low VC-dimension of linear classifiers (namely, these classifiers\n\n have dimension ddd). In particular, given any classifier with VC-dimension\n\n networks have been shown to have extremely large VC-dimension (in particular,\n\n bigger than the size of the training set ). So even though\n\n labelling d+1d+1d+1 random\n\n points model-consistently is sufficient to recover a linear model, it is not\n\n necessarily sufficient to recover a deep neural network. For instance, Milli et\n\n al. are not able to reconstruct a ResNet-18\n\n using only its predictions on random Gaussian inputs. (Note that we are using a\n\n ResNet-50 in our experiments.) \n\n Finally, it seems that the only potentially problematic explanation for\n\n our experiments (namely, that enough model-consistent points can recover a\n\n In particular, Preetum is able to design a\n\n dataset where training on mislabeled inputs *that are model-consistent*\n\n does not at all recover the decision boundary of the original model. More\n\n generally, the “model distillation” perspective raised here is unable to\n\n distinguish between the dataset created by Preetum below, and those created\n\n with standard PGD (as in our D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​ and\n\n D^rand\\widehat{\\mathcal{D}}\\_{rand}D\n\nrand​ datasets).\n\n \n\n", "bibliography_bib": [{"title": "Distilling the Knowledge in a Neural Network"}, {"title": "Model reconstruction from model explanations"}, {"title": "Understanding deep learning requires rethinking generalization"}], "filename": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features'_ Learning from Incorrectly Labeled Data.html", "id": "8c814d5944e1f25994a31742b9358e9c"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Adversarially Robust Neural Style Transfer", "authors": ["Reiichiro Nakano"], "date_published": "2019-08-06", "abstract": " This article is part of a discussion of the Ilyas et al. paper “Adversarial examples are not bugs, they are features”. You can learn more in the main discussion article . 
", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00019.4", "text": "\n\n #rebuttal,\n\n .comment-info {\n\n background-color: hsl(54, 78%, 96%);\n\n border-left: solid hsl(54, 33%, 67%) 1px;\n\n padding: 1em;\n\n color: hsla(0, 0%, 0%, 0.67);\n\n }\n\n #header-info {\n\n margin-top: 0;\n\n margin-bottom: 1.5rem;\n\n display: grid;\n\n grid-template-columns: 65px max-content 1fr;\n\n grid-template-areas:\n\n \"icon explanation explanation\"\n\n \"icon back comment\";\n\n grid-column-gap: 1.5em;\n\n }\n\n #header-info .icon-multiple-pages {\n\n grid-area: icon;\n\n padding: 0.5em;\n\n content: url(images/multiple-pages.svg);\n\n }\n\n #header-info .explanation {\n\n grid-area: explanation;\n\n font-size: 85%;\n\n }\n\n #header-info .back {\n\n grid-area: back;\n\n }\n\n #header-info .back::before {\n\n content: \"←\";\n\n margin-right: 0.5em;\n\n }\n\n #header-info .comment {\n\n grid-area: comment;\n\n scroll-behavior: smooth;\n\n }\n\n #header-info .comment::before {\n\n content: \"↓\";\n\n margin-right: 0.5em;\n\n }\n\n #header-info a.back,\n\n #header-info a.comment {\n\n font-size: 80%;\n\n font-weight: 600;\n\n border-bottom: none;\n\n text-transform: uppercase;\n\n color: #2e6db7;\n\n display: block;\n\n margin-top: 0.25em;\n\n letter-spacing: 0.25px;\n\n }\n\n This article is part of a discussion of the Ilyas et al. paper\n\n *“Adversarial examples are not bugs, they are features”.*\n\n You can learn more in the\n\n [main discussion article](https://distill.pub/2019/advex-bugs-discussion/) .\n\n \n\n[Other Comments](https://distill.pub/2019/advex-bugs-discussion/#commentaries)\n\n A figure in Ilyas, et. al. that struck me as particularly\n\n interesting\n\n their\n\n tendency to learn similar non-robust features.\n\n \n\n non-robust features.\n\n \n\n non-robust features in an image.\n\n \n\n Notice how far back VGG is compared to the other models.\n\n \n\n known to not work very well This phenomenon is discussed at length in [this\n\n transfer actually look more correct to humans!**\n\n make sense either.\n\n simple technique previously established in feature visualization.\n\n through the neural network.\n\n \n\n \n\n non-robust features.\n\n \n\nA quick experiment\n\n------------------\n\n Testing our hypothesis is fairly straightforward:\n\n Use an adversarially robust classifier for neural style transfer and see\n\n what happens.\n\n \n\n al. 
on their performance on neural style transfer.\n\n For comparison, I performed the same algorithm with a regular VGG-19\n\n  .\n\n Further details can be read in a footnote\n\n L-BFGS was used for optimization as it showed faster convergence over Adam.\n\n relu4\_2relu4\\_2relu4\_2.\n\n or observed in the accompanying Colaboratory notebook.\n\n The results of this experiment can be explored in the diagram below.\n\n**Content image**\n\n**Style image**\n\n  Compare VGG or Robust ResNet\n\n Success!\n\n The robust ResNet shows drastic improvement over the regular ResNet.\n\n exactly the same!\n\n A more interesting comparison can be done between VGG-19 and the robust ResNet.\n\n At first glance, the robust ResNet’s outputs seem on par with VGG-19.\n\n Gaussian noise..\n\n Texture synthesized with VGG. \n\n*Mild artifacts.*\n\n Texture synthesized with robust ResNet. \n\n*Severe artifacts.*\n\n A comparison of artifacts between textures synthesized by VGG and ResNet.\n\n Interact by hovering around the images.\n\n This diagram was repurposed from\n\n by Odena, et.
al.\n\n \n\n It is currently unclear exactly what causes these artifacts.\n\n One theory is that they are checkerboard artifacts\n\n caused by\n\n non-divisible kernel size and stride in the convolution layers.\n\n They could also be artifacts caused by the presence of max pooling layers\n\n in ResNet.\n\n problem that\n\n adversarial robustness solves in neural style transfer.\n\n \n\nVGG remains a mystery\n\n---------------------\n\n nets, it\n\n did not provide an explanation for this phenomenon.\n\n the box\n\n naturally\n\n more robust than other architectures.\n\n \n\n A few papers\n\n indeed show\n\n that VGG architectures are slightly more robust than ResNet.\n\n However, they also show that AlexNet, not known to work well\n\n for\n\n neural style transferAs shown by Dávid Komorowicz\n\n in\n\n this [blog post](https://dawars.me/neural-style-transfer-deep-learning/).\n\n , is\n\n *above* VGG in terms of this “natural robustness”.\n\n \n\n architectures fail at style transfer (or other similar algorithms\n\n \n\n optimization\n\n maximization works on robust classifiers *without*\n\n enforcing\n\n by\n\n previous work. In a recent chat with Chris\n\n Olah, he\n\n *without* these priors, just like style transfer!\n\n \n\n future\n\n work.\n\n \n\n To cite Ilyas et al.’s response, please cite their\n\n**Response Summary**: Very interesting\n\n results, highlighting the effect of non-robust features and the utility of\n\n robust models for downstream tasks. We’re excited to see what kind of impact\n\n robustly trained models will have in neural network art! We were also really\n\n intrigued by the mysteriousness of VGG in the context of style transfer\n\n. As such, we took a\n\n deeper dive which found some interesting links between robustness and style\n\n transfer that suggest that perhaps robustness does indeed play a role here. \n\n**Response**: These experiments are really cool! It is interesting that\n\n preventing the reliance of a model on non-robust features improves performance\n\n on style transfer, even without an explicit task-related objective (i.e. we\n\n didn’t train the networks to be better for style transfer). \n\n We also found the discussion of VGG as a “mysterious network” really\n\n performance more generally. Though not a complete answer, we made a couple of\n\n observations while investigating further: \n\n*Style transfer does work with AlexNet:* One wrinkle in the idea that\n\n the most naturally robust network — AlexNet is. However, based on our own\n\n testing, style transfer does seem to work with AlexNet out-of-the-box, as\n\n long as we use a few early layers in the network (in a similar manner to\n\n VGG): \n\n Style transfer using AlexNet, using conv\\_1 through conv\\_4.\n\n \n\n Observe that even though style transfer still works, there are checkerboard\n\n patterns emerging — this seems to be a similar phenomenon to the one noticed\n\n in the comment in the context of robust models.\n\n This might be another indication that these two phenomena (checkerboard\n\n patterns and style transfer working) are not as intertwined as previously\n\n thought.\n\n \n\n*From prediction robustness to layer robustness:* Another\n\n potential wrinkle here is that both AlexNet and VGG are not that\n\n much more robust than ResNets (for which style transfer completely fails),\n\n and yet seem to have dramatically better performance. 
To try to explain this, recall that style transfer is implemented as a minimization of a combined objective consisting of a style loss and a content loss. We found, however, that the network we use to compute the style loss is far more important than the one for the content loss. The following demo illustrates this — we can actually use a non-robust ResNet for the content loss and everything works just fine:\n\nStyle transfer seems to be rather invariant to the choice of content network used, and very sensitive to the style network used.\n\nTherefore, from now on, we use a fixed ResNet-50 for the content loss as a control, and only worry about the style loss.\n\n Now, note that the way the style loss works is by using the first few layers of the relevant network. Thus, perhaps it is not about the robustness of the network as a whole, but about the robustness of the layers actually used for style transfer? To test this hypothesis, we measure the robustness of a layer f as:\n\nR(f) = \frac{\mathbb{E}_{x_1 \sim D}\left[\max_{x'} \|f(x') - f(x_1)\|_2\right]}{\mathbb{E}_{x_1, x_2 \sim D}\left[\|f(x_1) - f(x_2)\|_2\right]}\n\n Essentially, this quantity tells us how much we can change the layer's representation with a small perturbation of the input, relative to how different representations are between images in general. We've plotted this value for the first few layers in a couple of different networks below:\n\nThe robustness R(f) of the first four layers of VGG16, AlexNet, and robust/standard ResNet-50 trained on ImageNet.\n\n Here, it becomes clear that the first few layers of VGG and AlexNet are actually almost as robust as the first few layers of the robust ResNet! This is perhaps a more convincing indication that robustness might have something to do with VGG's success in style transfer after all.\n\n Finally, suppose we restrict style transfer to only use a single layer of the network when computing the style loss (usually style transfer uses several layers; here we pick a single layer that actually confers some style onto the image). Again, the more robust layers seem to indeed work better for style transfer! Since all of the layers in the robust ResNet are robust, style transfer yields non-trivial results even when using the last layer alone. Conversely, VGG and AlexNet seem to excel in the earlier layers (where they are non-trivially robust) but fail when using exclusively later (non-robust) layers:\n\nStyle transfer using a single layer. The names of the layers and their robustness R(f) are printed below each style transfer result. We find that for both networks, the robust layers seem to work (for the robust ResNet, every layer is robust).\n\n Of course, there is much more work to be done here, but we are excited to see further work into understanding the role of both robustness and the VGG architecture in network-based image manipulation.
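As a concrete recap of the robustness measure R(f) above, here is a rough sketch of how one might estimate it for a given layer. The inner maximization is carried out with a simple signed-gradient ascent under an L-infinity bound, which is an assumption on our part (the response does not specify how the maximization over x' is performed); the stand-in feature extractor and random images are only there to make the sketch self-contained.

```python
import torch
import torch.nn as nn

def layer_robustness(f, images, eps=0.05, steps=20, step_size=0.01):
    # Numerator: E_{x1}[ max_{x'} ||f(x') - f(x1)||_2 ], with x' restricted to an
    # L-infinity ball around x1 (an assumed choice of perturbation set).
    num = 0.0
    for x in images:
        x = x.unsqueeze(0)
        base = f(x).detach()
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            dist = (f(x + delta) - base).flatten().norm()
            dist.backward()
            with torch.no_grad():
                delta += step_size * delta.grad.sign()
                delta.clamp_(-eps, eps)
                delta.grad.zero_()
        num += (f(x + delta) - base).flatten().norm().item()
    num /= len(images)

    # Denominator: E_{x1,x2}[ ||f(x1) - f(x2)||_2 ] over random pairs of images.
    den, pairs = 0.0, 0
    with torch.no_grad():
        for i in range(len(images)):
            for j in range(i + 1, len(images)):
                den += (f(images[i:i + 1]) - f(images[j:j + 1])).flatten().norm().item()
                pairs += 1
    return num / (den / pairs)

# Stand-ins so the sketch runs on its own: a tiny random "first few layers"
# and random images in place of real ImageNet samples.
f = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 16, 3, stride=2, padding=1), nn.ReLU())
images = torch.rand(8, 3, 64, 64)
print(layer_robustness(f, images))
```

With a real pretrained network one would pass its first few layers as `f` and a batch of natural images, and compare the resulting values across architectures, as in the plot above.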
\n\n", "bibliography_bib": [{"title": "Adversarial examples are not bugs, they are features"}, {"title": "Very deep convolutional networks for large-scale image recognition"}, {"title": "A Neural Algorithm of Artistic Style"}, {"title": "Differentiable Image Parameterizations"}, {"title": "Feature Visualization"}, {"title": "Learning Perceptually-Aligned Representations via Adversarial Robustness"}, {"title": "On the limited memory BFGS method for large scale optimization"}, {"title": "Deconvolution and checkerboard artifacts"}, {"title": "Geodesics of learned representations"}, {"title": "Batch Normalization is a Cause of Adversarial Vulnerability"}, {"title": "Benchmarking Neural Network Robustness to Common Corruptions and Perturbations"}, {"title": "Is Robustness the Cost of Accuracy? - A Comprehensive Study on the Robustness of 18 Deep Image Classification Models"}, {"title": "ImageNet Classification with Deep Convolutional Neural Networks"}, {"title": "Neural Style transfer with Deep Learning"}, {"title": "The Building Blocks of Interpretability"}], "filename": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features'_ Adversarially Robust Neural Style Transfer.html", "id": "7523bc35c63466d5a0c5aaad18469848"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Open Questions about Generative Adversarial Networks", "authors": ["Augustus Odena"], "date_published": "2019-04-09", "abstract": " By some metrics, research on Generative Adversarial Networks (GANs) has progressed substantially in the past 2 years. Practical improvements to image synthesis models are being made almost too quickly to keep up with: ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00018", "text": "\n\n \n\nOdena et al., 2016\n\nMiyato et al., 2017\n\nZhang et al., 2018\n\nBrock et al., 2018\n\n However, by other metrics, less has happened. 
For instance, there is\n\n still widespread disagreement about how GANs should be evaluated.\n\n Given that current image synthesis benchmarks seem somewhat \n\nsaturated, we think now is a good time to reflect on research goals for \n\nthis sub-field.\n\n \n\n Lists of open problems have helped other fields with this.\n\n We also believe that writing this article has clarified our thinking about\n\n We assume a fair amount of background (or willingness to look things up)\n\n because we reference too many results to explain all those results in detail.\n\n \n\nWhat are the Trade-Offs Between GANs and other Generative Models?\n\n-----------------------------------------------------------------\n\n \n\n This statement shouldn’t be taken too literally.\n\n that aren’t easy to describe as belonging to just one of those clusters.\n\n I’ve also left out VAEs entirely;\n\n they’re arguably no longer considered state-of-the-art at any tasks of record.\n\n .\n\n Roughly speaking, Flow Models apply a\n\n stack of invertible transformations to a sample from a prior\n\n so that exact log-likelihoods of observations can be computed.\n\n On the other hand, Autoregressive Models factorize the\n\n distribution over observations into conditional distributions\n\n at a time.)\n\n Recent research suggests that these models have different\n\n performance characteristics and trade-offs.\n\n to the model families is an interesting open question.\n\n \n\n At first glance, Flow Models seem like they might make GANs unnecessary.\n\n Flow Models allow for exact log-likelihood computation and exact inference,\n\n A lot of effort is spent on training GANs,\n\n so it seems like we should care about whether Flow Models make GANs obsolete\n\n \n\n image-to-image translation.\n\n .\n\n \n\n The GLOW model is trained to generate 256x256 celebrity faces using\n\n 40 GPUs for 2 weeks and about 200 million parameters.\n\n In contrast, progressive GANs are trained on a similar face dataset\n\n to generate images with 16 times fewer pixels.\n\n This comparison isn’t perfect,\n\n \n\n For instance, it’s possible that the progressive growing\n\n technique could be applied to Flow Models as well.\n\n \n\n but it gives you a sense of things.\n\n \n\n Why are the Flow Models less efficient?\n\n We see two possible reasons:\n\n you will be penalized infinitely harshly!\n\n and this penalty is less harsh.\n\n Section 6.1 of does some small experiments on expressivity, but at\n\n present we’re not aware of any in-depth analysis of this question.\n\n \n\n It turns out that Autoregressive Models can be expressed as Flow Models\n\n (because they are both reversible) that are not parallelizable.\n\n \n\n Parallelizable is somewhat imprecise in this context.\n\n There may be ways around this limitation though.\n\n \n\n Thus, GANs are parallel and efficient but not reversible,\n\n Flow Models are reversible and parallel but not efficient, and\n\n Autoregressive models are reversible and efficient, but not parallel.\n\n \n\n| | Parallel | Efficient | Reversible |\n\n| --- | --- | --- | --- |\n\n| GANs | Yes | Yes | No |\n\n| Flow Models | Yes | No | Yes |\n\n| Autoregressive Models | No | Yes | Yes |\n\n This brings us to our first open problem:\n\n \n\nProblem 1\n\nWhat are the fundamental trade-offs between GANs and other generative models?\n\n This has been considered for hybrid GAN/Flow Models, but we think that\n\n this approach is still underexplored.\n\n \n\n to do better than chance if the generator does this.\n\n \n\n \n\nWhat Sorts 
of Distributions Can GANs Model?\n\n-------------------------------------------\n\n Most GAN research focuses on image synthesis.\n\n MNIST,\n\n CIFAR-10,\n\n STL-10,\n\n CelebA,\n\n and Imagenet.\n\n There is some folklore about which of these datasets is ‘easiest’ to model.\n\n ‘extremely regular’.\n\n Others have noted that ‘a high number of classes is what makes\n\n ImageNet synthesis difficult for GANs’.\n\n synthesis model on CelebA generates images that seem\n\n Imagenet.\n\n \n\n trying to train GANs on ever larger and more complicated datasets.\n\n In particular, we’ve mostly studied how GANs perform on the datasets that\n\n happened to be laying around for object recognition.\n\n \n\n training a generative model, and then say something like ‘this dataset will be\n\n easy for a GAN to model, but not a VAE’.\n\n There has been some progress on this topic,\n\n but we feel that more can be done.\n\n We can now state the problem:\n\n \n\nProblem 2\n\n We might ask the following related questions as well:\n\n What do we mean by ‘model the distribution’? Are we satisfied with a\n\n low-support representation, or do we want a true density model?\n\n Are there distributions that a GAN can never learn to model?\n\n Are there distributions that are learnable for a GAN in principle, \n\nbut are not\n\n efficiently learnable, for some reasonable model of \n\nresource-consumption?\n\n Are the answers to these questions actually any different for GANs \n\nthan they are for other\n\n generative models?\n\n \n\n We propose two strategies for answering these questions:\n\n \n\n For example, in the authors create a dataset of synthetic triangles.\n\n We feel that this\n\n angle is under-explored.\n\n allowing for systematic study.\n\n modify the assumptions to account\n\n for different properties of the dataset.\n\n what happens to them when the data distribution becomes multi-modal.\n\nHow Can we Scale GANs Beyond Image Synthesis?\n\n---------------------------------------------\n\n Aside from applications like image-to-image\n\n translation\n\n and domain-adaptation\n\n most GAN successes have been in image synthesis.\n\n Attempts to use GANs beyond images have focused on three domains:\n\n \n\n* **Text** -\n\n The discrete nature of text makes it difficult to apply GANs.\n\n This is because GANs rely on backpropagating a signal from the \n\ndiscriminator through the generated content into the generator.\n\n There are two approaches to addressing this difficulty.\n\n The first is to have the GAN act only on continuous\n\n representations of the discrete data, as in.\n\n The second is use an actual discrete model and attempt to train the GAN using\n\n gradient estimation as in.\n\n Other, more sophisticated treatments exist,\n\n with likelihood-based language models.\n\n* **Structured Data** -\n\n What about other non-euclidean structured data, like graphs?\n\n The study of this type of data is called geometric deep\n\n learning.\n\n GANs have had limited success here, but so have other deep learning techniques,\n\n so it’s hard to tell how much the GAN aspect matters.\n\n We’re aware of one attempt to use GANs in this space,\n\n which has the generator produce (and the discriminator ‘critique’) random walks\n\n that are meant to resemble those sampled from a source graph.\n\n* **Audio** -\n\n Audio is the domain in which GANs are closest to achieving the success\n\n they’ve enjoyed with images.\n\n The first serious attempt at applying GANs to unsupervised audio synthesis\n\n is, in which the 
authors\n\n More recent work suggests GANs can even\n\n outperform autoregressive models on some perceptual metrics.\n\n Despite these attempts, images are clearly the easiest domain for GANs.\n\n This leads us to the statement of the problem:\n\n \n\nProblem 3\n\nHow can GANs be made to perform well on non-image data?\n\nDoes scaling GANs to other domains require new training techniques,\n\n or does it simply require better implicit priors for each domain?\n\n but that it will require better implicit priors.\n\n in a given domain.\n\n \n\n For structured data or data that is not continuous, we’re less sure.\n\n One approach might be to make both the generator and discriminator\n\n large-scale computational resources.\n\n Finally, this problem may just require fundamental research progress.\n\n \n\nWhat can we Say About the Global Convergence of GAN Training?\n\n-------------------------------------------------------------\n\n the generator and discriminator for opposing objectives.\n\n Under certain assumptions\n\n \n\n These assumptions are very strict.\n\n The referenced paper assumes (roughly speaking) that\n\n the equilibrium we are looking for exists and that\n\n we are already very close to it.\n\n ,\n\n this simultaneous optimization\n\n is locally asymptotically stable.\n\n \n\n But all neural networks have this problem!\n\n This brings us to our question:\n\n \n\nProblem 4\n\nWhen can we prove that GANs are globally convergent?\n\nWhich neural network convergence results can be applied to GANs?\n\n There has been nontrivial progress on this question.\n\n Broadly speaking, there are 3 existing techniques, all of which have generated\n\n promising results but none of which have been studied to completion:\n\n \n\n* **Simplifying Assumptions** -\n\n For example, the simplified LGQ GAN —\n\n linear generator, Gaussian data, and quadratic discriminator — can be \n\nshown to be globally convergent, if optimized with a special technique\n\n and some additional assumptions.\n\n \n\n As another example, show under different simplifying assumptions that\n\n It seems promising to gradually relax those assumptions to see what happens.\n\n For example, we could move away from unimodal distributions.\n\n* **Use Techniques from Normal Neural Networks** -\n\n to answer questions about convergence of GANs.\n\n For instance, it’s argued in that the non-convexity\n\n of deep neural networks isn’t a problem,\n\n \n\n A fact that practitioners already kind of suspected.\n\n \n\n because low-quality local minima of the loss function\n\n become exponentially rare as the network gets larger.\n\n Can this analysis be ‘lifted into GAN space’?\n\n In fact, it seems like a generally useful heuristic to take analyses\n\n of deep neural networks used as classifiers and see if they apply to \n\nGANs.\n\n* **Game Theory** -\n\n The final strategy is to model GAN training using notions from game theory.\n\n but do so using unreasonably large resource constraints.\n\nHow Should we Evaluate GANs and When Should we Use Them?\n\n--------------------------------------------------------\n\n Suggestions include:\n\n \n\n* **Inception Score and FID** -\n\n Both these scores\n\n use a pre-trained image classifier and both have\n\n known issues .\n\n A common criticism is that these scores measure\n\n ‘sample quality’ and don’t really capture ‘sample diversity’.\n\n* **MS-SSIM** -\n\n propose using MS-SSIM to\n\n* **AIS** -\n\n propose putting a Gaussian observation model on the outputs\n\n of a GAN and using annealed 
importance sampling to estimate\n\n the log likelihood under this model, but show that\n\n \n\n .\n\n* **Geometry Score** -\n\n suggest computing geometric properties of the generated data manifold\n\n and comparing those properties to the real data.\n\n* **Precision and Recall** -\n\n attempt to measure both the ‘precision’ and ‘recall’ of GANs.\n\n* **Skill Rating** -\n\n have shown that trained GAN discriminators can contain useful information\n\n with which evaluation can be performed.\n\n Those are just a small fraction of the proposed GAN evaluation schemes.\n\n *when to use GANs*.\n\n Thus, we have bundled those two questions into one:\n\n \n\nProblem 5\n\nWhen should we use GANs instead of other generative models?\n\nHow should we evaluate performance in those contexts?\n\n What should we use GANs for?\n\n If you want an actual density model, GANs probably aren’t the best choice.\n\n , which means there may be substantial parts of the test\n\n set to which a GAN (implicitly) assigns zero likelihood.\n\n \n\n Rather than worrying too much about this,\n\n Though trying to fix this issue is a valid research agenda as well.\n\n \n\n GANs are likely to be well-suited to tasks with a perceptual flavor.\n\n all fall under this umbrella.\n\n \n\n How should we evaluate GANs on these perceptual tasks?\n\n Ideally, we would just use a human judge, but this is expensive.\n\n This is called a classifier two-sample test (C2STs)\n\n .\n\n (e.g., ) this will dominate the evaluation.\n\n \n\n One approach might be to make a critic that is blind to the dominant defect.\n\n creating an ordered list of the most important defects and\n\n critics that ignore them.\n\n more and more of the higher variance components.\n\n \n\n Finally, we could evaluate on humans despite the expense.\n\n This would allow us to measure the thing that we actually care about.\n\n the prediction is uncertain.\n\n \n\nHow does GAN Training Scale with Batch Size?\n\n--------------------------------------------\n\n \n\n At first glance, it seems like the answer should be yes — after all,\n\n the discriminator in most GANs is just an image classifier.\n\n Larger batches can accelerate training if it is bottlenecked on \n\ngradient noise.\n\n However, GANs have a separate bottleneck that classifiers don’t: the\n\n training procedure can diverge.\n\n Thus, we can state our problem:\n\n \n\nProblem 6\n\nHow does GAN training scale with batch size?\n\nHow big a role does gradient noise play in GAN training?\n\nCan GAN training be modified so that it scales better with batch size?\n\n \n\n Can alternate training procedures make better use of large batches?\n\n \n\n asynchronous SGD interacts in a special way with GAN training.\n\n \n\nWhat is the Relationship Between GANs and Adversarial Examples?\n\n---------------------------------------------------------------\n\n It’s well known that image classifiers suffer from adversarial examples:\n\n but are exponentially hard to learn robustly.\n\n \n\n Despite the large bodies of literature on GANs and adversarial examples,\n\n there doesn’t seem to be much work on how they relate.\n\n \n\n \n\n Thus, we can ask the question:\n\n \n\nProblem 7\n\nHow does the adversarial robustness of the discriminator affect GAN training?\n\n How can we begin to think about this problem?\n\n Consider a fixed discriminator **D**.\n\n An adversarial example for **D** would exist if\n\n there were a generator sample **G(z)** correctly classified as fake and\n\n a small perturbation **p** such that **G(z) + 
p** is classified as real.\n\n a new generator **G’** where **G’(z) = G(z) + p**.\n\n \n\n Is this concern realistic?\n\n shows that deliberate attacks on generative models can work,\n\n but we are more worried about something you might call an ‘accidental attack’.\n\n There are reasons to believe that these accidental attacks are less likely.\n\n First, the generator is only allowed to make one gradient update before\n\n the discriminator is updated again.\n\n for every gradient step.\n\n We think this is a fruitful topic for further exploration.\n\n \n\n", "bibliography_bib": [{"title": "Conditional Image Synthesis With Auxiliary Classifier GANs"}, {"title": "Self-Attention Generative Adversarial Networks"}, {"title": "Spectral Normalization for Generative Adversarial Networks"}, {"title": "Large Scale GAN Training for High Fidelity Natural Image Synthesis"}, {"title": "A style-based generator architecture for generative adversarial networks"}, {"title": "Open Questions in Physics"}, {"title": "Not especially famous, long-open problems which anyone can understand"}, {"title": "Hilbert's Problems"}, {"title": "Smale's Problems"}, {"title": "Auto-Encoding Variational Bayes"}, {"title": "NICE: Non-linear Independent Components Estimation"}, {"title": "Density estimation using Real NVP"}, {"title": "Glow: Generative Flow with Invertible 1x1 Convolutions"}, {"title": "Normalizing Flows Tutorial"}, {"title": "Pixel Recurrent Neural Networks"}, {"title": "Conditional Image Generation with PixelCNN Decoders"}, {"title": "PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications"}, {"title": "WaveNet: A Generative Model for Raw Audio"}, {"title": "Progressive Growing of GANs for Improved Quality, Stability, and Variation"}, {"title": "Variational Inference with Normalizing Flows"}, {"title": "Parallel WaveNet: Fast High-Fidelity Speech Synthesis"}, {"title": "Brewer's conjecture and the feasibility of consistent, available, partition-tolerant web services"}, {"title": "Flow-GAN: Bridging implicit and prescribed learning in generative models"}, {"title": "Comparison of maximum likelihood and gan-based training of real nvps"}, {"title": "Is Generator Conditioning Causally Related to GAN Performance?"}, {"title": "Gradient-Based Learning Applied to Document Recognition"}, {"title": "Learning Multiple Layers of Features from Tiny Images"}, {"title": "An analysis of single-layer networks in unsupervised feature learning"}, {"title": "Deep Learning Face Attributes in the Wild"}, {"title": "ImageNet Large Scale Visual Recognition Challenge"}, {"title": "PSA"}, {"title": "Are GANs Created Equal? 
A Large-Scale Study"}, {"title": "Disconnected Manifold Learning for Generative Adversarial Networks"}, {"title": "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks"}, {"title": "Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks"}, {"title": "Improved training of wasserstein gans"}, {"title": "SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient"}, {"title": "MaskGAN: Better Text Generation via Filling in the ______"}, {"title": "Long short-term memory"}, {"title": "Geometric deep learning: going beyond Euclidean data"}, {"title": "NetGAN: Generating Graphs via Random Walks"}, {"title": "Synthesizing Audio with Generative Adversarial Networks"}, {"title": "GANSynth: Adversarial Neural Audio Synthesis"}, {"title": "OpenAI Five"}, {"title": "Gradient descent GAN optimization is locally stable"}, {"title": "Which Training Methods for GANs do actually Converge?"}, {"title": "Understanding GANs: the LQG Setting"}, {"title": "Global Convergence to the Equilibrium of GANs using Variational Inequalities"}, {"title": "The Mechanics of n-Player Differentiable Games"}, {"title": "Approximation and Convergence Properties of Generative Adversarial Learning"}, {"title": "The Inductive Bias of Restricted f-GANs"}, {"title": "On the Limitations of First-Order Approximation in GAN Dynamics"}, {"title": "The Loss Surface of Multilayer Networks"}, {"title": "GANGs: Generative Adversarial Network Games"}, {"title": "Beyond Local Nash Equilibria for Adversarial Networks"}, {"title": "An Online Learning Approach to Generative Adversarial Networks"}, {"title": "Improved Techniques for Training GANs"}, {"title": "GANs Trained by a Two Time-Scale Update Rule Converge to a Nash Equilibrium"}, {"title": "A Note on the Inception Score"}, {"title": "Towards GAN Benchmarks Which Require Generalization"}, {"title": "Multiscale structural similarity for image quality assessment"}, {"title": "On the Quantitative Analysis of Decoder-Based Generative Models"}, {"title": "Annealed importance sampling"}, {"title": "Geometry Score: A Method For Comparing Generative Adversarial Networks"}, {"title": "Assessing Generative Models via Precision and Recall"}, {"title": "Skill Rating for Generative Models"}, {"title": "Discriminator Rejection Sampling"}, {"title": "Revisiting Classifier Two-Sample Tests"}, {"title": "Parametric Adversarial Divergences are Good Task Losses for Generative Modeling"}, {"title": "Deconvolution and Checkerboard Artifacts"}, {"title": "Svcca: Singular vector canonical correlation analysis for deep learning dynamics and interpretability"}, {"title": "Deep reinforcement learning from human preferences"}, {"title": "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour"}, {"title": "Scaling sgd batch size to 32k for imagenet training"}, {"title": "Train longer, generalize better: closing the generalization gap in large batch training of neural networks"}, {"title": "Don't Decay the Learning Rate, Increase the Batch Size"}, {"title": "An Empirical Model of Large-Batch Training"}, {"title": "Science and research policy at the end of Moore’s law"}, {"title": "In-datacenter performance analysis of a tensor processing unit"}, {"title": "Improving GANs using optimal transport"}, {"title": "Large Scale Distributed Deep Networks"}, {"title": "Deep learning with Elastic Averaging SGD"}, {"title": "Staleness-aware Async-SGD for Distributed Deep Learning"}, {"title": "Faster Asynchronous SGD"}, {"title": "The Unusual Effectiveness of 
Averaging in GAN Training"}, {"title": "Intriguing properties of neural networks"}, {"title": "Adversarial examples from computational constraints"}, {"title": "Efficient noise-tolerant learning from statistical queries"}, {"title": "Adversarial examples for generative models"}, {"title": "Towards deep learning models resistant to adversarial attacks"}], "filename": "Open Questions about Generative Adversarial Networks.html", "id": "d88f7c435956af20b6a519faedfa41da"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Adversarially Robust Neural Style Transfer", "authors": ["Reiichiro Nakano"], "date_published": "2019-08-06", "abstract": " This article is part of a discussion of the Ilyas et al. paper “Adversarial examples are not bugs, they are features”. You can learn more in the main discussion article . ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00019.4", "text": "\n\n #rebuttal,\n\n .comment-info {\n\n background-color: hsl(54, 78%, 96%);\n\n border-left: solid hsl(54, 33%, 67%) 1px;\n\n padding: 1em;\n\n color: hsla(0, 0%, 0%, 0.67);\n\n }\n\n #header-info {\n\n margin-top: 0;\n\n margin-bottom: 1.5rem;\n\n display: grid;\n\n grid-template-columns: 65px max-content 1fr;\n\n grid-template-areas:\n\n \"icon explanation explanation\"\n\n \"icon back comment\";\n\n grid-column-gap: 1.5em;\n\n }\n\n #header-info .icon-multiple-pages {\n\n grid-area: icon;\n\n padding: 0.5em;\n\n content: url(images/multiple-pages.svg);\n\n }\n\n #header-info .explanation {\n\n grid-area: explanation;\n\n font-size: 85%;\n\n }\n\n #header-info .back {\n\n grid-area: back;\n\n }\n\n #header-info .back::before {\n\n content: \"←\";\n\n margin-right: 0.5em;\n\n }\n\n #header-info .comment {\n\n grid-area: comment;\n\n scroll-behavior: smooth;\n\n }\n\n #header-info .comment::before {\n\n content: \"↓\";\n\n margin-right: 0.5em;\n\n }\n\n #header-info a.back,\n\n #header-info a.comment {\n\n font-size: 80%;\n\n font-weight: 600;\n\n border-bottom: none;\n\n text-transform: uppercase;\n\n color: #2e6db7;\n\n display: block;\n\n margin-top: 0.25em;\n\n letter-spacing: 0.25px;\n\n }\n\n This article is part of a discussion of the Ilyas et al. paper\n\n *“Adversarial examples are not bugs, they are features”.*\n\n You can learn more in the\n\n [main discussion article](https://distill.pub/2019/advex-bugs-discussion/) .\n\n \n\n[Other Comments](https://distill.pub/2019/advex-bugs-discussion/#commentaries)\n\n A figure in Ilyas, et. al. that struck me as particularly\n\n interesting\n\n their\n\n tendency to learn similar non-robust features.\n\n \n\n![](./Adversarially Robust Neural Style Transfer_files/transferability.png)\n\n non-robust features.\n\n \n\n non-robust features in an image.\n\n \n\n Notice how far back VGG is compared to the other models.\n\n \n\n known to not work very well This phenomenon is discussed at length in [this\n\n transfer actually look more correct to humans!**\n\n make sense either.\n\n simple technique previously established in feature visualization.\n\n through the neural network.\n\n \n\n \n\n non-robust features.\n\n \n\nA quick experiment\n\n------------------\n\n Testing our hypothesis is fairly straightforward:\n\n Use an adversarially robust classifier for neural style transfer and see\n\n what happens.\n\n \n\n al. 
on their performance on neural style transfer.\n\n For comparison, I performed the same algorithm with a regular VGG-19\n\n  .\n\n \n\n Further details can be read in a footnote\n\n \n\n L-BFGS was used for optimization as it showed faster convergence\n\n over Adam.\n\n relu4\\_2relu4\\\\_2relu4\\_2.\n\n \n\n or observed in the accompanying Colaboratory notebook.\n\n \n\n The results of this experiment can be explored in the diagram below.\n\n \n\n #style-transfer-slider.juxtapose {\n\n max-height: 512px;\n\n max-width: 512px;\n\n }\n\n \n\n**Content image**\n\n* ![](./Adversarially Robust Neural Style Transfer_files/ben.jpg)\n\n* ![](./Adversarially Robust Neural Style Transfer_files/dancing.jpg)\n\n* ![](./Adversarially Robust Neural Style Transfer_files/tubingen.jpg)\n\n* ![](./Adversarially Robust Neural Style Transfer_files/stata.jpg)\n\n**Style image**\n\n* ![](./Adversarially Robust Neural Style Transfer_files/scream.jpg)\n\n* ![](./Adversarially Robust Neural Style Transfer_files/starrynight.jpg)\n\n* ![](./Adversarially Robust Neural Style Transfer_files/woman.jpg)\n\n* ![](./Adversarially Robust Neural Style Transfer_files/picasso.jpg)\n\n  Compare VGG or Robust\n\n ResNet\n\n'use strict';\n\n(function () {\n\n // Initialize slider\n\n var currentContent = 'ben';\n\n var currentStyle = 'scream';\n\n var currentLeft = 'nonrobust';\n\n var compareVGGCheck = document.getElementById(\"check-compare-vgg\");\n\n var styleTransferSliderDiv = document.getElementById(\"style-transfer-slider\");\n\n function refreshSlider() {\n\n while (styleTransferSliderDiv.firstChild) {\n\n styleTransferSliderDiv.removeChild(styleTransferSliderDiv.firstChild);\n\n }\n\n new juxtapose.JXSlider('#style-transfer-slider', [{\n\n src: imgPath1, // TODO: Might need to use absolute\\_url?\n\n label: currentLeft === 'nonrobust' ? 'Non-robust ResNet50' : 'VGG'\n\n }, {\n\n src: imgPath2,\n\n label: 'Robust ResNet50'\n\n }], {\n\n animate: true,\n\n showLabels: true,\n\n showCredits: false,\n\n startingPosition: \"50%\",\n\n makeResponsive: true\n\n });\n\n }\n\n refreshSlider();\n\n compareVGGCheck.onclick = function (evt) {\n\n currentLeft = evt.target.checked ? 'vgg' : 'nonrobust';\n\n refreshSlider();\n\n };\n\n // Initialize selector\n\n $(\"#content-select\").imagepicker({\n\n changed: function changed(oldVal, newVal, event) {\n\n currentContent = newVal;\n\n refreshSlider();\n\n }\n\n });\n\n $(\"#style-select\").imagepicker({\n\n changed: function changed(oldVal, newVal, event) {\n\n currentStyle = newVal;\n\n refreshSlider();\n\n }\n\n });\n\n})();\n\n Success!\n\n The robust ResNet shows drastic improvement over the regular ResNet.\n\n exactly the same!\n\n \n\n A more interesting comparison can be done between VGG-19 and the robust ResNet.\n\n At first glance, the robust ResNet’s outputs seem on par with VGG-19.\n\n Gaussian noise..\n\n \n\n![](./Adversarially Robust Neural Style Transfer_files/vgg_texture.jpg)\n\n![](./Adversarially Robust Neural Style Transfer_files/vgg_texture.jpg)\n\n Texture synthesized with VGG. \n\n*Mild artifacts.*\n\n![](./Adversarially Robust Neural Style Transfer_files/resnet_texture.jpg)\n\n![](./Adversarially Robust Neural Style Transfer_files/resnet_texture.jpg)\n\n Texture synthesized with robust ResNet. \n\n*Severe artifacts.*\n\n A comparison of artifacts between textures synthesized by VGG and ResNet.\n\n Interact by hovering around the images.\n\n This diagram was repurposed from\n\n by Odena, et. 
al.\n\n \n\n It is currently unclear exactly what causes these artifacts.\n\n One theory is that they are checkerboard artifacts\n\n caused by\n\n non-divisible kernel size and stride in the convolution layers.\n\n They could also be artifacts caused by the presence of max pooling layers\n\n in ResNet.\n\n problem that\n\n adversarial robustness solves in neural style transfer.\n\n \n\nVGG remains a mystery\n\n---------------------\n\n nets, it\n\n did not provide an explanation for this phenomenon.\n\n the box\n\n naturally\n\n more robust than other architectures.\n\n \n\n A few papers\n\n indeed show\n\n that VGG architectures are slightly more robust than ResNet.\n\n However, they also show that AlexNet, not known to work well\n\n for\n\n neural style transferAs shown by Dávid Komorowicz\n\n in\n\n this [blog post](https://dawars.me/neural-style-transfer-deep-learning/).\n\n , is\n\n *above* VGG in terms of this “natural robustness”.\n\n \n\n architectures fail at style transfer (or other similar algorithms\n\n \n\n optimization\n\n maximization works on robust classifiers *without*\n\n enforcing\n\n by\n\n previous work. In a recent chat with Chris\n\n Olah, he\n\n *without* these priors, just like style transfer!\n\n \n\n future\n\n work.\n\n \n\n To cite Ilyas et al.’s response, please cite their\n\n**Response Summary**: Very interesting\n\n results, highlighting the effect of non-robust features and the utility of\n\n robust models for downstream tasks. We’re excited to see what kind of impact\n\n robustly trained models will have in neural network art! We were also really\n\n intrigued by the mysteriousness of VGG in the context of style transfer\n\n. As such, we took a\n\n deeper dive which found some interesting links between robustness and style\n\n transfer that suggest that perhaps robustness does indeed play a role here. \n\n**Response**: These experiments are really cool! It is interesting that\n\n preventing the reliance of a model on non-robust features improves performance\n\n on style transfer, even without an explicit task-related objective (i.e. we\n\n didn’t train the networks to be better for style transfer). \n\n We also found the discussion of VGG as a “mysterious network” really\n\n performance more generally. Though not a complete answer, we made a couple of\n\n observations while investigating further: \n\n*Style transfer does work with AlexNet:* One wrinkle in the idea that\n\n the most naturally robust network — AlexNet is. However, based on our own\n\n testing, style transfer does seem to work with AlexNet out-of-the-box, as\n\n long as we use a few early layers in the network (in a similar manner to\n\n VGG): \n\n![](./Adversarially Robust Neural Style Transfer_files/alexnetworks.png)\n\n Style transfer using AlexNet, using conv\\_1 through conv\\_4.\n\n \n\n Observe that even though style transfer still works, there are checkerboard\n\n patterns emerging — this seems to be a similar phenomenon to the one noticed\n\n in the comment in the context of robust models.\n\n This might be another indication that these two phenomena (checkerboard\n\n patterns and style transfer working) are not as intertwined as previously\n\n thought.\n\n \n\n*From prediction robustness to layer robustness:* Another\n\n potential wrinkle here is that both AlexNet and VGG are not that\n\n much more robust than ResNets (for which style transfer completely fails),\n\n and yet seem to have dramatically better performance. 
To try to\n\n explain this, recall that style transfer is implemented as a minimization of a\n\n combined objective consisting of a style loss and a content loss. We found,\n\n however, that the network we use to compute the\n\n style loss is far more important\n\n than the one for the content loss. The following demo illustrates this — we can\n\n actually use a non-robust ResNet for the content loss and everything works just\n\n fine:\n\n![](./Adversarially Robust Neural Style Transfer_files/stylematters.png)\n\nStyle transfer seems to be rather\n\n invariant to the choice of content network used, and very sensitive\n\n to the style network used.\n\nTherefore, from now on, we use a fixed ResNet-50 for the content loss as a\n\n control, and only worry about the style loss. \n\nNow, note that the way that style loss works is by using the first few\n\n layers of the relevant network. Thus, perhaps it is not about the robustness of\n\n for style transfer? \n\n To test this hypothesis, we measure the robustness of a layer fff as:\n\n \n\nR(f)=Ex1∼D[maxx′∥f(x′)−f(x1)∥2]Ex1,x2∼D[∥f(x1)−f(x2)∥2]\n\n {\\mathbb{E}\\_{x\\_1, x\\_2 \\sim D}\\left[\\|f(x\\_1) - f(x\\_2)\\|\\_2\\right]}\n\n R(f)=Ex1​,x2​∼D​[∥f(x1​)−f(x2​)∥2​]Ex1​∼D​[maxx′​∥f(x′)−f(x1​)∥2​]​\n\n Essentially, this quantity tells us how much we can change the\n\n representations are between images in general. We’ve plotted this value for\n\n the first few layers in a couple of different networks below: \n\n![](./Adversarially Robust Neural Style Transfer_files/robustnesses.png)\n\nThe robustness R(f)R(f)R(f) of the first\n\n four layers of VGG16, AlexNet, and robust/standard ResNet-50\n\n trained on ImageNet.\n\n Here, it becomes clear that, the first few layers of VGG and AlexNet are\n\n actually almost as robust as the first few layers of the robust ResNet!\n\n This is perhaps a more convincing indication that robustness might have\n\n something to with VGG’s success in style transfer after all.\n\n \n\n Finally, suppose we restrict style transfer to only use a single layer of\n\n the network when computing the style lossUsually style transfer uses\n\n actually confers some style onto the image).. Again, the more\n\n robust layers seem to indeed work better for style transfer! Since all of the\n\n layers in the robust ResNet are robust, style transfer yields non-trivial\n\n results even using the last layer alone. Conversely, VGG and AlexNet seem to\n\n excel in the earlier layers (where they are non-trivially robust) but fail when\n\n using exclusively later (non-robust) layers: \n\n![](./Adversarially Robust Neural Style Transfer_files/styletransfer.png)\n\nStyle transfer using a single layer. The\n\n names of the layers and their robustness R(f)R(f)R(f) are printed below\n\n each style transfer result. We find that for both networks, the robust\n\n layers seem to work (for the robust ResNet, every layer is robust).\n\n Of course, there is much more work to be done here, but we are excited\n\n to see further work into understanding the role of both robustness and the VGG\n\n in network-based image manipulation. 
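The robustness measure R(f) defined above can be estimated directly from data. The sketch below is a minimal numpy approximation, not the authors' code: it assumes access to a layer `f` and an input-gradient helper `grad_f` (hypothetical names standing in for an autodiff framework), and it approximates the inner maximization with a few steps of projected gradient ascent inside an ℓ2 ball of radius `eps`; the ball, step count, and step size are illustrative choices rather than the settings used for the figures above.

```python
import numpy as np

def estimate_layer_robustness(f, grad_f, xs, eps=0.5, steps=20, lr=0.1, seed=0):
    """Approximate R(f): how far a bounded input perturbation can move a layer's
    representation (numerator), relative to how far apart the representations of
    two unrelated inputs are (denominator).  f(x) returns the layer's activation
    vector; grad_f(x, v) returns the gradient of <f(x), v> with respect to x
    (both are stand-ins for a real autodiff framework)."""
    rng = np.random.default_rng(seed)
    numerator, denominator = [], []
    for x1 in xs:
        fx1 = f(x1)
        # Projected gradient ascent on ||f(x') - f(x1)|| over the eps-ball around x1.
        x_adv = x1 + 1e-3 * rng.normal(size=x1.shape)
        for _ in range(steps):
            diff = f(x_adv) - fx1
            v = diff / (np.linalg.norm(diff) + 1e-12)
            x_adv = x_adv + lr * grad_f(x_adv, v)
            delta = x_adv - x1
            norm = np.linalg.norm(delta)
            if norm > eps:
                x_adv = x1 + delta * (eps / norm)   # project back onto the eps-ball
        numerator.append(np.linalg.norm(f(x_adv) - fx1))
        # Representation distance between two random, unrelated inputs.
        x2 = xs[rng.integers(len(xs))]
        denominator.append(np.linalg.norm(fx1 - f(x2)))
    return np.mean(numerator) / np.mean(denominator)
```

A large ratio means the layer's representation can be moved by a small perturbation almost as far as it differs between arbitrary images, i.e. the layer is not robust.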
\n\n", "bibliography_bib": [{"title": "Adversarial examples are not bugs, they are features"}, {"title": "Very deep convolutional networks for large-scale image recognition"}, {"title": "A Neural Algorithm of Artistic Style"}, {"title": "Differentiable Image Parameterizations"}, {"title": "Feature Visualization"}, {"title": "Learning Perceptually-Aligned Representations via Adversarial Robustness"}, {"title": "On the limited memory BFGS method for large scale optimization"}, {"title": "Deconvolution and checkerboard artifacts"}, {"title": "Geodesics of learned representations"}, {"title": "Batch Normalization is a Cause of Adversarial Vulnerability"}, {"title": "Benchmarking Neural Network Robustness to Common Corruptions and Perturbations"}, {"title": "Is Robustness the Cost of Accuracy? - A Comprehensive Study on the Robustness of 18 Deep Image Classification Models"}, {"title": "ImageNet Classification with Deep Convolutional Neural Networks"}, {"title": "Neural Style transfer with Deep Learning"}, {"title": "The Building Blocks of Interpretability"}], "filename": "Adversarially Robust Neural Style Transfer.html", "id": "3da7adb62a9b13c632b54d6e3a83da1d"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Visualizing Neural Networks with the Grand Tour", "authors": ["Mingwei Li", "Zhenge Zhao", "Carlos Scheidegger"], "date_published": "2020-03-16", "abstract": " The Grand Tour is a classic visualization technique for high-dimensional point clouds that projects a high-dimensional dataset into two dimensions. Over time, the Grand Tour smoothly animates its projection so that every possible view of the dataset is (eventually) presented to the viewer. Unlike modern nonlinear projection methods such as t-SNE and UMAP, the Grand Tour is fundamentally a linear method. In this article, we show how to leverage the linearity of the Grand Tour to enable a number of capabilities that are uniquely useful to visualize the behavior of neural networks. Concretely, we present three use cases of interest: visualizing the training process as the network weights change, visualizing the layer-to-layer behavior as the data goes through the network and visualizing both how adversarial examples are crafted and how they fool a neural network. 
", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00025", "text": "\n\n a high-dimensional dataset into two dimensions.\n\n Over time, the Grand Tour smoothly animates its projection so that \n\nevery possible view of the dataset is (eventually) presented to the \n\nviewer.\n\n method.\n\n In this article, we show how to leverage the linearity of the Grand \n\nTour to enable a number of capabilities that are uniquely useful to \n\nvisualize the behavior of neural networks.\n\n \n\n Concretely, we present three use cases of interest: visualizing the \n\ntraining process as the network weights change, visualizing the \n\nlayer-to-layer behavior as the data goes through the network and \n\nIntroduction\n\n------------\n\n Deep neural networks often achieve best-in-class performance in \n\nsupervised learning contests such as the ImageNet Large Scale Visual \n\nRecognition Challenge (ILSVRC).\n\n \n\n \n\n In this article, we present a method to visualize the responses of a \n\nneural network which leverages properties of deep neural networks and \n\nproperties of the *Grand Tour*.\n\n As we will show, this data-visual correspondence is central to the \n\nmethod we present, especially when compared to other non-linear \n\nprojection methods like UMAP and t-SNE.\n\n \n\n These kinds of visualizations are useful to elucidate the activation \n\npatterns of a neural network for a single example, but they might offer \n\nless insight about the relationship between different examples, \n\ndifferent states of the network as it’s being trained, or how the data \n\nin the example flows through the different layers of a single network.\n\n \n\n Therefore, we instead aim to enable visualizations of the *context around*\n\n our objects of interest: what is the difference between the present \n\ntraining epoch and the next one? How does the classification of a \n\nnetwork converge (or diverge) as the image is fed through the network?\n\n Linear methods are attractive because they are particularly easy to \n\nreason about.\n\n The Grand Tour works by generating a random, smoothly changing \n\nrotation of the dataset, and then projecting the data to the \n\ntwo-dimensional screen: both are linear processes.\n\n Although deep neural networks are clearly not linear processes, they \n\noften confine their nonlinearity to a small set of operations, enabling \n\nus to still reason about their behavior.\n\n Our proposed method better preserves context by providing more\n\n consistency: it should be possible to know *how the visualization\n\n would change, if the data had been different in a particular\n\n way*.\n\nWorking Examples\n\n----------------\n\n To illustrate the technique we will present, we trained deep neural\n\n network models (DNNs) with 3 common image classification datasets:\n\n MNIST\n\n \n\n MNIST contains grayscale images of 10 handwritten digits\n\n Image credit to \n\n,\n\n fashion-MNIST\n\n \n\n Fashion-MNIST contains grayscale images of 10 types of fashion items:\n\n and CIFAR-10\n\n \n\n CIFAR-10 contains RGB images of 10 classes of objects\n\n Image credit to \n\n. \n\n While our architecture is simpler and smaller than current DNNs, it’s \n\nstill indicative of modern networks, and is complex enough to \n\ndemonstrate both our proposed techniques and shortcomings of typical \n\napproaches.\n\n The following figure presents a simple functional diagram of the \n\nneural network we will use throughout the article. 
The neural network is\n\n a sequence of linear (both convolutional\n\n A convolution calculates weighted sums of regions in the input. \n\n For example\n\n ![](Visualizing%20Neural%20Networks%20with%20the%20Grand%20Tour_files/conv.gif)\n\n and fully-connected\n\n A fully-connected layer computes output neurons as weighted sum of \n\ninput neurons. In matrix form, it is a matrix that linearly transforms \n\nthe input vector into the output vector.\n\n ), max-pooling, and ReLU\n\n Image credit to \n\n layers, culminating in a softmax\n\n layer.\n\n \n\nNeural network opened. The colored blocks are building-block\n\n functions (i.e. neural network layers), the gray-scale heatmaps are \n\neither the input image or intermediate activation vectors after some \n\nlayers.\n\n Even though neural networks are capable of incredible feats of \n\nclassification, deep down, they really are just pipelines of relatively \n\nsimple functions.\n\n For images, the input is a 2D array of scalar values for gray scale \n\nimages or RGB triples for colored images.\n\n -dimensional vector.\n\n Similarly, the intermediate values after any one of the functions in \n\ncomposition, or activations of neurons after a layer, can also be seen \n\nas vectors in Rn\\mathbb{R}^nRn, where nnn\n\n is the number of neurons in the layer. \n\n The softmax, for example, can be seen as a 10-vector whose values are \n\npositive real numbers that sum up to 1.\n\n This vector view of data in neural network not only allows us \n\nrepresent complex data in a mathematically compact form, but also hints \n\nus on how to visualize them in a better way.\n\n Most of the simple functions fall into two categories: they are either\n\n linear transformations of their inputs (like fully-connected layers or \n\nconvolutional layers), or relatively simple non-linear functions that \n\nwork component-wise (like sigmoid activations\n\n Image credit to \n\n \n\n or ReLU activations).\n\n Some operations, notably max-pooling\n\n Max-pooling calculates maximum of a region in the input. For example\n\n \n\n The above figure helps us look at a single image at a time; however, \n\nit does not provide much context to understand the relationship between \n\nlayers, between different examples, or between different class labels. \n\nFor that, researchers often turn to more sophisticated visualizations.\n\nUsing Visualization to Understand DNNs\n\n--------------------------------------\n\n Let’s start by considering the problem of visualizing the training \n\nprocess of a DNN.\n\n When training neural networks, we optimize parameters in the function \n\nto minimize a scalar-valued loss function, typically through some form \n\nof gradient descent.\n\n We want the loss to keep decreasing, so we monitor the whole history \n\nof training and testing losses over rounds of training (or “epochs”), to\n\n make sure that the loss decreases over time. \n\n The following figure shows a line plot of the training loss for the \n\nMNIST classifier.\n\n014215075990.10.512Training EpochLoss\n\n Although its general trend meets our expectation as the loss steadily \n\ndecreases, we see something strange around epochs 14 and 21: the curve \n\ngoes almost flat before starting to drop again.\n\n What happened? What caused that?\n\n014215075990.050.10.5123Training EpochLoss\n\n loss like above, we see that the two drops were caused by the classses 1\n\n and 7; the model learns different classes at very different times in \n\nthe training process. 
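Breaking the aggregate loss curve into per-class curves, as in the figure above, only requires the per-example losses recorded at each epoch. The following is a small, hypothetical numpy sketch: `probs_per_epoch` and `labels` are assumed names for the recorded softmax outputs and the ground-truth classes, not arrays defined in the article.

```python
import numpy as np

def per_class_loss(probs_per_epoch, labels, num_classes=10):
    """probs_per_epoch: (epochs, examples, classes) softmax outputs recorded
    after each training epoch; labels: (examples,) true class indices.
    Returns an (epochs, classes) array of mean cross-entropy per class,
    giving each class its own loss curve instead of one aggregate curve."""
    epochs, n, _ = probs_per_epoch.shape
    # Probability assigned to the true class for every (epoch, example) pair.
    true_probs = probs_per_epoch[:, np.arange(n), labels]
    losses = -np.log(true_probs + 1e-12)
    curves = np.zeros((epochs, num_classes))
    for c in range(num_classes):
        curves[:, c] = losses[:, labels == c].mean(axis=1)
    return curves
```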
\n\n Although the network learns to recognize digits 0, 2, 3, 4, 5, 6, 8 \n\nand 9 early on, it is not until epoch 14 that it starts successfully \n\nrecognizing digit 1, or until epoch 21 that it recognizes digit 7.\n\n If we knew ahead of time to be looking for class-specific error rates,\n\n then this chart works well. But what if we didn’t really know what to \n\nlook for?\n\n examples at once, looking\n\n to find patterns like class-specific behavior, and other patterns \n\nbesides.\n\n Should there be only two neurons in that layer, a simple \n\ntwo-dimensional scatter plot would work.\n\n However, the points in the softmax layer for our example datasets are \n\n10 dimensional (and in larger-scale classification problems this number \n\ncan be much larger).\n\n We need to either show two dimensions at a time (which does not scale \n\nwell as the number of possible charts grows quadratically),\n\n### The State-of-the-art Dimensionality Reduction is Non-linear\n\n Modern dimensionality reduction techniques such as t-SNE and UMAP are \n\ncapable of impressive feats of summarization, providing two-dimensional \n\nimages where similar points tend to be clustered together very \n\neffectively.\n\n However, these methods are not particularly good to understand the \n\nbehavior of neuron activations at a fine scale.\n\n Consider the aforementioned intriguing feature about the different \n\nlearning rate that the MNIST classifier has on digit 1 and 7: the \n\nnetwork did not learn to recognize digit 1 until epoch 14, digit 7 until\n\n epoch 21.\n\n We compute t-SNE, Dynamic t-SNE,\n\n and UMAP projections of the epochs where the phenomenon we described \n\nhappens.\n\n Consider now the task of identifying this class-specific behavior \n\nduring training. As a reminder, in this case, the strange behavior \n\nhappens with digits 1 and 7, around epochs 14 and 21 respectively.\n\n While the behavior is not particularly subtle&emdash;digit goes \n\nfrom misclassified to correctly classified&emdash; it is quite hard \n\nto notice it in any of the plots below. \n\n Only on careful inspection we can notice that (for example) in the \n\nUMAP plot, the digit 1 which clustered in the bottom in epoch 13 becomes\n\n a new tentacle-like feature in epoch 14. \n\n // let sm1 = createSmallMultiple('#smallmultiple1', \n\n // [13,14,15, 20,21,22], ['t-SNE', 'Dynamic t-SNE', 'UMAP'], \n\n // 'mnist', true, highlight\\_digits);\n\n let sm1 = createSmallMultiple('#smallmultiple1', \n\n [13,14,15, 20,21,22], ['t-SNE', 'Dynamic t-SNE', 'UMAP'], \n\n 'mnist', true);\n\nSoftmax activations of the MNIST classifier with non-linear \n\ndimensionality reduction. 
Use the buttons on the right to highlight \n\ndigits 1 and 7 in the plot, or drag rectangles around the charts to \n\nselect particular point subsets to highlight in the other charts.\n\n One reason that non-linear embeddings fail in elucidating this \n\nphenomenon is that, for the particular change in the data, the fail the \n\nprinciple of *data-visual correspondence* .\n\n More concretely, the principle states that specific visualization tasks\n\n should be modeled as functions that change the data; the visualization \n\nsends this change from data to visuals, and\n\n we can study the extent to which the visualization changes are easily \n\nperceptible.\n\n Ideally, we want the changes in data and visualization to *match in magnitude*:\n\n a barely noticeable change in visualization should be due to the \n\nsmallest possible change in data, and a salient change in visualization \n\nshould reflect a significant one in data.\n\n points in the visualization move dramatically.\n\n For both UMAP and t-SNE, the position of each single point depends \n\nnon-trivially on the whole data distribution in such embedding \n\nalgorithms.\n\n This property is not ideal for visualization because it fails the \n\n Non-linear embeddings that have non-convex objectives also tend to be \n\nsensitive to initial conditions.\n\n For example, in MNIST, although the neural network starts to stabilize\n\n on epoch 30, t-SNE and UMAP still generate quite different projections \n\nbetween epochs 30, 31 and 32 (in fact, all the way to 99).\n\n Temporal regularization techniques (such as Dynamic t-SNE) mitigate \n\nthese consistency issues, but still suffer from other interpretability \n\nissues. \n\n Now, let’s consider another task, that of identifying classes which \n\nthe neural network tends to confuse.\n\n For this example, we will use the Fashion-MNIST dataset and \n\nclassifier, and consider the confusion among sandals, sneakers and ankle\n\n boots.\n\n If we know ahead of time that these three classes are likely to \n\nconfuse the classifier, then we can directly design an appropriate \n\nlinear projection, as can be seen in the last row of the following \n\nfigure (we found this particular projection using both the Grand Tour \n\nand the direct manipulation technique we later describe). The pattern in\n\n this case is quite salient, forming a triangle.\n\n T-SNE, in contrast, incorrectly separates the class clusters (possibly\n\n because of an inappropriately-chosen hyperparameter).\n\n UMAP successfully isolates the three classes, but even in this case \n\nit’s not possible to distinguish between three-way confusion for the \n\nclassifier in epochs 5 and 10 (portrayed in a linear method by the \n\npresence of points near the center of the triangle), and multiple \n\ntwo-way confusions in later epochs (evidences by an “empty” center).\n\n sm3 = createSmallMultiple('#smallmultiple3', \n\n [2,5,10,20,50,99], ['t-SNE', 'UMAP', 'Linear'], \n\n 'fashion-mnist', true, \n\n highlight\\_shoes\\_button, \n\n highlight\\_shoes,\n\n );\n\n \n\nHighlight shoes\n\nThree-way confusion in fashion-MNIST. 
Notice that in \n\ncontrast to non-linear methods, a carefully-constructed linear \n\nprojection can offer a better visualization of the classifier behavior.\n\nLinear Methods to the Rescue\n\n----------------------------\n\n When given the chance, then, we should prefer methods for which \n\nchanges in the data produce predictable, visually salient changes in the\n\n result, and linear dimensionality reductions often have this property.\n\n Here, we revisit the linear projections described above in an \n\ninterface where the user can easily navigate between different training \n\nepochs.\n\n In addition, we introduce another useful capability which is only \n\navailable to linear methods, that of direct manipulation.\n\n space will be projected to.\n\n In the context of projecting the final classification layer, this is \n\nespecially simple to interpret: they are the destinations of an input \n\nthat is classified with 100% confidence to any one particular class.\n\n If we provide the user with the ability to change these vectors by \n\ndragging around user-interface handles, then users can intuitively set \n\nup new linear projections.\n\n \n\n This setup provides additional nice properties that explain the \n\nsalient patterns in the previous illustrations.\n\n For example, because projections are linear and the coefficients of \n\nvectors in the classification layer sum to one, classification outputs \n\nthat are halfway confident between two classes are projected to vectors \n\nthat are halfway between the class handles.\n\n From\n\n this linear projection, we can easily identify the learning of \n\n digit 1 on \n\n epoch 14 and \n\n digit 7 on \n\n epoch 21.\n\n This linear projection clearly shows model’s confusion among\n\n sandals,\n\n sneakers, and\n\n ankle boots.\n\n Similarly, this projection shows the true three-way confusion about\n\n pullovers,\n\n coats, and\n\n shirts.\n\n (The shirts are also get confused with \n\n t-shirts/tops. )\n\n Both projections are found by direct manipulations.\n\n \n\n Examples falling between classes indicate that the model has trouble \n\ndistinguishing the two, such as sandals vs. sneakers, and sneakers vs. \n\nankle boot classes. \n\n Note, however, that this does not happen as much for sandals vs. ankle\n\n boots: not many examples fall between these two classes. \n\n Moreover, most data points are projected close to the edge of the \n\ntriangle. \n\n This tells us that most confusions happen between two out of the three\n\n classes, they are really two-way confusions.\n\n Within the same dataset, we can also see pullovers, coats and shirts \n\nfilling a triangular *plane*.\n\n This is different from the sandal-sneaker-ankle-boot case, as examples\n\n not only fall on the boundary of a triangle, but also in its interior: a\n\n true three-way confusion. 
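Because the projection is linear and each softmax output sums to one, the picture above is literally a weighted average of the class-handle positions. The sketch below illustrates this with made-up handle coordinates (ten anchors on a circle); only the matrix product is the essential part.

```python
import numpy as np

# Hypothetical 2D positions for the ten class handles, arranged on a circle.
angles = 2 * np.pi * np.arange(10) / 10
handles = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # shape (10, 2)

def project_softmax(softmax_outputs, handles):
    """Project (n, 10) softmax vectors to 2D.  Because each row sums to one,
    a point that is 50/50 between two classes lands exactly halfway between
    their handles, which is what makes two-way confusions visually salient."""
    return softmax_outputs @ handles                            # shape (n, 2)

# Example: a point confused between class 3 and class 7 lands at the midpoint
# of handles 3 and 7.
p = np.zeros(10)
p[3] = p[7] = 0.5
print(project_softmax(p[None, :], handles))
```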
\n\n Similarly, in the CIFAR-10 dataset we can see confusion between dogs \n\nand cats, airplanes and ships.\n\n The mixing pattern in CIFAR-10 is not as clear as in fashion-MNIST, \n\nbecause many more examples are misclassified.\n\n This linear projection clearly shows model’s confusion between\n\n cats and\n\n dogs.\n\n Similarly, this projection shows the confusion about\n\n airplanes and\n\n ships.\n\n Both projections are found by direct manipulations.\n\nThe Grand Tour\n\n--------------\n\n In the previous section, we took advantage of the fact that we knew \n\nwhich classes to visualize.\n\n That meant it was easy to design linear projections for the particular\n\n tasks at hand.\n\n But what if we don’t know ahead of time which projection to choose \n\nfrom, because we don’t quite know what to look for?\n\n Principal Component Analysis (PCA) is the quintessential linear \n\ndimensionality reduction method,\n\n choosing to project the data so as to preserve the most variance \n\npossible. \n\n However, the distribution of data in softmax layers often has similar \n\nvariance along many axis directions, because each axis concentrates a \n\nsimilar number of examples around the class vector.We\n\n are assuming a class-balanced training dataset. Nevertheless, if the \n\ntraining dataset is not balanced, PCA will prefer dimensions with more \n\nexamples, which might not be help much either.\n\n As a result, even though PCA projections are interpretable and \n\nconsistent through training epochs, the first two principal components \n\nof softmax activations are not substantially better than the third.\n\n So which of them should we choose?\n\n Instead of PCA, we propose to visualize this data by smoothly \n\nanimating random projections, using a technique called the Grand Tour.\n\nStarting with a random velocity, it smoothly rotates data points around \n\nthe origin in high dimensional space, and then projects it down to 2D \n\nfor display. \n\nHere are some examples of how Grand Tour acts on some (low-dimensional) \n\nobjects:\n\n* On a square, the Grand Tour rotates it with a constant angular velocity.\n\nGrand tours of a square, a cube and a tesseract\n\n### The Grand Tour of the Softmax Layer\n\n We first look at the Grand Tour of the softmax layer. \n\n The Grand Tour of softmax layer in the last (99th) epoch, with \n\n MNIST, \n\n fashion-MNIST or \n\n CIFAR-10 dataset.\n\n The Grand Tour of the softmax layer lets us qualitatively assess the \n\nperformance of our model.\n\n In the particular case of this article, since we used comparable \n\narchitectures for three datasets, this also allows us to gauge the \n\nrelative difficulty of classifying each dataset. \n\n We can see that data points are most confidently classified for the \n\nMNIST dataset, where the digits are close to one of the ten corners of \n\nthe softmax space. For Fashion-MNIST or CIFAR-10, the separation is not \n\nas clean, and more points appear *inside* the volume.\n\n### The Grand Tour of Training Dynamics\n\n Linear projection methods naturally give a formulation that is \n\nindependent of the input points, allowing us to keep the projection \n\nfixed while the\n\n data changes.\n\n To recap our working example, we trained each of the neural networks \n\nfor 99 epochs and recorded the entire history of neuron activations on a\n\n subset of training and testing examples. 
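To make the Grand Tour itself concrete: one simple way to realize a smoothly changing random rotation is to pick a fixed random skew-symmetric matrix as an angular velocity and advance the accumulated rotation by its matrix exponential at every frame. This is only an illustrative sketch of the idea, not necessarily the exact parametrization used for the figures in this article.

```python
import numpy as np
from scipy.linalg import expm

def grand_tour(data, steps=100, dt=0.02, seed=0):
    """Yield a sequence of 2D projections of `data` (shape (n, d)).
    A fixed random skew-symmetric matrix B acts as an angular velocity:
    each step multiplies the accumulated rotation by expm(dt * B), which
    stays orthogonal, and the first two coordinates are shown."""
    n, d = data.shape
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(d, d))
    B = A - A.T                      # skew-symmetric, so expm(dt * B) is a rotation
    rotation = np.eye(d)
    step_rot = expm(dt * B)
    for _ in range(steps):
        rotation = rotation @ step_rot
        yield (data @ rotation)[:, :2]
```

Each yielded array can be drawn as a scatter plot; because the map from data to screen is a rotation followed by dropping coordinates, small changes in the data produce correspondingly small changes on screen.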
We can use the Grand Tour, \n\nthen, to visualize the actual training process of these networks.\n\n In the beginning when the neural networks are randomly initialized, \n\nall examples are placed around the center of the softmax space, with \n\nequal weights to each class. \n\n Through training, examples move to class vectors in the softmax space.\n\n The Grand Tour also lets us\n\n compare visualizations of the training and testing data, giving us a \n\nqualitative assessment of over-fitting. \n\n In the MNIST dataset, the trajectory of testing images through \n\ntraining is consistent with the training set. \n\n Data points went directly toward the corner of its true class and all \n\nclasses are stabilized after about 50 epochs.\n\n On the other hand, in CIFAR-10 there is an *inconsistency* \n\nbetween the training and testing sets. Images from the testing set keep \n\noscillating while most images from training converges to the \n\ncorresponding class corner. \n\n In epoch 99, we can clearly see a difference in distribution between \n\nthese two sets.\n\n This signals that the model overfits the training set and thus does \n\nnot generalize well to the testing set. \n\n With this view of CIFAR-10 , \n\n the color of points are more mixed in testing (right) than training \n\n(left) set, showing an over-fitting in the training process.\n\n Compare \n\n CIFAR-10 \n\n with \n\n MNIST\n\n or fashion-MNIST, \n\n where there is less difference between training and testing sets.\n\n### The Grand Tour of Layer Dynamics\n\n Given the presented techniques of the Grand Tour and direct \n\nmanipulations on the axes, we can in theory visualize and manipulate any\n\n intermediate layer of a neural network by itself. Nevertheless, this \n\nis not a very satisfying approach, for two reasons:\n\n \n\n* In the same way that we’ve kept the projection fixed as the \n\ntraining data changed, we would like to “keep the projection fixed”, as \n\nthe data moves through the layers in the neural network. However, this \n\nis not straightforward. For example, different layers in a neural \n\nnetwork have different dimensions. How do we connect rotations of one \n\nlayer to rotations of the other?\n\n* The class “axis handles” in the softmax layer convenient, but \n\nthat’s only practical when the dimensionality of the layer is relatively\n\n small.\n\n With hundreds of dimensions, for example, there would be too many \n\naxis handles to naturally interact with.\n\n In addition, hidden layers do not have as clear semantics as the \n\nsoftmax layer, so manipulating them would not be as intuitive.\n\n To address the first problem, we will need to pay closer attention to \n\nthe way in which layers transform the data that they are given. \n\n To see how a linear transformation can be visualized in a particularly\n\n ineffective way, consider the following (very simple) weights \n\n A=[−1,00,−1]\n\n A = \\begin{bmatrix}\n\n -1, 0 \\\\\n\n 0, -1\n\n \\end{bmatrix}\n\n A=[−1,00,−1​]\n\n Imagine that we wish to visualize the behavior of network as the data \n\n xt=(1−t)⋅x0+t⋅x1=(1−2t)⋅x0\n\n x\\_t = (1-t) \\cdot x\\_0 + t \\cdot x\\_1 = (1-2t) \\cdot x\\_0 \n\n xt​=(1−t)⋅x0​+t⋅x1​=(1−2t)⋅x0​\n\n for t∈[0,1].t \\in [0,1].t∈[0,1].\n\n Effectively, this strategy reuses the linear projection coefficients \n\nfrom one layer to the next. 
This is a natural thought, since they have \n\nthe same dimension.\n\n However, notice the following: the transformation given by A is a \n\n if only that transformation operated on the negative values of the \n\nentries.\n\n In addition, since the Grand Tour has a rotation itself built-in, for \n\n into account.\n\n In effect, the naive interpolation fails the principle of data-visual \n\ncorrespondence: a simple change in data (negation in 2D/180 degree \n\nrotation) results in a drastic change in visualization (all points cross\n\n the origin).\n\n This observation points to a more general strategy: when designing a \n\nvisualization, we should be as explicit as possible about which parts of\n\n the input (or process) we seek to capture in our visualizations.\n\n We should seek to explicitly articulate what are purely \n\nrepresentational artifacts that we should discard, and what are the real\n\n features a visualization we should *distill* from the \n\nrepresentation.\n\n Here, we claim that rotational factors in linear transformations of \n\nneural networks are significantly less important than other factors such\n\n as scalings and nonlinearities.\n\n As we will show, the Grand Tour is particularly attractive in this \n\ncase because it is can be made to be invariant to rotations in data.\n\n As a result, the rotational components in the linear transformations \n\nof a neural network will be explicitly made invisible.\n\n - a simple (coordinate-wise) scaling. \n\n This is explicitly saying that any linear operation (whose matrix is \n\nrepresented in standard bases) is a scaling operation with appropriately\n\n chosen orthonormal bases on both sides.\n\n So the Grand Tour provides a natural, elegant and computationally \n\n layers are also linear. One can instantly see that by forming the \n\nlinear transformations between flattened feature maps, or by taking the \n\ncirculant structure of convolutional layers directly into account\n\n (For the following portion, we reduce the number of data points to 500\n\n and epochs to 50, in order to reduce the amount of data transmitted in a\n\n web-based demonstration.)\n\n With the linear algebra structure at hand, now we are able to trace \n\nbehaviors and patterns from the softmax back to previous layers.\n\n In fashion-MNIST, for example, we observe a separation of shoes \n\n(sandals, sneakers and ankle boots as a group) from all other classes in\n\n the softmax layer. \n\n Tracing it back to earlier layers, we can see that this separation \n\nhappened as early as layer 5:\n\n### The Grand Tour of Adversarial Dynamics\n\n as they are processed by a neural network.\n\n For this illustration, we use the MNIST dataset, and we adversarially \n\nadd perturbations to 89 digit 8s to fool the network into thinking they \n\nare 0s.\n\n Previously, we either animated the training dynamics or the layer \n\ndynamics.\n\n We fix a well-trained neural network, and visualize the training \n\nprocess of adversarial examples, since they are often themselves \n\ngenerated by an optimization process. 
Here, we used the Fast Gradient \n\nSign method.\n\n Again, because the Grand Tour is a linear method, the change in the \n\npositions of the adversarial examples over time can be faithfully \n\nattributed to changes in how the neural network perceives the images, \n\nrather than potential artifacts of the visualization.\n\n Let us examine how adversarial examples evolved to fool the network:\n\n From this view of softmax, we can see how \n\n adversarial examples \n\n evolved from 8s \n\n into 0s.\n\n Through this adversarial training, the network eventually claims, with\n\n high confidence, that the inputs given are all 0s.\n\n If we stay in the softmax layer and slide though the adversarial \n\ntraining steps in the plot, we can see adversarial examples move from a \n\nhigh score for class 8 to a high score for class 0.\n\n Although all adversarial examples are classified as the target class \n\n(digit 0s) eventually, some of them detoured somewhere close to the \n\ncentroid of the space (around the 25th epoch) and then moved towards the\n\n target. \n\n Comparing the actual images of the two groups, we see those that those\n\n “detouring” images tend to be noisier.\n\n More interesting, however, is what happens in the intermediate layers.\n\nDiscussion\n\n----------\n\n### Limitations of the Grand Tour\n\n Early on, we compared several state-of-the-art dimensionality \n\nreduction techniques with the Grand Tour, showing that non-linear \n\nmethods do not have as many desirable properties as the Grand Tour for \n\nunderstanding the behavior of neural networks. \n\n However, the state-of-the-art non-linear methods come with their own \n\nstrength. \n\n Whenever geometry is concerned, like the case of understanding \n\nmulti-way confusions in the softmax layer, linear methods are more \n\ninterpretable because they preserve certain geometrical structures of \n\ndata in the projection. \n\n When topology is the main focus, such as when we want to cluster the \n\ndata or we need dimensionality reduction for downstream models that are \n\nless sensitive to geometry, we might choose non-linear methods such as \n\nUMAP or t-SNE for they have more freedom in projecting the data, and \n\nwill generally make better use of the fewer dimensions available. \n\n### The Power of Animation and Direct Manipulation\n\n When comparing linear projections with non-linear dimensionality \n\nreductions, we used small multiples to contrast training epochs and \n\ndimensionality reduction methods.\n\n The Grand Tour, on the other hand, uses a single animated view.\n\n When comparing small multiples and animations, there is no general \n\nconsensus on which one is better than the other in the literature, \n\naside. \n\n between small multiples and animated plots.\n\n Regardless of these concerns, in our scenarios, the use of animation \n\ncomes naturally from the direct manipulation and the existence of a \n\ncontinuum of rotations for the Grand Tour to operate in.\n\n### Non-sequential Models\n\n In our work we have used models that are purely “sequential”, in the \n\nsense that the layers can be put in numerical ordering, and that the \n\nactivations for\n\n \n\n With our technique, one can visualize neuron activations on each such \n\nbranch, but additional research is required to incorporate multiple \n\nbranches directly.\n\n### Scaling to Larger Models\n\n Modern architectures are also wide. 
Especially when convolutional \n\nlayers are concerned, one could run into issues with scalability if we \n\nsee such layers as a large sparse matrix acting on flattened \n\nmulti-channel images.\n\n For the sake of simplicity, in this article we brute-forced the \n\ncomputation of the alignment of such convolutional layers by writing out\n\n their explicit matrix representation. \n\n However, the singular value decomposition of multi-channel 2D \n\nfunction toggle(event, id){\n\n let caller = d3.select(event.target); //DOM that received the event\n\n let callerIsActive = caller.classed('clickable-active');\n\n let selection = d3.select(id); //DOM to be toggled\n\n let isHidden = selection.classed('hidden');\n\n selection.classed('hidden', !isHidden);\n\n}\n\nTechnical Details\n\n------------------\n\nThis section presents the technical details necessary to implement the \n\ndirect manipulation of axis handles and data points, as well as how to \n\nimplement the projection consistency technique for layer transitions.\n\n### Notation\n\n In this section, our notational convention is that data points are \n\nrepresented as row vectors.\n\n An entire dataset is laid out as a matrix, where each row is a data \n\npoint, and each column represents a different feature/dimension.\n\n As a result, when a linear transformation is applied to the data, the \n\nrow vectors (and the data matrix overall) are left-multiplied by the \n\ntransformation matrix.\n\n This has a side benefit that when applying matrix multiplications in a\n\n chain, the formula reads from left to right and aligns with a \n\ncommutative diagram.\n\nX↦MY\n\n X \n\n \\overset{M}{\\mapsto}\n\n Y\n\nX↦M​Y\n\nX↦U↦Σ↦VTY\n\n X \n\n \\overset{U}{\\mapsto} \n\n \\overset{\\Sigma}{\\mapsto} \n\n \\overset{V^T}{\\mapsto} Y\n\nX↦U​↦Σ​↦VT​Y\n\nnicely aligns with the formula.\n\n### \n\nDirect Manipulation\n\n The direct manipulations we presented earlier provide explicit control\n\n over the possible projections for the data points.\n\n We provide two modes: directly manipulating class axes (the “axis \n\nmode”), or directly manipulating a group of data points through their \n\ncentroid (the “data point mode”).\n\n we may prefer one mode than the other.\n\n \n\n We will see that the axis mode is a special case of data point mode, \n\nbecause we can view an axis handle as a particular “fictitious” point in\n\n the dataset.\n\n Because of its simplicity, we will first introduce the axis mode.\n\n#### \n\n The Axis Mode\n\n The implied semantics of direct manipulation is that when a user \n\ndrags an UI element (in this case, an axis handle), they are signaling \n\nto the system that they wished that the corresponding\n\n data point had been projected to the location where the UI element \n\nwas dropped, rather than where it was dragged from.\n\n In our case the overall projection is a rotation (originally \n\ndetermined by the Grand Tour), and an arbitrary user manipulation might \n\nnot necessarily generate a new projection that is also a rotation. 
Our \n\ngoal, then, is to find a new rotation which satisfies the user request \n\nand is close to the previous state of the Grand Tour projection, so that\n\n the resulting state satisfies the user request.\n\n \n\n \n\nei↦GTei~↦π2(xi,yi)\n\n e\\_i \\overset{GT}{\\mapsto} \\tilde{e\\_i} \\overset{\\pi\\_2}{\\mapsto} (x\\_i, y\\_i)\n\nei​↦GT​ei​~​↦π2​​(xi​,yi​)\n\n The coordinate of the handle becomes:\n\n Recall that the convention is that vectors are in row form and \n\nlinear transformations are matrices that are multiplied on the right.\n\n. \n\n GT~←GT\\widetilde{GT} \\leftarrow GTGT\n\n←GT\n\nGT~i,1←GTi,1+dx\\widetilde{GT}\\_{i,1} \\leftarrow GT\\_{i,1} + dxGT\n\ni,1​←GTi,1​+dx\n\nGT~i,2←GTi,2+dy\\widetilde{GT}\\_{i,2} \\leftarrow GT\\_{i,2} + dyGT\n\ni,2​←GTi,2​+dy\n\n However, GT~\\widetilde{GT}GT\n\n is not orthogonal for arbitrary (dx,dy)(dx, dy)(dx,dy).\n\n In order to find an approximation to GT~\\widetilde{GT}GT\n\n, with the ithi^{th}ith row considered first in the Gram-Schmidt process:\n\n)\n\n \n\n  .\n\n \n\n#### \n\n The Data Point Mode\n\n We now explain how we directly manipulate data points. \n\n Technically speaking, this method only considers one point at a time.\n\n Next, as the first step in Gram-Schmidt, we normalized this row: \n\n GTi(new):=normalize(GT~i)=normalize(ei~+Δ~)\n\n GTi(new)​:=normalize(GT\n\ni​)=normalize(ei​~​+Δ~)\n\n in high dimensional space.\n\n This intuition is precisely how we implemented our direct manipulation\n\n on arbitrary data points, which we will specify as below.\n\n Generalizing this observation from axis handle to arbitrary data \n\npoint, we want to find the rotation that moves the centroid of a \n\nselected subset of data points c~\\tilde{c}c~ to \n\n c~(new):=(c~+Δ~)⋅∣∣c~∣∣/∣∣c~+Δ~∣∣\n\n c~(new):=(c~+Δ~)⋅∣∣c~∣∣/∣∣c~+Δ~∣∣\n\n First, the angle of rotation can be found by their cosine similarity:\n\n θ=arccos(⟨c~,c~(new)⟩∣∣c~∣∣⋅∣∣c~(new)∣∣) \\theta = \\textrm{arccos}(\n\n )θ=arccos(∣∣c~∣∣⋅∣∣c~(new)∣∣⟨c~,c~(new)⟩​)\n\n Next, to find the matrix form of the rotation, we need a convenient basis.\n\n c~⊥(new):=c~−∣∣c~∣∣⋅cosθc~(new)∣∣c~(new)∣∣\n\n \\tilde{c}^{(new)}\\_{\\perp} \n\n c~⊥(new)​:=c~−∣∣c~∣∣⋅cosθ∣∣c~(new)∣∣c~(new)​\n\nQ:=[⋯normalize(c~)⋯⋯normalize(c~⊥(new))⋯P]\n\n Q :=\n\n \\begin{bmatrix}\n\n \\cdots \\textsf{normalize}(\\tilde{c}) \\cdots \\\\\n\n \\cdots \\textsf{normalize}(\\tilde{c}^{(new)}\\_{\\perp}) \\cdots \\\\\n\n P\n\n \\end{bmatrix}\n\n Q:=⎣⎡​⋯normalize(c~)⋯⋯normalize(c~⊥(new)​)⋯P​⎦⎤​\n\n where PPP completes the remaining space.\n\n ρ=QT[cosθsinθ00⋯−sinθcosθ00⋯00⋮⋮I]Q=:QTR1,2(θ)Q\n\n \\rho = Q^T\n\n \\begin{bmatrix}\n\n \\cos \\theta& \\sin \\theta& 0& 0& \\cdots\\\\\n\n -\\sin \\theta& \\cos \\theta& 0& 0& \\cdots\\\\\n\n 0& 0& \\\\ \n\n \\vdots& \\vdots& & I& \\\\\n\n \\end{bmatrix}\n\n Q\n\n =: Q^T R\\_{1,2}(\\theta) Q\n\n ρ=QT⎣⎢⎢⎢⎢⎡​cosθ−sinθ0⋮​sinθcosθ0⋮​00​00I​⋯⋯​⎦⎥⎥⎥⎥⎤​Q=:QTR1,2​(θ)Q\n\n GT(new):=GT⋅ρ\n\n GT^{(new)} := GT \\cdot \\rho\n\n GT(new):=GT⋅ρ\n\n repeatedly take a random vector, find its orthogonal component to the \n\nspan of the current basis vectors and add it to the basis set. 
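As a summary of the axis mode described above, the whole interaction reduces to two steps: nudge the dragged row of the Grand Tour matrix by the on-screen displacement, then re-orthonormalize the rows with Gram-Schmidt, visiting the dragged row first so that its requested position survives. Below is a minimal numpy sketch of that procedure (variable names are ours, not the article's implementation).

```python
import numpy as np

def drag_axis_handle(GT, i, dx, dy):
    """GT: (d, d) orthogonal Grand Tour matrix whose rows are the projected
    basis vectors; the first two columns are the screen coordinates.
    Move handle i by (dx, dy) on screen, then restore orthogonality with
    Gram-Schmidt, processing the dragged row first so its new position
    is preserved as closely as possible."""
    GT = GT.copy()
    GT[i, 0] += dx
    GT[i, 1] += dy
    order = [i] + [j for j in range(GT.shape[0]) if j != i]
    basis = []
    for j in order:
        v = GT[j].copy()
        for b in basis:                      # remove components along earlier rows
            v -= np.dot(v, b) * b
        basis.append(v / np.linalg.norm(v))
    GT[order] = np.array(basis)
    return GT
```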
\n\n \n\n \n\n### \n\n Layer Transitions\n\n#### \n\n ReLU Layers\n\n Since ReLU does not change the dimensionality and the function is taken\n\n coordinate wise, we can animate the transition by a simple linear \n\ninterpolation: for a time parameter t∈[0,1]t \\in [0,1]t∈[0,1],\n\n X(l−1)→l(t):=(1−t)Xl−1+tXl\n\n X^{(l-1) \\to l}(t) := (1-t) X^{l-1} + t X^{l}\n\n X(l−1)→l(t):=(1−t)Xl−1+tXl\n\n \n\n#### \n\n Linear Layers\n\n Transitions between linear layers can seem complicated, but as we will\n\n show, this comes from choosing mismatching bases on either side of the \n\ntransition. \n\n M=UΣVTM = U \\Sigma V^TM=UΣVT\n\n The Grand Tour has a single parameter that represents the current \n\nrotation of the dataset. Since our goal is to keep the transition \n\nconsistent, we notice that UUU and VTV^TVT\n\n have essentially no significance - they are just rotations to the view \n\nthat can be exactly “canceled” by changing the rotation parameter of the\n\n Grand Tour in either layer.\n\n Given Xl=Xl−1UΣVTX^{l} = X^{l-1} U \\Sigma V^TXl=Xl−1UΣVT, we have\n\n (XlV)=(Xl−1U)Σ\n\n (X^{l}V) = (X^{l-1}U)\\Sigma\n\n (XlV)=(Xl−1U)Σ\n\n For a time parameter t∈[0,1]t \\in [0,1]t∈[0,1],\n\n X(l−1)→l(t):=(1−t)(Xl−1U)+t(XlV)=(1−t)(Xl−1U)+t(Xl−1UΣ)\n\n X(l−1)→l(t):=(1−t)(Xl−1U)+t(XlV)=(1−t)(Xl−1U)+t(Xl−1UΣ)\n\n \n\n#### \n\n Convolutional Layers\n\n Convolutional layers can be represented as special linear layers.\n\n With a change of representation, we can animate a convolutional layer \n\nlike the previous section.\n\n For 2D convolutions this change of representation involves flattening \n\nthe input and output, and repeating the kernel pattern in a sparse \n\nmatrix M∈Rm×nM \\in \\mathbb{R}^{m \\times n}M∈Rm×n, where mmm and nnn\n\n are the dimensionalities of the input and output respectively.\n\n This change of representation is only practical for a small \n\ndimensionality (e.g. up to 1000), since we need to solve SVD for linear \n\nlayers.\n\n However, the singular value decomposition of multi-channel 2D \n\n \n\n#### \n\n Max-pooling Layers\n\n \n\n \n\nConclusion\n\n----------\n\n As powerful as t-SNE and UMAP are, they often fail to offer the \n\ncorrespondences we need, and such correspondences can come, \n\nsurprisingly, from relatively simple methods like the Grand Tour. 
The \n\nGrand Tour method we presented is particularly useful when direct \n\nmanipulation from the user is available or desirable.\n\n We believe that it might be possible to design methods that highlight \n\nthe best of both worlds, using non-linear dimensionality reduction to \n\ncreate intermediate, relatively low-dimensional representations of the \n\nactivation layers, and using the Grand Tour and direct manipulation to \n\ncompute the final projection.\n\n", "bibliography_bib": [{"title": "The grand tour: a tool for viewing multidimensional data"}, {"title": "Visualizing data using t-SNE"}, {"title": "Umap: Uniform manifold approximation and projection for dimension reduction"}, {"title": "Intriguing properties of neural networks"}, {"title": "ImageNet Large Scale Visual Recognition Challenge"}, {"title": "The mythos of model interpretability"}, {"title": "Visualizing dataflow graphs of deep learning models in tensorflow"}, {"title": "An algebraic process for visualization design"}, {"title": "Feature visualization"}, {"title": "MNIST handwritten digit database"}, {"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"}, {"title": "Learning multiple layers of features from tiny images"}, {"title": "Rectified linear units improve restricted boltzmann machines"}, {"title": "Visualizing time-dependent data using dynamic t-SNE"}, {"title": "How to use t-sne effectively"}, {"title": "We Recommend a Singular Value Decomposition"}, {"title": "The singular values of convolutional layers"}, {"title": "Explaining and harnessing adversarial examples"}, {"title": "Animation, small multiples, and the effect of mental map preservation in dynamic graphs"}, {"title": "Animation: can it facilitate?"}, {"title": "Highway networks"}, {"title": "Going deeper with convolutions"}], "filename": "Visualizing Neural Networks with the Grand Tour.html", "id": "8b44ed0ed60e97b10d3976c174bcc618"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "A Visual Exploration of Gaussian Processes", "authors": ["Jochen Görtler", "Rebecca Kehlbeck", "Oliver Deussen"], "date_published": "2019-04-02", "abstract": " Even if you have spent some time reading about machine learning, chances are that you have never heard of Gaussian processes. And if you have, rehearsing the basics is always a good way to refresh your memory. With this blog post we want to give an introduction to Gaussian processes and make the mathematical intuition behind them more approachable. 
", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00017", "text": "\n\n Even if you have spent some time reading about machine learning, \n\nchances are that you have never heard of Gaussian processes.\n\n And if you have, rehearsing the basics is always a good way to \n\nrefresh your memory.\n\n With this blog post we want to give an introduction to Gaussian \n\nprocesses and make the mathematical intuition behind them more \n\napproachable.\n\n \n\n Gaussian processes are a powerful tool in the machine learning toolbox.\n\n Their most obvious area of application is *fitting* a function to the data.\n\n The mean of this probability distribution then represents the most \n\nprobable characterization of the data.\n\n Furthermore, using a probabilistic approach allows us to incorporate\n\n the confidence of the prediction into the regression result.\n\n \n\n We will first explore the mathematical foundation that Gaussian \n\nprocesses are built on — we invite you to follow along using the \n\ninteractive figures and hands-on examples.\n\n They help to explain the impact of individual components, and show \n\nthe flexibility of Gaussian processes.\n\n After following this article we hope that you will have a visual \n\nintuition on how Gaussian processes work and how you can configure them \n\nfor different types of data.\n\n \n\nMultivariate Gaussian distributions\n\n-----------------------------------\n\n distribution) is the basic building block of Gaussian processes.\n\n In particular, we are interested in the multivariate case of this \n\ndistribution, where each random variable is distributed normally and \n\ntheir joint distribution is also Gaussian.\n\n \n\n The mean vector μ\\muμ describes the expected value of the distribution.\n\n Each of its components describes the mean of the corresponding dimension.\n\n The covariance matrix is always symmetric and positive semi-definite.\n\n \n\nX=[X1X2⋮Xn]∼N(μ,Σ)\n\n X=⎣⎢⎢⎡​X1​X2​⋮Xn​​⎦⎥⎥⎤​∼N(μ,Σ)\n\nΣ=Cov(Xi,Xj)=E[(Xi−μi)(Xj−μj)T]\n\n Σ=Cov(Xi​,Xj​)=E[(Xi​−μi​)(Xj​−μj​)T]\n\n shows the influence of these parameters on a two-dimensional Gaussian \n\ndistribution. 
The variances for each random variable are on the diagonal\n\n of the covariance matrix, while the other values show the covariance \n\nbetween them.\n\nCovariance matrix (Σ)10.70.72By dragging the handles you \n\n Gaussian distributions are widely used to model the real world.\n\n One of the implications of this theorem is that a collection of \n\nindependent, identically distributed random variables with finite \n\nvariance have a mean that is distributed normally.\n\n .\n\n In the next section we will take a closer look at how to \n\nmanipulate Gaussian distributions and extract useful information from \n\nthem.\n\n \n\n### Marginalization and Conditioning\n\n Gaussian distributions have the nice algebraic property of being \n\nclosed under conditioning and marginalization.\n\n Being closed under conditioning and marginalization means that the \n\nresulting distributions from these operations are also Gaussian, which \n\nmakes many problems in statistics and machine learning tractable.\n\n In the following we will take a closer look at both of these \n\noperations, as they are the foundation for Gaussian processes.\n\n \n\nPX,Y=[XY]∼N(μ,Σ)=N([μXμY],[ΣXXΣXYΣYXΣYY])P\\_{X,Y}\n\n = \\begin{bmatrix} X \\\\ Y \\end{bmatrix} \\sim \\mathcal{N}(\\mu, \\Sigma) = \n\n\\mathcal{N} \\left( \\begin{bmatrix} \\mu\\_X \\\\ \\mu\\_Y \\end{bmatrix}, \n\n\\begin{bmatrix} \\Sigma\\_{XX} \\, \\Sigma\\_{XY} \\\\ \\Sigma\\_{YX} \\, \\Sigma\\_{YY}\n\n \\end{bmatrix} \\right)PX,Y​=[XY​]∼N(μ,Σ)=N([μX​μY​​],[ΣXX​ΣXY​ΣYX​ΣYY​​])\n\nWith XXX and YYY representing subsets of original random variables.\n\nThrough *marginalization* we can extract partial information\n\n from multivariate probability distributions. In particular, given a \n\nX∼N(μX,ΣXX)Y∼N(μY,ΣYY)\n\n \\begin{aligned}\n\n X &\\sim \\mathcal{N}(\\mu\\_X, \\Sigma\\_{XX}) \\\\\n\n Y &\\sim \\mathcal{N}(\\mu\\_Y, \\Sigma\\_{YY})\n\n \\end{aligned}\n\n XY​∼N(μX​,ΣXX​)∼N(μY​,ΣYY​)​\n\n \n\npX(x)=∫ypX,Y(x,y)dy=∫ypX∣Y(x∣y)pY(y)dy\n\n p\\_X(x) = \\int\\_y p\\_{X,Y}(x,y)dy = \\int\\_y p\\_{X|Y}(x|y) p\\_Y(y) dy\n\n pX​(x)=∫y​pX,Y​(x,y)dy=∫y​pX∣Y​(x∣y)pY​(y)dy\n\n X=xX = xX=x, we need to consider all possible outcomes of\n\n YYY that can jointly lead to the result\n\n The corresponding [Wikipedia\n\n .\n\n \n\n Another important operation for Gaussian processes is *conditioning*.\n\n Conditioning is defined by:\n\n \n\nX∣Y∼N(μX+ΣXYΣYY−1(Y−μY),ΣXX−ΣXYΣYY−1ΣYX)Y∣X∼N(μY+ΣYXΣXX−1(X−μX),ΣYY−ΣYXΣXX−1ΣXY)\n\n \\begin{aligned}\n\n X|Y &\\sim \\mathcal{N}(\\:\\mu\\_X + \\Sigma\\_{XY}\\Sigma\\_{YY}^{-1}(Y -\n\n \\mu\\_Y),\\: \\Sigma\\_{XX}-\\Sigma\\_{XY}\\Sigma\\_{YY}^{-1}\\Sigma\\_{YX}\\:) \\\\\n\n Y|X &\\sim \\mathcal{N}(\\:\\mu\\_Y + \\Sigma\\_{YX}\\Sigma\\_{XX}^{-1}(X -\n\n \\mu\\_X),\\: \\Sigma\\_{YY}-\\Sigma\\_{YX}\\Sigma\\_{XX}^{-1}\\Sigma\\_{XY}\\:) \\\\\n\n \\end{aligned}\n\n Note that the new mean only depends on the conditioned variable, \n\nwhile the covariance matrix is independent from this variable.\n\n \n\n Now that we have worked through the necessary equations, we will \n\nthink about how we can understand the two operations visually.\n\n While marginalization and conditioning can be applied to \n\nmultivariate distributions of many dimensions, it makes sense to \n\n Marginalization can be seen as integrating along one of the \n\ndimensions of the Gaussian distribution, which is in line with the \n\ngeneral definition of the marginal distribution.\n\n Conditioning also has a nice geometric interpretation — we can \n\nimagine it as making a cut through 
the multivariate distribution, \n\nyielding a new Gaussian distribution with fewer dimensions.\n\n \n\n#### Marginalization (Y)\n\n#### Conditioning (X = 1.2)\n\nμY = 0σY = 1.4\n\nX = 1.2\n\nμY|X = 0.96σY|X = 1.4\n\n A bivariate normal distribution in the center.\n\n On the left you can see the result of marginalizing this \n\ndistribution for Y, akin to integrating along the X axis. On the right \n\nyou can see the distribution conditioned on a given X, which is similar \n\nto a cut through the original distribution. The Gaussian distribution \n\nand the conditioned variable can be changed by dragging the handles.\n\nGaussian Processes\n\n------------------\n\n Now that we have recalled some of the basic properties of \n\nmultivariate Gaussian distributions, we will combine them together to \n\ndefine Gaussian processes, and show how they can be used to tackle \n\nregression problems.\n\n \n\n First, we will move from the continuous view to the discrete \n\nrepresentation of a function:\n\n rather than finding an implicit function, we are interested in \n\n In addition, each of these random variables has a corresponding index iii.\n\n \n\nx1x2\n\nμx1x2\n\n Here, we have a two-dimensional normal distribution.\n\n Each dimension xix\\_ixi​ is assigned an index i∈{1,2}i \\in \\{1,2\\}i∈{1,2}.\n\n gets larger and vice versa.\n\n You can also drag the handles in the figure to the right and \n\nobserve the probability of such a configuration in the figure to the \n\nleft.\n\n \n\n Respective to the test data XXX, we will denote the training data as YYY.\n\n \n\n In the case of Gaussian processes, this information is the training data.\n\n Thus, we are interested in the conditional probability PX∣YP\\_{X|Y}PX∣Y​. \n\n \n\n of the Gaussian process.\n\n We will talk about this in detail in the next section.\n\n But before we come to this, let us reflect on how we can use \n\nmultivariate Gaussian distributions to estimate function values.\n\n \n\nf(x) = ?We are interested in predicting the\n\n In Gaussian processes we treat each test point as a random variable. 
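As a concrete companion to the conditioning formula above, the following NumPy sketch builds the joint Gaussian over training outputs Y and test outputs X and conditions on the observations to obtain the posterior mean and uncertainty. It assumes a zero prior mean and uses a squared-exponential kernel of the kind introduced in the next section; the function names, the toy data, and the small jitter term added for numerical stability are illustrative choices, not part of the original article.

```python
import numpy as np

def rbf_kernel(a, b, length=1.0, sigma=1.0):
    """Squared-exponential kernel: k(t, t') = sigma^2 * exp(-(t - t')^2 / (2 * length^2))."""
    return sigma**2 * np.exp(-(a[:, None] - b[None, :])**2 / (2 * length**2))

# Toy training observations Y at locations x_train, and test locations for X.
x_train = np.array([-4.0, -1.5, 0.0, 2.0])
y_train = np.array([-2.0, 0.5, 1.0, -0.5])
x_test = np.linspace(-5.0, 5.0, 50)

Sigma_YY = rbf_kernel(x_train, x_train) + 1e-8 * np.eye(len(x_train))  # jitter for stability
Sigma_XY = rbf_kernel(x_test, x_train)
Sigma_XX = rbf_kernel(x_test, x_test)

# Conditioning the joint Gaussian (zero prior mean):
#   X | Y ~ N( Sigma_XY Sigma_YY^{-1} Y,  Sigma_XX - Sigma_XY Sigma_YY^{-1} Sigma_YX )
Sigma_YY_inv = np.linalg.inv(Sigma_YY)
mu_post = Sigma_XY @ Sigma_YY_inv @ y_train
cov_post = Sigma_XX - Sigma_XY @ Sigma_YY_inv @ Sigma_XY.T
std_post = np.sqrt(np.maximum(np.diag(cov_post), 0.0))  # per-test-point confidence
```

Here mu_post is the most probable function through the observations, and std_post shrinks near the training points and grows away from them, matching the behaviour of the interactive figures.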
\n\n Since we want to predict the function values at\n\n ∣X∣=N|X|=N∣X∣=N\n\n test points, the corresponding multivariate Gaussian distribution is also\n\n NNN\n\n -dimensional.\n\n \n\n### Kernels\n\n This process is also called *centering* of the data.\n\n \n\n The covariance matrix will not only describe the shape of our \n\ndistribution, but ultimately determines the characteristics of the \n\nfunction that we want to predict.\n\n \n\nk:Rn×Rn→R,Σ=Cov(X,X′)=k(t,t′)\n\n k: \\mathbb{R}^n \\times \\mathbb{R}^n \\rightarrow \\mathbb{R},\\quad \n\n \\Sigma = \\text{Cov}(X,X’) = k(t,t’)\n\n k:Rn×Rn→R,Σ=Cov(X,X′)=k(t,t′)\n\n This step is also depicted in the [figure above](#PriorFigure).\n\n In order to get a better intuition for the role of the kernel, \n\nlet’s think about what the entries in the covariance matrix describe.\n\n random variable.\n\n Since the kernel describes the similarity between the values of \n\nour function, it controls the possible shape that a fitted function can \n\nadopt.\n\n Note that when we choose a kernel, we need to make sure that the \n\nresulting matrix adheres to the properties of a covariance matrix.\n\n \n\n Many of these kernels conceptually embed the input points into a \n\n \n\n#### RBF Kernel\n\n\\sigma^2 \\exp \\left( - \\frac{||t-t'||^2}{2 l^2} \\right)\n\n = \n\n = \n\n#### Periodic\n\n\\sigma^2 \\exp \\left( - \\frac{2 \\sin^2(\\pi |t-t'| / p)}{l^2} \\right)\n\n = \n\n = \n\n = \n\n#### Linear\n\n\\sigma\\_b^2 + \\sigma^2 (t - c)(t' - c)\n\n = \n\n = \n\n = \n\nFor the\n\n \n\n kernel\n\n the parameter\n\n ...\n\n determines\n\n ...\n\n \n\n \n\n There are many more kernels that can describe different classes of \n\nfunctions, which can be used to model the desired shape of the function.\n\n A good overview of different kernels is given by Duvenaud.\n\n It is also possible to combine several kernels — but we will get to this later.\n\n \n\n### Prior Distribution\n\n We will now shift our focus back to the original task of regression.\n\n In [this figure above](#DimensionSwap), we show this connection:\n\n Recall that we usually assume μ=0\\mu=0μ=0.\n\n \n\n \n\n In the previous section we have looked at examples of different \n\nkernels.\n\n The kernel is used to define the entries of the covariance matrix.\n\n Consequently, the covariance matrix determines which type of \n\nfunctions from the space of all possible functions are more probable.\n\n As the prior distribution does not yet contain any additional \n\ninformation, it is perfect to visualize the influence of the kernel on \n\nthe distribution of functions.\n\n \n\nRBF\n\nPeriodic\n\nLinear\n\ny = 0μ + 2σμ - 2σ(click to start)\n\n = \n\n = \n\n Clicking on the graph results in continuous samples drawn from a\n\n Gaussian process using the selected\n\n functions are distributed normally around the mean µ .\n\n \n\n This also varies the confidence of the prediction.\n\n \n\n### Posterior Distribution\n\n So what happens if we observe training data?\n\n Let’s get back to the model of Bayesian inference, which states \n\nthat we can incorporate this additional information into our model, \n\nyielding the *posterior* distribution PX∣YP\\_{X|Y}PX∣Y​.\n\n We will now take a closer look at how to do this for Gaussian processes.\n\n \n\n \n\n Using *conditioning* we can find PX∣YP\\_{X|Y}PX∣Y​ from PX,YP\\_{X,Y}PX,Y​.\n\n More details can be found in the [related section](#MargCond)\n\n on conditioning multivariate Gaussian distributions.\n\n The intuition behind this step is that the training points 
\n\nconstrain the set of functions to those that pass through the training \n\npoints.\n\n \n\nf(x) = ?Adding training points (■) changes\n\n In many cases this can lead to fitted functions that are unnecessarily complex.\n\n Also, up until now, we have considered the training points YYY\n\n to be perfect measurements.\n\n But in real-world scenarios this is an unrealistic assumption, \n\nsince most of our data is afflicted with measurement errors or \n\nuncertainty.\n\n Gaussian processes offer a simple solution to this problem by \n\nmodeling the error of the measurements.\n\n \n\nY=f(X)+ϵ\n\n Y = f(X) + \\epsilon\n\n Y=f(X)+ϵ\n\n \n\nPX,Y=[XY]∼N(0,Σ)=N([00],[ΣXXΣXYΣYXΣYY+ψ2I])P\\_{X,Y}\n\n = \\begin{bmatrix} X \\\\ Y \\end{bmatrix} \\sim \\mathcal{N}(0, \\Sigma) = \n\n\\mathcal{N} \\left( \\begin{bmatrix} 0 \\\\ 0 \\end{bmatrix}, \\begin{bmatrix}\n\n \\Sigma\\_{XX} & \\Sigma\\_{XY} \\\\ \\Sigma\\_{YX} & \\Sigma\\_{YY}+\\psi^2I \n\n\\end{bmatrix} \\right)PX,Y​=[XY​]∼N(0,Σ)=N([00​],[ΣXX​ΣYX​​ΣXY​ΣYY​+ψ2I​])\n\n In this formulation, ψ\\psiψ is an additional parameter of our model.\n\n \n\n Analogous to the prior distribution, we could obtain a prediction \n\nfor our function values by sampling from this distribution.\n\n But, since sampling involves randomness, the resulting fit to the \n\ndata would not be deterministic and our prediction could end up being an\n\n outlier.\n\n In order to make a more meaningful prediction we can use the other\n\n basic operation of Gaussian distributions.\n\n \n\n Extracting μ′\\mu’μ′ and σ′\\sigma’σ′\n\n does not only lead to a more meaningful prediction, it also allows us \n\nto make a statement about the confidence of the prediction.\n\n \n\n At first, no training points have been observed.\n\n \n\n The training points can be activated by clicking on them, which \n\nleads to a constrained distribution.\n\n This change is reflected in the entries of the covariance matrix, \n\nand leads to an adjustment of the mean and the standard deviation of the\n\n predicted function.\n\n As we would expect, the uncertainty of the prediction is small in \n\nregions close to the training data and grows as we move further away \n\nfrom those points.\n\n \n\ny = 0μ + 2σμ - 2σ(click to resample)\n\nWithout any activated training data,\n\n value on its neighbours.\n\n The distribution changes when we observe training data.\n\n Individual points can be activated by clicking on them.\n\n training data.\n\n Therefore, the function must pass directly through it.\n\n \n\n### Combining different kernels\n\n As described earlier, the power of Gaussian processes lies in the \n\nchoice of the kernel function.\n\n This property allows experts to introduce domain knowledge into \n\nthe process and lends Gaussian processes their flexibility to capture \n\ntrends in the training data.\n\n For example, by choosing a suitable bandwidth for the RBF kernel, \n\nwe can control how smooth the resulting function will be.\n\n \n\n A big benefit that kernels provide is that they can be combined \n\ntogether, resulting in a more specialized kernel.\n\n The decision which kernel to use is highly dependent on prior \n\nknowledge about the data, e.g. 
if certain characteristics are expected.\n\n Examples for this would be stationary nature, or global trends and\n\n patterns.\n\n The most common kernel combinations would be addition and multiplication.\n\n \n\n This is how we would multiply the two:\n\n \n\nk∗(t,t′)=klin(t,t′)⋅kper(t,t′)\n\n k^{\\ast}(t,t’) = k\\_{\\text{lin}}(t,t’) \\cdot k\\_{\\text{per}}(t,t’)\n\n k∗(t,t′)=klin​(t,t′)⋅kper​(t,t′)\n\n However, combinations are not limited to the above example, and \n\nthere are more possibilities such as concatenation or composition with a\n\n function.\n\n To show the impact of a kernel combination and how it might retain \n\n If we add a periodic and a linear kernel, the global trend of the \n\nlinear kernel is incorporated into the combined kernel.\n\n The result is a periodic function that follows a linear trend.\n\n When combining the same kernels through multiplication instead, the \n\nresult is a periodic function with a linearly growing amplitude away \n\nfrom linear kernel parameter ccc.\n\n \n\ny = 0\n\nLinear\n\ny = 0\n\nPeriodic\n\ny = 0\n\nLinear + Periodic\n\ny = 0\n\nLinear ⋅ Periodic\n\n If we draw samples from a combined linear and periodic kernel, we \n\ncan observe the different retained characteristics in the new sample.\n\n Addition results in a periodic function with a global trend, while\n\n the multiplication increases the periodic amplitude outwards.\n\n \n\n Knowing more about how kernel combinations influence the shape of \n\nthe resulting distribution, we can move on to a more complex example.\n\n At first glance, the RBF kernel accurately approximates the points.\n\n But since the RBF kernel is stationary it will always return to μ=0\\mu=0μ=0\n\n in regions further away from observed training data.\n\n This decreases the accuracy for predictions that reach further \n\ninto the past or the future.\n\n An improved model can be created by combining the individual \n\nkernels through addition, which maintains both the periodic nature and \n\nthe global ascending trend of the data.\n\n This procedure can be used, for example, in the analysis of \n\nweather data.\n\n \n\n RBF Periodic Linear\n\ny = 0(hover for information)RBF:\n\n Variance σ = 1, Length l = 1, \n\n \n\n As discussed in the [section about GPs](#GaussianProcesses),\n\n a Gaussian process can model uncertain observations.\n\n This can be seen when only selecting the linear kernel, as it \n\nallows us to perform linear regression even if more than two points have\n\n been observed, and not all functions have to pass directly through the \n\nobserved training data.\n\n \n\nConclusion\n\n----------\n\n understanding on how they work.\n\n make them even more versatile.\n\n \n\n For instance, sometimes it might not be possible to describe the \n\nkernel in simple terms.\n\n To overcome this challenge, learning specialized kernel functions \n\n different purposes, e.g. 
*model-peeling* and hypothesis testing.\n\n appropriate combination and parameterization of the kernel.\n\n \n\n \n\n", "bibliography_bib": [{"title": "Gaussian Processes in Machine Learning"}, {"title": "Gaussian Processes for Object Categorization"}, {"title": "Clustering Based on Gaussian Processes"}, {"title": "Covariance matrix (Encyclopedia of Mathematics)"}, {"title": "The Nature of Statistical Learning Theory"}, {"title": "Automatic model construction with Gaussian processes"}, {"title": "Information theory, inference and learning algorithms"}, {"title": "Using Deep Belief Nets to Learn Covariance Kernels for Gaussian Processes"}, {"title": "Deep Kernel Learning"}, {"title": "Deep Neural Networks as Gaussian Processes"}, {"title": "Deep Gaussian Processes"}], "filename": "A Visual Exploration of Gaussian Processes.html", "id": "f9868d8e3c6f9754e08dd4beb0de3b25"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Adversarial Examples are Just Bugs, Too", "authors": ["Preetum Nakkiran"], "date_published": "2019-08-06", "abstract": " This article is part of a discussion of the Ilyas et al. paper “Adversarial examples are not bugs, they are features”. You can learn more in the main discussion article . ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00019.5", "text": "\n\n .comment-info {\n\n background-color: hsl(54, 78%, 96%);\n\n border-left: solid hsl(54, 33%, 67%) 1px;\n\n padding: 1em;\n\n color: hsla(0, 0%, 0%, 0.67);\n\n }\n\n #header-info {\n\n margin-top: 0;\n\n margin-bottom: 1.5rem;\n\n display: grid;\n\n grid-template-columns: 65px max-content 1fr;\n\n grid-template-areas:\n\n \"icon explanation explanation\"\n\n \"icon back comment\";\n\n grid-column-gap: 1.5em;\n\n }\n\n #header-info .icon-multiple-pages {\n\n grid-area: icon;\n\n padding: 0.5em;\n\n content: url(images/multiple-pages.svg);\n\n }\n\n #header-info .explanation {\n\n grid-area: explanation;\n\n font-size: 85%;\n\n color: hsl(0, 0%, 0.33);\n\n }\n\n #header-info .back {\n\n grid-area: back;\n\n }\n\n #header-info .back::before {\n\n content: \"←\";\n\n margin-right: 0.5em;\n\n }\n\n #header-info .comment {\n\n grid-area: comment;\n\n scroll-behavior: smooth;\n\n }\n\n #header-info .comment::before {\n\n content: \"↓\";\n\n margin-right: 0.5em;\n\n }\n\n #header-info a.back,\n\n #header-info a.comment {\n\n font-size: 80%;\n\n font-weight: 600;\n\n border-bottom: none;\n\n text-transform: uppercase;\n\n color: #2e6db7;\n\n display: block;\n\n margin-top: 0.25em;\n\n letter-spacing: 0.25px;\n\n }\n\n \n\n This article is part of a discussion of the Ilyas et al. paper\n\n *“Adversarial examples are not bugs, they are features”.*\n\n You can learn more in the\n\n [main discussion article](https://distill.pub/2019/advex-bugs-discussion/) .\n\n \n\n[All Responses](https://distill.pub/2019/advex-bugs-discussion/#articles)\n\n We demonstrate that there exist adversarial examples which are just “bugs”:\n\n \n\n1. Do not transfer between models, and\n\n2. 
Do not leak “non-robust features” which allow for learning, in the\n\n sense of Ilyas-Santurkar-Tsipras-Engstrom-Tran-Madry\n\n .\n\n We replicate the Ilyas et al.\n\n experiment of training on mislabeled adversarially-perturbed images\n\n (Section 3.2 of ),\n\n and show that it fails for our construction of adversarial perturbations.\n\n \n\n The message is, whether adversarial examples are features or bugs depends\n\n \n\n classifier.\n\n intrinsically have any vulnerable directions.\n\n \n\n### Background\n\n Many have understood Ilyas et al. \n\n to claim that adversarial examples are not “bugs”, but are “features”.\n\n Specifically, Ilyas et al. postulate the following two worlds:\n\n As communicated to us by the original authors.\n\n In this world, adversarial examples occur because classifiers behave\n\n poorly off-distribution,\n\n when they are evaluated on inputs that are not natural images.\n\n Here, adversarial examples would occur in arbitrary directions,\n\n having nothing to do with the true data distribution.\n\n and which contain features of the target class.\n\n For example, consider the perturbation that\n\n makes an image of a dog to be classified as a cat.\n\n vs. dogs.\n\n are in both.\n\n Ilyas et al. \n\n show that there exist adversarial examples in World 2, and we show there exist\n\n examples in World 1.\n\n \n\n Constructing Non-transferrable Targeted Adversarial Examples\n\n--------------------------------------------------------------\n\n fff,\n\n which do not transfer to other classifiers trained for the same problem.\n\n \n\n f(x′)=ytargf(x′)=ytargf(x’) = y\\_{targ}.\n\n \n\n \n\n PGD is described in the appendix.\n\n \n\n which starts at input xxx, and iteratively takes steps {xt}{xt}\\{x\\_t\\}\n\n to minimize the loss L(f,xt,ytarg)L(f,xt,ytarg)L(f, x\\_t, y\\_{targ}).\n\n That is, we take steps in the direction\n\n −∇xL(f,xt,ytarg)−∇xL(f,xt,ytarg)-\\nabla\\_x L(f, x\\_t, y\\_{targ})\n\n where L(f,x,y)L(f,x,y)L(f, x, y) is the loss of fff on input xxx, label yyy.\n\n \n\n Note that since PGD steps in the gradient direction towards the target class,\n\n background).\n\n since the “blue” direction is correlated with the plane class.\n\n In our construction below, we attempt to eliminate such feature leakage.\n\n \n\n and an off-manifold “random component”.\n\n Our construction below attempts to step only in the off-manifold direction.\n\n \n\n### Our Construction\n\n for the same classification problem as fff.\n\n different random initializations.\n\n \n\n For input example (x,y)(x,y)(x, y) and target class ytargytargy\\_{targ},\n\n we perform iterative updates to find adversarial attacks — as in PGD.\n\n However, instead of stepping directly in the gradient direction, we\n\n step in the direction\n\n \n\n Formally, we replace the iterative step with\n\n where ΠεΠε\\Pi\\_\\eps is the projection onto the εε\\eps-ball around xxx.\n\n \n\n we minimize the “disentangled loss”\n\n \n\n We could also consider explicitly using the ensemble to decorrelate,\n\n by stepping in direction\n\n This works well for small ϵϵ\\epsilon,\n\n but the given loss has better optimization properties for larger ϵϵ\\epsilon.\n\n \n\n This loss encourages finding an xtxtx\\_t which is adversarial for fff,\n\n but not for the ensemble {fi}{fi}\\{f\\_i\\}.\n\n \n\n these examples are also not adversarial for\n\n *new* classifiers trained for the same problem.\n\n \n\n### Experiments\n\n We train a ResNet18 on CIFAR10 as our target classifier fff.\n\n We then test the probability 
that\n\n a targeted attack for fff\n\n transfers to a new (freshly-trained) ResNet18, with the same targeted class.\n\n \n\n For L∞L∞L\\_{\\infty} attacks:\n\n \n\n| | | |\n\n| --- | --- | --- |\n\n| | Attack Success | Transfer Success |\n\n| PGD | 99.6% | 52.1% |\n\n| Ours | 98.6% | 0.8% |\n\n For L2L2L\\_2 attacks:\n\n \n\n| | | |\n\n| --- | --- | --- |\n\n| | Attack Success | Transfer Success |\n\n| PGD | 99.9% | 82.5% |\n\n| Ours | 99.3% | 1.7% |\n\nAdversarial Examples With No Features\n\n-------------------------------------\n\n Using the above, we can construct adversarial examples\n\n which *do not suffice* for learning.\n\n Here, we replicate the Ilyas et al. experiment\n\n that “Non-robust features suffice for standard classification”\n\n (Section 3.2 of ),\n\n but show that it fails for our construction of adversarial examples.\n\n \n\n To review, the Ilyas et al. non-robust experiment was:\n\n \n\n1. Train a standard classifier fff for CIFAR.\n\n2. From the CIFAR10 training set S={(Xi,Yi)}S={(Xi,Yi)}S = \\{(X\\_i, Y\\_i)\\},\n\n Note that S′S′S’ appears to humans as “mislabeled examples”.\n\n3. Train a new classifier f′f′f’ on train set S′S′S’.\n\n Ilyas et al. use Step (3) to argue that\n\n adversarial examples have a meaningful “feature” component.\n\n However, for adversarial examples constructed using our method, Step (3) fails.\n\n (X,Y+1)(X,Y+1)(X, Y+1), which is intuitively what we trained on.\n\n \n\nFor L∞L∞L\\_{\\infty} attacks:\n\n| | | |\n\n| --- | --- | --- |\n\n| PGD | 23.7% | 40.4% |\n\n| Ours | 2.5% | 75.9% |\n\n Table: Test Accuracies of f′f′f’\n\nFor L2L2L\\_2 attacks:\n\n| | | |\n\n| --- | --- | --- |\n\n| PGD | 33.2% | 27.3% |\n\n| Ours | 2.8% | 70.8% |\n\n Table: Test Accuracies of f′f′f’\n\nAdversarial Squares: Adversarial Examples from Robust Features\n\n--------------------------------------------------------------\n\n To further illustrate that adversarial examples can be “just bugs”,\n\n \n\n reasonable\n\n *intrinsic* definition, this problem has no non-robust features.\n\n \n\n In particular, by the Ilyas et al. definition, **every** distribution\n\n distribution.\n\n family of classifiers being considered.\n\n \n\n overfitting, and\n\n label noise.\n\n \n\n with a small amount of random pixel noise and label noise.\n\n \n\n A sample of images from the distribution.\n\n \n\n Formally, let the distribution be as follows.\n\n Pick label Y∈{±1}Y∈{±1}Y \\in \\{\\pm 1\\} uniformly,\n\n \\begin{cases}\n\n (+\\vec{\\mathbb{1}} + \\vec\\eta\\_\\eps) \\cdot \\eta & \\text{if $Y=1$}\\\\\n\n (-\\vec{\\mathbb{1}} + \\vec\\eta\\_\\eps) \\cdot \\eta & \\text{if $Y=-1$}\\\\\n\n \\end{cases}\n\n and\n\n \n\n \n\n A plot of samples from a 2D-version of this distribution is shown to the right.\n\n \n\n However, if we sample 10000 training images from this distribution, and train\n\n a ResNet18 to 99.9% train accuracy,\n\n \n\n \n\n the resulting classifier is highly non-robust:\n\n \n\n The input-noise and label noise are both essential for this construction.\n\n One intuition for what is happening is: in the initial stage of training\n\n classifier).\n\n to fit the label-noise, which hurts robustness.\n\n \n\n \n\n Figure adapted from .\n\n \n\nAddendum: Data Poisoning via Adversarial Examples\n\n-------------------------------------------------\n\n As an addendum, we observe that the “non-robust features”\n\n experiment of (Section 3.2)\n\n directly implies data-poisoning attacks:\n\n to the classifier output labels (e.g. 
swapping cats and dogs).\n\n \n\n notation, and also using vanilla PGD to find adversarial examples.\n\n on distribution (X,Y)(X,Y)(X, Y).\n\n \n\n By permutation-symmetry of the labels, this implies that:\n\n \n\n on distribution (X,Y−1)(X,Y−1)(X, Y-1).\n\n \n\n but the classifier learns to predict the cyclically-shifted labels.\n\n Concretely, using the original numbers of\n\n Table 1 in , this reduction implies that\n\n and cause the learnt classifier to output shifted-labels\n\n 43.7% of the time\n\n (cats classified as birds, dogs as deers, etc).** \n\n \n\n To cite Ilyas et al.’s response, please cite their\n\n \n\n**Response Summary**: We note that as discussed in\n\n mere existence of adversarial\n\n examples\n\n that are “features” is sufficient to corroborate our main thesis. This comment\n\n illustrates, however, that we can indeed craft adversarial examples that are\n\n based on “bugs” in realistic settings. Interestingly, such examples don’t\n\n transfer, which provides further support for the link between transferability\n\n and non-robust features.\n\n \n\n we did not intend to claim\n\n that adversarial examples arise *exclusively* from (useful) features but rather\n\n that useful non-robust features exist and are thus (at least\n\n partially) responsible for adversarial vulnerability. In fact,\n\n prior work already shows how in theory adversarial examples can arise from\n\n insufficient samples or finite-sample overfitting\n\n , and the experiments\n\n presented here (particularly, the adversarial squares) constitute a neat\n\n real-world demonstration of these facts. \n\n Our main thesis that “adversarial examples will not just go away as we fix\n\n stemming from “bugs.” As long as adversarial examples can stem from non-robust\n\n features (which the commenter seems to agree with), fixing these bugs will not\n\n solve the problem of adversarial examples. \n\nMoreover, with regards to feature “leakage” from PGD, recall that in\n\n or D\\_det dataset, the non-robust features are associated with the\n\n correct label whereas the robust features are associated with the wrong\n\n one. We wanted to emphasize that, as\n\n [shown in Appendix D.6](https://arxiv.org/abs/1905.02175) ,\n\n models trained on our DdetDdetD\\_{det} dataset actually generalize *better* to\n\n the non-robust feature-label association that to the robust\n\n feature-label association. In contrast, if PGD introduced a small\n\n “leakage” of non-robust features, then we would expect the trained model\n\n would still predominantly use the robust feature-label association. \n\n That said, the experiments cleverly zoom in on some more fine-grained\n\n nuances in our understanding of adversarial examples. One particular thing that\n\n stood out to us is that by creating a set of adversarial examples that are\n\n *explicitly* non-transferable, one also prevents new classifiers from learning\n\n features from that dataset. This finding thus makes the connection between\n\n transferability of adversarial examples and their containing generalizing\n\n features even stronger! Indeed, we can add the constructed dataset into our\n\n “ˆDdetD^det\\widehat{\\mathcal{D}}\\_{det} learnability vs transferability” plot\n\n (Figure 3 in the paper) — the point\n\n corresponding to this dataset fits neatly onto the trendline! 
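For readers who want to experiment with the construction discussed in this comment, here is a rough PyTorch sketch of a targeted L∞ PGD attack with an optional ensemble penalty that discourages the perturbation from transferring. The comment's exact "disentangled loss" is not reproduced in the text above, so the penalty term below is only an illustrative stand-in for that idea; the function name, hyperparameters, and loss weighting are ours, not the authors'.

```python
import torch
import torch.nn.functional as F

def targeted_pgd(model, x, y_targ, eps=8/255, step=2/255, iters=40, ensemble=(), penalty=1.0):
    """Targeted L_inf PGD. If an ensemble of independently trained models is given,
    a penalty discourages the perturbation from also fooling them (an illustrative
    stand-in for the 'disentangled loss' described above)."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y_targ)
        for f_i in ensemble:
            # Penalize making the target class likely under the other classifiers.
            loss = loss - penalty * F.cross_entropy(f_i(x_adv), y_targ)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - step * grad.sign()          # step toward the target class for f
            x_adv = x + (x_adv - x).clamp(-eps, eps)    # project onto the eps-ball around x
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

Calling it with `ensemble=()` recovers plain targeted PGD; passing a few independently trained copies of the architecture approximates the non-transferable construction described earlier.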
\n\n attacks\n\n \n\n \n\n", "bibliography_bib": [{"title": "Adversarial examples are not bugs, they are features"}, {"title": "SGD on Neural Networks Learns Functions of Increasing Complexity"}, {"title": "Adversarially Robust Generalization Requires More Data"}, {"title": "A boundary tilting persepective on the phenomenon of adversarial examples"}], "filename": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features'_ Adversarial Examples are Just Bugs, Too.html", "id": "597ad290502e2ca5142e1528c55571a5"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "The Paths Perspective on Value Learning", "authors": ["Sam Greydanus", "Chris Olah"], "date_published": "2019-09-30", "abstract": " In the last few years, reinforcement learning (RL) has made remarkable progress, including beating world-champion Go players, controlling robotic hands, and even painting pictures. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00020", "text": "\n\nIntroduction\n\n------------\n\n \n\n One of the key sub-problems of RL is value estimation – learning the\n\n long-term consequences of being in a state.\n\n This can be tricky because future returns are generally noisy, \n\naffected by many things other than the present state. The further we \n\nlook into the future, the more this becomes true.\n\n But while difficult, estimating value is also essential to many \n\napproaches to RL.For many approaches \n\n(policy-value iteration), estimating value essentially is the whole \n\nproblem, while in other approaches (actor-critic models), value \n\nestimation is essential for reducing noise.\n\n The natural way to estimate the value of a state is as the average \n\nreturn you observe from that state. We call this Monte Carlo value \n\nestimation.\n\n \n\n**Cliff World**\n\n is a classic RL example, where the agent learns to\n\n walk along a cliff to reach a goal.\n\n \n\n Sometimes the agent reaches its goal.\n\n \n\n Other times it falls off the cliff.\n\n \n\n![](The%20Paths%20Perspective%20on%20Value%20Learning_files/cliffworld-mc.svg)\n\n Monte Carlo averages over trajectories where they intersect.\n\n \n\n If a state is visited by only one episode, Monte Carlo says its \n\nvalue is the return of that episode. If multiple episodes visit a state,\n\n Monte Carlo estimates its value as the average over them.\n\n \n\n Let’s write Monte Carlo a bit more formally.\n\n In tabular settings such as the Cliff World example, this “update \n\n and we could just as easily use the “+=” notation. But when using \n\nparameteric function approximators such as neural networks, our “update \n\ntowards” operator may represent a gradient step, which cannot be written\n\n in “+=” notation. In order to keep our notation clean and general, we \n\nchose to use the ↩\\hookleftarrow↩ operator throughout.\n\nV(st)  V(s\\_t)~~V(st​)  \n\n↩  \\hookleftarrow~~↩  \n\nRtR\\_tRt​\n\n State value \n\n Return \n\n The term on the right is called the return and we use it to measure \n\nthe amount of long-term reward an agent earns. The return is just a \n\n is a discount factor which controls how much short term rewards are \n\nworth relative to long-term rewards. 
Estimating value by updating \n\n \n\nBeating Monte Carlo\n\n-------------------\n\n \n\nV(st)  V(s\\_t)~~V(st​)  \n\n↩  \\hookleftarrow~~↩  \n\nrtr\\_{t} rt​\n\n+++\n\nγV(st+1)\\gamma V(s\\_{t+1})γV(st+1​)\n\n State value \n\nReward\n\nNext state value\n\n Intersections between two trajectories are handled differently \n\nunder this update. Unlike Monte Carlo, TD updates merge intersections so\n\n that the return flows backwards to all preceding states.\n\n \n\n Sometimes the agent reaches its goal.\n\n \n\n Other times it falls off the cliff.\n\n \n\n![](The%20Paths%20Perspective%20on%20Value%20Learning_files/cliffworld-td.svg)\n\n TD learning merges paths where they intersect.\n\n \n\n What does it mean to “merge trajectories” in a more formal sense? \n\n \n\nV(st+1)  V(s\\_{t+1})~~V(st+1​)  \n\n≃  \\simeq~~≃  \n\n≃  \\simeq~~≃  \n\n Now we can use this equation to expand the TD update rule recursively:\n\n \n\nV(st) V(s\\_t)~V(st​) \n\n↩ \\hookleftarrow~↩ \n\nrtr\\_{t} rt​\n\n+++\n\nγV(st+1)\\gamma V(s\\_{t+1})γV(st+1​)\n\n↩ \\hookleftarrow~↩ \n\nrtr\\_{t} rt​\n\n+++\n\nγE[rt+1′]\\gamma \\mathop{\\mathbb{E}} \\bigr[ r’\\_{t+1} \\bigl]γE[rt+1′​]\n\n+++\n\n↩ \\hookleftarrow~↩ \n\nrtr\\_{t} rt​\n\n+++\n\nγE [rt+1′]\\gamma \\mathop{\\mathbb{E}} ~ \\bigr[ r’\\_{t+1} \\bigl]γE [rt+1′​]\n\n+++\n\n+ +~+ \n\n...  …~~...  \n\n This gives us a strange-looking sum of nested expectation values. \n\nAt first glance, it’s not clear how to compare them with the more \n\nsimple-looking Monte Carlo update. More importantly, it’s not clear that\n\n we *should* compare the two; the updates are so different that it \n\nfeels a bit like comparing apples to oranges. Indeed, it’s easy to think\n\n of Monte Carlo and TD learning as two entirely different approaches.\n\n \n\n But they are not so different after all. Let’s rewrite the Monte \n\nCarlo update in terms of reward and place it beside the expanded TD \n\nupdate.\n\n \n\n**MC update**\n\nV(st) V(s\\_t)~V(st​) \n\n ↩  ~\\hookleftarrow~~ ↩  \n\nrtr\\_{t}rt​\n\n+ +~+ \n\nγ rt+1\\gamma ~ r\\_{t+1}γ rt+1​\n\n+ +~+ \n\nγ2 rt+2\\gamma^2 ~ r\\_{t+2}γ2 rt+2​\n\n+ +~+ \n\n...…...\n\nReward from present path.\n\nReward from present path.\n\nReward from present path…\n\n**TD update**\n\nV(st) V(s\\_t)~V(st​) \n\n ↩  ~\\hookleftarrow~~ ↩  \n\nrtr\\_{t}rt​\n\n+ +~+ \n\nγE [rt+1′]\\gamma \\mathop{\\mathbb{E}} ~ \\bigr[ r’\\_{t+1} \\bigl]γE [rt+1′​]\n\n+ +~+ \n\n+ +~+ \n\n...…...\n\nReward from present path.\n\nExpectation over paths intersecting present path.\n\nExpectation over paths intersecting *paths intersecting* present path…\n\n A pleasant correspondence has emerged. The difference between \n\nMonte Carlo and TD learning comes down to the nested expectation \n\noperators. It turns out that there is a nice visual interpretation for \n\nwhat they are doing. We call it the *paths perspective* on value learning.\n\n \n\nThe Paths Perspective\n\n---------------------\n\n \n\n Trajectory 1\n\n \n\n Trajectory 2\n\n \n\n But this way of organizing experience de-emphasizes relationships *between*\n\n trajectories. Wherever two trajectories intersect, both outcomes are \n\nvalid futures for the agent. So even if the agent has followed \n\nTrajectory 1 to the intersection, it could *in theory* follow \n\nTrajectory 2 from that point onward. 
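For comparison with the Monte Carlo sketch above, the one-step TD update written out earlier in this section looks like this in code (again tabular, with an illustrative constant step size):

```python
def td0_update(V, s, r, s_next, done, gamma=0.99, lr=0.1):
    """Tabular TD(0): update the current state toward the one-step bootstrapped target."""
    target = r if done else r + gamma * V[s_next]
    V[s] += lr * (target - V[s])
    return V
```

Because the target bootstraps from V[s_next], returns observed later flow backward through intersection states to every path that reaches them, which is exactly the merging behaviour being described here.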
We can dramatically expand the \n\nagent’s experience using these simulated trajectories or “paths.”\n\n \n\n #cliffworld-paths {\n\n display: grid;\n\n grid-template-columns: repeat(auto-fit, minmax(130px, 1fr));\n\n grid-gap: 20px;\n\n }\n\n \n\n Path 1\n\n \n\n Path 2\n\n \n\n Path 3\n\n \n\n Path 4\n\n \n\n**Estimating value.** It turns out that Monte Carlo is \n\naveraging over real trajectories whereas TD learning is averaging over \n\nall possible paths. The nested expectation values we saw earlier \n\ncorrespond to the agent averaging across *all possible future paths*.\n\n \n\n #compare-mctd {\n\n display: grid;\n\n grid-gap: 40px;\n\n grid-template-columns: repeat(auto-fit, minmax(320px, 1fr));\n\n }\n\n #compare-mctd .subfigure {\n\n display: grid;\n\n grid-gap: 20px;\n\n grid-template-columns: repeat(2, minmax(160px, 1fr));\n\n /\\* grid-auto-rows: min-content; \\*/\n\n /\\* grid-template-rows: min-content auto; \\*/\n\n }\n\n #compare-mctd .subfigure .column-heading {\n\n grid-column: 1 / -1;\n\n }\n\n #compare-mctd .figcaption {\n\n border-top: 1px solid rgba(0, 0, 0, 0.1);\n\n padding-top: 5px;\n\n margin-top: 5px;\n\n /\\* min-height: 179px; \\*/\n\n /\\* min-width: 179px; \\*/\n\n /\\* flex: 1; \\*/\n\n }\n\n \n\n#### Monte Carlo Estimation\n\n![](The%20Paths%20Perspective%20on%20Value%20Learning_files/traj-thumbnails.svg)\n\n Averages over **real trajectories**\n\n![](The%20Paths%20Perspective%20on%20Value%20Learning_files/cliffworld-mc.svg)\n\n Resulting MC estimate\n\n \n\n#### Temporal Difference Estimation\n\n![](The%20Paths%20Perspective%20on%20Value%20Learning_files/path-thumbnails.svg)\n\n Averages over **possible paths**\n\n![](The%20Paths%20Perspective%20on%20Value%20Learning_files/cliffworld-td.svg)\n\n Resulting TD estimate\n\n \n\n**Comparing the two.** Generally speaking, the best value \n\nestimate is the one with the lowest variance. Since tabular TD and Monte\n\n Carlo are empirical averages, the method that gives the better estimate\n\n is the one that averages over more items. This raises a natural \n\nquestion: Which estimator averages over more items?\n\n \n\nVar[V(s)]  Var[V(s)]~~Var[V(s)]  \n\n∝  \\propto~~∝  \n\n1N\\frac{1}{N} N1​\n\n Variance of estimate \n\nInverse of the number of items in the average\n\n First off, TD learning never averages over fewer trajectories than\n\n Monte Carlo because there are never fewer simulated trajectories than \n\nreal trajectories. On the other hand, when there are *more* \n\nsimulated trajectories, TD learning has the chance to average over more \n\nof the agent’s experience.\n\n This line of reasoning suggests that TD learning is the better \n\nestimator and helps explain why TD tends to outperform Monte Carlo in \n\ntabular environments.\n\n \n\nIntroducing Q-functions\n\n-----------------------\n\n An alternative to the value function is the Q-function. Instead of\n\n estimating the value of a state, it estimates the value of a state and \n\nan action. 
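The claim that TD averages over more items can be checked with a tiny simulation of our own. In the toy chain below, episodes start in A (rarely) or B (usually), both pass through C, and end with a noisy reward; the tabular estimates are idealized as exact sample averages, so the TD-style estimate of V(A) bootstraps through a V(C) that pools every episode, while Monte Carlo only sees the few episodes that actually started in A. All names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_trial(n_episodes=100, p_start_A=0.1):
    """Episodes start in A (rarely) or B (usually), pass through C, then end with a
    noisy reward. Undiscounted, so the true V(A) is just the expected reward."""
    starts_in_A = rng.random(n_episodes) < p_start_A
    rewards = rng.normal(loc=1.0, scale=1.0, size=n_episodes)
    mc = rewards[starts_in_A].mean() if starts_in_A.any() else 0.0  # only real trajectories from A
    td = rewards.mean()   # idealized tabular TD: V(A) bootstraps from V(C),
                          # and V(C) averages *every* path through C
    return mc, td

trials = np.array([one_trial() for _ in range(2000)])
print("Monte Carlo V(A): std %.3f" % trials[:, 0].std())
print("TD-style    V(A): std %.3f" % trials[:, 1].std())
```

Across trials the TD-style estimate of V(A) shows a markedly smaller spread, in line with the 1/N variance argument above.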
The most obvious reason to use Q-functions is that they allow\n\n us to compare different actions.\n\n \n\n #qlearning-intro {\n\n display: grid;\n\n grid-template-columns: repeat(auto-fit, minmax(130px, 1fr));\n\n grid-column-gap: 30px;\n\n }\n\n \n\n![](The%20Paths%20Perspective%20on%20Value%20Learning_files/policy.svg)\n\n Many times we’d like to compare the value of actions under a policy.\n\n \n\n![](The%20Paths%20Perspective%20on%20Value%20Learning_files/value.svg)\n\n It’s hard to do this with a value function.\n\n \n\n![](The%20Paths%20Perspective%20on%20Value%20Learning_files/qvalue.svg)\n\n It’s easier to use Q-functions, which estimate joint state-action values.\n\n \n\n There are some other nice properties of Q-functions. In order to \n\nsee them, let’s write out the Monte Carlo and TD update rules.\n\n \n\n \n\nQ(st,at)  Q(s\\_t, a\\_t)~~Q(st​,at​)  \n\n↩  \\hookleftarrow~~↩  \n\nRtR\\_tRt​\n\n State-action value \n\n Return \n\n We still update towards the return. Instead of updating towards \n\nthe return of being in some state, though, we update towards the return \n\nof being in some state *and* selecting some action.\n\n \n\n Now let’s try doing the same thing with the TD update:\n\n \n\nQ(st,at)  Q(s\\_t, a\\_t)~~Q(st​,at​)  \n\n↩  \\hookleftarrow~~↩  \n\nrtr\\_{t} rt​\n\n+++\n\nγQ(st+1,at+1)\\gamma Q(s\\_{t+1}, a\\_{t+1})γQ(st+1​,at+1​)\n\n State-action value \n\nReward\n\nNext state value\n\n What we need is a better estimate of V(st+1)V(s\\_{t+1})V(st+1​).\n\n \n\nQ(st,at)  Q(s\\_t, a\\_t)~~Q(st​,at​)  \n\n↩  \\hookleftarrow~~↩  \n\nrtr\\_{t} rt​\n\n+++\n\nγV(st+1)\\gamma V(s\\_{t+1})γV(st+1​)\n\n State-action value \n\nReward\n\nNext state value\n\nV(st+1)  V(s\\_{t+1})~~V(st+1​)  \n\n=  =~~=  \n\n?? ?\n\n \n\nLearning Q-functions with reweighted paths\n\n------------------------------------------\n\n**Expected Sarsa.**\n\n \n\n \n\n![](The%20Paths%20Perspective%20on%20Value%20Learning_files/sarsa.svg)\n\n \n\n![](The%20Paths%20Perspective%20on%20Value%20Learning_files/expected-sarsa.svg)\n\n than a value estimate computed straight from the experience. This is \n\nbecause the expectation value weights the Q-values by the true policy \n\ndistribution rather than the empirical policy distribution. In doing \n\n**Off-policy value learning.** We can push this idea even \n\nfurther. Instead of weighting Q-values by the true policy distribution, \n\nwe can weight them by an arbitrary policy, πoff\\pi^{off}πoff:\n\n \n\n**Off-policy value learning** weights Q-values by an arbitrary policy.\n\n \n\n![](The%20Paths%20Perspective%20on%20Value%20Learning_files/off-policy.svg)\n\n This slight modification lets us estimate value under any policy \n\nwe like. It’s interesting to think about Expected Sarsa as a special \n\ncase of off-policy learning that’s used for on-policy estimation.\n\n \n\n**Re-weighting path intersections.** What does the paths \n\nperspective say about off-policy learning? 
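Before turning to that question, the targets just described can be pinned down in a few lines of tabular code. The Sarsa target bootstraps from the action the agent actually took next, while the Expected Sarsa / off-policy target weights the next state's Q-values by a policy distribution: the agent's own policy for Expected Sarsa, or any other policy for off-policy estimation. Sizes and numbers below are illustrative.

```python
import numpy as np

def sarsa_target(Q, s_next, a_next, r, gamma=0.99):
    """On-policy Sarsa: bootstrap from the action the agent actually took next."""
    return r + gamma * Q[s_next, a_next]

def expected_target(Q, s_next, r, policy_probs, gamma=0.99):
    """Expected Sarsa / off-policy learning: bootstrap from Q-values weighted by a
    policy distribution pi(a | s_next) -- the true policy for Expected Sarsa,
    or an arbitrary pi_off for off-policy value estimation."""
    return r + gamma * np.dot(policy_probs, Q[s_next])

# Tabular usage sketch (hypothetical sizes): 48 states x 4 actions.
Q = np.zeros((48, 4))
lr = 0.1
s, a, r, s_next = 36, 0, -1.0, 24
probs = np.full(4, 0.25)   # e.g. a uniform off-policy distribution
Q[s, a] += lr * (expected_target(Q, s_next, r, probs) - Q[s, a])
```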
To answer this question, \n\nlet’s consider some state where multiple paths of experience intersect.\n\n \n\n #reweighting {\n\n display: grid;\n\n grid-template-columns: repeat(auto-fit, minmax(160px, 1fr));\n\n grid-gap: 30px;\n\n }\n\n /\\* #reweighting .wrapper {\n\n min-width: 160px;\n\n } \\*/\n\n #reweighting-full .cls-1 {\n\n fill: #2d307b;\n\n }\n\n #reweighting-full .cls-2 {\n\n fill: #e7ebe8;\n\n }\n\n #reweighting-full .cls-3 {\n\n fill: #cac9cc;\n\n }\n\n #reweighting-full .cls-4 {\n\n fill: #bd5f35;\n\n }\n\n #reweighting-full .cls-5, .cls-6, .cls-7 {\n\n fill: none;\n\n stroke-width: 10px;\n\n }\n\n #reweighting-full .cls-5 {\n\n stroke: #bd5f35;\n\n }\n\n #reweighting-full .cls-5, .cls-7 {\n\n stroke-miterlimit: 10;\n\n }\n\n #reweighting-full .cls-6 {\n\n stroke: #2d307b;\n\n stroke-linecap: round;\n\n stroke-linejoin: round;\n\n }\n\n #reweighting-full .cls-7 {\n\n stroke: #8191c9;\n\n }\n\n #reweighting-full .cls-8 {\n\n fill: #8191c9;\n\n }\n\n #reweighting-full .cls-9 {\n\n font-size: 24.25341px;\n\n fill: #d1d3d4;\n\n font-family: Arial-BoldMT, Arial;\n\n font-weight: 700;\n\n }\n\n \n\n![](The%20Paths%20Perspective%20on%20Value%20Learning_files/reweighting-1.svg)\n\n Multiple paths of experience intersect at this state.\n\n \n\n![](The%20Paths%20Perspective%20on%20Value%20Learning_files/reweighting-2.svg)\n\n \n\n+2\n\n-1\n\nWeight of upward path: 0.50\n\n \n\n Wherever intersecting paths are re-weighted, the paths that are \n\nmost representative of the off-policy distribution end up making larger \n\ncontributions to the value estimate. Meanwhile, paths that have low \n\nprobability make smaller contributions.\n\n \n\n**Q-learning.** There are many cases where an agent needs to \n\ncollect experience under a sub-optimal policy (e.g. to improve \n\nexploration) while estimating value under an optimal one. In these \n\ncases, we use a version of off-policy learning called Q-learning.\n\n \n\n \n\n![](The%20Paths%20Perspective%20on%20Value%20Learning_files/q-learning.svg)\n\n Q-learning prunes away all but the highest-valued paths. The paths \n\nthat remain are the paths that the agent will follow at test time; they \n\nare the only ones it needs to pay attention to. This sort of value \n\n \n\n**Double Q-Learning.** The problem with Q-learning is that it \n\ngives biased value estimates. More specifically, it is over-optimistic \n\nin the presence of noisy rewards. Here’s an example where Q-learning \n\nfails:\n\n \n\n*You go to a casino and play a hundred slot machines. It’s your \n\nlucky day: you hit the jackpot on machine 43. Now, if you use Q-learning\n\n to estimate the value of being in the casino, you will choose the best \n\noutcome over the actions of playing slot machines. You’ll end up \n\nthinking that the value of the casino is the value of the jackpot…and \n\ndecide that the casino is a great place to be!*\n\n Sometimes the largest Q-value of a state is large *just by chance*;\n\n choosing it over others makes the value estimate biased.\n\n One way to reduce this bias is to have a friend visit the casino \n\nand play the same set of slot machines. Then, ask them what their \n\nwinnings were at machine 43 and use their response as your value \n\nestimate. 
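In code, the contrast between the max backup and the two-estimator fix sketched in the casino story looks roughly like this (standard tabular forms; the step size, discount, and table shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def q_learning_update(Q, s, a, r, s_next, gamma=0.99, lr=0.1):
    """Q-learning: bootstrap from the single highest-valued action -- prone to
    over-optimism when rewards are noisy (the casino problem above)."""
    Q[s, a] += lr * (r + gamma * Q[s_next].max() - Q[s, a])

def double_q_update(Q_A, Q_B, s, a, r, s_next, gamma=0.99, lr=0.1):
    """Double Q-learning: one table picks the best action and the *other* table
    evaluates it, so a lucky noisy estimate is unlikely to be both selected and
    over-valued by the same table."""
    if rng.random() < 0.5:
        best = Q_A[s_next].argmax()
        Q_A[s, a] += lr * (r + gamma * Q_B[s_next, best] - Q_A[s, a])
    else:
        best = Q_B[s_next].argmax()
        Q_B[s, a] += lr * (r + gamma * Q_A[s_next, best] - Q_B[s, a])
```

Because the table that selects the best action is not the table that evaluates it, a slot machine that looks good only by chance is unlikely to be over-valued by both.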
It’s not likely that you both won the jackpot on the same \n\nmachine, so this time you won’t end up with an over-optimistic estimate.\n\n We call this approach *Double Q-learning*.\n\n**Putting it together.** It’s easy to think of Sarsa, Expected\n\n Sarsa, Q-learning, and Double Q-learning as different algorithms. But \n\n \n\n#### On-policy methods\n\n \n\n![](The%20Paths%20Perspective%20on%20Value%20Learning_files/sarsa.svg)\n\n \n\n![](The%20Paths%20Perspective%20on%20Value%20Learning_files/expected-sarsa.svg)\n\n#### Off-policy methods\n\n**Off-policy value learning** weights Q-values by an arbitrary policy.\n\n \n\n![](The%20Paths%20Perspective%20on%20Value%20Learning_files/off-policy.svg)\n\n \n\n![](The%20Paths%20Perspective%20on%20Value%20Learning_files/q-learning.svg)\n\n QBQ\\_BQB​.\n\n \n\n \n\n**Re-weighting paths with Monte Carlo.** At this point, a \n\nnatural question is: Could we accomplish the same re-weighting effect \n\nwith Monte Carlo? We could, but it would be messier and involve \n\nre-weighting all of the agent’s experience. By working at intersections,\n\n TD learning re-weights individual transitions instead of episodes as a \n\nwhole. This makes TD methods much more convenient for off-policy \n\nlearning.\n\n \n\nMerging Paths with Function Approximators\n\n-----------------------------------------\n\n Up until now, we’ve learned one parameter — the value \n\nestimate — for every state or every state-action pair. This works well \n\nfor the Cliff World example because it has a small number of states. But\n\n most interesting RL problems have a large or infinite number of states.\n\n This makes it hard to store value estimates for each state.\n\n \n\n #figure-fnapprox-intro {\n\n display: grid;\n\n grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));\n\n grid-column-gap: 20px;\n\n grid-row-gap: 10px;\n\n /\\* grid-auto-flow: column; \\*/\n\n }\n\n \n\n in these spaces often requires function approximation.\n\n \n\n memory and don’t generalize.\n\n \n\n generalize to states they haven’t visited yet.\n\n \n\n Instead, we must force our value estimator to have fewer \n\nparameters than there are states. We can do this with machine learning \n\nmethods such as linear regression, decision trees, or neural networks. \n\nAll of these methods fall under the umbrella of function approximation.\n\n \n\n**Merging nearby paths.** From the paths perspective, we can \n\ninterpret function approximation as a way of merging nearby paths. But \n\nwhat do we mean by “nearby”? In the figure above, we made an implicit \n\ndecision to measure “nearby” with Euclidean distance. This was a good \n\nidea because the Euclidean distance between two states is highly \n\ncorrelated with the probability that the agent will transition between \n\nthem.\n\n \n\n However, it’s easy to imagine cases where this implicit assumption\n\n breaks down. By adding a single long barrier, we can construct a case \n\nwhere the Euclidean distance metric leads to bad generalization. 
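One way to make the "Euclidean averager" concrete is to let a TD update spread over states in proportion to how close they are on the plane. The sketch below is our own illustrative stand-in, not the exact averager used in the figures; in the barrier example, states on opposite sides of the wall are close in Euclidean distance and therefore wrongly share the update.

```python
import numpy as np

def averager_td_update(V, coords, s, r, s_next, gamma=0.99, lr=0.1, bandwidth=1.5):
    """TD(0) with a Euclidean-averager-style approximator: instead of updating a single
    table entry, spread the update over states that are close in Euclidean distance."""
    target = r + gamma * V[s_next]
    dists = np.linalg.norm(coords - coords[s], axis=1)
    weights = np.exp(-(dists / bandwidth) ** 2)   # nearby states receive most of the update
    weights /= weights.sum()
    V += lr * weights * (target - V)
    return V

# Hypothetical 4x12 grid laid out on the plane.
coords = np.array([(i // 12, i % 12) for i in range(48)], dtype=float)
V = np.zeros(48)
V = averager_td_update(V, coords, s=36, r=-1.0, s_next=24)
```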
The \n\nproblem is that we have merged the wrong paths.\n\n \n\n #fnapprox-barrier .wrapper {\n\n margin-left: auto;\n\n margin-right: auto;\n\n display: grid;\n\n grid-template-columns: 1fr 1.464fr;\n\n grid-template-rows: auto auto;\n\n grid-column-gap: 20px;\n\n grid-row-gap: 10px;\n\n grid-auto-flow: column;\n\n }\n\n \n\n Imagine changing the Cliff World setup by adding a long barrier.\n\n \n\n Now, using the Euclidean averager leads to bad value updates.\n\n \n\n**Merging the wrong paths.** The diagram below shows the \n\neffects of merging the wrong paths a bit more explicitly. Since the \n\nEuclidean averager is to blame for poor generalization, both Monte Carlo\n\n and TD make bad value updates. However, TD learning amplifies these \n\nerrors dramatically whereas Monte Carlo does not.\n\n \n\n We’ve seen that TD learning makes more efficient value updates. \n\nThe price we pay is that these updates end up being much more sensitive \n\nto bad generalization.\n\n \n\nImplications for deep reinforcement learning\n\n--------------------------------------------\n\n**Neural networks.** Deep neural networks are perhaps the most \n\npopular function approximators for reinforcement learning. These models \n\nare exciting for many reasons, but one particularly nice property is \n\nthat they don’t make implicit assumptions about which states are \n\n“nearby.”\n\n \n\n Early in training, neural networks, like averagers, tend to merge \n\nthe wrong paths of experience. In the Cliff Walking example, an \n\nuntrained neural network might make the same bad value updates as the \n\nEuclidean averager.\n\n \n\n But as training progresses, neural networks can actually learn to \n\novercome these errors. They learn which states are “nearby” from \n\nexperience. In the Cliff World example, we might expect a fully-trained \n\n the barrier. This isn’t something that most other function \n\napproximators can do. It’s one of the reasons deep RL is so interesting!\n\n \n\n![](The%20Paths%20Perspective%20on%20Value%20Learning_files/latent-distance.png)\n\n The agent, which was trained to grasp objects using the robotic arm, \n\ntakes into account obstacles and arm length when it measures the \n\ndistance between two states.\n\n \n\n**TD or not TD?** So far, we’ve seen how TD learning can \n\noutperform Monte Carlo by merging paths of experience where they \n\nintersect. We’ve also seen that merging paths is a double-edged sword: \n\nwhen function approximation causes bad value updates, TD can end up \n\ndoing worse.\n\n \n\n Over the last few decades, most work in RL has preferred TD \n\nlearning to Monte Carlo. Indeed, many approaches to RL use TD-style \n\nvalue updates. With that being said, there are many other ways to use \n\nMonte Carlo for reinforcement learning. Our discussion centers around \n\nMonte Carlo for value estimation in this article, but it can also be \n\nused for policy selection as in Silver et al.\n\n Since Monte Carlo and TD learning both have desirable properties, \n\nwhy not try building a value estimator that is a mixture of the two? \n\n coefficient constant as they train a deep RL model. 
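One standard way to build such a Monte Carlo / TD mixture is the λ-return, computed backwards over an episode: λ = 0 recovers the one-step TD target and λ = 1 recovers the Monte Carlo return, with intermediate values blending the two. The sketch below is the usual textbook form, not code from the article.

```python
import numpy as np

def lambda_returns(rewards, values_next, gamma=0.99, lam=0.9):
    """Offline lambda-return targets. `values_next[t]` is the current estimate
    V(s_{t+1}), with 0 for the terminal state."""
    G = 0.0
    targets = np.zeros(len(rewards))
    for t in reversed(range(len(rewards))):
        G = rewards[t] + gamma * ((1 - lam) * values_next[t] + lam * G)
        targets[t] = G
    return targets

rewards = [-1.0, -1.0, 10.0]
values_next = [0.5, 2.0, 0.0]   # V(s_1), V(s_2), V(terminal) = 0
print(lambda_returns(rewards, values_next, gamma=1.0, lam=0.0))  # one-step TD targets
print(lambda_returns(rewards, values_next, gamma=1.0, lam=1.0))  # Monte Carlo returns
```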
However, if we \n\nthink Monte Carlo learning is best early in training (before the agent \n\nhas learned a good state representation) and TD learning is best later \n\non (when it’s easier to benefit from merging paths), maybe the best \n\napproach is to anneal λ\\lambdaλ over the course of training..\n\n \n\nConclusion\n\n----------\n\n In this article we introduced a new way to think about TD \n\nlearning. It helps us see why TD learning can be beneficial, why it can \n\nbe effective for off-policy learning, and why there can be challenges in\n\n combining TD learning with function approximators.\n\n \n\n \n\n#### Gridworld playground\n\n##### Learning Algorithm\n\nMonte CarloSarsaExpected SarsaQ-Learning##### Visualization\n\nPolicyQ(s,a)V(s)##### Epsilon-greedy policy\n\nexploreexploitAdd an agent+2-1\n\n", "bibliography_bib": [{"title": "Reinforcement Learning: An Introduction"}, {"title": "Double Q-learning"}, {"title": "Universal Planning Networks"}, {"title": "Mastering the game of go without human knowledge"}], "filename": "The Paths Perspective on Value Learning.html", "id": "e19bb7054247944a75cd31cdda8c9d42"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Curve Circuits", "authors": ["Nick Cammarata", "Gabriel Goh", "Shan Carter", "Chelsea Voss", "Ludwig Schubert", "Chris Olah"], "date_published": "2021-01-30", "abstract": "This article is part of the Circuits thread, an experimental format collecting invited short articles and critical commentary delving into the inner workings of neural networks.", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00024.006", "text": "### Contents\n\n* [3a Line Neuron Family](#line-families)\n\n* [Cosmetic Neurons](#cosmetic-neurons)\n\n an experimental format collecting invited short articles and critical \n\ncommentary delving into the inner workings of neural networks.\n\n takes the opposite approach: building up from individual neurons and \n\nindividual weights to easily verifiable explanations of tiny slices of \n\nneural networks. But it faces a different question: can we ever hope to \n\nunderstand a full neural network this way?\n\n We find that although curve detection involves more than 50,000 \n\nparameters, those parameters actually implement a simple algorithm that \n\ncan be read off the weights and described in just a few English \n\nsentences. Based on this understanding, we re-implement curve detectors \n\n Rotational equivariance reduces the complexity by a factor of 10-20x \n\nwhile scale equivariance reduces it by an additional 2-3x, for a total \n\nreduction of ~50xA small portion involving\n\n the interactions of color-contrast detectors with line detectors is \n\nfurther reduced by a factor of ~8x due to hue equivariance, but doesn't \n\nsignificantly reduce the complexity of the overall circuit..\n\n We find this an exciting example of how motifs can dramatically \n\nsimplify circuits. 
In this circuit the main motif is equivariance, but \n\nin others different motifs may be more important.\n\nWhile the curve \n\ncircuit in InceptionV1 is quaint next to the 175b weights in GPT-3, it's\n\n We think the surprising simplicity of the curve circuit is a glimmer of\n\n hope that the Circuits approach may scale to letting us \n\nreverse-engineer big neural networks into small verifiable explanations.\n\n---\n\nThe Curve Detector Algorithm\n\n----------------------------\n\n Decomposed feature visualization renders a grid of feature \n\nvisualization that shows the expanded weights of an upstream layer to a \n\ndownstream neuron of interest (in this case 3b:379)Specifically,\n\n we calculate the gradient from each layer to the 3b curve family using a\n\n linear approximation of InceptionV1 where ReLUs are replaced with the \n\nidentity. Then we use feature visualization with the objective of \n\nmaximizing the dot product between each image and each spatial position \n\nof the gradient.. Intuitively, this is like decomposing the\n\n curve into simpler shapes, with simpler shapes the further back we go, \n\nfrom early curves, to lines, and eventually Gabor filters. Each shape is\n\n oriented along the tangent of the curve in 3b:379.\n\nconv2d0Gabor\n\n Filters line the tangent with secondary influence from the Color \n\n tangent is lined primarily by Complex Gabor neurons, which resemble \n\nGabor Filters but are invariant to the exact positions of which side is \n\n tangent of 3a is primarily lined by the Early Curve family as well as a\n\n second Line family. We’ll explore this layer in depth throughout this \n\n use decomposed feature visualization to show how the first four layers \n\nof InceptionV1 incrementally build towards curve detectors in 3b. To \n\nincrease legibility, we render each feature visualization with alpha \n\ntransparency and grayscale. Since curve detectors are invariant to \n\ncolor, the vibrant colors feature visualization produces don't give us \n\n To give us a sense of the strength of each position we set the opacity \n\nrelative to its magnitude. We see the highest magnitude weights follow \n\nthe tangent of 3b:379.We can take the \n\nhighest magnitude position from each layer above and see which neuron \n\n doing this decomposition, it really matters which metric you choose to \n\nmeasure how neurons contribute to the vector, such as average or total \n\nweight, since there is a long tail of neurons with low magnitude that \n\ncollectively make up a large fraction of the total weight. Training \n\nmodels with weight sparsity would reduce this long tail..\n\n decompose the highest magnitude spatial position in each layer into \n\nneuron families to see which families contribute most to curve \n\n each family we can also visualize how they connect to each other, \n\nshowing us a birds-eye view of the curve detection algorithm. Again, we \n\nsee the layers progressively building more complex shapes, with shapes \n\nthat closely resemble curve detectors like early curves and well-defined\n\n lines being built in 3a, only one layer before curve detectors.\n\n![](Curve%20Circuits_files/figure_003.png)We\n\n can visualize how neuron families connect to see a high-level view of \n\nthe curve detection algorithm of InceptionV1. In addition to giving us a\n\n sense of how these families are built, it lets us see how information \n\nflows to 3b curves. For instance, we see that conv2d2 lines contribute \n\nto 3b curves through both 3a lines and 3a early curves. 
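The linear-expansion step behind decomposed feature visualization can be illustrated on a toy stand-in: once the ReLUs between two layers are treated as the identity, the gradient of a downstream unit with respect to an earlier layer's activations is exactly the expanded weight tensor between them. The sketch below uses a single random conv layer rather than InceptionV1, so it only demonstrates the mechanics, not the actual curve circuit.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in (not InceptionV1): one conv layer mapping an "earlier" 8-channel layer
# to a single downstream "curve unit" channel.
conv = nn.Conv2d(8, 1, kernel_size=5, padding=2, bias=False)

# Activations of the earlier layer. In the linearized network (ReLUs replaced by the
# identity) the map from here to the curve unit is linear, so the gradient below is
# exactly the expanded weight tensor between the two layers.
acts = torch.zeros(1, 8, 12, 12, requires_grad=True)
curve_unit = conv(acts)[0, 0, 6, 6]   # the unit's response at the center position
curve_unit.backward()

expanded = acts.grad[0]               # shape (8, 12, 12)
print(expanded.shape)

# Decomposed feature visualization then optimizes one small image per spatial position
# (y, x) so that its activations align, in the dot-product sense, with expanded[:, y, x].
```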
While it may \n\nseem like this throws away lots of information, in a sense this is a \n\nnatural way to understand a circuit. It’s much easier to think about a \n\nline family in conv2d2 than 33 individual line neurons that vary in \n\norientation.Now that we know which neuron families\n\n are most important for curve detection, we can invest in understanding \n\nthe weights connecting them. Luckily this is sometimes easy, since \n\n Many families in this diagram implement the rotational equivariance \n\nmotif, meaning that each neuron in a family is approximately a rotated \n\nversion of another neuron in that family. We can see this in the weights\n\n connecting 3a early curves and 3b curves.\n\n weights connecting 3a early curves with 3b:379 implement rotational \n\nequivariance. It turns out there are curves of curves literally \n\ninscribed in the weights of neural networks. We’ll discuss equivariance \n\nlater in this paper, and likely again in the Circuits thread in a \n\ndedicated paper.When neuron families implement \n\nrotational equivariance we learn a lot looking at the strongest positive\n\n and negative neuron connections, because the others are just rotated \n\nversions of them. In the weights we see a general pattern, as each layer\n\n builds a contrast detector by tiling a simpler contrast detector along \n\nits tangent. When the two shapes are aligned the weight is most \n\npositive, and most negative when they are perpendicular.\n\n![](Curve%20Circuits_files/figure_002.png)Some\n\n of the strongest positive and negative weights between neuron families \n\nin the curve circuit. We see two patterns. First, the most positive \n\nweights tend to be where shapes are aligned, and the most negative are \n\nwhere shapes are in opposing orientations. Secondly, we see the same \n\ntransitions can occur multiple times across different layers, such as \n\nthe transition from lines to curves.So far we’ve \n\nlooked at the neuron families that are most important to curve \n\ndetection, and also at the circuit motifs connecting them. This \n\nhigh-level “circuit schematic” is useful for seeing a complex algorithm \n\nat a glance, and it tells us the main components we'd need to build if \n\nwe wanted to reimplement the circuit from scratch.\n\nThe circuit \n\nschematic also makes it easy to describe a few sentence English story of\n\n how curve detection works. Gabor filters turn into proto-lines which \n\nbuild lines and early curves. Finally, lines and early curves are \n\ncomposed into curves. In each case, each shape family (eg conv2d2 line) \n\nhas positive weight across the tangent of the shape family it builds \n\n(eg. 3a early curve). Each shape family implements the rotational \n\nequivariance motif, containing multiple rotated copies of approximately \n\nthe same neuron.\n\nIn the next few sections we'll zoom in from this \n\nhigh level description to a weight-level analysis of how each of the 3a \n\nfamilies we've looked at so far contribute to curve detectors.To\n\n read more about the idea and benefits of fluidly navigating different \n\n---\n\nThe Components of Curve Detectors\n\n---------------------------------\n\n### 3a Early Curve Family\n\nEarly\n\n curves in 3a contribute more to the construction of curve detectors \n\nthan any other neuron family. 
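Because the rotational-equivariance motif does so much of the work in what follows, a toy version may help. The snippet below is synthetic, not weights extracted from InceptionV1: two orientation-indexed families in which the weight between a pair of units depends only on the difference of their preferred orientations, so every row of the weight block is a shifted copy of every other row.

```python
# A toy illustration (not the trained weights) of the rotational-equivariance motif.
import numpy as np

n_early, n_curve = 16, 16                       # hypothetical family sizes
early_theta = np.linspace(0, 2 * np.pi, n_early, endpoint=False)
curve_theta = np.linspace(0, 2 * np.pi, n_curve, endpoint=False)

# Weight as a function of orientation difference: +1 when the two shapes are
# aligned, -1 when they point in opposing directions.
W = np.cos(curve_theta[:, None] - early_theta[None, :])

# Rolling one curve unit's row reproduces the next unit's row: the weight block
# is (approximately) a rotated copy of itself, which is what "equivariant
# weights" means here.
assert np.allclose(np.roll(W[0], 1), W[1])
```

This is the sense in which "curves of curves" can be inscribed in the weights: the strongest positive weights sit where the orientations agree, and the strongest negative weights where they oppose.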
At every position curve neurons are \n\nexcited by early curves in a similar orientation and inhibited by ones \n\nin the opposing orientation, closely following the general pattern we \n\nsaw earlier.\n\n every position curve neurons are excited by early curves in a similar \n\norientation and inhibited by ones in the opposing orientation. If you \n\nlook closely you can see the weights shift slightly over the course of \n\nthe curve to track the change in local curvature.\n\n are a couple other subtle things you can find if you look closely. \n\nFirstly, you can see that the weights shift slightly over the course of \n\nthe curve, to track the change in local curvature. Secondly, curve \n\ndetectors in the same orientation are slightly inhibitory when they’re \n\nlocated off the curve. This makes sense, as seeing an early curve \n\nlocated anywhere but the tangent indicates a different shape than a pure\n\n curve.The weight matrix connecting early curves \n\nand curves shows a striking set of positive weights lining the diagonal \n\nwhere the two shapes have similar orientations. This band of positive \n\nweights is surrounded by negative weights where the early curve and \n\ncurve orientations differ. The transition is smooth — if a curve \n\nstrongly wants to see an early curve it wants to see its neighbor a bit \n\ntoo.\n\n do we see that curve detectors which are rotated 180 degrees from each \n\nother are inhibitory, but not ones rotated 90 degrees? Recall from our \n\nprevious article that early curve detectors respond slightly to curves \n\nthat are 180 degrees rotated from their prefered orientation, which we \n\ncalled an echo. This makes sense: a curve in the opposite orientation \n\nruns tangent to the curve we’re trying to detect in the middle of the \n\nreceptive field, causing similar edge detectors to fire.\n\nThese \n\nnegative weights are the reason curve neurons in 3b have no echoes, \n\nwhich we can validate by using circuit editing to remove them and \n\nconfirming that echoes appear.\n\n curve family of 3b uses negative weights to avoid the problem of \n\nechoes. By removing these negative weights with circuit editing we see \n\nechoes reappear.Using negative weights in this way\n\n follows a more general trend: our experience is that negative weights \n\nare used for things that are in some way similar enough that they could \n\n### 3a Line Families\n\n Since they have similar roles in the context of curve detectors we’ll \n\ndiscuss them together, while pointing out their subtle differences.\n\nLike\n\n early curves, lines align to the tangent of curve detectors, with more \n\npositive weights when the neurons have the same orientation. However, \n\nthis pattern is more nuanced and discontinuous with lines than early \n\ncurves. A line with a similar orientation to a curve will excite it, but\n\n a line that's rotated a little may inhibit it. This makes it hard to \n\nsee a general pattern by looking directly at the weight matrix.\n\nInstead,\n\n we can study which line orientations excite curve detectors using \n\nsynthetic stimuli. We can take a similar approach to decomposed feature \n\nvisualization to see which lines different spatial positions respond to.\n\n This shows us that each 3b curve neuron is excited by edges along its \n\ntangent line with a tolerance of between about 10° to 45°.\n\n tolerance isn’t always symmetric, which we can see in 3b:406 below. On \n\nits left side it is most excited by lines about 10° upwards. 
If the line\n\n is oriented above 10° it is still excited, but if it is less than 10° \n\nit switches to being inhibited.\n\n view tells us how each curve detector responds to different \n\norientations of lines in general. We can connect this back to individual\n\n line neurons by studying which orientations those line neurons respond \n\n#### Lines\n\nThere\n\n are 11 simple line neurons in 3a that mostly fire to one orientation, \n\nalthough some activate more weakly to an \"echo\" 90 degrees away.\n\nFive\n\n neurons in 3a respond to lines that are perpendicular to the \n\norientation where they are longest. These neurons mostly detect fur, but\n\n they also contribute to 3b curves. \n\nFinally,\n\n there are five line neurons with curiously sharp transitions. These \n\nlines want an orientation facing a particular direction, and tolerate \n\nerror in that direction, but definitely don't want it going the other \n\nway, even if slightly. In curve detection, this is useful for handling \n\nimperfections, like bumps.\n\n find cliff-like line neurons an interesting example of non-linear \n\nbehavior. We usually think of neurons as measuring distance from some \n\nideal. For instance, we may expect car neurons to prefer Or\n\n we could imagine a more sophisticated metric, such as style. Since \n\nImageNet contains classes for sports cars and non-sports cars, there may\n\n be neurons that measure the “sportiness\" of different cars. We think \n\nthat studying neurons that correspond to cars is an interesting research\n\n direction, since it’s easy to accumulate datasets of cars with labelled\n\n properties such as year, country of origin, and price. \n\ncars of a certain size, and have weaker activations to cars that are \n\nbigger or smaller than its ideal. But this line family provides a \n\ncounter-example, accepting error on one side while not firing at all on \n\nthe other.\n\n#### Lines in conv2d2\n\nThe\n\n different types of line neurons we looked at above each have different \n\nbehaviors, which is part of why the weight matrix between 3a lines and \n\n3b curves is indecipherable. However, if we go back one more layer and \n\nlook at how conv2d2 lines connect to 3b curves, we see structure.\n\n think this points to an interesting property of both curves and lines \n\nin InceptionV1. The line family in conv2d2 are roughly \"pure\" line \n\ndetectors, detecting lines in an elegant pattern. The next layer (3a) \n\nbuilds lines too, but they're more applied and nuanced, with \n\nseemingly-awkward behavior like cliff-lines where the network finds them\n\n useful. Similarly, the 3b curve detectors are surprisingly elegant \n\ndetectors of curves behaviorally, and mostly follow clean patterns in \n\ntheir construction. In contrast, the curves in the next layer (4a) are \n\n and seemingly specialized for detecting real-world objects like the \n\ntops of cups and pans. Perhaps this points to a yet unnamed motif of \n\npure shape detectors directly followed by applied ones.\n\n### Cosmetic Neurons\n\nIn\n\n Curve Detectors we saw how curve neurons seem to be robust to several \n\ncosmetic properties, with similar behavior across textures like metal \n\nand wood in a variety of lighting. How do they do it?\n\n![](Curve%20Circuits_files/datasetExamples.png)We\n\n believe this reflects a more widespread phenomenon in early vision. As \n\nprogressively sophisticated shapes are built in each layer, new shapes \n\nincorporate cosmetic neuron families like colors and texture. 
For \n\ninstance, 3b curve neurons are built primarily from the line and early \n\ncurve neuron families, but they also incorporate a family of 65 texture \n\nneurons. This means they both inherit the cosmetic robustness of the \n\nline and early curve neurons, as well as strengthen it by including more\n\n textures.\n\nWhile we won't do a detailed weight-level analysis of \n\nhow cosmetic robustness propagates through the shapes of early vision in\n\n this article, which is a broader topic than curve detection, we think \n\nthis is an exciting direction for future research.\n\n---\n\nAn Artificial Artificial Neural Network\n\n---------------------------------------\n\nHow\n\n do we know this story about the mechanics of curve detectors is true? \n\nOne way is to use it to reimplement curve detectors from scratch. We \n\nmanually set the weights of a blank neural network to implement the \n\n This was initially a few hour process for one person (Chris Olah), and \n\nthey did not look at the original neural network’s weights when \n\nconstructing it, which would go against the spirit of the exercise. \n\nLater, before publishing they tweaked the weights and in particular \n\nadded negative weights to remove echoes in the activations.\n\nTo \n\ncompare our artificial curve detectors against InceptionV1's naturally \n\n available to us. We'll choose three: feature visualization, dataset \n\nexamples, and synthetic stimuli. From there we'll run two additional \n\ncomparisons by leveraging circuits and model editing.\n\nFirst we'll \n\nlook at feature visualization and responses to synthetic curve stimuli \n\ntogether. We see the feature visualizations indeed render curves, except\n\n they are grayscale since we never include cosmetic features such as \n\ncolor-contrast detectors in our artificial curves. We see their response\n\n to curve stimuli approximates the natural curve detectors across a \n\nrange of radii and all orientations. One difference is our artificial \n\n can compare the curve detectors in a neural network we hand-crafted \n\nwith the curve detectors in InceptionV1 by measuring how they activate \n\nto synthetic curve stimuli. We see that across a range of radii and \n\norientations, our artificial curve neurons approximate the natural ones.We\n\n can also get a qualitative sense for the differences by looking at a \n\nsaliency map of dataset examples that cause artificial curve detectors \n\nto fire stronglySince our artificial curve\n\n detectors don't respond to color, there are likely many false-negatives\n\n where natural curve detectors correctly catch that our artificial ones \n\ndon't. Our goal in building artificial curve detectors isn't perfection,\n\n but to show that we can use the circuit patterns described in this \n\narticle to roughly approximate the curve detection algorithm in \n\nInceptionV1..\n\n we can compare the weights for the circuits connecting neuron families \n\nin the two models, alongside feature visualizations for each of those \n\nfamilies. We see they follow approximately the same circuit structure.\n\n comparison of the most important family in each layer across our \n\nartificial artificial neural network with InceptionV1 trained on \n\nImageNet.We can also zoom into specific weight \n\nmatrices that we've already studied in this article. 
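Before turning to those weight matrices, here is a rough sketch of the synthetic-stimulus comparison just described: render arcs over a grid of radii and orientations and probe both models with them. Everything here is an assumption about the setup rather than the authors' code: `natural_model`, `artificial_model`, `CURVE_UNIT`, and the `activation` helper (returning a unit's response at the centre of its activation map) are placeholders.

```python
# A minimal sketch of probing curve detectors with synthetic arc stimuli.
import numpy as np
from PIL import Image, ImageDraw

def curve_stimulus(radius, orientation_deg, size=224, width=6):
    """Render a grayscale arc of the given radius passing through the image
    centre, then rotate the whole image to the requested orientation."""
    img = Image.new("L", (size, size), 255)
    draw = ImageDraw.Draw(img)
    cx, cy = size / 2, size / 2 + radius            # circle centre below the midpoint
    bbox = [cx - radius, cy - radius, cx + radius, cy + radius]
    draw.arc(bbox, start=245, end=295, fill=0, width=width)   # short arc at the top
    return img.rotate(orientation_deg, fillcolor=255)

radii = np.linspace(30, 100, 8)
orientations = np.arange(0, 360, 15)

# `activation(model, image, unit)` is an assumed helper, not a real API:
# responses = {
#     (r, o): (activation(natural_model, curve_stimulus(r, o), unit=CURVE_UNIT),
#              activation(artificial_model, curve_stimulus(r, o), unit=CURVE_UNIT))
#     for r in radii for o in orientations
# }
```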
We see the raw \n\nweights between early curves and curves as well as lines to curves look \n\napproximately like the ones in InceptionV1, but cleaner since we set \n\nthem programmatically.\n\n weights of the artificial artificial neural network closely resemble \n\nthe equivariant weight in the naturally trained network, except their \n\npattern is cleaner because the weights are programmetically generated. \n\nequivariance.Finally, a preliminary experiment \n\nsuggests that adding artificial curve detectors helps recover some of \n\nthe loss in classification accuracy across the dataset of removing them \n\nfrom the model entirelySpecifically, when \n\nwe remove InceptionV1's 3b curve detectors entirely, the model's top-5 \n\naccuracy across 2000 validation set images drops from 88.6% to 86.3% (46\n\n fewer correct classifications). When we replace 3b curve detectors with\n\n our artificial curve detectors more than half of this drop is \n\nrecovered, with an accuracy of 87.6% (reducing the gap from 46 incorrect\n\n classifications to 20). We suspect the remaining gap is due to three \n\nfactors. First, our artificial neurons are grayscale, since we are not \n\nimplementing color contrast neurons or texture neurons. Secondly, the \n\nreceptive fields of artificial neurons may not be exactly centered the \n\nsame as natural curve detectors. Thirdly, we may not have scaled the \n\nactivations of curve detectors optimally. \n\n \n\nAdditionally, there are\n\n two caveats worth mentioning about our experimental setup. First, our \n\nImageNet evaluation likely doesn't mimic the exact conditions the model \n\nwas trained under (eg. data preprocessing), since the original model was\n\n trained at Google using a precursor Tensorflow. Secondly, the reason we\n\n ran it on less than the full validation set was operational, not a \n\nresult of cherry-picking. We initially ran it on a small set in a \n\nprototype experiment to validate our hypothesis. We planned to run it on\n\n the full set before our publication date, but due OpenAI infrastructure\n\n changes our setup broke and we were unable to reimplement it in time. \n\nFor this reason we emphasize that the experiment is preliminary, \n\nalthough we suspect it's likely to work on the full validation set as \n\nwell..\n\nOverall, we believe these five experiments \n\nshow our artificial curve detectors are roughly analogous to the \n\nnaturally trained ones. Since they are nearly a direct translation from \n\nthe neuron families and circuit motifs we've described in this article \n\ninto Python code for setting weights, we think this is strong evidence \n\nthese patterns accurately reflect the underlying circuits that construct\n\n curve detectors.\n\n---\n\nDownstream\n\n----------\n\nWhile \n\nthis article focused mostly on how curve detection works upstream of the\n\n curve detector, it's also worth briefly considering how curves are used\n\n downstream in the model. It's easiest to see their mark on the next \n\nlayer, 4a, where they're used to construct more sophisticated shapes.\n\nMany\n\n of these shapes look for individual curves at different spatial \n\npositions, such as circles and evolutes. These shapes often reappear \n\nacross different branches of 4a, such as the 3x3 and 5x5 branch. \n\n![](Curve%20Circuits_files/figure.png)Layer\n\n 4a also constructs a series of curve detectors, mostly in the 5x5 \n\nbranch that specializes in 3d geometry. 
However, we believe they should be thought of less as pure abstract shapes and more as corresponding to the real-world objects they help detect.

This article is part of the Circuits thread, an experimental format collecting invited short articles and critical commentary delving into the inner workings of neural networks.

", "bibliography_bib": null, "filename": "Curve Circuits.html", "id": "89c3dd9dda28e8dd0bbaf7f8a66dfd01"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Two Examples of Useful, Non-Robust Features", "authors": ["Gabriel Goh"], "date_published": "2019-08-06", "abstract": " This article is part of a discussion of the Ilyas et al. paper “Adversarial examples are not bugs, they are features”. You can learn more in the main discussion article . ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00019.3", "text": "

This article is part of a discussion of the Ilyas et al. paper *“Adversarial examples are not bugs, they are features”.* You can learn more in the [main discussion article](https://distill.pub/2019/advex-bugs-discussion/).

[Other Comments](https://distill.pub/2019/advex-bugs-discussion/#commentaries) | [Comment by Ilyas et al.](#rebuttal)

Ilyas et al. define a *feature* as a function f whose usefulness is measured by its correlation with the label. But in the presence of an adversary, Ilyas et al. argue that the metric that truly matters is a feature's *robust usefulness*,

\mathbf{E}\left[\inf_{\|\delta\|\leq\epsilon} y f(x+\delta)\right],

its correlation with the label while under attack. Ilyas et al. argue that such useful but non-robust features exist. What do they look like?

### Non-Robust Features in Linear Models

It is hard to reason directly about the nonlinear models encountered in deep learning.
As Ilyas et al. do, we restrict our attention to linear features of the form

f(x) = \frac{a^T x}{\|a\|_\Sigma} \quad \text{where} \quad \Sigma = \mathbf{E}[xx^T] \quad \text{and} \quad \mathbf{E}[x] = 0.

The robust usefulness of a linear feature admits an elegant decomposition into two terms:

\begin{aligned}
\mathbf{E}\left[\inf_{\|\delta\|\leq\epsilon} y f(x+\delta)\right]
&= \mathbf{E}\left[y f(x) + \inf_{\|\delta\|\leq\epsilon} y f(\delta)\right] \\
&= \mathbf{E}[y f(x)] - \epsilon \frac{\|a\|_*}{\|a\|_\Sigma}.
\end{aligned}

The left-hand side is the robust usefulness of the feature; the first term on the right is the feature's correlation with the label, and the second is the feature's non-robustness, scaled by \epsilon.

[Figure: a two-dimensional plot of features in usefulness/non-robustness space, with a Pareto frontier of points labelled A through F. Subject to an L_2 adversary, high-frequency features are both less useful and less robust. A feature's log robustness is \log(\|a_i\|_\Sigma / \|a_i\|) = \log(\lambda_i); when the a_i are eigenvectors of \Sigma, the robustness reduces to the corresponding singular value \lambda_i.]

We demonstrate two constructions:

**Ensembles.** The work of Tsipras et al. suggests that a collection of non-robust and non-useful features, if sufficiently uncorrelated, can be ensembled into a single useful, non-robust feature f. The process is illustrated numerically in the plot above: we choose a set of non-robust features by excluding all features above a threshold, and naively ensemble them according to

(1-\alpha) \cdot a_{\text{non-robust}} + \alpha \cdot a_{\text{robust}}.

It is surprising, thus, that the experiments of Madry et al.

To cite Ilyas et al.'s response, please cite their

**Response Summary**: The construction of explicit non-robust features makes progress towards visualizing the useful non-robust features detected by our experiments. We also agree that non-robust features arising as "distractors" is indeed not precluded by our theoretical framework, even if it is precluded by our experiments. This simple theoretical framework sufficed for reasoning about and predicting the outcomes of our experiments. (We also presented a theoretical setting where we can analyze things fully rigorously in Section 4 of our paper.) However, this comment rightly identifies finding a more comprehensive definition of feature as an important future research direction.

**Response**: These experiments (visualizing the robustness and usefulness of different linear features) corroborate the existence of useful, non-robust features and make progress towards visualizing what these non-robust features actually look like. We also appreciate the point made by the provided construction of non-robust features (as defined in our theoretical framework) that are combinations of useful+robust and useless+non-robust features. Our theoretical framework indeed enables such a scenario, even if (as the commenter already notes) our experiments preclude it.
Specifically, in such a scenario, during the\n\n construction of the D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​ dataset, only the non-robust\n\n and useless term of the feature would be flipped. Thus, a classifier trained on\n\n such a dataset would associate the predictive robust feature with the\n\n *wrong* label and would thus not generalize on the test set. In contrast,\n\ndet​\n\n do generalize.\n\nOverall, our focus while developing our theoretical framework was on\n\n the comment points out, putting forth a theoretical framework that captures\n\n non-robust features in a very precise way is an important future research\n\n direction in itself. \n\n", "bibliography_bib": [{"title": "Adversarial examples are not bugs, they are features"}], "filename": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features' Two Examples of Useful, Non-Robust Features.html", "id": "150ff8c5e381ed651157d0f4cae65d9a"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Multimodal Neurons in Artificial Neural Networks", "authors": ["Gabriel Goh", "Nick Cammarata †", "Chelsea Voss †", "Shan Carter", "Michael Petrov", "Ludwig Schubert", "Alec Radford", "Chris Olah"], "date_published": "2021-03-04", "abstract": "In 2005, a letter published in Nature described human neurons responding to specific people, such as Jennifer Aniston or Halle Berry . The exciting thing wasn’t just that they selected for particular people, but that they did so regardless of whether they were shown photographs, drawings, or even images of the person’s name. The neurons were multimodal. As the lead author would put it: \"You are looking at the far end of the transformation from metric, visual shapes to conceptual… information.\" Quiroga's full quote, from New Scientist reads: \"I think that’s the excitement to these results. You are looking at the far end of the transformation from metric, visual shapes to conceptual memory-related information. It is that transformation that underlies our ability to understand the world. It’s not enough to see something familiar and match it. It’s the fact that you plug visual information into the rich tapestry of memory that brings it to life.\" We elided the portion discussing memory since it was less relevant.", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00030", "text": "#### Contents\n\n* [Emotion Neurons](#emotion-neurons)\n\n* [Region Neurons](#region-neurons)\n\n* [Feature Properties](#feature-properties)\n\n* [Understanding Language](#understanding-language)\n\n* [Emotion Composition](#emotional-intelligence)\n\n* [Faceted Feature Visualization](#faceted-feature-visualization)\n\n* [CLIP Training](#clip)\n\nIn\n\n 2005, a letter published in Nature described human neurons responding \n\nto specific people, such as Jennifer Aniston or Halle Berry .\n\n The exciting thing wasn’t just that they selected for particular \n\npeople, but that they did so regardless of whether they were shown \n\nphotographs, drawings, or even images of the person’s name. The neurons \n\nwere multimodal. As the lead author would put it: \"You are looking at \n\nthe far end of the transformation from metric, visual shapes to \n\n reads: \"I think that’s the excitement to these results. You are looking\n\n at the far end of the transformation from metric, visual shapes to \n\nconceptual memory-related information. It is that transformation that \n\nunderlies our ability to understand the world. 
It’s not enough to see \n\nsomething familiar and match it. It’s the fact that you plug visual \n\ninformation into the rich tapestry of memory that brings it to life.\" We\n\n elided the portion discussing memory since it was less relevant.\n\nWe\n\n report the existence of similar multimodal neurons in artificial neural\n\n networks. This includes neurons selecting for prominent public figures \n\n important to note that the vast majority of people these models \n\nrecognize don’t have a specific neuron, but instead are represented by a\n\n combination of neurons. Often, the contributing neurons are \n\nconceptually related. For example, we found a Donald Trump neuron which \n\nfires (albeit more weakly) for Mike Pence, contributing to representing \n\nhim.Some of the \n\nneurons we found seem strikingly similar to those described in \n\nneuroscience. A Donald Trump neuron we found might be seen as similar to\n\n And although we don’t find an exact Jennifer Aniston neuron, we do find\n\n a neuron for the TV show “Friends” which fires for her. \n\nLike the biological multimodal neurons, these artificial neurons respond\n\n to the same subject in photographs, drawings, and images of their name:\n\n neurons only scratch the surface of the highly abstract neurons we've \n\nfound. Some neurons seem like topics out of a kindergarten curriculum: \n\nweather, seasons, letters, counting, or primary colors. All of these \n\nfeatures, even the trivial-seeming ones, have rich multimodality, such \n\nWe find these multimodal neurons in the recent CLIP models ,\n\n although it's possible similar undiscovered multimodal neurons may \n\n authors also kindly shared an alternative version from earlier \n\nexperiments, where the training objective was an autoregressive language\n\n modelling objective, instead of a contrastive objective. The features \n\nseem pretty similar. There are several CLIP models of \n\nvarying sizes; we find multimodal neurons in all of them, but focus on \n\nstudying the mid-sized RN50-x4 model. We \n\nfound it challenging to make feature visualization work on the largest \n\nCLIP models. The reasons why remain unclear. See faceted feature \n\n for more detailed discussion of CLIP’s architecture and performance. \n\nOur analysis will focus on CLIP's vision side, so when we talk about a \n\nmultimodal neuron responding to text we mean the model \"reading\" text in\n\n images. The alignment with the text side \n\nof the model might be seen as an additional form of multimodality, \n\nperhaps analogous to a human neuron responding to hearing a word rather \n\nthan seeing it (see Quiroga’s later work). But since that is an expected\n\n result of the training objective, it seems less interesting.\n\nCLIP’s\n\n abstract visual features might be seen as the natural result of \n\naligning vision and text. We expect word embeddings (and language models\n\n generally) to learn abstract \"topic\" features .\n\n Either the side of the model which processes captions (the “language \n\nside”) needs to give up those features, or its counterpart, the “vision \n\nside”, needs to build visual analogues. Many\n\n researchers are interested in “grounding” language models by training \n\nthem on tasks involving another domain, in the hope of them learning a \n\nmore real world understanding of language. The abstract features we find\n\n in vision models can be seen as a kind of “inverse grounding”: vision \n\ntaking on more abstract features by connection to language. 
This\n\n includes some of the classic kinds of bias we see in word embeddings, \n\nsuch as a “terrorism”/”Islam” neuron, or an “Immigration”/”Mexico” \n\nneuron. See discussion in the [region neurons section](#region-neurons).\n\n But even if these features seem natural in retrospect, they are \n\nqualitatively different from neurons previously studied in vision models\n\n (eg. ).\n\n They also have real world implications: these models are vulnerable to a\n\n kind of “typographic attack” where adding adversarial text to images \n\ncan cause them to be systematically misclassified.\n\n A typographic attack.\n\n \n\n \n\n---\n\n \n\nA Guided Tour of Neuron Families\n\n--------------------------------\n\nWhat\n\n features exist in CLIP models? In this section, we examine neurons \n\nfound in the final convolutional layer of the vision side across four \n\nmodels. A majority of these neurons seem to be interpretable.We\n\n checked a sample of 50 neurons from this layer and classified them as \n\ninterpretable, polysemantic, or uninterpretable. We found that 76% of \n\nthe sampled neurons were interpretable. (As a 95% confidence interval, \n\nthat’s between 64% and 88%.) A further 18% were polysemantic but with \n\ninterpretable facets, and 6% were as yet uninterpretable. \n\nEach layer consists of thousands of neurons, so for our preliminary \n\nanalysis we looked at feature visualizations, the dataset examples that \n\nmost activated the neuron, and the English words which most activated \n\nthe neuron when rastered as images. This revealed an incredible \n\ndiversity of features, a sample of which we share below:\n\n neurons respond to content associated with with a geographic region, \n\nwith neurons ranging in scope from entire hemispheres to individual \n\n this, we mean both that it responds to people presenting as this \n\ngender, as well as that it responds to concepts associated with that \n\n neurons detect features that an image might contain, whether it's \n\nnormal object recognition or detection of more exotic features such as \n\n despite being able to “read” words and map them to semantic features, \n\nthe model keeps a handful of more typographic features in its high-level\n\n representations. Like a child spelling out a word they don’t know, we \n\n many of the neurons in the model contribute to recognizing an \n\nincredible diversity of abstract concepts that cannot be cleanly \n\n neurons respond to any visual information that contextualizes the image\n\n in a particular time – for some it's a season, for others it's a \n\n This diagram presents selected neurons from the final layer of four \n\nCLIP models, hand organized into \"families\" of similar neurons. Each \n\nneuron is represented by a feature visualization (selected from regular \n\nor [faceted feature visualization](#faceted-feature-visualization)\n\n to best illustrate the neuron) with human-chosen labels to help quickly\n\n provide a sense of each neuron. Labels were picked after looking at \n\nhundreds of stimuli that activate the neuron, in addition to feature \n\nvisualizations. \n\n \n\n You can click on any neuron to open it up in OpenAI Microscope to \n\nsee feature visualizations, dataset examples that maximally activate the\n\n neuron, and more.\n\nThese neurons don’t just select for a single \n\nobject. They also fire (more weakly) for associated stimuli, such as a \n\nBarack Obama neuron firing for Michelle Obama or a morning neuron firing\n\n for images of breakfast. 
They also tend to be maximally inhibited by \n\nstimuli which could be seen, in a very abstract way, as their opposite. Some\n\n neurons seem less abstract. For example, typographic features like the \n\n“-ing” detector seem to roughly fire based on how far a string is away \n\nin Levenshtein distance. Although, even these show remarkable \n\ngeneralization, such as responding to different font sizes and rotated \n\ntext.\n\nHow should we think of these neurons? From an \n\ninterpretability perspective, these neurons can be seen as extreme \n\nexamples of “multi-faceted neurons” which respond to multiple distinct \n\n is a classic example in neuroscience of a hypothetical neuron that \n\nresponds in a highly specific way to some complex concept or stimulus – \n\n but this framing might encourage people to overinterpret these \n\nartificial neurons. Instead, the authors generally think of these \n\nneurons as being something like the visual version of a topic feature, \n\nactivating for features we might expect to be similar in a word \n\nembedding.\n\nMany of these neurons deal with sensitive topics, from \n\npolitical figures to emotions. Some neurons explicitly represent or are \n\nclosely related to protected characteristics: age, gender, race, \n\nreligion, sexual orientation,There’s a \n\nneuron we conceptualize as an LGBT neuron, which responds to the Pride \n\nflag, rainbows, and images of words like “LGBT”. Previous work (Wang \n\n& Kosinski) has suggested that neural networks might be able to \n\ndetermine sexual orientation from facial structure. This work has since \n\nbeen thoroughly rebutted and we wish to emphasize that we see no \n\n neurons related to age and gender, see \"person trait neurons.\" Region \n\nneurons seem closely related to race and national origin, responding to \n\nethnicities associated with given regions of the world. For sexual \n\n These neurons may reflect prejudices in the “associated” stimuli they \n\nrespond to, or be used downstream to implement biased behavior. There \n\nare also a small number of people detectors for individuals who have \n\ncommitted crimes against humanity, and a “toxic” neuron which responds \n\nto hate speech and sexual content. Having neurons corresponding to \n\nsensitive topics doesn’t necessarily mean a network will be prejudiced. \n\nYou could even imagine explicit representations helping in some cases: \n\nthe toxic neuron might help the model match hateful images with captions\n\n that refute them. But they are a warning sign for a wide range of \n\npossible biases, and studying them may help us find potential biases \n\nwhich might be less on our radar.Examples\n\n of bias in AI models, and work drawing attention to it, has helped the \n\nresearch community to become somewhat “alert” to potential bias with \n\nregards to gender and race. However, CLIP could easily have biases which\n\n we are less alert to, such as biased behavior towards parents when \n\nthere’s a child’s drawing in the background.\n\nCLIP \n\ncontains a large number of interesting neurons. To allow detailed \n\nexamination we’ll focus on three of the “neuron families” shown above: \n\npeople neurons, emotion neurons, and region neurons. We invite you to \n\nexplore others in Microscope.\n\n### Person Neurons\n\nThis\n\n section will discuss neurons representing present and historical \n\nfigures. 
Our discussion is intended to be descriptive and frank about \n\nwhat the model learned from the internet data it was trained on, and is \n\nnot endorsement of associations it makes or of the figures discussed, \n\nwho include political figures and people who committed crimes against \n\nhumanity. This content may be disturbing to some readers.To \n\ncaption images on the Internet, humans rely on cultural knowledge. If \n\nyou try captioning the popular images of a foreign place, you’ll quickly\n\n find your object and scene recognition skills aren't enough. You can't \n\ncaption photos at a stadium without recognizing the sport, and you may \n\neven need to know specific players to get the caption right. Pictures of\n\n politicians and celebrities speaking are even more difficult to caption\n\n if you don’t know who’s talking and what they talk about, and these are\n\n some of the most popular pictures on the Internet. Some public figures \n\nelicit strong reactions, which may influence online discussion and \n\ncaptions regardless of other content.\n\nWith this in mind, perhaps \n\nit’s unsurprising that the model invests significant capacity in \n\nrepresenting specific public and historical figures — especially those \n\n detects Christian symbols like crosses and crowns of thorns, paintings \n\nof Jesus, his written name, and feature visualization shows him as a \n\n recognizes the masked hero and knows his secret identity, Peter Parker.\n\n It also responds to images, text, and drawings of heroes and villians \n\n learns to detect his face and body, symbols of the Nazi party, relevant\n\n historical documents, and other loosely related concepts like German \n\nfood. Feature visualization shows swastikas and Hitler seemingly doing a\n\n Nazi salute.\n\nWhich\n\n people the model develops dedicated neurons for is stochastic, but \n\nseems correlated with the person's prevalence across the datasetThe\n\n model’s dataset was collected in 2019 and likely emphasizes content \n\nfrom around that time. In the case of the Donald Trump neuron, it seems \n\nlikely there would have also been a Hillary Clinton neuron if data had \n\n and the intensity with which people respond to them. The one person \n\nwe’ve found in every CLIP model is Donald Trump. It strongly responds to\n\n images of him across a wide variety of settings, including effigies and\n\n caricatures in many artistic mediums, as well as more weakly activating\n\n for people he’s worked closely with like Mike Pence and Steve Bannon. \n\nIt also responds to his political symbols and messaging (eg. “The Wall” \n\nand “Make America Great Again” hats). On the other hand, it most \n\n\\*negatively\\* activates to musicians like Nicky Minaj and Eminem, video \n\ngames like Fortnite, civil rights activists like Martin Luther King Jr.,\n\n and LGBT symbols like rainbow flags.\n\nTo understand the Trump neuron in more depth, we collected about 650 \n\nimages that cause it to fire different amounts and labeled them by hand \n\ninto categories we created. This lets us estimate the conditional \n\n for details. As the black / LGBT category contains only a few images, \n\nsince they don't occur frequently in the dataset, we validated they \n\ncause negative activations with a futher experimentAs\n\n we were labeling images for the conditional probability plot in Figure 2\n\n we were surprised that images related to black and gay rights \n\nconsistently caused strong negative activations. 
However, since there \n\nwere four images in that category, we decided to do a follow-up \n\nexperiment on more images. \n\n \n\nWe searched Google Images for the \n\nterms \"black rights\" and \"gay rights\" and took ten top images for each \n\nterm without looking at their activations. Then we validated these \n\nimages reliably cause the Trump neuron to fire in the range of roughly \n\nnegative ~3-6 standard deviations from zero. The images that cause less \n\nstrong negative activations near -3 standard deviations tend to have \n\nbroad symbols such as an image of several black teenagers raising their \n\narm and fist that causes a -2.5 standard deviations. Conversely, images \n\nof more easy to recognize and specific symbols such as rainbow flags or \n\nphotos of Martin Luther King Jr consistently cause activations of at \n\nleast -4 standard deviations. In Figure 3 we also show activations \n\nrelated to photos of Martin Luther King Jr.. \n\n \n\n Across\n\n all categories, we see that higher activations of the Trump neuron are \n\nhighly selective, as more than 90% of the images with a standard \n\ndeviation greater than 30 are related to Donald Trump.While\n\n labeling images for the previous experiment it became clear the neuron \n\nactivates different amounts for specific people. We can study this more \n\nby searching the Internet for pictures of specific people and measuring \n\nhow the images of each person makes the neuron fire.\n\nTo see how the Trump neuron responds to different individuals, we \n\nsearched the query \"X giving a speech at a microphone\" for various \n\nindividuals on Google Images. We cleaned the data by hand, excluding \n\nphotos that are not clear photos of the individual's face. The bar \n\nlength for each individual shows the median activation of the person's \n\nphotos in standard deviations of the neuron over the dataset, and the \n\nrange over the bar shows the standard deviation of the activations of \n\nthe person's photos.Presumably, person \n\nneurons also exist in other models, such as facial recognition models. \n\nWhat makes these neurons unique is that they respond to the person \n\nacross modalities and associations, situating them in a cultural \n\ncontext. In particular, we're struck by how the neuron's response tracks\n\n an informal intuition with how associated people are. In this sense, \n\nperson neurons can be thought of as a landscape of person-associations, \n\nwith the person themself as simply the tallest peak.\n\n### Emotion Neurons\n\nThis\n\n section will discuss neurons representing emotions, and a neuron for \n\n“mental illness.” Our discussion is intended to be descriptive and frank\n\n about what the model learned from the internet data it was trained on \n\nand is not endorsement. This content may be disturbing to some readers.Since\n\n a small change in someone's expression can radically change the meaning\n\n of a picture, emotional content is essential to the task of captioning.\n\n The model dedicates dozens of neurons to this task, each representing a\n\n different emotion.\n\nThese emotion neurons don’t just respond to \n\nfacial expressions associated with an emotion -- they’re flexible, \n\nresponding to body language and facial expressions in humans and \n\n activates even when the majority of the face is obscured. It responds \n\nto slang like \"OMG!\" and \"WTF\", and text feature visualization produces \n\nsimilar words of shock and surprise. 
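Throughout this section, activations are reported in standard deviations of a neuron's activation over the dataset (for example, the per-person medians and the negative activations measured "from zero" above). A minimal sketch of that bookkeeping, with assumed input arrays rather than real data:

```python
# Assumed inputs, not real data: `baseline_activations` holds one neuron's
# activation over a large dataset sample, `person_activations` its activations
# on photos of a single person or category.
import numpy as np

baseline = np.asarray(baseline_activations)
person = np.asarray(person_activations)

# Measure activations in standard deviations of the neuron over the dataset.
# (Whether the dataset mean is also subtracted is a convention choice we are
# assuming away here.)
sigma = baseline.std()
person_in_sds = person / sigma

print("median (in SDs):", np.median(person_in_sds))
print("spread (in SDs):", person_in_sds.std())
```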
There are even some emotion neurons\n\n in discussion of emotions because (1) they are sometimes included in \n\nemotion wheels, (2) they seem to play a role in captioning emotions and \n\nfeelings, and (3) being more inclusive in our discussion allow us to \n\nexplore more of the model. Of course, these neurons simply \n\nrespond to cues associated with an emotion and don’t necessarily \n\ncorrespond to the mental state of subjects in an image.In\n\n addition to CLIP neurons potentially incorrectly recognizing cues, they\n\n cues themselves don’t necessarily reflect people’s mental states. For \n\nexample, facial expressions don’t reliably correspond to someone \n\n addition to these emotion neurons, we also find which neurons respond \n\nto an emotion as a secondary role, but mostly respond to something else.\n\n which primarily responds to jail and incarceration helps represent \n\nemotions such as “persecuted.” Similarly, a neuron that primarily \n\ndetects pornographic content seems to have a secondary function of \n\nto see some of these different facets. In particular, the face facet \n\nshows facial expressions corresponding to different emotions, such as \n\nsmiling, crying, or wide-eyed shock. Click on any neuron to open it in \n\nMicroscope to see more information, including dataset examples.While\n\n most emotion neurons seem to be very abstract, there are also some \n\nneurons which simply respond to specific body and facial expressions, \n\n It activates most to the internet-born duckface expression and peace \n\nsigns, and we'll see later that both words show up in the maximally \n\ncorresponding captions.\n\nOne\n\n neuron that doesn't represent a single emotion but rather a high level \n\n This neuron activates when images contain words associated with \n\nnegative mental states (eg. “depression,” “anxiety,” “lonely,” \n\n“stressed”), words associated with clinical mental health treatment \n\n(“psychology”, “mental,” “disorder”, “therapy”) or mental health \n\npejoratives (“insane,” “psycho”). It also fires more weakly for images \n\nof drugs, and for facial expressions that look sad or stressed, and for \n\nthe names of negative emotions.\n\n we wouldn’t think of mental illness as a dimension of emotion. However,\n\n a couple things make this neuron important to frame in the emotion \n\ncontext. First, in its low-mid range activations, it represents common \n\nnegative emotions like sadness. Secondly, words like “depressed” are \n\noften colloquially used to describe non-clinical conditions. Finally, \n\nwe’ll see in a later section that this neuron plays an important role in\n\n captioning emotions, composing with other emotion neurons to \n\ndifferentiate “healthy” and “unhealthy” versions of an emotion.\n\nTo\n\n better understand this neuron we again estimated the conditional \n\nprobabilities of various categories by activation magnitude. The \n\nstrongest positive activations are concepts related to mental illness. \n\nConversely, the strongest negative activations correspond to activities \n\nlike exercise, sports, and music events.\n\n To understand the \"mental illness neuron\" in more depth, we collected \n\nimages that cause it to fire different amounts and labeled them by hand \n\ninto categories we created. This lets us estimate the conditional \n\n for details. During the labeling process we couldn't see how much it \n\nmade the neuron fire. We see that the strongest activations all belong \n\nto labels corresponding to low-valence mental states. 
On the other hand,\n\n many images with a negative pre-ReLU activation are of scenes we may \n\ntypically consider high-valence, like photos with pets, travel photos, \n\nand or pictures of sporting events.### Region Neurons\n\nThis\n\n section will discuss neurons representing regions of the world, and \n\nindirectly ethnicity. The model’s representations are learned from the \n\ninternet, and may reflect prejudices and stereotypes, sensitive regional\n\n situations, and colonialism. Our discussion is intended to be \n\ndescriptive and frank about what the model learned from the internet \n\ndata it was trained on, and is not endorsement of the model’s \n\nrepresentations or associations. This content may be disturbing to some \n\nreaders.From local weather and food, to travel and immigration,\n\n to language and race: geography is an important implicit or explicit \n\ncontext in a great deal of online discourse. Blizzards are more likely \n\n They respond to a wide variety of modalities and facets associated with\n\n a given region: country and city names, architecture, prominent public \n\nfigures, faces of the most common ethnicity, distinctive clothing, \n\nwildlife, and local script (if not the Roman alphabet). If shown a world\n\n map, even without labels, these neurons fire selectively for the \n\nrelevant region on the map.Map responses \n\nseem to be strongest around distinctive geographic landmarks, such as \n\nthe Gulf Of Carpentaria and Cape York Peninsula for Australia, or the \n\nGulf of Guinea for Africa.\n\n which responds to bears, moose, coniferous forest, and the entire \n\nNorthern third of a world map — down to sub-regions of countries, such \n\nas the US West Coast.One interesting \n\nproperty of the regional neuron “hierarchy” is that the parent neuron \n\noften doesn’t fire when a child is uniquely implicated. So while the \n\nEurope neuron fires for the names of European cities, the general United\n\n States neuron generally does not, and instead lets neurons like the \n\nWest Coast neuron fire. See also another example of a neuron “hierarchy”\n\n Some region neurons seem to form more consistently than others. Which \n\nneurons form doesn't seem to be fully explained by prevalence in the \n\n but not all models seem to have a UK neuron. Why is that? One intuition\n\n is that there’s more variance in neurons when there’s a natural \n\nsupercategory they can be grouped into. For example, when an individual \n\nUK neuron doesn’t exist, it seems to be folded into a Europe neuron. In \n\nAfrica, we sometimes see multiple different Africa neurons (in \n\nparticular a South/West Africa neuron and an East Africa neuron), while \n\nother times there seems to be a single unified Africa neuron. In \n\ncontrast, Australia is perhaps less subdividable, since it’s both a \n\ncontinent and country.\n\nGeographical Activation of Region Neurons **Unlabeled map activations**:\n\n Spatial activations of neurons in response to unlabeled geographical \n\nworld map. Activations averaged over random crops. Note that neurons for\n\n Countries colored by activations of neurons in response to rastered \n\nimages of country names. Activations averaged over font sizes, max over \n\nword positions. **City name activations**:\n\n Cities colored by activations of neurons in response to rastered images\n\n of city names. Activations averaged over font sizes, max over word \n\npositions. Selected Region Neurons Most Activating Words\n\nHover on a neuron to isolate activations. 
Click to open in Microscope.\n\n \n\n This diagram contextualizes region neurons with a map.\n\n Each neuron is mapped to a hue, and then regions where it \n\nactivates are colored in that hue, with intensity proportional to \n\nactiviation. If multiple neurons of opposing hues fire, the region will \n\nbe colored in a desaturated gray.\n\n It can show their response\n\n to an unlabeled geographical map,\n\n to country names,\n\n and to city names.\n\n \n\n \n\n \"large region neurons\"\n\n (such as the \"Northern Hemisphere\" neuron)\n\n and at\n\n \"secondarily regional neurons\"\n\n \"entrepreneurship\"\n\n or\n\n may not. This means that visualizing behavior on a global map \n\nunderrepresents the sheer number of region neurons that exist in CLIP. \n\nUsing the top-activating English words as a heuristic, we estimate \n\naround 4% of neurons are regional.To \n\nestimate the fraction of neurons that are regional, we looked at what \n\nfraction of each neuron's top-activating words (ie. words it responds to\n\n when rastered as images) were explicitly linked to geography, and used \n\nthis as a heuristic for whether a neuron was regional. To do this, we \n\ncreated a list of geographic words consisting of continent / country / \n\n \n\n We found 2.5% (64) of RN50-x4 neurons had geographic words for all of \n\nthe five maximally activating words. This number varied between 2-4% in \n\nother CLIP models. However, looking only at neurons for which all top \n\nfive words are explicitly geographic misses many region neurons which \n\nrespond strongly to words with implicit regional connotations (eg. \n\n“hockey” for a Canada neuron, “volkswagen” for a German neuron, “palm” \n\nfor an equatorial neuron). We bucketed neurons by fraction of five most \n\nactivating words that are geographic, then estimated the fraction of \n\neach bucket that were regional. With many neurons, the line was quite \n\nblurry (should we include polysemantic neurons where one case is \n\nregional? What about “secondarily regional neurons”?). For a relatively \n\nconservative definition, this seems to get us about 4%, but with a more \n\nliberal one you might get as high as 8%.\n\n caution is needed in interpreting these neurons as truly regional, \n\nrather than spuriously weakly firing for part of a world map. Important \n\nvalidations are that they fire for the same region on multiple different\n\n maps, and if they respond to words for countries or cities in that \n\nregion. These neurons don’t have a region as the primary \n\nfocus, but have some kind of geographic information baked in, firing \n\n also find that the linear combination of neurons that respond to Russia\n\n on a map strongly responds to Pepe the frog, a symbol of white \n\nnationalism in the United States allegedly promoted by Russia. Our \n\nimpression is that Russians probably wouldn’t particularly see this as a\n\n symbol of Russia, suggesting it is more “Russia as understood by the \n\nUS.”\n\n#### Case Study: Africa Neurons\n\nDespite these\n\n examples of neurons learning Americentric caricatures, there are some \n\nareas where the model seems slightly more nuanced than one might fear, \n\nespecially given that CLIP was only trained on English language data. \n\nFor example, rather than blurring all of Africa into a monolithic \n\nentity, the RN50-x4 model develops neurons for three regions within \n\nAfrica. 
This is significantly less detailed than its representation of \n\nmany Western countries, which sometimes have neurons for individual \n\ncountries or even sub-regions of countries, but was still striking to \n\nus.It’s important to keep in mind that \n\nthe model can represent many more things using combinations of neurons. \n\nWhere the model dedicates neurons may give us some sense of the level of\n\n nuance, but we shouldn’t infer, for example, that it doesn’t somehow \n\nrepresent individual African countries. To\n\n contextualize this numerically, the model seems to dedicate ~4% of its \n\nregional neurons to Africa, which accounts for ~20% of the world’s \n\nlandmass, and ~15% of the world’s population.\n\n early explorations it quickly became clear these neurons “know” more \n\nabout Africa than the authors. For example, one of the first feature \n\nvisualizations of the South African regional neuron drew the text \n\n Learning about a TV drama might not be the kind of deep insights one \n\nmight have envisioned, but it is a charming proof of concept.\n\nWe\n\n chose the East Africa neuron for more careful investigation, again \n\nusing a conditional probability plot. It fires most strongly for flags, \n\n activations tend to follow an exponential distribution in their tails, a\n\n point that was made to us by Brice Menard. This means that strong \n\nactivations are more common than you’d expect in a Gaussian (where the \n\ntail decays at exp(-x^2)), but are much less common than weaker \n\nactivations. — have a significantly different distribution \n\nand seems to be mostly about ethnicity. Perhaps this is because \n\nethnicity is implicit in all images of people, providing weak evidence \n\nfor a region, while features like flags are far less frequent, but \n\nprovide strong evidence when they do occur. This is the first neuron \n\nwe've studied closely with a distinct regime change between medium and \n\nstrong activations.\n\nWe labeled more than 400 images that causes a neuron that most strongly \n\nresponds to the word “Ghana” to fire at different levels of activation, \n\nwithout access to how much each image caused the neuron to fire while \n\nlabeling. See [the appendix](#conditional-probability) for details. \n\n It fires most strongly for people of African descent as well as African\n\n words like country names. It’s pre-ReLU activation is negative for \n\nsymbols associated with other countries, like the Tesla logo or British \n\nflag, as well as people of non-African descent. Many of its strongest \n\nnegative activations are for weaponry such as military vehicles and \n\nhandguns. Ghana, the country name it responds to most strongly, has a \n\nGlobal Peace Index rating higher than most African countries, and \n\nperhaps it learns this anti-association.We\n\n also looked at the activations of the other two Africa neurons. We \n\nsuspect they have interesting differences beyond their detection of \n\ndifferent country names and flags — why else would the model dedicate \n\nthree neurons — but we lacked the cultural knowledge to appreciate their\n\n subtleties.\n\n### Feature properties\n\nSo\n\n far, we’ve looked at particular neurons to give a sense of the kind of \n\nfeatures that exist in CLIP models. 
It's worth noting several properties\n\n that might be missed in the discussion of individual features:\n\n**Image-Based Word Embedding:**\n\n Despite being a vision model, one can produce “image-based word \n\nembeddings” with the visual CLIP model by rastering words into images \n\nand then feeding these images into the model, and then subtracting off \n\nthe average over words. Like normal word embeddings, the nearest \n\nOriginal \n\nWord\n\nNearest Neighbors \n\nCollobert embeddings\n\nNearest Neighbors \n\nCLIP image-based embeddings\n\nFrance\n\nFrench, Francis, Paris, Les, Des, Sans, Le, Pairs, Notre, Et\n\nJesus\n\nGod, Sati, Christ, Satan, Indra, Vishnu, Ananda, Parvati, Grace\n\nChrist, God, Bible, Gods, Praise, Christians, Lord, Christian, Gospel, Baptist\n\nxbox\n\nAmiga, Playstation, Msx, Ipod, Sega, Ps#, Hd, Dreamcast, Geforce, Capcom\n\n*V(Img(*“King”*)) - V(Img(*“Man”*)) + V(Img(*“Woman”*)) = V(Img(*“Queen”*))* \n\n work in some cases if we mask non-semantic lexicographic neurons (eg. \n\n“-ing” detectors). It seems likely that mixed arithmetic of words and \n\nimages should be possible.\n\n**Limited Multilingual Behavior:** \n\nAlthough CLIP’s training data was filtered to be English, many features \n\n responds to images of English “Thank You”, French “Merci”, German \n\n“Danke”, and Spanish “Gracias,” and also to English “Congratulations”, \n\nGerman “Gratulieren”, Spanish “Felicidades”, and Indonesian “Selamat”. \n\nAs the example of Indonesian demonstrates, the model can recognize some \n\nwords from non Romance/Germanic languages. However, we were unable to \n\nfind any examples of the model mapping words in non-Latin script to \n\nsemantic meanings. It can recognize many scripts (Arabic, Chinese, \n\nJapanese, etc) and will activate the corresponding regional neurons, but\n\n doesn’t seem to be able to map words in those scripts to their \n\nmeanings.One interesting question is why \n\nthe model developed reading abilities in latin alphabet languages, but \n\nnot others. Was it because more data of that type slipped into the \n\ntraining data, or (the more exciting possibility) because it’s easier to\n\n learn a language from limited data if you already know the alphabet?\n\n The most striking examples are likely racial and religious bias. As \n\n which responds to images of words such as “Terrorism”, “Attack”, \n\n“Horror”, “Afraid”, and also “Islam”, “Allah”, “Muslim”. This isn’t just\n\n an illusion from looking at a single neuron: the image-based word \n\nembedding for “Terrorist” has a cosine similarity of 0.52 with \n\n“Muslims”, the highest value we observe for a word that doesn’t include \n\n“terror.”By “image-based word embedding”,\n\n we mean the activation for an image of that word, with the average \n\nactivation over images of 10,000 English words subtracted off. The \n\nintuition is that this removes generic “black text on white background” \n\nfatures. 
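A minimal sketch of this image-based word embedding computation, assuming a hypothetical `render_word` rasterizer and `encode_image` vision encoder (toy stand-ins are used below so the snippet runs end to end):

```python
import numpy as np

def image_word_embeddings(words, render_word, encode_image):
    """Embed rastered words and subtract the mean embedding over the word list."""
    embs = np.stack([encode_image(render_word(w)) for w in words])
    return embs - embs.mean(axis=0, keepdims=True)  # drop generic "text on white" features

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy stand-ins: in reality `render_word` draws the word on a white background,
# `encode_image` is the CLIP vision encoder, and the mean is taken over
# roughly 10,000 English words.
rng = np.random.default_rng(0)
_cache = {}
render_word = lambda w: w
encode_image = lambda img: _cache.setdefault(img, rng.normal(size=64))

words = ["terrorism", "muslim", "pizza", "library"]
vecs = dict(zip(words, image_word_embeddings(words, render_word, encode_image)))
print(cosine(vecs["terrorism"], vecs["muslim"]))
```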
If one measures the cosine similarity between “Terrorism” and \n\n“Muslim” without subtracting off the average, it’s much higher at about \n\n0.98, but that’s because all values are shifted up due to sharing the \n\n**Polysemanticity and Conjoined Neurons:**\n\n Our qualitative experience has been that individual neurons are more \n\ninterpretable than random directions; this mirrors observations made in \n\nprevious work.Although\n\n we’ve focused on neurons which seem to have a single clearly defined \n\nconcept they respond to, many CLIP neurons are “polysemantic” ,\n\n responding to multiple unrelated features. Unusually, polysemantic \n\nneurons in CLIP often have suspicious links between the different \n\n The concepts in these neurons seem “conjoined”, overlapping in a \n\nsuperficial way in one facet, and then generalizing out in multiple \n\ndirections. We haven’t ruled out the possibility that these are just \n\ncoincidences, given the large number of facets that could overlap for \n\neach concept. But if conjoined features genuinely exist, they hint at \n\nnew potential explanations of polysemanticity.In\n\n the past, when we've observed seemingly polysemantic neurons, we've \n\nconsidered two possibilities: either it is responding to some shared \n\nfeature of the stimuli, in which case it isn’t really polysemantic, or \n\nit is genuinely responding to two unrelated cases. Usually we \n\ndistinguish these cases with feature visualization. For example, \n\nInceptionV1 4e:55 responds to cars and cat heads. One could imagine it \n\nbeing the case that it’s responding to some shared feature — perhaps cat\n\n eyes and car lights look similar. But feature visualization establishes\n\n a facet selecting for a globally coherent cat head, whiskers and all, \n\nas well as the metal chrome and corners of a car. We concluded that it \n\nwas genuinely *OR(cat, car)*. \n\n \n\nConjoined features can be seen\n\n as a kind of mid-point between detecting a shared low-level feature and\n\n detecting independent cases. Detecting Santa Claus and “turn” are \n\nclearly true independent cases, but there was a different facet where \n\nthey share a low-level feature. \n\n \n\nWhy would models have conjoined \n\nfeatures? Perhaps they’re a vestigial phenomenon from early in training \n\nwhen the model couldn’t distinguish between the two concepts in that \n\nfacet. Or perhaps there’s a case where they’re still hard to \n\ndistinguish, such as large font sizes. Or maybe it just makes concept \n\npacking more efficient, as in the superposition hypothesis.\n\n \n\n---\n\n \n\nUsing Abstractions\n\n------------------\n\nWe\n\n typically care about features because they’re useful, and CLIP’s \n\nfeatures are more useful than most. These features, when ensembled, \n\nallow direct retrieval on a variety of queries via the dot product \n\nalone.\n\nUntangling the image into its semantics \n\n enables the model to perform a wide variety of downstream tasks \n\nincluding imagenet classification, facial expression detection, \n\ngeolocalization and more. How do they do this? Answering these questions\n\n will require us to look at how neurons work in concert to represent a \n\nbroader space of concepts.\n\n### The Imagenet Challenge\n\nTo\n\n study how CLIP classifies Imagenet, it helps to look at the simplest \n\ncase. We use a sparse linear model for this purpose, following the \n\nmethodology of Radford et al .\n\n With each class using only 3 neurons on average, it is easy to look at \n\nall of the weights. 
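As a rough illustration of what such a sparse probe looks like in code (not the exact procedure from Radford et al.), one can fit an L1-penalized logistic regression over frozen features; random stand-ins replace the actual CLIP activations and ImageNet labels here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = rng.normal(size=(2000, 256))   # stand-in for penultimate-layer features
labels = rng.integers(0, 10, size=2000)   # stand-in for class labels

# A strong L1 penalty drives most weights to exactly zero, leaving only a
# handful of active features per class.
probe = LogisticRegression(penalty="l1", solver="saga", C=0.05, max_iter=2000)
probe.fit(features, labels)

nonzero_per_class = (np.abs(probe.coef_) > 1e-6).sum(axis=1)
print("average nonzero weights per class:", nonzero_per_class.mean())
```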
This model, by any modern standard, fares poorly \n\nwith a top-5 accuracy of 56.4%, but the surprising thing is that such a \n\nmiserly model can do anything at all. How is each weight carrying so \n\nmuch weight?\n\nImageNet \n\n organizes images into categories borrowed from another project called \n\nWordNet.\n\nNeural networks typically classify images treating ImageNet classes as \n\nstructureless labels. But WordNet actually gives them a rich structure \n\nof higher level nodes. For example, a Labrador Retriever is a Canine \n\nwhich is a Mammal which is an Animal.\n\nWe find that the weights and\n\n neurons of CLIP reflect some of this structure. At the highest levels \n\nwe find conventional categories such as\n\n This diagram visualizes a submatrix of the full weight matrix that \n\ntakes neurons in the penultimate layer of Resnet 4x to the imagenet \n\nclasses. Each grey circle represents a positive weight. We see the model\n\n fails in ways that close but incorrect, such as its labeling of \n\nscorpion as a fish.\n\n arrive at a surprising discovery: it seems as though the neurons appear\n\n to arrange themselves into a taxonomy of classes that appear to mimic, \n\nvery approximately, the imagenet hierarchy. While there have been \n\nattempts to explicitly integrate this information ,\n\n CLIP was not given this information as a training signal. The fact that\n\n these neurons naturally form a hierarchy — form a hierarchy without \n\neven being trained on ImageNet — suggests that such hierarchy may be a \n\nuniversal feature of learning systems.We’ve\n\n seen hints of similar structure in region neurons, with a whole world \n\nneuron, a northern hemisphere neuron, a USA neuron, and then a West \n\nCoast neuron.\n\n### Understanding Language\n\nThe\n\n most exciting aspect of CLIP is its ability to do zero-shot \n\nclassification: it can be “programmed” with natural language to classify\n\n images into new categories, without fitting a model. Where linear \n\nprobes had fixed weights for a limited set of classes, now we have \n\ndynamic weight vectors that can be generated automatically from text. \n\nIndeed, CLIP makes it possible for end-users to ‘roll their own \n\nclassifier’ by programming the model via intuitive, natural language \n\ncommands - this will likely unlock a broad range of downstream uses of \n\nCLIP-style models.\n\nRecall that CLIP has two sides, a vision side \n\n(which we’ve discussed up to this point) and a language side. The two \n\nsides meet at the end, going through some processing and then performing\n\n a dot product to create a logit. If we ignore spatial structureIn\n\n order to use a contrastive loss, the 3d activation tensor of the last \n\nconvolutional layer must discard spatial information and be reduced to a\n\n single vector which can be dot producted with the language embedding. \n\nCLIP does this with an attention layer, first generating attention \n\n is the text embedding. We focus on the bilinear interaction term, which\n\n governs local interactions in most directions. Although this \n\nWe’ll\n\n mostly be focusing on using text to create zero-shot weights for \n\nimages. But it’s worth noting one tool that the other direction gives \n\nus. If we fix a neuron on the vision side, we can search for the text \n\nthat maximizes the logit. We do this with a hill climbing algorithm to \n\nfind what amounts to the text maximally corresponding to that neuron. 
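A minimal sketch of that hill climb, with `score` standing in for the dot product between the fixed vision-side direction and the language embedding of the candidate text:

```python
import random

def hill_climb(vocab, score, length=4, iters=500, seed=0):
    """Greedy token-swap search for text that maximizes `score`."""
    rng = random.Random(seed)
    text = [rng.choice(vocab) for _ in range(length)]
    best = score(text)
    for _ in range(iters):
        i = rng.randrange(length)           # pick a position to mutate
        candidate = list(text)
        candidate[i] = rng.choice(vocab)    # propose a single-token swap
        s = score(candidate)
        if s > best:                        # keep only improving moves
            text, best = candidate, s
    return " ".join(text), best

# Toy stand-in score; the real score would be the logit between the candidate
# text's language embedding and the chosen vision neuron's direction.
vocab = ["rainy", "sunny", "photo", "of", "a", "beach", "snow", "night"]
weights = {"sunny": 2.0, "beach": 1.5, "photo": 0.3}
print(hill_climb(vocab, lambda toks: sum(weights.get(t, 0.0) for t in toks)))
```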
\n\n### Emotion Composition\n\nAs\n\n we see above, English has far more descriptive words for emotions than \n\nthe vision side has emotion neurons. And yet, the vision side recognizes\n\n these more obscure emotions. How can it do that?\n\nWe can see what \n\ndifferent emotion words correspond to on the vision side by taking \n\nattribution, as described in the previous section, to \"I feel X\" on the \n\nlanguage side. This gives us a vector of image neurons for each emotion \n\nword.Since the approximations we made in \n\nthe previous section aren’t exact, we double-checked these attribution \n\nvectors for all of the “emotion equations” shown by taking the top image\n\n neuron in each one, artificially increasing its activation at the last \n\nlayer on the vision side when run on a blank image, and confirming that \n\nthe logit for the corresponding emotion word increases on the language \n\n for the prompts \"i am feeling {emotion}, \"Me feeling {emotion} on my \n\nface\", \"a photo of me with a {emotion} expression on my face\" on each \n\none of the emotion-words on the emotion-wheel. We assign each prompt a \n\nlabel corresponding to the emotion-word, and then we then run sparse \n\nlogistic regression to find the neurons that maximally discriminate \n\nbetween the attribution vectors. For the purposes of this article, these\n\n vectors are then cleaned up by hand by removing neurons that respond to\n\n bigrams. This may relate to a line of thinking in \n\npsychology where combinations of basic emotions form the “complex \n\nemotions” we experience.The theory of constructed emotion.\n\nFor\n\n example, the jealousy emotion is success + grumpy. Bored is relaxed + \n\ngrumpy. Intimate is soft smile + heart - sick. Interested is question \n\nmark + heart and inquisitive is question mark + shocked. Surprise is \n\ncelebration + shock.\n\n physical objects contribute to representing emotions.\n\nFor example, part of \"powerful\" is a lightning neuron, part of \n\n\"creative\" is a painting neuron, part of \"embarrassed\" is a neuron \n\n also see concerning use of sensitive topics in these emotion vectors, \n\nsuggesting that problematic spurious correlations are used to caption \n\nexpressions of emotion. For instance, \"accepted\" detects LGBT. \n\n\"Confident\" detects overweight. \"Pressured\" detects Asian culture.\n\n can also search for examples where particular neurons are used, to \n\nexplore their role in complex emotions. We see the mental illness neuron\n\n contributes to emotions like “stressed,” “anxious,” and “mad.”\n\n far, we’ve only looked at a subset of these emotion words. We can also \n\nsee a birds-eye view of this broader landscape of emotions by \n\nvisualizing every attribution vector together.\n\n of complex emotions by applying non-negative matrix factorization to \n\nthe emotion attribution vectors and using the factors to color each \n\ncell. The atlas resembles common feeling wheels \n\n hand-crafted by psychologists to explain the space of human emotions, \n\nindicating that the vectors have a high-level structure that resembles \n\nemotion research in psychology.This atlas has a\n\n few connections to classical emotion research. When we use just 2 \n\nfactors, we roughly reconstruct the canonical mood-axes used in much of \n\npsychology: valence and arousal. 
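Concretely, the atlas comes from a non-negative matrix factorization of the stacked emotion attribution vectors. The sketch below uses scikit-learn's NMF on random stand-in vectors (real attributions can be negative, so some rectification such as clipping is assumed before factoring):

```python
import numpy as np
from sklearn.decomposition import NMF

emotions = ["happy", "surprised", "sad", "grumpy", "relaxed", "shocked"]
rng = np.random.default_rng(0)
# Stand-in attribution vectors (one row per emotion word, one column per
# image-side neuron), clipped at zero since NMF requires non-negative entries.
attribution = np.clip(rng.normal(size=(len(emotions), 50)), 0, None)

nmf = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
factors = nmf.fit_transform(attribution)   # a low-dimensional code per emotion word
for emotion, code in zip(emotions, factors):
    print(emotion, np.round(code, 2))
```

With two factors, each emotion's code plays the role of its coordinates on those two axes.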
If we increase to 7 factors, we nearly \n\nreconstruct a well known categorization of these emotions into happy, \n\nsurprised, sad, bad, disgusted, fearful, and angry, except with \n\n“disgusted” switched for a new category related to affection that \n\nincludes “valued,” “loving,” “lonely,” and “insignificant.”\n\n \n\n---\n\n \n\nTypographic Attacks\n\n-------------------\n\nAs\n\n we’ve seen, CLIP is full of multimodal neurons which respond to both \n\nimages and text for a given concept. Given how strongly these neurons \n\nreact to text, we wonder: can we perform a kind of non-programmatic \n\nadversarial attack – a *typographic attack* – simply using handwriting?\n\nTo\n\n test this hypothesis, we took several common items and deliberately \n\nmislabeled them. We then observed how this affects ImageNet \n\n#### No label\n\n#### Labeled “ipod”\n\n#### Labeled “library”\n\n#### Labeled “pizza”\n\n Physical typographic attacks.\n\n Above we see the CLIP RN50-4x model's classifications of \n\nobjects labeled with incorrect ImageNet classes. Each row corresponds to\n\n an object, and each column corresponds to a labeling. Some attacks are \n\nmore effective than others, and some objects are more resilient to \n\nattack.\n\n \n\n Expand more examples\n\n \n\n Recall that there are two ways to use CLIP for ImageNet \n\nclassification: zero-shot and linear probes.\n\n For this style of attack, we observe that the zero-shot \n\nmethodology is somewhat consistently effective, but that the linear \n\nprobes methodology is ineffective. Later on, we show an attack style \n\n \n\n Displayed ImageNet classification method:\n\n Zero-shot\n\n Linear probes\n\n Adversarial patches are stickers that can be placed on real-life \n\nobjects in order to cause neural nets to misclassify those objects as \n\nsomething else – for example, as toasters. Physical adversarial examples\n\n are complete 3D objects that are reliably misclassified from all \n\nperspectives, such as a 3D-printed turtle that is reliably misclassified\n\n as a rifle. Typographic attacks are both weaker and stronger than \n\nthese. On the one hand, they only work for models with multimodal \n\nneurons. On the other hand, once you understand this property of the \n\n### Evaluating Typographic Attacks\n\nOur\n\n physical adversarial examples are a proof of concept, but they don’t \n\ngive us a very good sense of how frequently typographic attacks succeed.\n\n Duct tape and markers don't scale, so we create an automated setup to \n\nmeasure the attack’s success rate on the ImageNet validation set.\n\nTarget class:\n\n`pizza`\n\nAttack text:\n\n \n\nWe found text snippets for our attacks in \n\ntwo different ways. Firstly, we manually looked through the multimodal \n\nmodel's neurons for those that appear sensitive to particular kinds of \n\n attacks. Secondly, we brute-force searched through all of the ImageNet \n\nclass names looking for short class names which are, in and of \n\n this setup, we found several attacks to be reasonably effective. The \n\nmost successful attacks achieve a 97% attack success rate with only \n\naround 7% of the image's pixels changed. 
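The evaluation loop behind those numbers is straightforward; the sketch below assumes a hypothetical zero-shot `classify` function and a `render_text_patch` helper that pastes the attack text over a small fraction of the image:

```python
def attack_success_rate(images, target_class, attack_text, classify, render_text_patch):
    """Fraction of images pushed to `target_class` by pasting `attack_text` on them."""
    attempts, successes = 0, 0
    for img in images:
        if classify(img) == target_class:
            continue                              # skip images already labeled as the target
        attempts += 1
        if classify(render_text_patch(img, attack_text)) == target_class:
            successes += 1
    return successes / max(attempts, 1)

# Toy stand-ins so the sketch runs: an "image" is a dict and the "classifier"
# simply reads off any pasted label. The real versions render the text over a
# small patch of pixels and run CLIP's zero-shot ImageNet classifier.
images = [{"label": None}, {"label": None}, {"label": "pizza"}]
classify = lambda img: img["label"] or "dog"
render_text_patch = lambda img, text: {**img, "label": text}
print(attack_success_rate(images, "pizza", "pizza", classify, render_text_patch))
```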
These results are competitive \n\nwith the results found in *Adversarial Patch*, albeit on a different model.\n\n| Target class | Attack text | Pixel cover | Success Linear probes |\n\n| --- | --- | --- | --- |\n\n| `waste container` | *trash* | 7.59% | 95.4% |\n\n| `iPod` | *iPod* | 6.8% | 94.7% |\n\n| `rifle` | *rifle* | 6.41% | 91% |\n\n| `pizza` | *pizza* | 8.11% | 92.3% |\n\n| `radio` | *radio* | 7.73% | 77% |\n\n| `great white shark` | *shark* | 8.33% | 62.2% |\n\n| `library` | *library* | 9.95% | 75.9% |\n\n| `Siamese cat` | *meow* | 8.44% | 46.5% |\n\n| `piggy bank` | *$\\$\\$\\$$* | 6.99% | 36.4% |\n\n **Pixel cover** measures the attack's impact on the original \n\nimage: the average percentage of pixels that were changed by any amount \n\n(an L0-norm) in order to add the attack.\n\n **Success rate** is measured over 1000 ImageNet validation \n\nimages with an attack considered to have succeeded if the attack class \n\nis the most likely. We do not consider an attack to have succeeded if \n\nthe attack-free image was already classified as the attack class.\n\n### Comparison with the Stroop Effect\n\n Just as our models make errors when adversarial text is added to \n\nimages, humans are slower and more error prone when images have \n\nincongruent labels.\n\n is harder than normal. To compare CLIP’s behavior to these human \n\nexperiments, we had CLIP classify these stimuli by color, using its \n\nzero-shot classification. Unlike humans, CLIP can’t slow down to \n\ncompensate for the harder task. Instead of taking a longer amount of \n\ntime for the incongruent stimuli, it has a very high error rate.\n\n A Stroop effect experiment.\n\n Above we see the CLIP RN50-4x model's classifications of \n\nvarious words colored with various colors. Activations were gathered \n\n Expand more examples\n\n \n\n---\n\n \n\nAppendix: Methodological Details\n\n--------------------------------\n\n### Conditional Probability Plots\n\nIf\n\n we really want to understand the behavior of a neuron, it’s not enough \n\nto look at the cases where it maximally fires. We should look at the \n\nfull spectrum: the cases where it weakly fired, the cases where it was \n\non the border of firing, and the cases where it was strongly inhibited \n\nfrom firing. This seems especially true for highly abstract neurons, \n\nwhere weak activations can reveal “associated stimuli,” such as a Donald\n\n Trump neuron firing for Mike Pence.\n\nSince we have access to a \n\nvalidation set from the same distribution the model was trained on, we \n\ncan sample the distribution of stimuli that cause a certain level of \n\nactivation by iterating through the validation set until we find an \n\nimage that causes that activation.\n\nTo more rigorously characterize\n\n this, we create a plot showing the conditional probability of various \n\ncategories as a function of neuron activation, following the example of \n\nCurve Detectors . To do\n\n this, we defined uniformly spaced buckets between the maximally \n\ninhibitory and maximally excitatory activation values, and sampled a \n\nfixed number of stimuli for each activation range. Filling in the most \n\nextreme buckets requires checking the neuron activations for millions of\n\n stimuli. Once we have a full set of stimuli in each bucket, we blind a \n\nlabeler to the activation of each stimuli, and have them select salient \n\ncategories they observed, informed by the hypothesis we have for the \n\nneuron. 
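The bucketed sampling step can be sketched as follows, with `activation` standing in for running the model and reading off the neuron of interest:

```python
import numpy as np

def sample_buckets(stimuli, activation, lo, hi, n_buckets=10, per_bucket=20):
    """Keep up to `per_bucket` stimuli in each uniformly spaced activation range."""
    edges = np.linspace(lo, hi, n_buckets + 1)
    buckets = {i: [] for i in range(n_buckets)}
    for x in stimuli:
        a = activation(x)
        i = int(np.clip(np.searchsorted(edges, a) - 1, 0, n_buckets - 1))
        if len(buckets[i]) < per_bucket:
            buckets[i].append(x)
    return buckets

# Toy stand-ins: stimuli are numbers and the "activation" is the identity.
rng = np.random.default_rng(0)
stimuli = rng.normal(size=10_000)
buckets = sample_buckets(stimuli, activation=lambda x: x, lo=-4, hi=4)
print({i: len(v) for i, v in buckets.items()})   # the extreme buckets fill slowly
```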
The human labeler then categorized each stimuli into these \n\ncategories, while blinded to the activation.\n\nWe plot the \n\nactivation axis in terms of standard deviations of activation from zero,\n\n since activations have an arbitrary scale. But keep in mind that \n\nactivations aren’t Gaussian distributed, and have much thicker tails.\n\nIn\n\n reading these graphs, it’s important to keep in mind that different \n\nactivation levels can have many orders of magnitude differences in \n\nprobability density. In particular, probability density peaks around \n\nzero and decays exponentially to the tails. This means that false \n\nnegatives for a rare category will tend to not be very visible, because \n\nthey’ll be crowded out at zero: these graphs show a neuron’s precision, \n\nbut not recall. Curve Detectors discusses these issues in more detail.\n\nAn\n\n alternative possibility is to look at the distribution of activations \n\nconditioned on a category. We take this approach in our second plot for \n\nthe Trump neuron. These plots can help characterize how the neuron \n\nresponds to rare categories in regions of higher density, and can help \n\nresolve concerns about recall. However, one needs some way to get \n\nsamples conditioned on a category for these experiments, and it’s \n\npossible that your process may not be representative. For our purposes, \n\nsince these neurons are so high-level, we used a popular image search to\n\n sample images in a category.\n\n### Faceted Feature Visualization\n\nA neuron is said to have multiple facets\n\n if it responds to multiple, distinct categories of images. For example,\n\n a pose-invariant dog-head detector detects dog heads tilted to the \n\n look for a difference in texture from one side to the other but doesn’t\n\n care which is which. A neuron may even fire for many different, \n\nunrelated categories of images . We refer to these as polysemantic neurons.\n\n[Feature visualization](https://distill.pub/2017/feature-visualization/)is\n\n a technique where the input to a neural network is optimized to create a\n\n stimuli demonstrating some behavior, typically maximizing the \n\nactivation of a neuron. Neurons that possess multiple facets present \n\nparticular challenges to feature visualization as the multiple facets \n\nare difficult to represent as a single image. When such neurons are \n\nencountered, feature visualization often tries to draw both facets at \n\nonce (making it nonsensical), or just reveal one facet The\n\n difference between the two is believed to be related to the phenomena \n\nof mutual inhibition, see the InceptionV1 pose invariant dog head \n\ncircuit.. Both cases are inadequate.\n\nWe\n\n are aware of two past approaches to improving feature visualization for\n\n multi-faceted neurons. The first approach is to find highly diverse \n\nimages that activate a given neuron, and use them as seeds for the \n\nfeature visualization optimization process.\n\n The second tries to combine feature visualization together with a term \n\nthat encourages diversity of the activations on earlier layers.\n\n that allows us to steer the feature visualization towards a particular \n\ntheme (e.g. text, logos, facial features, etc), defined by a collection \n\nof images. The procedure works as follows: first we collect examples of \n\nimages in this theme, and train a linear probe on the lower layers of \n\nthe model to discriminate between those images and generic natural \n\nimages. 
We then do feature visualization by maximizing the penalized \n\nThe reader may be curious why we do not maximize f(g(x)) + w^Tg(x)\n\n instead. We have found that, in practice, the former objective produces\n\n far higher quality feature visualizations; we believe this is because \n\n", "bibliography_bib": [{"title": "Invariant visual representation by single neurons in the human brain"}, {"title": "Explicit encoding of multimodal percepts by single neurons in the human brain"}, {"title": "Learning Transferable Visual Models From Natural Language Supervision"}, {"title": "Deep Residual Learning for Image Recognition"}, {"title": "Attention is all you need"}, {"title": "Improved deep metric learning with multi-class n-pair loss objective"}, {"title": "Contrastive multiview coding"}, {"title": "Linear algebraic structure of word senses, with applications to polysemy"}, {"title": "Visualizing and understanding recurrent networks"}, {"title": "Object detectors emerge in deep scene cnns"}, {"title": "Network Dissection: Quantifying Interpretability of Deep Visual Representations"}, {"title": "Zoom In: An Introduction to Circuits"}, {"title": "Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks"}, {"title": "Sparse but not ‘grandmother-cell’ coding in the medial temporal lobe"}, {"title": "Concept cells: the building blocks of declarative memory functions"}, {"title": "Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements"}, {"title": "Geographical evaluation of word embeddings"}, {"title": "Using Artificial Intelligence to Augment Human Intelligence"}, {"title": "Visualizing Representations: Deep Learning and Human Beings"}, {"title": "Natural language processing (almost) from scratch"}, {"title": "Linguistic regularities in continuous space word representations"}, {"title": "Man is to computer programmer as woman is to homemaker? 
debiasing word embeddings"}, {"title": "Intriguing properties of neural networks"}, {"title": "Visualizing higher-layer features of a deep network"}, {"title": "Feature Visualization"}, {"title": "How does the brain solve visual object recognition?"}, {"title": "Imagenet: A large-scale hierarchical image database"}, {"title": "BREEDS: Benchmarks for Subpopulation Shift"}, {"title": "Global Weighted Average Pooling Bridges Pixel-level Localization and Image-level Classification"}, {"title": "Separating style and content with bilinear models"}, {"title": "The feeling wheel: A tool for expanding awareness of emotions and increasing spontaneity and intimacy"}, {"title": "Activation atlas"}, {"title": "Adversarial Patch"}, {"title": "Synthesizing Robust Adversarial Examples"}, {"title": "Studies of interference in serial verbal reactions."}, {"title": "Curve Detectors"}, {"title": "An overview of early vision in inceptionv1"}, {"title": "Deep inside convolutional networks: Visualising image classification models and saliency maps"}, {"title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images"}, {"title": "Inceptionism: Going deeper into neural networks"}, {"title": "Plug & play generative networks: Conditional iterative generation of images in latent space"}, {"title": "Sun database: Large-scale scene recognition from abbey to zoo"}, {"title": "The pascal visual object classes (voc) challenge"}, {"title": "Fairface: Face attribute dataset for balanced race, gender, and age"}, {"title": "A style-based generator architecture for generative adversarial networks"}], "filename": "Multimodal Neurons in Artificial Neural Networks.html", "id": "690a095b024be98d3afa8e350da23482"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Visualizing Weights", "authors": ["Chelsea Voss", "Nick Cammarata", "Gabriel Goh", "Michael Petrov", "Ludwig Schubert", "Ben Egan", "Swee Kiat Lim", "Chris Olah"], "date_published": "2021-02-04", "abstract": " This article is part of the Circuits thread, an experimental format collecting invited short articles and critical commentary delving into the inner workings of neural networks. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00024.007", "text": "\n\n![](Visualizing%20Weights_files/multiple-pages.svg)\n\n an experimental format collecting invited short articles and critical \n\ncommentary delving into the inner workings of neural networks.\n\n \n\n[Curve Circuits](https://distill.pub/2020/circuits/curve-circuits/)\n\nIntroduction\n\n------------\n\nThe problem of understanding a neural network is a little bit \n\nlike reverse engineering a large compiled binary of a computer program. \n\nIn this analogy, the weights of the neural network are the compiled \n\nassembly instructions. At the end of the day, the weights are the \n\nfundamental thing you want to understand: how does this sequence of \n\nconvolutions and matrix multiplications give rise to model behavior?\n\nTrying to understand artificial neural networks also has a lot in\n\n common with neuroscience, which tries to understand biological neural \n\nnetworks. As you may know, one major endeavor in modern neuroscience is \n\nmapping the [connectomes](https://en.wikipedia.org/wiki/Connectome)\n\n of biological neural networks: which neurons connect to which. These \n\nconnections, however, will only tell neuroscientists which weights are \n\nnon-zero. 
Getting the weights – knowing whether a connection excites or \n\ninhibits, and by how much – would be a significant further step. One \n\nimagines neuroscientists might give a great deal to have the access to \n\nweights that those of us studying artificial neural networks get for \n\nfree.\n\nAnd so, it’s rather surprising how little attention we actually \n\ngive to looking at the weights of neural networks. There are a few \n\nexceptions to this, of course. It’s quite common for researchers to show\n\n pictures of the first layer weights in vision models\n\n (these are directly connected to RGB channels, so they’re easy to \n\nunderstand as images). In some work, especially historically, we see \n\nresearchers reason about the weights of toy neural networks by hand. And\n\n we quite often see researchers discuss aggregate statistics of weights.\n\n But actually looking at the weights of a neural network other than the \n\nfirst layer is quite uncommon – to the best of our knowledge, mapping \n\nweights between hidden layers to meaningful algorithms is novel to the \n\ncircuits project.\n\nIn this article, we’re focusing on visualizing weights. But \n\npeople often visualize activations, attributions, gradients, and much \n\nmore. How should we think about the meaning of visualizing these \n\ndifferent objects?\n\n* **Activations:** We generally think of these as being “what” \n\nthe network saw. If understanding a neural network is like reverse \n\ncompiling a computer program, the neurons are the variables, and the \n\nactivations are the values of those variables.\n\n* **Weights:** We generally think of these as being “how” the \n\nneural network computes one layer from the previous one. In the reverse \n\nengineering analogy, these are compiled assembly instructions.\n\n often think of this as “why” the neuron fired. We need to be careful \n\nwith attributions, because they’re a human-defined object on top of a \n\nneural network rather than a fundamental object. They aren’t always well\n\n defined, and people mean different things by them. (They are very well \n\ndefined if you are only operating across adjacent layers!)\n\nWhy it’s non-trivial to study weights in hidden layers\n\n------------------------------------------------------\n\nIt seems to us that there are three main barriers to making sense\n\n of the weights in neural networks, which may have contributed to \n\nresearchers tending to not directly inspect them:\n\n* **Lack of Contextualization:** Researchers often visualize \n\nweights in the first layer, because they are linked to RGB values that \n\nwe understand. That connection makes weights in the first layer \n\nmeaningful. But weights between hidden layers are meaningless by \n\ndefault: knowing nothing about either the source or the destination, how\n\n can we make sense of them?\n\n* **Indirect Interaction:** Sometimes, the meaningful weight \n\ninteractions are between neurons which aren’t literally adjacent in a \n\nneural network. For example, in a residual network, the output of one \n\nneuron can pass through the additive residual stream and linearly \n\ninteract with a neuron much later in the network. In other cases, \n\nneurons may interact through intermediate neurons without significant \n\nnonlinear interactions. How can we efficiently reason about these \n\ninteractions?\n\n* **Dimensionality and Scale:** Neural networks have lots of \n\nneurons. Those neurons connect to lots of other neurons. There’s a lot \n\nof data to display! 
How can we reduce it to a human-scale amount of \n\ninformation?\n\n The goal of this article is to show how similar ideas can be applied to\n\n weights instead of activations. Of course, we’ve already implicitly \n\nused these methods in various circuit articles,\n\n but in those articles the methods have been of secondary interest to \n\nthe results. It seems useful to give some dedicated discussion to the \n\nmethods.\n\nAside: One Simple Trick\n\n-----------------------\n\nInterpretability methods often fail to take off because they’re \n\nhard to use. So before diving into sophisticated approaches, we wanted \n\nto offer a simple, easy to apply method.\n\n is large. (If this is the first convolutional layer, visualize it as \n\n![](Visualizing%20Weights_files/screenshot_1.png)\n\n[1](#figure-1):\n\n NMF of input weights in InceptionV1 `mixed4d_5x5`, \n\nfor a selection of ten neurons. The red, green, and blue channels on \n\neach grid indicate the weights for each of the 3 NMF factors.\n\n \n\nThis visualization doesn’t tell you very much about what your \n\nweights are doing in the context of the larger model, but it does show \n\nyou that they are learning nice spatial structures. This can be an easy \n\nsanity check that your neurons are learning, and a first step towards \n\nunderstanding your neuron’s behavior. We’ll also see later that this \n\ngeneral approach of factoring weights can be extended into a powerful \n\ntool for studying neurons.\n\nDespite this lack of contextualization, one-sided NMF can be a \n\ngreat technique for investigating multiple channels at a glance. One \n\nthing you may quickly discover using this method is that, in models with\n\n global average pooling at the end of their convolutional layers, the \n\nlast few layers will have all their weights be horizontal bands.\n\n![](Visualizing%20Weights_files/screenshot_2.png)\n\n[2](#figure-2):\n\n Horizontally-banded weights in InceptionV1 `mixed5b_5x5`,\n\n for a selection of eight neurons. As in Figure 1, the red, green, and \n\nblue channels on each grid indicate the weights for each of the 3 NMF \n\nfactors.\n\n \n\nContextualizing Weights with Feature Visualization\n\n--------------------------------------------------\n\n The challenge of contextualization is a recurring challenge in \n\nunderstanding neural networks: we can easily observe every activation, \n\nevery weight, and every gradient; the challenge lies in determining what\n\n those values represent.\n\n`[relative x position, relative y position,\n\n input channels, output channels]`\n\nIf we fix the input channel and the output channel, we get a 2D \n\narray we can present with traditional data visualization. Let’s assume \n\nwe know which neuron we’re interested in understanding, so we have the \n\noutput channel. We can pick the input channels with high magnitude \n\nweights to our output channel.\n\nBut what does the input represent? What about the output?\n\nThe key trick is that techniques like feature visualization\n\n (or deeper investigations of neurons) can help us understand what the \n\ninput and output neurons represent, contextualizing the graph. 
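Mechanically, that first step is simple: fix an output channel, rank the input channels by the magnitude of their weights, and pull out the 2D spatial kernels for the strongest connections. The sketch below uses a random stand-in for the weight tensor, laid out as described above:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in conv weight tensor:
# [relative x, relative y, input channels, output channels].
weights = rng.normal(size=(5, 5, 32, 16))
out_channel = 7

w = weights[:, :, :, out_channel]          # (5, 5, in_channels)
strength = np.abs(w).sum(axis=(0, 1))      # connection magnitude per input channel
top_inputs = np.argsort(-strength)[:5]     # the most strongly connected inputs

for i in top_inputs:
    kernel = w[:, :, i]                    # a 2D array one can plot directly
    print(f"input channel {i}: weight norm {strength[i]:.2f}, kernel shape {kernel.shape}")
```

Each extracted kernel can then be displayed next to feature visualizations of its input and output neurons.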
Feature \n\nvisualizations are especially attractive because they’re automatic, and \n\nproduce a single image which is often very informative about the neuron.\n\n As a result, we often represent neurons as feature visualizations in \n\nweight diagrams.\n\n[3](#figure-3): Contextualizing weights.\n\n \n\nWe can liken this to how, when reverse-engineering a normal \n\ncompiled computer program, one would need to start assigning variable \n\nnames to the values stored in registers to keep track of them. Feature \n\nvisualizations are essentially automatic variable names for neurons, \n\nwhich are roughly analogous to those registers or variables.\n\n### Small Multiples\n\n \n\nAnd if we have two families of related neurons interacting, it \n\ncan sometimes even be helpful to show the weights between all of them as\n\n a grid of small multiples:\n\nAdvanced Approaches to Contextualization with Feature Visualization\n\n-------------------------------------------------------------------\n\nAlthough we most often use feature visualization to visualize \n\nneurons, we can visualize any direction (linear combination of neurons).\n\n This opens up a very wide space of possibilities for visualizing \n\nweights, of which we’ll explore a couple particularly useful ones.\n\n### Visualizing Spatial Position Weights\n\n matrix. But an alternative approach is to think of there as being a \n\nvector over input neurons at each spatial position, and to apply feature\n\n visualization to each of those vectors. You can think of this as \n\ntelling us what the weights in that position are collectively looking \n\nfor.\n\n![](Visualizing%20Weights_files/screenshot_6.png)\n\n[6](#figure-6). **Left:** Feature visualization of a car neuron. **Right:**\n\n Feature visualizations of the vector over input neurons at each spatial\n\n position of the car neuron’s weights. As we see, the car neuron broadly\n\n responds to window features above wheel features.\n\n \n\n from Building Blocks. It can be a nice, high density way to get an \n\noverview of what the weights for one neuron are doing. However, it will \n\nbe unable to capture cases where one position responds to multiple very \n\ndifferent things, as in a multi-faceted or polysemantic neuron.\n\n### Visualizing Weight Factors\n\nFeature visualization can also be applied to factorizations of \n\nthe weights, which we briefly discussed earlier. This is the weight \n\nanalogue to the “Neuron Groups” visualization from Building Blocks.\n\n or black and white vs color detectors that look are all mostly looking \n\nfor a small number of factors. 
For example, a large number of high-low \n\nfrequency detectors can be significantly understood as combining just \n\ntwo factors – a high frequency factor and a low-frequency factor – in \n\ndifferent patterns.\n\n .upstream-nmf {\n\n display: grid;\n\n grid-row-gap: .5rem;\n\n margin-bottom: 2rem;\n\n }\n\n .upstream-nmf .row {\n\n display: grid;\n\n grid-template-columns: min-content 1fr 6fr;\n\n grid-column-gap: 1rem;\n\n grid-row-gap: .5rem;\n\n }\n\n .units,\n\n .weights {\n\n display: grid;\n\n grid-template-columns: repeat(6, 1fr);\n\n grid-gap: 0.5rem;\n\n grid-column-start: 3;\n\n }\n\n img.fv {\n\n max-width: 100%;\n\n border-radius: 8px;\n\n }\n\n div.units img.full {\n\n margin-left: 1px;\n\n margin-bottom: 0px;\n\n }\n\n img.full {\n\n width: unset;\n\n object-fit: none;\n\n object-position: center;\n\n image-rendering: optimizeQuality;\n\n }\n\n img.weight {\n\n width: 100%;\n\n image-rendering: pixelated;\n\n align-self: center;\n\n border: 1px solid #ccc;\n\n }\n\n .annotated-image {\n\n display: grid;\n\n grid-auto-flow: column;\n\n align-items: center;\n\n }\n\n .annotated-image span {\n\n writing-mode: vertical-lr;\n\n }\n\n .layer-label {\n\n grid-row-start: span 2;\n\n text-align: end;\n\n }\n\n .layer-label label {\n\n display: inline-block;\n\n writing-mode: vertical-lr;\n\n }\n\n .layer-label.hidden {\n\n border-color: transparent;\n\n }\n\n .layer-label.nonhidden {\n\n margin-left: 1rem;\n\n }\n\n .layer-label.hidden label {\n\n visibility: hidden;\n\n }\n\nmixed3a\n\nHF-factor\n\n![](Visualizing%20Weights_files/conv2d2-hi.png)\n\n![](Visualizing%20Weights_files/neuron136-layermaxpool1-factor1.png)\n\n![](Visualizing%20Weights_files/neuron108-layermaxpool1-factor1.png)\n\n![](Visualizing%20Weights_files/neuron132-layermaxpool1-factor1.png)\n\n![](Visualizing%20Weights_files/neuron88-layermaxpool1-factor1.png)\n\n![](Visualizing%20Weights_files/neuron110-layermaxpool1-factor1.png)\n\n![](Visualizing%20Weights_files/neuron180-layermaxpool1-factor1.png)\n\nLF-factor\n\n![](Visualizing%20Weights_files/conv2d2-lo.png)\n\n![](Visualizing%20Weights_files/neuron136-layermaxpool1-factor0.png)\n\n![](Visualizing%20Weights_files/neuron108-layermaxpool1-factor0.png)\n\n![](Visualizing%20Weights_files/neuron132-layermaxpool1-factor0.png)\n\n![](Visualizing%20Weights_files/neuron88-layermaxpool1-factor0.png)\n\n![](Visualizing%20Weights_files/neuron110-layermaxpool1-factor0.png)\n\n![](Visualizing%20Weights_files/neuron180-layermaxpool1-factor0.png)\n\n[7](#figure-7):\n\n layer `conv2d2`.\n\n \n\n .upstream-neurons {\n\n display: grid;\n\n grid-gap: 1em;\n\n margin-bottom: 1em;\n\n }\n\n h5 {\n\n margin-bottom: 0px;\n\n }\n\n .upstream-neurons .row {\n\n display: grid;\n\n grid-column-gap: .25em;\n\n column-gap: .25em;\n\n align-items: center;\n\n }\n\n .units,\n\n .weights {\n\n display: grid;\n\n grid-template-columns: repeat(6, 1fr);\n\n grid-gap: 0.5rem;\n\n grid-column-start: 3;\n\n }\n\n img.fv {\n\n display: block;\n\n max-width: 100%;\n\n border-radius: 8px;\n\n }\n\n img.full {\n\n width: unset;\n\n object-fit: none;\n\n object-position: center;\n\n image-rendering: optimizeQuality;\n\n }\n\n img.weight {\n\n width: 100%;\n\n image-rendering: pixelated;\n\n align-self: center;\n\n border: 1px solid #ccc;\n\n }\n\n .layer-label {\n\n grid-row-start: span 2;\n\n }\n\n .layer-label label {\n\n display: inline-block;\n\n /\\* transform: rotate(-90deg); \\*/\n\n }\n\n .annotation {\n\n font-size: 1.5em;\n\n font-weight: 200;\n\n color: #666;\n\n margin-bottom: 0.2em;\n\n 
}\n\n .equal-sign {\n\n padding: 0 0.25em;\n\n }\n\n .ellipsis {\n\n padding: 0 0.25em;\n\n /\\* vertically align the ellipsis \\*/\n\n position: relative;\n\n bottom: 0.5ex;\n\n }\n\n .unit {\n\n display: block;\n\n min-width: 50px;\n\n }\n\n .factor {\n\n box-shadow: 0 0 8px #888;\n\n }\n\n .unit .bar {\n\n display: block;\n\n margin-top: 0.5em;\n\n background-color: #CCC;\n\n height: 4px;\n\n }\n\n .row h4 {\n\n border-bottom: 1px solid #ccc;\n\n }\n\n![](Visualizing%20Weights_files/conv2d2-hi.png)\n\n=\n\n+\n\n+\n\n+\n\n+\n\n+\n\n…\n\nHF-factor\n\n × 0.93\n\n × 0.73\n\n × 0.66\n\n × 0.59\n\n × 0.55\n\n![](Visualizing%20Weights_files/conv2d2-lo.png)\n\n=\n\n+\n\n+\n\n+\n\n+\n\n+\n\n…\n\nLF-factor\n\n × 0.44\n\n × 0.41\n\n × 0.38\n\n × 0.36\n\n × 0.34\n\n each factor.\n\n \n\nDealing with Indirect Interactions\n\n----------------------------------\n\nAs we mentioned earlier, sometimes the meaningful weight \n\ninteractions are between neurons which aren’t literally adjacent in a \n\nneural network, or where the weights aren’t directly represented in a \n\nsingle weight tensor. A few examples:\n\n* In a residual network, the output of one neuron can pass \n\nthrough the additive residual stream and linearly interact with a neuron\n\n much later in the network.\n\n* In a bottleneck architecture, neurons in the bottleneck may \n\nprimarily be a low-rank projection of neurons from the previous layer.\n\n* The behavior of an intermediate layer simply doesn’t introduce\n\n much non-linear behavior, leaving two neurons in non-adjacent layers \n\nwith a significant linear interaction.\n\nAs a result, we often work with “expanded weights” – that is, the\n\n result of multiplying adjacent weight matrices, potentially ignoring \n\nnon-linearities. We generally implement expanded weights by taking \n\ngradients through our model, ignoring or replacing all non-linear \n\noperations with the closest linear one.\n\nThese expanded weights have the following properties:\n\n* If two layers interact **linearly**, the expanded weights \n\nwill give the true linear map, even if the model doesn’t explicitly \n\nrepresent the weights in a single weight matrix.\n\n* If two layers interact **non-linearly**, the expanded \n\nweights can be seen as the expected value of the gradient up to a \n\nconstant factor, under the assumption that all neurons have an equal \n\n(and independent) probability of firing.\n\nThey also have one additional benefit, which is more of an \n\nimplementation detail: because they’re implemented in terms of \n\ngradients, you don’t need to know how the weights are represented. For \n\nexample, in TensorFlow, you don’t need to know which variable object \n\nrepresents the weights. This can be a significant convenience when \n\nyou’re working with unfamiliar models!\n\n### Benefits of Expanded Weights\n\n \n\nExpanding out the weights allows us to see an important aggregate\n\n effect of these connections: together, they look for the absence of \n\ncolor in the center one layer further back.\n\n \n\nA particularly important use of this method – which we’ve been \n\nimplicitly using in earlier examples – is to jump over “bottleneck \n\nlayers.” Bottleneck layers are layers of the network which squeeze the \n\nnumber of channels down to a much smaller number, typically in a branch,\n\n of InceptionV1 are one example. 
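A minimal sketch of that gradient-based implementation on a toy two-layer network (not one of the models above): replace the nonlinearity with the identity and read the expanded weights off the Jacobian, without needing to know which variables hold the weights.

```python
import torch

torch.manual_seed(0)
w1 = torch.randn(6, 4)   # first layer weights  (hidden x input)
w2 = torch.randn(3, 6)   # second layer weights (output x hidden)

def forward_linearized(x):
    h = x @ w1.T
    # The real model would apply a ReLU here; for expanded weights we replace
    # it with the closest linear operation, the identity.
    return h @ w2.T

x = torch.zeros(1, 4)
jac = torch.autograd.functional.jacobian(forward_linearized, x)  # shape (1, 3, 1, 4)
expanded = jac[0, :, 0, :]                                        # output-by-input map
print(torch.allclose(expanded, w2 @ w1))                          # True: matches the product
```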
Since so much information is \n\ncompressed, these layers are often polysemantic, and it can often be \n\nmore helpful to jump over them and understand the connection to the \n\nwider layer before them.\n\n### Cases where expanded weights are misleading\n\n \n\n excited by high-frequency patterns on one side and inhibited on the \n\nother (and vice versa for low frequency), detecting both directions \n\nmeans that the expanded weights cancel out! As a result, expanded \n\nweights appear to show that boundary detectors are neither excited or \n\ninhibited by high frequency detectors two layers back, when in fact they\n\n are *both* excited and also inhibited by high frequency, depending\n\n on the context, and it’s just that those two different cases cancel \n\nout.\n\n[12](#figure-12).\n\n \n\nMore sophisticated techniques for describing multi-layer \n\ninteractions can help us understand cases like this. For example, one \n\ncan determine what the “best case” excitation interaction between two \n\nneurons is (that is, the maximum achievable gradient between them). Or \n\nyou can look at the gradient for a particular example. Or you can factor\n\n the gradient over many examples to determine major possible cases. \n\nThese are all useful techniques, but we’ll leave them for a future \n\narticle to discuss.\n\n### Qualitative properties\n\nOne qualitative property of expanding weights across many layers \n\ndeserves mention before we end our discussion of them. Expanded weights \n\noften get this kind of “electron orbital”-like smooth spatial \n\nstructures:\n\n Although the exact structures present may vary from neuron to \n\nneuron, this example is not cherry-picked: this smoothness is typical of\n\n most multiple-layer expanded weights.\n\n \n\nDimensionality and Scale\n\n------------------------\n\nSo far, we’ve addressed the challenges of contextualization and \n\nindirection interactions. But we’ve only given a bit of attention to our\n\n third challenge of dimensionality and scale. Neural networks contain \n\nmany neurons and each one connects to many others, creating a huge \n\namount of weights. How do we pick which connections between neurons to \n\nlook at?\n\nFor the purposes of this article, we’ll put the question of which\n\n neurons we want to study outside of our scope, and only discuss the \n\nproblem of picking which connections to study. (We may be trying to \n\ncomprehensively study a model, in which case we want to study all \n\nneurons. But we might also, for example, be trying to study neurons \n\nwe’ve determined related to some narrower aspect of model behavior.)\n\nGenerally, we chose to look at the largest weights, as we did at \n\nthe beginning of the section on contextualization. Unfortunately, there \n\ntends to be a long tail of small weights, and at some point it generally\n\n gets impractical to look at these. How much of the story is really \n\nhiding in these small weights? We don’t know, but polysemantic neurons \n\nsuggest there could be a very important and subtle story hiding here! \n\nThere’s some hope that sparse neural networks might make this much \n\nbetter, by getting rid of small weights, but whether such conclusions \n\ncan be drawn about non-sparse networks is presently speculative.\n\nAn alternative strategy that we’ve brushed on a few times is to \n\nreduce your weights into a few components and then study those factors \n\n(for example, with NMF). Often, a very small number of components can \n\nexplain much of the variance. 
In fact, sometimes a small number of \n\nfactors can explain the weights of an entire set of neurons! Prominent \n\nexamples of this are high-low frequency detectors (as we saw earlier) \n\nand black and white vs color detectors.\n\nHowever, this approach also has downsides. Firstly, these \n\ncomponents can be harder to understand and even polysemantic. For \n\nexample, if you apply the basic version of this method to a boundary \n\ndetector, one component will contain both high-to-low and low-to-high \n\nfrequency detectors which will make it hard to analyze. Secondly, your \n\nfactors no longer align with activation functions, which makes analysis \n\nmuch messier. Finally, because you will be reasoning about every neuron \n\nin a different basis, it is difficult to build a bigger picture view of \n\nthe model unless you convert your components back to neurons.\n\n![](Visualizing%20Weights_files/multiple-pages.svg)\n\n an experimental format collecting invited short articles and critical \n\ncommentary delving into the inner workings of neural networks.\n\n \n\n[Curve Circuits](https://distill.pub/2020/circuits/curve-circuits/)\n\n", "bibliography_bib": [{"title": "Imagenet classification with deep convolutional neural networks"}, {"title": "Understanding neural networks through deep visualization"}, {"title": "Visualizing and understanding convolutional networks"}, {"title": "The Building Blocks of Interpretability"}, {"title": "Zoom In: An Introduction to Circuits"}, {"title": "An Overview of Early Vision in InceptionV1"}, {"title": "Curve Detectors"}, {"title": "Feature Visualization"}, {"title": "Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks"}, {"title": "Visualizing and understanding recurrent networks"}, {"title": "Visualizing higher-layer features of a deep network"}], "filename": "Visualizing Weights.html", "id": "fcf3efc7e491ae5a389cf6ac98147493"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Adversarial Example Researchers Need to Expand What is Meant by 'Robustness'", "authors": ["Justin Gilmer", "Dan Hendrycks"], "date_published": "2019-08-06", "abstract": " This article is part of a discussion of the Ilyas et al. paper “Adversarial examples are not bugs, they are features”. You can learn more in the main discussion article . 
", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00019.1", "text": "\n\n #rebuttal,\n\n .comment-info {\n\n background-color: hsl(54, 78%, 96%);\n\n border-left: solid hsl(54, 33%, 67%) 1px;\n\n padding: 1em;\n\n color: hsla(0, 0%, 0%, 0.67);\n\n }\n\n #header-info {\n\n margin-top: 0;\n\n margin-bottom: 1.5rem;\n\n display: grid;\n\n grid-template-columns: 65px max-content 1fr;\n\n grid-template-areas:\n\n \"icon explanation explanation\"\n\n \"icon back comment\";\n\n grid-column-gap: 1.5em;\n\n }\n\n #header-info .icon-multiple-pages {\n\n grid-area: icon;\n\n padding: 0.5em;\n\n content: url(images/multiple-pages.svg);\n\n }\n\n #header-info .explanation {\n\n grid-area: explanation;\n\n font-size: 85%;\n\n }\n\n #header-info .back {\n\n grid-area: back;\n\n }\n\n #header-info .back::before {\n\n content: \"←\";\n\n margin-right: 0.5em;\n\n }\n\n #header-info .comment {\n\n grid-area: comment;\n\n scroll-behavior: smooth;\n\n }\n\n #header-info .comment::before {\n\n content: \"↓\";\n\n margin-right: 0.5em;\n\n }\n\n #header-info a.back,\n\n #header-info a.comment {\n\n font-size: 80%;\n\n font-weight: 600;\n\n border-bottom: none;\n\n text-transform: uppercase;\n\n color: #2e6db7;\n\n display: block;\n\n margin-top: 0.25em;\n\n letter-spacing: 0.25px;\n\n }\n\n This article is part of a discussion of the Ilyas et al. paper\n\n *“Adversarial examples are not bugs, they are features”.*\n\n You can learn more in the\n\n [main discussion article](https://distill.pub/2019/advex-bugs-discussion/) .\n\n \n\n[Other Comments](https://distill.pub/2019/advex-bugs-discussion/#commentaries)\n\n \n\nDetailed Response\n\n-----------------\n\n commonly accepted in the robustness to distributional shift literature \n\n: a model’s lack of\n\n \n\n[1](https://distill.pub/2019/advex-bugs-discussion/response-1/#figure-1)\n\n high frequency features.\n\n \n\n Fourier basis vector,\n\n from 85.7% to 55.3%. Adversarial training similarly degrades robustness to\n\n contrast and low-pass\n\n settings.\n\n \n\n[2](https://distill.pub/2019/advex-bugs-discussion/response-1/#figure-2)\n\n an\n\n is smaller compared to that of the naturally trained model).\n\n \n\n detached from security and real-world robustness . While often thought an\n\n idiosyncratic quirk of deep\n\n to noise . They should not be surprising given the brittleness observed in\n\n numerous synthetic — and even\n\n natural  — conditions. Models reliably exhibit poor performance when they are\n\n evaluated on distributions\n\n comprehensive benchmarks accordingly . As long as models lack\n\n robustness to\n\n distributional shift, there will always be errors to find adversarially.\n\n \n\n To cite Ilyas et al.’s response, please cite their\n\n**Response Summary**: The demonstration of models that learn from\n\n high-frequency components of the data is interesting and nicely aligns with our\n\n findings. Now, even though susceptibility to noise could indeed arise from\n\n of ML models has been so far predominantly viewed as a consequence of model\n\n “bugs” that will be eliminated by “better” models. Finally, we agree that our\n\n models need to be robust to a much broader set of perturbations — expanding the\n\n set of relevant perturbations will help identify even more non-robust features\n\n and further distill the useful features we actually want our models to rely on.\n\n \n\n**Response**: The fact that models can learn to classify correctly based\n\n purely on the high-frequency component of the training set is neat! 
This nicely\n\n Also, while non-robustness to noise can be an indicator of models using\n\n More often than not, the brittleness of ML models to noise was instead regarded\n\n expected that progress towards “better”/”bug-free” models will lead to them\n\n being more robust to noise and adversarial examples.\n\n small subset of the perturbations we want our models to be robust to. Note,\n\n however, that the focus of our work is human-alignment — to that end, we\n\n demonstrate that models rely on features sensitive to patterns that are\n\n imperceptible to humans. Thus, the existence of other families of\n\n incomprehensible but useful features would provide even more support for our\n\n future research.\n\n", "bibliography_bib": [{"title": "Benchmarking Neural Network Robustness to Common Corruptions and Perturbations"}, {"title": "Measuring the tendency of CNNs to Learn Surface Statistical Regularities"}, {"title": "Nightmare at Test Time: Robust Learning by Feature Deletion"}, {"title": "A Robust Minimax Approach to Classification"}, {"title": "Generalisation in humans and deep neural networks"}, {"title": "A Fourier Perspective on Model Robustness in Computer Vision"}, {"title": "Motivating the Rules of the Game for Adversarial Example Research"}, {"title": "Adversarial Examples Are a Natural Consequence of Test Error in Noise"}, {"title": "Robustness of classifiers: from adversarial to random noise"}, {"title": "Natural Adversarial Examples"}, {"title": "{MNIST-C:} {A} Robustness Benchmark for Computer Vision"}, {"title": "{NICO:} {A} Dataset Towards Non-I.I.D. Image Classification"}, {"title": "Do ImageNet Classifiers Generalize to ImageNet?"}, {"title": "The Elephant in the Room"}, {"title": "Using Videos to Evaluate Image Model Robustness"}], "filename": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features'_ Adversarial Example Researchers Need to Expand What is Meant by 'Robustness'.html", "id": "f146b48208d22f534f6048654c70b379"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Learning from Incorrectly Labeled Data", "authors": ["Eric Wallace"], "date_published": "2019-08-06", "abstract": " This article is part of a discussion of the Ilyas et al. paper “Adversarial examples are not bugs, they are features”. You can learn more in the main discussion article . 
", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00019.6", "text": "\n\n #rebuttal,\n\n .comment-info {\n\n background-color: hsl(54, 78%, 96%);\n\n border-left: solid hsl(54, 33%, 67%) 1px;\n\n padding: 1em;\n\n color: hsla(0, 0%, 0%, 0.67);\n\n }\n\n #header-info {\n\n margin-top: 0;\n\n margin-bottom: 1.5rem;\n\n display: grid;\n\n grid-template-columns: 65px max-content 1fr;\n\n grid-template-areas:\n\n \"icon explanation explanation\"\n\n \"icon back comment\";\n\n grid-column-gap: 1.5em;\n\n }\n\n #header-info .icon-multiple-pages {\n\n grid-area: icon;\n\n padding: 0.5em;\n\n content: url(images/multiple-pages.svg);\n\n }\n\n #header-info .explanation {\n\n grid-area: explanation;\n\n font-size: 85%;\n\n }\n\n #header-info .back {\n\n grid-area: back;\n\n }\n\n #header-info .back::before {\n\n content: \"←\";\n\n margin-right: 0.5em;\n\n }\n\n #header-info .comment {\n\n grid-area: comment;\n\n scroll-behavior: smooth;\n\n }\n\n #header-info .comment::before {\n\n content: \"↓\";\n\n margin-right: 0.5em;\n\n }\n\n #header-info a.back,\n\n #header-info a.comment {\n\n font-size: 80%;\n\n font-weight: 600;\n\n border-bottom: none;\n\n text-transform: uppercase;\n\n color: #2e6db7;\n\n display: block;\n\n margin-top: 0.25em;\n\n letter-spacing: 0.25px;\n\n }\n\n This article is part of a discussion of the Ilyas et al. paper\n\n *“Adversarial examples are not bugs, they are features”.*\n\n You can learn more in the\n\n [main discussion article](https://distill.pub/2019/advex-bugs-discussion/) .\n\n \n\n[Other Comments](https://distill.pub/2019/advex-bugs-discussion/#commentaries)\n\n[Comment by Ilyas et al.](#rebuttal)\n\n about the trained model is being “leaked” into the dataset.\n\n Nevertheless, we show that this intuition fails — a model *can* generalize.\n\n examples on the left of the Figure below:\n\n \n\n[1](#figure-1)\n\n incorrectly labeled, unperturbed images but can still non-trivially generalize.\n\n \n\nThis is Model Distillation Using Incorrect Predictions\n\n------------------------------------------------------\n\n of model distillation — training on this dataset allows a new\n\n model to somewhat recover the features of the original model.\n\n \n\n another task.\n\n \n\n### Two-dimensional Illustration of Model Distillation\n\n (panel (a) in the Figure below).\n\n \n\n[2](#figure-2)\n\n panel (c).\n\n \n\n predictions.\n\n \n\n### \n\n Other Peculiar Forms of Distillation\n\n we learn about the original model? Can we use only *out-of-domain* data?\n\n \n\n labeled as an “8″.\n\n \n\n[3](#figure-3)\n\n training on erroneous predictions.\n\n \n\n### \n\n Summary\n\n Ilyas et al. (2019) are not necessary to enable learning.\n\n \n\n To cite Ilyas et al.’s response, please cite their\n\n**Response\n\n Summary**: Note that since our experiments work across different architectures,\n\n “distillation” in weight space does not occur. The only distillation that can\n\n have been able to “distill” a useful model from them. (In fact, one might think\n\n of normal model training as just “feature distillation” of the humans that\n\n labeled the dataset.) Furthermore, the hypothesis that all we need is enough\n\n model-consistent points in order to recover a model, seems to be disproven by\n\n and other (e.g. ) settings. \n\n**Response**: Since our experiments work across different architectures,\n\n “distillation” in weight space cannot arise. 
Thus, from what we understand, the\n\n “distillation” hypothesis suggested here is referring to “feature distillation”\n\n (i.e. getting models which use the same features as the original), which is\n\n actually precisely our hypothesis too. Notably, this feature distillation would\n\n are good for classification (see [World\n\n 1](https://distill.pub/2019/advex-bugs-discussion/original-authors/#world1) and\n\n model would only use features that generalize poorly, and would thus generalize\n\n poorly itself. \n\n Moreover, we would argue that in the experiments presented (learning from\n\n mislabeled data), the same kind of distillation is happening. For instance, a\n\n moderately accurate model might associate “green background” with “frog” thus\n\n labeling “green” images as “frogs” (e.g., the horse in the comment’s figure).\n\n Training a new model on this dataset will thus associate “green” with “frog”\n\n from Fashion-MNIST” experiment in the comment). This corresponds exactly to\n\n learning features from labels, akin to how deep networks “distill” a good\n\n decision boundary from human annotators. In fact, we find these experiments\n\n a very interesting illustration of feature distillation that complements\n\n our findings. \n\n We also note that an analogy to logistic regression here is only possible\n\n due to the low VC-dimension of linear classifiers (namely, these classifiers\n\n have dimension ddd). In particular, given any classifier with VC-dimension\n\n networks have been shown to have extremely large VC-dimension (in particular,\n\n bigger than the size of the training set ). So even though\n\n labelling d+1d+1d+1 random\n\n points model-consistently is sufficient to recover a linear model, it is not\n\n necessarily sufficient to recover a deep neural network. For instance, Milli et\n\n al. are not able to reconstruct a ResNet-18\n\n using only its predictions on random Gaussian inputs. (Note that we are using a\n\n ResNet-50 in our experiments.) \n\n Finally, it seems that the only potentially problematic explanation for\n\n our experiments (namely, that enough model-consistent points can recover a\n\n In particular, Preetum is able to design a\n\n dataset where training on mislabeled inputs *that are model-consistent*\n\n does not at all recover the decision boundary of the original model. 
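As a concrete reference for the linear case discussed above, the following sketch (the dimension, sample sizes, and use of scikit-learn are illustrative assumptions) shows a fresh logistic-regression "student" recovering the decision boundary of a linear "teacher" from nothing but teacher-consistent labels on random points; this is the behavior the response argues need not carry over to high-VC-dimension deep networks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 10                                               # input dimension (illustrative)

# "Teacher": an arbitrary linear classifier standing in for the original model.
w_teacher = rng.normal(size=d)
teacher = lambda X: (X @ w_teacher > 0).astype(int)

# Label random points consistently with the teacher and train a fresh student.
X_train = rng.normal(size=(5000, d))
student = LogisticRegression().fit(X_train, teacher(X_train))

# The student agrees with the teacher almost everywhere on new random points,
# i.e. the linear decision boundary is recovered from model-consistent labels.
X_test = rng.normal(size=(5000, d))
print((student.predict(X_test) == teacher(X_test)).mean())
```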
More\n\n generally, the “model distillation” perspective raised here is unable to\n\n distinguish between the dataset created by Preetum below, and those created\n\n with standard PGD (as in our D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​ and\n\n D^rand\\widehat{\\mathcal{D}}\\_{rand}D\n\nrand​ datasets).\n\n \n\n", "bibliography_bib": [{"title": "Distilling the Knowledge in a Neural Network"}, {"title": "Model reconstruction from model explanations"}, {"title": "Understanding deep learning requires rethinking generalization"}], "filename": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features' Learning from Incorrectly Labeled Data.html", "id": "863f07e74460850c4ee67925cfae6249"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features'", "authors": ["Logan Engstrom", "Justin Gilmer", "Gabriel Goh", "Dan Hendrycks", "Andrew Ilyas", "Aleksander Madry", "Reiichiro Nakano", "Preetum Nakkiran", "Shibani Santurkar", "Brandon Tran", "Dimitris Tsipras", "Eric Wallace"], "date_published": "2019-08-06", "abstract": " On May 6th, Andrew Ilyas and colleagues published a paper outlining two sets of experiments. Firstly, they showed that models trained on adversarial examples can transfer to real data, and secondly that models trained on a dataset derived from the representations of robust neural networks seem to inherit non-trivial robustness. They proposed an intriguing interpretation for their results: adversarial examples are due to “non-robust features” which are highly predictive but imperceptible to humans. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00019", "text": "\n\n outlining two sets of experiments.\n\n seem to inherit non-trivial robustness.\n\n They proposed an intriguing interpretation for their results:\n\n humans.\n\n \n\n The paper was received with intense interest and discussion\n\n on social media, mailing lists, and reading groups around the world.\n\n How should we interpret these experiments?\n\n Would they replicate?\n\n disciplines of machine learning,\n\n because it requires researchers to play both attack and defense.\n\n It’s easy for even very rigorous researchers to accidentally use a weak attack.\n\n However, as we’ll see, Ilyas et al’s results have held up to initial scrutiny.\n\n \n\n And if non-robust features exist… what are they?\n\n \n\n \n\n \n\n Why not just have everyone write private blog posts like Ferenc?\n\n can give more researchers license to invest energy in discussing other’s work\n\n published.\n\n \n\n We invited a number of researchers\n\n \n\n The Machine Learning community\n\n [sometimes](https://www.machinelearningdebates.com/program)\n\n that peer review isn’t thorough enough.\n\n In contrast to this, we were struck by how deeply respondents engaged.\n\n deeply about the original paper.\n\n and forth!\n\n even running new experiments in response to comments.\n\n \n\n discussion articles in the future.\n\n \n\nDiscussion Themes\n\n-----------------\n\n**Clarifications**:\n\n Discussion between the respondents and original authors was able\n\n to surface several misunderstandings or opportunities to sharpen claims.\n\n The original authors summarize this in their rebuttal.\n\n \n\n**Successful Replication**:\n\n the non-robust dataset experiments.\n\n Preetum reproduced the D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​ non-robust dataset experiment as described in the\n\n paper, for 
L∞L\\_\\inftyL∞​ and L2L\\_2L2​ attacks.\n\n \n\n Gabriel repproduced both D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​ and D^rand\\widehat{\\mathcal{D}}\\_{rand}D\n\nrand​ for L2L\\_2L2​\n\n attacks.\n\n \n\n Preetum also replicated part of the robust dataset experiment by\n\n and hyperparameters he tried.\n\n \n\n**Exploring the Boundaries of Non-Robust Transfer**:\n\n where training on adversarial examples transfers to real data.\n\n When, how, and why does it happen?\n\n Gabriel Goh explores an alternative mechanism for the results,\n\n Preetum Nakkiran shows a special construction where it doesn’t happen,\n\n \n\n**Properties of Robust and Non-Robust Features**:\n\n distribution shift,\n\n \n\nComments\n\n--------\n\n Distill collected six comments on the original paper.\n\n They are presented in alphabetical order by the author’s last name,\n\n \n\n### \n\n[Adversarial Example Researchers Need to Expand What is Meant by\n\n “Robustness”](https://distill.pub/2019/advex-bugs-discussion/response-1/)\n\n### Authors\n\n### Affiliations\n\n[Justin Gilmer](https://www.linkedin.com/in/jmgilmer)\n\n[Google Brain Team](https://g.co/brain)\n\n[Dan Hendrycks](https://people.eecs.berkeley.edu/~hendrycks/)\n\n[UC Berkeley](https://www.berkeley.edu/)\n\n Justin and Dan discuss “non-robust features” as a special case\n\n of models being non-robust because they latch on to superficial correlations,\n\n a view often found in the distributional robustness literature.\n\n They emphasize we should think about a broader notion of robustness.\n\n#### Comment from original authors:\n\n appears “meaningless” to humans.\n\n to rely on.\n\n \n\n### \n\n### Authors\n\n### Affiliations\n\n[Gabriel Goh](https://gabgoh.github.io/)\n\n[OpenAI](https://openai.com/)\n\n results.\n\n D^rand\\widehat{\\mathcal{D}}\\_{rand}D\n\nrand​ experiment,\n\n but finds no evidence for it effecting the D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​ experiment.\n\n#### Comment from original authors:\n\n motivations for designing the D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​ dataset.\n\n \n\n### \n\n### Authors\n\n### Affiliations\n\n[Gabriel Goh](https://gabgoh.github.io/)\n\n[OpenAI](https://openai.com/)\n\n He provides two constructions:\n\n and “ensembles” that could be candidates for true useful non-robust features.\n\n#### Comment from original authors:\n\n features for real datasets (and thus a neat corroboration of their existence).\n\n interesting direction of developing a more fine-grained definition of features.\n\n \n\n### \n\n### Authors\n\n### \n\n[Reiichiro Nakano](https://reiinakano.com/)\n\n Reiichiro shows that adversarial robustness makes neural style transfer\n\n work by default on a non-VGG architecture.\n\n to humans.\n\n#### Comment from original authors:\n\n trained models will have in neural network art!\n\n interesting links between robustness and style transfer.\n\n \n\n### \n\n### Authors\n\n### Affiliations\n\n[Preetum Nakkiran](https://preetum.nakkiran.org/)\n\n[OpenAI](https://openai.com/) &\n\n [Harvard University](https://www.harvard.edu/)\n\n has no “non-robust features”.\n\n#### Comment from original authors:\n\n example of adversarial examples that arise from “bugs”.\n\n \n\n### \n\n### Authors\n\n### Affiliations\n\n[Eric Wallace](https://www.ericswallace.com/)\n\n[Allen Institute for AI](https://allenai.org/)\n\n Eric shows that training on a model’s training \n\nerrors,\n\n or on how it predicts examples form an unrelated \n\ndataset,\n\n can both transfer to the true test set.\n\n These 
experiments are analogous to the original \n\npaper’s non-robust transfer results — all three results are examples of a\n\n kind of “learning from incorrectly labeled data.”\n\n#### Comment from original authors:\n\n settings.\n\n \n\nOriginal Author Discussion and Responses\n\n----------------------------------------\n\n### \n\n### Authors\n\n### Affiliations\n\n[Logan Engstrom](http://loganengstrom.com/),\n\n [Andrew Ilyas](http://andrewilyas.com/),\n\n [Aleksander Madry](https://people.csail.mit.edu/madry/),\n\n [Shibani Santurkar](http://people.csail.mit.edu/shibani/),\n\n Brandon Tran,\n\n [Dimitris Tsipras](http://people.csail.mit.edu/tsipras/)\n\nMIT\n\n conversation.\n\n This article also contains their responses to each comment.\n\n", "bibliography_bib": [{"title": "Adversarial examples are not bugs, they are features"}], "filename": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features'.html", "id": "90956d04ac982ab30c8a58976be7502c"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Curve Detectors", "authors": ["Nick Cammarata", "Gabriel Goh", "Shan Carter", "Ludwig Schubert", "Michael Petrov", "Chris Olah"], "date_published": "2020-06-17", "abstract": "This article is part of the Circuits thread, an experimental format collecting invited short articles and critical commentary delving into the inner workings of neural networks.", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00024.003", "text": "### Contents\n\n an experimental format collecting invited short articles and critical \n\ncommentary delving into the inner workings of neural networks.\n\n vision model we've explored in detail contains neurons which detect \n\ncurves. Curve detectors in vision models have been hinted at in the \n\n curve in our earlier overview of early vision, but wanted to examine \n\nthem in more depth. This article is the first part of a three article \n\ndeep dive into curve detectors: their behavior, how they're built from \n\nearlier neurons, and their prevalence across models.\n\nWe're doing \n\nthis because we believe that the interpretability community disagrees on\n\n several crucial questions. In particular, are neural network \n\nrepresentations composed of meaningful features — that is, features \n\ntracking articulable properties of images? On the one hand, there are a \n\nnumber of papers reporting on seemingly meaningful features, such as eye\n\n detectors, head detectors, car detectors, and so forth .\n\n At the same time, there's a significant amount of skepticism, only \n\npartially reflected in the literature. One concern is that features \n\nwhich seem superficially to be meaningful may in fact not be what they \n\n rather than the kind of meaningful features described earlier. Finally,\n\n even if some meaningful features exist, it's possible they don't play \n\nan especially important role in the network.\n\n Some reconcile these results by concluding that if one observes, for \n\nexample, what appears to be a dog head detector, it is actually a \n\ndetector for special textures correlated with dog heads.\n\nThis \n\ndisagreement really matters. If every neuron was meaningful, and their \n\nconnections formed meaningful circuits, we believe it would open a path \n\nto completely reverse engineering and interpreting neural networks. 
Of \n\ncourse, we know not every neuron is meaningful, As\n\n discussed in Zoom In, the main issue we see is what we call \n\npolysemantic neurons which respond to multiple different features, \n\nseemingly as a way to compress many features into a smaller number of \n\nneurons. We're hopeful this can be worked around. but we \n\nthink it's close enough for this path to be tractable. However, our \n\nposition is definitely not the consensus view. Moreover, it seems too \n\ngood to be true, and rings of the similar failed promises in other \n\nfieldsFor example, genetics seems to have \n\nbeen optimistic in the past that genes had individual functions and that\n\n the human genome project would allow us to “mine miracles,\" a position \n\nWe\n\n believe that curve detectors are a good vehicle for making progress on \n\nthis disagreement. Curve detectors seem like a modest step from \n\nedge-detecting Gabor filters, which the community widely agrees often \n\nform in the first convolutional layer. Furthermore, artificial curves \n\nare simple to generate, opening up lots of possibilities for rigorous \n\ninvestigation. And the fact that they're only a couple convolutional \n\nlayers deep means we can follow every string of neurons back to the \n\ninput. At the same time, the underlying algorithm the model has \n\nimplemented for curve detection is quite sophisticated. If this paper \n\npersuades skeptics that at least curve detectors exist, that seems like a\n\n substantial step forward. Similarly, if it surfaces a more precise \n\npoint of disagreement, that would also advance the dialogue.\n\n[A Simplified Story of Curve Neurons\n\n-----------------------------------](#a-simplified-story-of-curve-neurons)Before\n\n running detailed experiments, let's look at a high level and slightly \n\nsimplified story of how the curve 10 neurons in 3b work.\n\n curve detector implements a variant of the same algorithm: it responds \n\nto a wide variety of curves, preferring curves of a particular \n\norientation and gradually firing less as the orientation changes. Curve \n\nneurons are invariant to cosmetic properties such as brightness, \n\ntexture, and color. \n\n are normalized by the neuron's maximum activation. We'll examine \n\n across ImageNet, and usually activating weakly when they do fire. When \n\nthey activate strongly, it's in response to curves with similar \n\norientation and curvature to their feature visualization. \n\n---\n\nIt’s\n\n worth stepping back and reflecting on how surprising the existence of \n\nseemingly meaningful features like curve detectors is. There’s no \n\nexplicit incentive for the network to form meaningful neurons. It’s not \n\nlike we optimized these neurons to be curve detectors! Rather, \n\nInceptionV1 is trained to classify images into categories many levels of\n\n abstraction removed from curves and somehow curve detectors fell out of\n\n gradient descent.\n\nMoreover, detecting curves across a wide \n\nvariety of natural images is a difficult and arguably unsolved problem \n\nin classical computer visionThis is our \n\nsense from trying to implement programmatic curve detection to compare \n\nthem to curve neurons. 
We found that practitioners generally had to \n\nchoose between several algorithms, each with significant trade-offs such\n\n as robustness to different kinds of visual “noise” (for instance, \n\ntexture), even in images much less complex than the natural images in \n\n in general, is a very challenging one and, except for toy examples, \n\nthere are no good solutions.” Additionally, many classical curve \n\ndetection algorithms are too slow to run in real-time, or require often \n\nintractable amounts of memory.. InceptionV1 seems to learn a\n\n flexible and general solution to this problem, implemented using five \n\nconvolutional layers. We’ll see in the next article that the algorithm \n\nused is straightforward and understandable, and we’ve since \n\nreimplemented it by hand.\n\nWhat exactly are we claiming when we say\n\n these neurons detect curves? We think part of the reason there is \n\nsometimes disagreement about whether neurons detect particular stimuli \n\nis that there are a variety of claims one may be making. It’s pretty \n\neasy to show that, empirically, when a curve detector fires strongly the\n\n stimulus is reliably a curve. But there are several other claims which \n\nmight be more contentious:\n\n* **Causality** \n\nCurve detectors genuinely detect a curve feature, rather than another \n\nstimulus correlated with curves. We believe our feature visualization \n\nand visualizing attribution experiments establish a causal link, since \n\n“running it in reverse” produces a curve.\n\n* **Generality:**\n\n Curve detectors respond to a wide variety of curve stimuli. They \n\ntolerate a wide range of radii and are largely invariant to cosmetic \n\nattributes like color, brightness, and texture. We believe that our \n\nexperiments explicitly testing these invariances with synthetic stimuli \n\nare the most compelling evidence of this.\n\n* **Purity:**\n\n Curve detectors are not polysemantic and they have no meaningful \n\nsecondary function. Images that cause curve detectors to activate \n\nweakly, such as edges or angles, are a natural extension of the \n\nalgorithm that InceptionV1 uses to implement curve detection. We believe\n\n our experiments classifying dataset examples at different activation \n\nmagnitudes and visualizing their attributions show that any secondary \n\nfunction would need to be rare. In the next article, exploring the \n\nmechanics of the algorithm implementing curve detectors, we’ll provide \n\nfurther evidence for this claim.\n\n* **Family:** Curve neurons collectively span all orientations of curves.\n\n[Feature Visualization\n\n uses optimization to find the input to a neural network that maximizes a\n\n given objective. The objective we often use is to make the neuron fire \n\nas strongly as possible, but we'll use other objectives throughout in \n\nthis article. One reason feature visualization is powerful is that it \n\ntells us about causality. Since the input starts with random noise and \n\noptimizes pixels rather than a generative prior, we can be confident \n\nthat any property in the resulting image contributed to the objective.\n\n feature visualizations is a bit of a skill, and these images might feel\n\n disorienting if you haven't spent much time with them before. The most \n\nimportant thing to take away is the curve shape. You may also notice \n\nthat there are bright, opposite hue colors on each side of the curve: \n\nthis reflects a preference to see a change in color at the boundary of \n\nthe curve. 
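A minimal sketch of the optimization loop behind feature visualization, assuming a `model` callable that returns the activations of the layer of interest; the image size, optimizer settings, and omission of the transformation-robustness tricks used in practice are all simplifications.

```python
import torch

def feature_visualization(model, channel, steps=256, lr=0.05):
    """Gradient ascent on pixels so that one channel fires as strongly as possible.

    `model(x)` is assumed to return activations of shape (1, C, H, W) for the
    layer containing the unit of interest (e.g. a curve detector in mixed3b).
    """
    img = torch.randn(1, 3, 224, 224, requires_grad=True)   # start from noise
    optimizer = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        acts = model(img)
        loss = -acts[:, channel].mean()   # maximize the chosen channel's mean activation
        loss.backward()
        optimizer.step()
    return img.detach()
```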
Finally, if you look carefully, you will notice small lines \n\nperpendicular to the boundary of the curve. We call this weak preference\n\n for small perpendicular lines “combing\" and will discuss it later. \n\nEvery\n\n time we use feature visualization to make curve neurons fire as \n\nstrongly as possible we get images of curves, even when we explicitly \n\nFeature\n\n visualization finds images that maximally cause a neuron to fire, but \n\nare these superstimuli representative of the neuron's behavior? When we \n\nsee a feature visualization, we often imagine that the neuron fires \n\nstrongly for stimuli qualitatively similar to it, and gradually becomes \n\nweaker as the stimuli exhibit those visual features less. But one could \n\nimagine cases where the neuron's behavior is completely different in the\n\n non-extreme activations, or cases where it does fire weakly for messy \n\nversions of the extreme stimulus, but also has a secondary class of \n\nstimulus to which it responds weakly.\n\nIf we want to understand how\n\n a neuron behaves in practice, there's no substitute to simply looking \n\nat how it actually responds to images from the dataset.\n\n[Dataset Analysis\n\n because some experiments required non-trivial manual labor. However, \n\nthe core ideas in this section will apply to all curve detectors in 3b.\n\n fire? When it fires, how often does it fire strongly? And when it \n\ndoesn't fire, is it often strongly inhibited, or just on the verge of \n\nfiring? We can answer these questions by visualizing the distribution of\n\n activations across the dataset.\n\nWhen studying ReLU networks, we \n\nfind it helpful to look at the distribution of pre-activation values. \n\nSince ReLU just truncates the left hand side, it's easy to reason about \n\nthe post-activation values, but it also shows us how close the neuron \n\n has a pre-activation mean of about -200. It fires in just 11% of cases \n\nacross the dataset, since negative values will be set to zero by the \n\nReLU activation function.\n\n observation that neural network activations generally follow an \n\nexponential distribution was first made to us by Brice Ménard, who \n\nobserved it to be the case over all but the first layer of several \n\nnetworks. This is mildly surprising both because of how perfectly they \n\nseem to follow an exponential distribution, and also because one often \n\n looking at pre-ReLU values for 3b:379 activations, we see that both \n\npositive and negative values follow an exponential distribution. Since \n\nall negative values will be lifted to zero by the ReLU, 3b:379 \n\nactivations are sparse, with only 11% of stimuli across the dataset \n\ncausing activations.To understand different parts \n\nof this distribution qualitatively we can render a quilt of images by \n\n to activate different amounts. The quilt shows a pattern. Images that \n\ncause the strongest activations have curves that are similar to the \n\nneuron's feature visualization. Images that cause weakly positive \n\nactivations are imperfect curves, either too flat, off-orientation, or \n\nwith some other defect. Images that cause pre-ReLU activations near zero\n\n tend to be straight lines or images with no arcs, although some images \n\nare of curves about 45 degrees off orientation. 
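A sketch of the dataset statistics described above, assuming the unit's pre-ReLU activations have already been collected into a 1-D array (the collection step is model- and dataset-specific and omitted here).

```python
import numpy as np

def summarize_unit(pre_relu_acts):
    """How often does a unit fire, and how are its positive activations distributed?"""
    firing_rate = np.mean(pre_relu_acts > 0)        # fraction that survives the ReLU
    positive = pre_relu_acts[pre_relu_acts > 0]
    # For an exponential distribution the maximum-likelihood scale is just the mean,
    # so comparing a histogram of `positive` against this fit checks the claim above.
    exp_scale = positive.mean() if positive.size else float("nan")
    return {
        "firing_rate": float(firing_rate),
        "mean_pre_activation": float(np.mean(pre_relu_acts)),
        "exponential_scale_of_positive_tail": float(exp_scale),
    }
```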
Finally, the images that\n\n cause the strongest negative activations have curves with an \n\norientation more than 45 degrees away from the neuron's ideal curve.\n\n of images reveal patterns across a wide range of activations, but they \n\ncan be misleading. Since a neuron's activation to a receptive-field \n\nsized crop of an image is just a single number, we can't be sure which \n\npart of the image caused it. As a result, we could be fooled by spurious\n\n[Visualizing Attribution\n\n These methods attempt to describe which pixels or earlier neurons are \n\nresponsible for causing neurons to fire. In the general case of complex \n\nnon-linear functions, there is a lot of disagreement over which \n\nattribution methods are principled and whether they are reliable .\n\n But in the linear-case, attribution is generally agreed upon, with most\n\nSince\n\n a neuron’s pre-activation function and bias value is a linear function \n\nof neurons in the layer before it, we can use this generally agreed upon\n\n attribution method. In particular, curve detectors in 3b’s \n\npre-activation value is a linear function of 3a. The attribution tensor \n\ndescribing how all neurons in the previous layer influenced a given \n\nneuron is the activations pointwise multiplied by the weights.\n\nWe \n\nnormally use feature visualization to create a superstimulus that \n\nactivates a single neuron, but we can also use it to activate linear \n\ncombinations of neurons. By applying feature visualization to the \n\nattribution tensor, we are creating the stimulus that maximally \n\n to fire. Additionally, we will use the absolute value of the \n\nattribution tensor, which shows us features that caused the neuron to \n\nfire and also features that inhibited it. This can be useful for seeing \n\ncurve-related visual properties that influenced our cure neuron, even if\n\n that influence was to make it fire less.\n\n is the activations of the previous hidden layer. In practice, we find \n\nit helpful to parameterize these attribution visualizations to be \n\ngrayscale and transparent, making the visualization easier to read for \n\nnon-experts . Example code can be found in the notebook.\n\n the above experiment visualizes every neuron in 3a, attribution is a \n\npowerful and flexible tool that could be used to apply to studies of \n\ncircuits in a variety of ways. For instance, we could visualize how an \n\n before 3b, visualizing the image's activation vector and attribution \n\nvector to curve neurons at each family along the way. Each activation \n\nvector would show what a family saw in the image, and each attribution \n\nIn\n\n the next section we'll look at a less sophisticated technique for \n\nextracting information from dataset images: blindfolding yourself from \n\nseeing neuron activations and classifying images by hand.\n\n[Human Comparison\n\n----------------](#human-comparison)Nick\n\n Cammarata, an author of this paper, manually labelled over 800 images \n\ninto four groups: curve, imperfect curve, unrelated image, or opposing \n\n While labeling, Nick could only see the image's pixels, not additional \n\ninformation such as the neuron's activations or attribution \n\nvisualizations. He used the following rubric in labeling:\n\n* **Curves**:\n\n The Image has a curve with a similar orientation to the neuron's \n\nfeature visualization. 
The curve goes across most of the width of the \n\nimage.\n\n* **Imperfect Curve**: The image has a \n\ncurve that is similar to the neuron's feature visualization, but has at \n\nleast one significant defect. Perhaps it is too flat, has an angle \n\ninterrupting the arc, or the orientation is slightly off.\n\n* **Unrelated**: The image doesn't have a curve.\n\n during labeling Nick felt it was often difficult to place samples into \n\ngroups, as many images seemed to fall within the boundaries of the \n\nrubric. We were surprised when we saw that activations clearly separate \n\ninto different levels of activation..\n\n probability of each group by 3b:379 activation across our hand-labelled\n\n dataset of around 850 images. We see that the different human labels \n\nseparate into different ranges of activations.Still,\n\n there are many images that cause the neuron to activate but aren't \n\nclassified as curves or imperfect curves. When we visualize attribution \n\n examples that activate 3b:379 but were labelled \"unrelated\" by humans \n\noften contain subtle curves that are revealed by visualizing the image's\n\n that occurs when looking at one kind of stimulus for a long time. As a \n\nresult, he found it hard to tell whether subtle curves were simply \n\nperceptual illusions. By visualizing attribution, we can reveal the \n\ncurves that the neuron sees in the image, showing us curves that our \n\n**How important are different points on the activation spectrum?**\n\n seems to be highly selective for curve stimuli when it fires strongly, \n\nthis is only a tiny fraction of cases where it fires. Most of the time, \n\nit doesn't fire at all, and when it does it's usually very weakly. \n\nTo\n\n see this, we can look at the probability density over activation \n\nmagnitudes from all ImageNet examples, split into the same \n\nper-activation-magnitude (x-axis) ratio of classes as our hand labelled \n\ndataset.\n\n our hand-labelled dataset uniformly samples from activations, images of\n\n curves are rare within the dataset and 3b:379 activations follow an \n\nexponential distribution. In this plot we show 3b:379 activations split \n\ninto the conditional probabilities of different groups at a given \n\nactivation level within our hand-labelled dataset.From\n\n this perspective, we can't even see the cases where our neuron fires \n\nstrongly! Probability density exponentially decays as we move right, and\n\n so these activations are rare. To some extent, this is what we should \n\nexpect if these neurons really detect curves, since clear-cut curves \n\nrarely occur in images.\n\n only weakly fire or didn't fire, this graph seems to show that the \n\nmajority of stimuli classified as curves also fall in these cases, as a \n\nresult of neurons firing strongly being many orders of magnitude rarer. \n\nThis seems to be at least partly due to labeling error and the rarity of\n\n curves (see discussion later). But it makes things a bit hard to reason\n\n about. This is why we haven't provided a precision-recall curve: recall\n\n would be dominated by the cases where the neuron didn't fire strongly \n\nand be dominated by potential labeling error as a result.\n\nIt's not\n\n clear that probability density is really the right way to think about \n\nthe behavior of a neuron. The vast majority of cases are cases where the\n\n neuron didn't fire: are those actually important to think about? 
And if\n\n a neuron frequently barely fires, how important is that for \n\nunderstanding the role of a neuron in the network?\n\n This measure can be thought of as giving an approximation at how much \n\nthat activation value influences the output of the neuron, and by \n\nextension network behavior. There's still reason to think that high \n\nactivation cases may be disproportionately important beyond this (for \n\nexample, in max pooling only the highest value matters), but \n\ncontribution to expected value seems like a reasonable estimate.If\n\n one wanted to push further on exploring the importance of different \n\nparts of the activation spectrum, they might take some notion of \n\nattribution (methods for estimating the influence of one neuron on later\n\n neurons in a particular case) and estimate the contribution to the \n\nexpected value of the attribution to the logit. A simple version of this\n\n to the expected value of different activations, which shows how much \n\neach activation value influences the output of a neuron. Since curves \n\nare rare within the dataset, weak neuron activations contribute most to \n\n was really a curve detector in a meaningful sense. Even if it's highly \n\nselective when it fires strongly, how can that be what matters when it \n\nisn't even visible on a probability density plot? Contribution to \n\nexpected value shows us that even by a conservative measure, curves and \n\nimperfect curves form 55%. This seems consistent with the hypothesis \n\nthat it really is a curve detector, and the other stimuli causing it to \n\nfire are labeling errors or cases where noisy images cause the neuron to\n\n misfire.\n\n activations seem to correspond roughly to a human labelled judgement of\n\n whether images contain curves. Additionally, visualizing the \n\nattribution vector of these images tells us that the reason these images\n\n fire is because of the curves in the images, and we're not being fooled\n\n by spurious correlations. But these experiments are not enough to \n\ndefend the claim that curve neurons detect images of curves. Since \n\nimages of curves appear infrequently in the dataset, using it to \n\nsystematically study curve images is difficult. Our next few experiments\n\n will focus on this directly, studying how curve neurons respond to the \n\nspace of reasonable curve images.\n\n[Joint Tuning Curves\n\n-------------------](#joint-tuning-curves)Our\n\n first two experiments suggest that each curve detector responds to \n\ncurves at a different orientation. Our next experiment will help verify \n\nthat they really do detect rotated versions of the same feature, and \n\ncharacterize how sensitive each unit is to changes in orientation.\n\nWe do this by creating a **joint tuning curve**In\n\n neuroscience, tuning curves — charts of neural response to a continuous\n\n stimulus parameter — came to prominence in the early days of vision \n\nresearch. Observation of receptive fields and orientation-specific \n\nresponses in neurons gave rise to some of the earliest theories about \n\nhow low-level visual features might combine to create higher-level \n\nrepresentations. Since then they have been a mainstay technique in the \n\n collect dataset examples that maximally activate neuron. We rotate them\n\n by increments of 1 degree from 0 to 360 degrees and record activations.The\n\n activations are shifted so that the points where each neuron responds \n\nare aligned. 
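A minimal sketch of the rotation sweep just described, assuming a helper `unit_activation` that returns the unit's scalar response to a receptive-field-sized image; the interpolation settings are illustrative.

```python
import numpy as np
from scipy.ndimage import rotate

def rotation_tuning_curve(image, unit_activation, step_degrees=1):
    """Rotate one image in small increments and record a single unit's response."""
    angles = np.arange(0, 360, step_degrees)
    responses = []
    for angle in angles:
        rotated = rotate(image, angle, reshape=False, mode="nearest")
        responses.append(unit_activation(rotated))
    return angles, np.array(responses)
```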
The curves are then averaged to create a typical response \n\n tuning curves are useful for measuring neuron activations across \n\nperturbations in natural images, we're limited by the kinds of \n\nperturbations we can do on these images. In our next experiment we'll \n\nget access to a larger range of perturbations by rendering artificial \n\nstimuli from scratch.\n\n[Synthetic Curves\n\n----------------](#synthetic-curves)While\n\n the dataset gives us almost every imaginable curve, they don't come \n\nlabelled with data such as orientation or radius, making it hard to \n\nanswer questions that require systematically measuring responses to \n\nvisual properties. How sensitive are curve detectors to curvature? What \n\norientations do they respond to? Does it matter what colors are \n\ninvolved? One way to get more insight into these questions is to draw \n\nour own curves. Using synthetic stimuli like this is a common method in \n\nvisual neuroscience, and we've found it to also be very helpful in the \n\nstudy of artificial neural networks. The experiments in this section are\n\n specifically inspired by similar experiments probing for curve \n\ndetecting biological neurons .\n\nSince\n\n dataset examples suggest curve detectors are most sensitive to \n\norientation and curvature, we'll use them as parameters in our curve \n\nrenderer. We can use this to measure how changes in each property causes\n\n to fire. We find it helpful to present this as a heatmap, in order to \n\nget a higher resolution perspective on what causes the neuron to fire.\n\n find that simple drawings can be extraordinarily exciting. The curve \n\nimages that cause the strongest excitations — up to 24 standard \n\ndeviations above the average dataset activation! — have similar \n\norientation and curvature to the neuron's feature visualization.\n\nThe\n\n triangular geometry shows that curve detectors respond to a wider range\n\n of orientations in curves with higher curvature. This is because curves\n\n with more curvature contain more orientations. Consider that a line \n\ncontains no curve orientations and a circle contains every curve \n\norientation. Since the synthetic images closer to the top are closer to \n\nlines, their activations are more narrow.\n\nThe wisps show that tiny\n\n changes in orientation or curvature can cause dramatic changes in \n\nactivations, which indicate that curve detectors are fragile and \n\nnon-robust. Sadly, this is a more general problem across neuron \n\nVarying\n\n curves along just two variables reveals barely-perceptible \n\nperturbations that sway activations several standard deviations. This \n\nsuggests that the higher dimensional pixel-space contains more \n\npernicious exploits. We're excited about the research direction of \n\ncarefully studying neuron-specific adversarial attacks, particularly in \n\nearly vision. One benefit of studying early vision families is that it's\n\n tractible to follow the whole circuit back to the input, and this could\n\n be made simpler by extracting the important parts of a circuit and \n\nstudying it in isolation. Perhaps this simplified environment could give\n\n us clues into how to make neurons more robust or even protect whole \n\nmodels against adversarial attacks.\n\nIn addition to testing \n\norientation and curvature, we can also test other variants like whether \n\nthe curve shapes are filled, or if they have color. 
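A sketch of a synthetic-curve renderer of the kind described above: each stimulus is an arc whose orientation and radius (inverse curvature) are swept to build the activation heatmaps. The canvas size, arc span, and thickness are illustrative choices, not the article's exact rendering parameters.

```python
import numpy as np

def render_curve(orientation, radius, size=64, thickness=2.0):
    """Rasterize an arc that passes through the canvas center, facing `orientation` (radians)."""
    yy, xx = np.mgrid[:size, :size].astype(float)
    # Put the circle's center a distance `radius` from the canvas center...
    cy = size / 2 + radius * np.sin(orientation)
    cx = size / 2 + radius * np.cos(orientation)
    dist = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    angle = np.arctan2(yy - cy, xx - cx)
    # ...and keep the 90-degree span of the ring that faces back toward the canvas center.
    on_ring = np.abs(dist - radius) < thickness
    delta = (angle - (orientation + np.pi) + np.pi) % (2 * np.pi) - np.pi
    return (on_ring & (np.abs(delta) < np.pi / 4)).astype(float)

# Sweeping orientation and curvature (1 / radius) and recording a unit's activation
# for each rendered stimulus produces a heatmap like those described above:
# heat[i, j] = unit_activation(render_curve(orientations[i], radii[j]))
```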
Dataset analyses \n\nhints that curve detectors are invariant to cosmetic properties like \n\nlighting, and color, and we can confirm this with synthetic stimuli. \n\n----------------](#synthetic-angles)Both\n\n our synthetic curve experiments and dataset analysis show that although\n\n curves are sensitive to orientation, they have a wide tolerance for the\n\n radius of curves. At the extreme, curve neurons partially respond to \n\nedges in a narrow band of orientations, which can be seen as a curve \n\nwith infinite radius. This may cause us to think curve neurons actually \n\nrespond to lots of shapes with the right orientation, rather than curves\n\n specifically. While we cannot systematically render all possible \n\nshapes, we think angles are a good test case for studying this \n\nhypothesis.\n\nIn the following experiment we vary synthetic angles \n\nsimilarly to our synthetic curves, with radius on the y axis and \n\norientation across the x axis.\n\n activations form two distinct lines, with the strongest activations \n\nwhere they touch. Each line is where one of the two lines in the angle \n\naligns with the tangent of the curve. The two lines touch where the \n\nangle most similar to a curve with an orientation that matches the \n\nneuron's feature visualization. The weaker activations on the right side\n\n of the activations have the same cause, but with the inhibitory half of\n\n the angle stimulus facing outwards instead of inwards. \n\n![](Curve%20Detectors_files/figure-f4e54f9da80ab5da3ac54a096ca11be4.svg)The\n\n first stimuli we looked at were synthetic curves and the second stimuli\n\n was synthetic angles. In the next examples we show a series of stimuli \n\nthat transition from angles to curves. Each column's strongest \n\nactivation is stronger than the column before it since rounder stimuli \n\nare closer to curves, causing curve neurons to fire more strongly. \n\nAdditionally, as each stimulus becomes rounder, their “triangles of \n\nactivation\" become increasingly filled as the two lines from the \n\noriginal angle stimuli transition into a smooth arc.\n\n transition from angles on the left to curves on the right, making the \n\nstimuli rounder at each step. Each step we see the maximum activation \n\nfor each neuron increase, and the activation \"triangle\" fill in as the \n\n interface is useful for seeing how different curve neurons respond to \n\nchanges in multiple stimuli properties, but it's bulky. In the next \n\nsection we'll be exploring curve families across different layers, and \n\nit will be helpful to have a more compact way to view activations of a \n\ncurve neuron family. For this, we'll introduce a *radial tuning curve*.\n\n[Radial Tuning Curve\n\n---------------------------------](#the-curve-families-of-inceptionv1)So\n\n far we've been looking at a set of curve neurons in 3b. But InceptionV1\n\n actually contains curve neurons in four contiguous layers, with 3b \n\nbeing the third of these layers.\n\n which we sometimes shorten to \"2\", is the third convolutional layer in \n\nInceptionV1. It contains two types of curve detectors: concentric curves\n\n and combed edges.\n\nConcentric curves are small curve detectors \n\nthat have a preference for multiple curves at the same orientation with \n\nincreasing radii. 
We believe this feature has a role in the development \n\nof curve detectors in 3a and 3b that are tolerant of a wide range of \n\nradii.\n\n![](Curve%20Detectors_files/imgsappnick-personal1XjsJAYh6F.png)Combed\n\n edges detect several lines protruding perpendicularly from a larger \n\nline. These protruding lines also detect curves, making them a type of \n\ncurve detector. These neurons are used to construct later curve \n\ndetectors and play a part in the [combing effect](#combing-effect).\n\nLooking\n\n at conv2d2 activations we see that curves respond to one contiguous \n\nrange like the ones in 3b, but also weakly activate to a range on the \n\nopposite side, 180 degrees away. We call this secondary range **echoes**.\n\n 3a non-concentric curve detectors have formed. In many ways they \n\nresemble the curve detectors in 3b, and in the next article we'll see \n\nhow they're used to build 3b curves. One difference is that the 3a \n\ncurves have echoes.\n\nYou\n\n may notice that there are two large angular gaps at the top of the \n\nradial tuning curve for 3b, and smaller ones at the bottom. Why is that?\n\n 4a the network constructs many complex shapes such as spirals and \n\nboundary detectors, and it is also the first layer to construct 3d \n\ngeometry. It has several curve detectors, but we believe they are better\n\n thought of as corresponding to specific worldly objects rather than \n\n appears to be a upwards facing curve detector with confusing secondary \n\nbehavior at two other angles. But dataset examples reveal its secret: \n\nit's detecting the tops of cups and pans viewed from an angle. In this \n\nsense, it is better viewed as a tilted 3d circle detector.\n\n is a good example of how neural network interpretability can be \n\nsubjective. We usually think of abstract concepts like curves and \n\nworldly objects like coffee cups as belonging as different kinds of \n\nthings — and for most of the network they are separate. But there's a \n\ntransition period where we have to make a judgement call, and 4a is that\n\n transition.\n\n---------------------------](#repurposing-curve-detectors)We\n\n started studying curve neurons to better understand neural networks, \n\nnot because we were intrinsically interested in curves. But during our \n\ninvestigation we became aware that curve detection is important for \n\nfields like aerial imaging, self-driving cars, and medical research, and\n\n there's a breadth of literature from classical computer vision on curve\n\n detection in each domain. We've prototyped a technique that leverages \n\nthe curve neuron family to do a couple different curve related computer \n\nvision tasks.\n\nOne task is *curve extraction* \n\n , the task of highlighting the pixels of the image that are part of \n\ncurves. Visualizing attribution to curve neurons, as we've been doing in\n\n this article, can be seen as a form of curve extraction. Here we \n\ncompare it to the commonly used Canny edge detection algorithm on an \n\nx-ray of blood vessels known as an angiogram, taken from , Figure 2.1.\n\n attribution visualization clearly separates and illuminates the lines \n\nand curves, and displays less visual artifacts. However, it displays a \n\nstrong [combing effect](#combing-effect) — unwanted \n\nperpendicular lines emanating from the edge being traced. 
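For reference, the classical baseline mentioned above, Canny edge detection, can be run with OpenCV in a few lines; the blur kernel and thresholds here are illustrative defaults, not the settings used in the angiogram comparison.

```python
import cv2

def canny_baseline(image_path, low=50, high=150):
    """Classical edge/curve extraction baseline: Gaussian blur followed by Canny."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # light denoising before edge detection
    return cv2.Canny(blurred, low, high)
```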
We're unsure \n\nhow harmful these lines are in practice for this application, but we \n\nthink it's possible to remove them by editing the circuits of curve \n\nneurons.\n\nWe don't mean to suggest we've created a competitive \n\ncurve tracing algorithm. We haven't done a detailed comparison to state \n\nof the art curve detection algorithms, and believe it's likely that \n\nclassical algorithms tuned for precisely this goal outperform our \n\napproach. Instead, our goal here is to explore how leveraging internal \n\nneural network representations opens a vast space of visual operations, \n\nof which curve extraction is just one point.\n\n[### Spline Parameterization](#spline-parameterization)We\n\n can access more parts of this space by changing what we optimize. So \n\nfar we've been optimizing pixels, but we can also create a \n\nOcclusion![](Curve%20Detectors_files/source_006.png)Our\n\n splines can trace curves even if they have significant occlusion. \n\nFurthermore, we can use attribution to construct complex occlusion \n\nrules. For instance, we can strongly penalize our spline for overlapping\n\n with a particular object or texture, disincentivizing the spline from \n\n curve neurons are robust to a wide variety of natural visual features, \n\n seemingly unrelated visual operation is image segmentation. This can be\n\n done in an unsupervised way using non-negative matrix factorization \n\n(NMF) . We can \n\nvisualize attribution to each of these factors with our spline \n\nparameterization to trace the curves of different objects in the image.\n\n of factoring the activations of a single image, we can jointly \n\nfactorize lots of butterflies to find the neurons in the network that \n\nrespond to butterflies in general. One big difference between factoring \n\nactivations and normal image segmentation is that we get groups of \n\nneurons rather than pixels. These neuron groups can be applied to find \n\nbutterflies in images in general, and by composing this with \n\ndifferentiable spline parameterization we get a single optimization we \n\ncan apply to any image that automatically finds butterflies and gives us\n\n equations to splines that fit them.\n\n this above example we manipulated butterflies and curves without having\n\n to worry about the details of either. We delegated the intricacies of \n\nrecognizing butterflies of many species and orientations to the neurons,\n\n letting us work with the abstract concept of butterflies. \n\nWe \n\nthink this is one exciting way to fuse classical computer vision with \n\ndeep learning. There is plenty of low hanging fruit in extending the \n\ntechnique shown above, as our spline parameterization is just an early \n\nprototype and our optimizations are using a neural network that's half a\n\n decade old. However, we're more excited by investigations of how users \n\ncan explore the space between tasks than improvements in any particular \n\ntask. Once a task is set \n\nin stone, training a neural network for exactly that job will likely \n\ngive the best results. 
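As a concrete reference for the factorization step mentioned above, here is a sketch that applies scikit-learn's NMF to one image's (non-negative, post-ReLU) activation tensor; the factor count and initialization are illustrative choices.

```python
import numpy as np
from sklearn.decomposition import NMF

def factor_activations(acts, n_factors=4):
    """Split an (H, W, C) activation tensor into spatial maps and neuron groups."""
    h, w, c = acts.shape
    flat = acts.reshape(h * w, c)                       # one row per spatial position
    nmf = NMF(n_components=n_factors, init="nndsvda", max_iter=500)
    spatial = nmf.fit_transform(flat)                   # (H*W, n_factors) spatial maps
    groups = nmf.components_                            # (n_factors, C) neuron groups
    return spatial.reshape(h, w, n_factors), groups
```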
But real world tasks are rarely specified with \n\nprecision, and the harder challenge is to explore the space of tasks to \n\nfind which to commit to.\n\nFor instance, a more developed version of\n\n our algorithm that automatically finds the splines of butterflies in an\n\n image could be used as a basis for turning video footage of butterflies\n\n But an animator may wish to add texture neurons and change to a soft \n\nbrush parameterization to add a rotoscoping style to their animation. \n\nSince they have full access to every neuron in the render, they could \n\nmanipulate attribution to fur neuron families and specific dog breeds, \n\nchanging how fur is rendered on specific species of dogs across the \n\nentire movie. Since none of these algorithms require retraining a neural\n\n network or any training data, in-principle an animator could explore \n\nthis space of algorithms in real time, which is important because tight \n\nfeedback loops can be crucial in unlocking creative potential. \n\n[The Combing Phenomenon\n\n----------------------](#the-combing-phenomenon)One\n\n curious aspect of curve detectors is that they seem to be excited by \n\nsmall lines perpendicular to the curve, both inwards and outwards. You \n\ncan see this most easily by inspecting feature visualizations. We call \n\nthis phenomenon \"combing.\"\n\nCombing seems to occur across curve \n\ndetectors from many models, including models trained on Places365 \n\ninstead of ImageNet. In fact, there's some weak evidence it occurs in \n\nbiological neural networks as well: a team that ran a process similar to\n\n feature visualization on a biological neuron in a Macaque monkey's V4 \n\nregion of the visual cortex found a circular shape with outwardly \n\nprotruding lines to be one of the highest activating stimuli.\n\nOne\n\n hypothesis is that many important curves in the modern world have \n\nperpendicular lines, such as the spokes of a wheel or the markings on \n\nthe rim of a clock.\n\n related hypothesis is that combing might allow curve detectors to be \n\nused for fur detection in some contexts. Another hypothesis is that a \n\ncurve has higher \"contrast\" with perpendicular lines running towards. \n\nRecall that in the dataset examples, the strongest negative pre-ReLU \n\nactivations were curves at opposite orientations. If a curve detector \n\nwants to see a strong change in orientation between the curve and the \n\nspace around it, it may consider perpendicular lines to be more contrast\n\n than a solid color.\n\nFinally, we think it's possible that combing \n\nis really just a convenient way to implement curve detectors — a side \n\neffect of a shortcut in circuit construction rather than an \n\nintrinsically useful feature. In conv2d1, edge detectors are inhibited \n\nby perpendicular lines in conv2d0. One of the things a line or curve \n\ndetector needs to do is check that the image is not just a single \n\nrepeating texture, but that it has a strong line surrounded by contrast.\n\n It seems to do this by weakly inhibiting parallel lines alongside the \n\ntangent. Being excited by a perpendicular line may be an easy way to \n\nimplement a \"inhibit an excitatory neuron\" pattern which allows for \n\ncapped inhibition, without creating dedicated neurons at the previous \n\nlayer. \n\nCombing is not unique to curves. We also observe it in \n\nlines, and basically any shape feature like curves that is derivative of\n\n lines. A lot more work could be done exploring the combing phenomenon. 
\n\nWhy does combing form? Does it persist in adversarially robust models? \n\nIs it an example of what Ilyas et al call a \"non-robust feature\"? \n\n[Conclusion\n\n----------](#conclusion)Compared\n\n to fields like neuroscience, artificial neural networks make careful \n\ninvestigation easy. We can read and write to every weight in the neural \n\nnetwork, use gradients to optimize stimuli, and analyze billions of \n\nrealistic activations across a dataset. Composing these tools lets us \n\nrun a wide range of experiments that show us different perspectives on a\n\n neuron. If every perspective shows the same story, it's unlikely we're \n\nmissing something big. \n\nGiven this, it may seem odd to invest so \n\nmuch energy into just a handful of neurons. We agree. We first estimated\n\n it would take a week to understand the curve family. Instead, we spent \n\nmonths exploring the fractal of beauty and structure we found. \n\nMany\n\n paths led to new techniques for studying neurons in general, like \n\nsynthetic stimuli or using circuit editing to ablate neurons behavior. \n\nOthers are only relevant for some families, such as the equivariance \n\nmotif or our hand-trained “artificial artificial neural network\" that \n\nreimplements curve detectors. A couple were curve-specific, like \n\nexploring curve detectors as a type of curve analysis algorithms.\n\nIf\n\n our broader goal is fully reverse-engineer neural networks it may seem \n\nconcerning that studying just one family took so much effort. However, \n\nfrom our experience studying neuron families at a variety of depths, \n\n shows you feature visualizations, dataset examples, and soon weights in\n\n just a few seconds. Since feature visualization shows strong evidence \n\nof causal behavior and dataset examples show what neurons respond to in \n\npractice, these are collectively strong evidence of what a neuron does. 
\n\nIn fact, we understood the basics of curves at our first glance at them.\n\n \n\nWhile it's usually possible to understand the main function of a\n\n neuron family at a glance, researchers engaging in closer inquiry of \n\nneuron families will be rewarded with deeper beauty.\n\nWhen we started, we were nervous that 10 neurons was too narrow a topic \n\nfor a paper, but now we realize a complete investigation would take a \n\nbook.\n\n an experimental format collecting invited short articles and critical \n\ncommentary delving into the inner workings of neural networks.\n\n", "bibliography_bib": [{"title": "Visualizing and understanding convolutional networks"}, {"title": "Complex pattern selectivity in macaque primary visual cortex revealed by large-scale two-photon imaging"}, {"title": "Distributed representations of words and phrases and their compositionality"}, {"title": "Visualizing and understanding recurrent networks"}, {"title": "Learning to generate reviews and discovering sentiment"}, {"title": "Object detectors emerge in deep scene cnns"}, {"title": "Feature Visualization"}, {"title": "Network Dissection: Quantifying Interpretability of Deep Visual Representations"}, {"title": "On Interpretability and Feature Representations: An Analysis of the Sentiment Neuron"}, {"title": "Measuring the tendency of CNNs to Learn Surface Statistical Regularities"}, {"title": "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness"}, {"title": "Approximating cnns with bag-of-local-features models works surprisingly well on imagenet"}, {"title": "Adversarial examples are not bugs, they are features"}, {"title": "On the importance of single directions for generalization"}, {"title": "Visualizing higher-layer features of a deep network"}, {"title": "Deep inside convolutional networks: Visualising image classification models and saliency maps"}, {"title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images"}, {"title": "Inceptionism: Going deeper into neural networks"}, {"title": "Plug & play generative networks: Conditional iterative generation of images in latent space"}, {"title": "Axiomatic attribution for deep networks"}, {"title": "Grad-cam: Why did you say that? visual explanations from deep networks via gradient-based localization"}, {"title": "PatternNet and PatternLRP--Improving the interpretability of neural networks"}, {"title": "The (Un)reliability of saliency methods"}, {"title": "Sanity checks for saliency maps"}, {"title": "Differentiable Image Parameterizations"}, {"title": "An overview of early vision in inceptionv1"}, {"title": "Responses to contour features in macaque area V4"}, {"title": "Discrete neural clusters encode orientation, curvature and corners in macaque V4"}, {"title": "Curve tracing and curve detection in images"}, {"title": "Synthetic Abstractions"}, {"title": "Neural painters: A learned differentiable constraint for generating brushstroke paintings"}, {"title": "The Building Blocks of Interpretability"}, {"title": "Using Artificial Intelligence to Augment Human Intelligence"}], "filename": "Curve Detectors.html", "id": "9d1f1fef5d47cc7cc6784974eee22057"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Exploring Bayesian Optimization", "authors": ["Apoorv Agnihotri", "Nipun Batra"], "date_published": "2020-05-05", "abstract": " Many modern machine learning algorithms have a large number of hyperparameters. 
To effectively use these algorithms, we need to pick good hyperparameter values. In this article, we talk about Bayesian Optimization, a suite of techniques often used to tune hyperparameters. More generally, Bayesian Optimization can be used to optimize any black-box function. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00026", "text": "\n\n Many modern machine learning algorithms have a large number of \n\nhyperparameters. To effectively use these algorithms, we need to pick \n\ngood hyperparameter values.\n\n In this article, we talk about Bayesian Optimization, a suite of \n\ntechniques often used to tune hyperparameters. More generally, Bayesian \n\nOptimization can be used to optimize any black-box function.\n\n \n\nMining Gold!\n\n============\n\n For now, we assume that the gold is distributed about a line. We \n\nwant to find the location along this line with the maximum gold while \n\nonly drilling a few times (as drilling is expensive).\n\n \n\n \n\n![](Exploring%20Bayesian%20Optimization_files/GT.svg)\n\n Initially, we have no idea about the gold distribution. We can \n\nlearn the gold distribution by drilling at different locations. However,\n\n \n\n We now discuss two common objectives for the gold mining problem.\n\n \n\n* **Problem 1: Best Estimate of Gold Distribution (Active Learning)** \n\n **Active Learning**.\n\n* **Problem 2: Location of Maximum Gold (Bayesian Optimization)** \n\n In this problem, we want to find the location of the maximum \n\ngold content. We, again, can not drill at every location. Instead, we \n\n **Bayesian Optimization**.\n\n We will soon see how these two problems are related, but not the same.\n\n \n\nActive Learning\n\n---------------\n\n For many machine learning problems, unlabeled data is readily \n\navailable. However, labeling (or querying) is often expensive. As an \n\nexample, for a speech-to-text task, the annotation requires expert(s) to\n\n label words and sentences manually. Similarly, in our gold mining \n\nproblem, drilling (akin to labeling) is expensive. \n\n \n\n Active learning minimizes labeling costs while maximizing modeling\n\n accuracy. While there are various methods in active learning \n\nliterature, we look at **uncertainty reduction**. This \n\nmethod proposes labeling the point whose model uncertainty is the \n\nhighest. Often, the variance acts as a measure of uncertainty.\n\n \n\n for the values our function takes elsewhere. This surrogate should be \n\nflexible enough to model the true function. Using a Gaussian Process \n\n(GP) is a common choice, both because of its flexibility and its ability\n\n to give us uncertainty estimates\n\n \n\n Gaussian Process supports setting of priors by using specific \n\nkernels and mean functions. One might want to look at this excellent \n\nDistill article on Gaussian Processes to learn more. \n\n \n\n .\n\n \n\n \n\n .\n\n \n\n![](Exploring%20Bayesian%20Optimization_files/prior2posterior.png)\n\n Each new data point updates our surrogate model, moving it closer \n\nto the ground truth. The black line and the grey shaded region indicate \n\n \n\n \n\n However, we want to minimize the number of evaluations. Thus, we \n\nshould choose the next query point “smartly” using active learning. \n\nAlthough there are many ways to pick smart points, we will be picking \n\nthe most uncertain one.\n\n \n\n This gives us the following procedure for Active Learning:\n\n \n\n2. Train on the new training set\n\n3. 
Go to #1 till convergence or budget elapsed\n\n \n\n![](Exploring%20Bayesian%20Optimization_files/0_004.png)\n\nThe visualization shows that one can estimate the true \n\ndistribution in a few iterations. Furthermore, the most uncertain \n\npositions are often the farthest points from the current evaluation \n\n \n\nBayesian Optimization\n\n---------------------\n\n In the previous section, we picked points in order to determine an\n\n accurate model of the gold content. But what if our goal is simply to \n\nfind the location of maximum gold content? Of course, we could do active\n\n learning to estimate the true function accurately and then find its \n\nmaximum. But that seems pretty wasteful — why should we use evaluations \n\nimproving our estimates of regions where the function expects low gold \n\ncontent when we only care about the maximum?\n\n \n\n This is the core question in Bayesian Optimization: “Based on what\n\n we know so far, which point should we evaluate next?” Remember that \n\nevaluating each point is expensive, so we want to pick carefully! In the\n\n active learning case, we picked the most uncertain point, exploring the\n\n function. But in Bayesian Optimization, we need to balance exploring \n\nuncertain regions, which might unexpectedly have high gold content, \n\nagainst focusing on regions we already know have higher gold content (a \n\nkind of exploitation).\n\n \n\n We make this decision with something called an acquisition \n\nfunction. Acquisition functions are heuristics for how desirable it is \n\n \n\n This brings us to how Bayesian Optimization works. At every step, \n\nwe determine what the best point to evaluate next is according to the \n\nacquisition function by optimizing it. We then update our model and \n\nrepeat this process to determine the next point to evaluate.\n\n \n\n You may be wondering what’s “Bayesian” about Bayesian Optimization\n\n if we’re just optimizing these acquisition functions. Well, at every \n\nstep we maintain a model describing our estimates and uncertainty at \n\n \n\n### Formalizing Bayesian Optimization\n\n We present the general constraints in Bayesian Optimization and \n\n * [Youtube talk](https://www.youtube.com/watch?v=c4KKvyWW_Xk),\n\n.\n\n| General Constraints | Constraints in Gold Mining example |\n\n| --- | --- |\n\n| fff’s feasible set AAA is simple,\n\n| fff is continuous but lacks special structure,\n\n| fff is derivative-free:\n\n| fff is expensive to evaluate:\n\n the number of times we can evaluate it\n\n is severely limited. | Drilling is costly. |\n\nit is easy to incorporate normally distributed noise for GP regression). |\n\n To solve this problem, we will follow the following algorithm:\n\n \n\n### Acquisition Functions\n\n \n\n#### Probability of Improvement (PI)\n\n \n\nxt+1=argmax(αPI(x))=argmax(P(f(x)≥(f(x+)+ϵ)))\n\n x\\_{t+1} = argmax(\\alpha\\_{PI}(x)) = argmax(P(f(x) \\geq (f(x^+) +\\epsilon)))\n\n xt+1​=argmax(αPI​(x))=argmax(P(f(x)≥(f(x+)+ϵ)))\n\nxt+1=argmax(αPI(x))=argmax(P(f(x)≥(f(x+)+ϵ)))\n\n \\begin{aligned}\n\n x\\_{t+1} & = argmax(\\alpha\\_{PI}(x))\\\\\n\n & = argmax(P(f(x) \\geq (f(x^+) +\\epsilon)))\n\n \\end{aligned}\n\n xt+1​​=argmax(αPI​(x))=argmax(P(f(x)≥(f(x+)+ϵ)))​\n\n where, \n\n \n\n* P(⋅)P(\\cdot)P(⋅) indicates probability\n\n* ϵ\\epsilonϵ is a small positive number\n\n \n\n Looking closely, we are just finding the upper-tail probability \n\n(or the CDF) of the surrogate posterior. 
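To make this concrete, here is a minimal sketch of computing the PI acquisition from a Gaussian Process surrogate. It assumes scikit-learn's GaussianProcessRegressor and SciPy's normal CDF; the candidate grid, the toy drilled locations, and the function name are ours, chosen purely for illustration. Under a GP surrogate this is exactly the CDF expression discussed next.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor

    def probability_of_improvement(X_candidates, gp, f_best, eps=0.01):
        # Posterior mean and standard deviation of the surrogate at each candidate.
        mu, sigma = gp.predict(X_candidates, return_std=True)
        sigma = np.maximum(sigma, 1e-12)            # guard against zero variance
        # P(f(x) >= f(x+) + eps) under the Gaussian posterior.
        return norm.cdf((mu - f_best - eps) / sigma)

    # Toy 1D "gold mining" usage: fit the surrogate to a few drillings,
    # then evaluate wherever the probability of improvement is highest.
    X = np.array([[0.5], [2.0], [4.5]])             # drilled locations
    y = np.array([1.2, 3.4, 0.7])                   # observed gold content
    gp = GaussianProcessRegressor().fit(X, y)
    candidates = np.linspace(0.0, 6.0, 601).reshape(-1, 1)
    scores = probability_of_improvement(candidates, gp, f_best=y.max())
    x_next = candidates[np.argmax(scores)]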
Moreover, if we are using a GP \n\nas a surrogate the expression above converts to,\n\n \n\n where, \n\n \n\n* Φ(⋅)\\Phi(\\cdot)Φ(⋅) indicates the CDF\n\n The violet region shows the probability density at each point. The grey\n\n regions show the probability density below the current max. The “area” \n\nof the violet region at each point represents the “probability of \n\nimprovement over current maximum”. The next point to evaluate via the PI\n\n criteria (shown in dashed blue line) is x=6x = 6x=6.\n\n \n\n![](Exploring%20Bayesian%20Optimization_files/density_pi.png)\n\n##### Intuition behind ϵ\\epsilonϵ in PI\n\n PI uses ϵ\\epsilonϵ to strike a balance between exploration and exploitation. \n\n \n\n \n\n![](Exploring%20Bayesian%20Optimization_files/0_009.png)\n\n This observation also shows that we do not need to construct an \n\naccurate estimate of the black-box function to find its maximum.\n\n \n\n![](Exploring%20Bayesian%20Optimization_files/0_013.png)\n\n \n\n What happens if we increase ϵ\\epsilonϵ a bit more?\n\n \n\n![](Exploring%20Bayesian%20Optimization_files/0_012.png)\n\n We see that we made things worse! Our model now uses ϵ=3\\epsilon = 3ϵ=3,\n\n and we are unable to exploit when we land near the global maximum. \n\nMoreover, with high exploration, the setting becomes similar to active \n\nlearning.\n\n \n\n \n\n#### Expected Improvement (EI)\n\n The idea is fairly simple — choose the next query point as the one\n\n \n\n \n\nxt+1=argminxE(∣∣ht+1(x)−f(x⋆)∣∣ ∣ Dt)\n\n xt+1​=argminx​E(∣∣ht+1​(x)−f(x⋆)∣∣ ∣ Dt​)\n\n \n\n In essence, we are trying to select the point that minimizes the \n\ndistance to the objective evaluated at the maximum. Unfortunately, we do\n\n not know the ground truth function, fff. Mockus proposed\n\n the following acquisition function to overcome the issue.\n\n \n\nxt+1=argmaxxE(max{0, ht+1(x)−f(x+)} ∣ Dt)\n\n xt+1​=argmaxx​E(max{0, ht+1​(x)−f(x+)} ∣ Dt​)\n\nxt+1= argmaxxE(max{0, ht+1(x)−f(x+)} ∣ Dt)\n\n \\begin{aligned}\n\n x\\_{t+1} = \\ & argmax\\_x \\mathbb{E} \\\\\n\n & \\left( {max} \\{ 0, \\ h\\_{t+1}(x) - f(x^+) \\} \\ | \\ \\mathcal{D}\\_t \\right)\n\n \\end{aligned}\n\n xt+1​= ​argmaxx​E(max{0, ht+1​(x)−f(x+)} ∣ Dt​)​\n\n \n\nEI(x)={(μt(x)−f(x+)−ϵ)Φ(Z)+σt(x)ϕ(Z),if σt(x)>00,if σt(x)=0\n\n EI(x)=\n\n \\begin{cases}\n\n 0, & \\text{if}\\ \\sigma\\_t(x) = 0\n\n \\end{cases}\n\n EI(x)={(μt​(x)−f(x+)−ϵ)Φ(Z)+σt​(x)ϕ(Z),0,​if σt​(x)>0if σt​(x)=0​\n\nEI(x)={[(μt(x)−f(x+)−ϵ) σt(x)>0∗Φ(Z)]+σt(x)ϕ(Z),0, σt(x)=0\n\n EI(x)= \\begin{cases}\n\n [(\\mu\\_t(x) - f(x^+) - \\epsilon) & \\ \\sigma\\_t(x) > 0 \\\\\n\n \\quad \\* \\Phi(Z)] + \\sigma\\_t(x)\\phi(Z),\\\\\n\n 0, & \\ \\sigma\\_t(x) = 0\n\n \\end{cases}\n\n EI(x)=⎩⎪⎨⎪⎧​[(μt​(x)−f(x+)−ϵ)∗Φ(Z)]+σt​(x)ϕ(Z),0,​ σt​(x)>0 σt​(x)=0​\n\n where Φ(⋅)\\Phi(\\cdot)Φ(⋅) indicates CDF and ϕ(⋅)\\phi(\\cdot)ϕ(⋅) indicates pdf.\n\n \n\n \n\n \n\n![](Exploring%20Bayesian%20Optimization_files/0_002.png)\n\n We now increase ϵ\\epsilonϵ to explore more.\n\n \n\n![](Exploring%20Bayesian%20Optimization_files/0_010.png)\n\n As we expected, increasing the value to ϵ=0.3\\epsilon = 0.3ϵ=0.3\n\n makes the acquisition function explore more. Compared to the earlier \n\nevaluations, we see less exploitation. We see that it evaluates only two\n\n points near the global maxima.\n\n \n\n Let us increase ϵ\\epsilonϵ even more.\n\n \n\n![](Exploring%20Bayesian%20Optimization_files/0_005.png)\n\n \n\n#### PI vs Ei\n\n \n\n![](Exploring%20Bayesian%20Optimization_files/0.svg)\n\n dot is a point in the search space. 
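In the same spirit, here is a hedged sketch of the expected improvement computation written out above, with Z taken to be the standardized improvement (μ − f(x⁺) − ε) / σ; the function name and the numerical guard are ours. Both PI and EI are cheap to evaluate once the surrogate posterior is available, which is what makes optimizing them at every step practical.

    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, f_best, eps=0.01):
        # EI under a Gaussian surrogate posterior with mean `mu` and std `sigma`.
        mu = np.asarray(mu, dtype=float)
        sigma = np.asarray(sigma, dtype=float)
        improvement = mu - f_best - eps
        z = np.divide(improvement, sigma,
                      out=np.zeros_like(sigma), where=sigma > 0)
        ei = improvement * norm.cdf(z) + sigma * norm.pdf(z)
        return np.where(sigma > 0, ei, 0.0)         # EI is defined as 0 when sigma == 0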
Additionally, the training set used\n\n \n\n### Thompson Sampling\n\n Another common acquisition function is Thompson Sampling .\n\n At every step, we sample a function from the surrogate’s posterior and \n\noptimize it. For example, in the case of gold mining, we would sample a \n\nplausible distribution of the gold given the evidence and evaluate \n\n(drill) wherever it peaks.\n\n \n\n Below we have an image showing three sampled functions from the \n\nlearned surrogate posterior for our gold mining problem. The training \n\n \n\n![](Exploring%20Bayesian%20Optimization_files/thompson.svg)\n\n We can understand the intuition behind Thompson sampling by two observations:\n\n \n\n* Locations with high uncertainty (σ(x) \\sigma(x) σ(x))\n\n will show a large variance in the functional values sampled from the \n\nsurrogate posterior. Thus, there is a non-trivial probability that a \n\nsample can take high value in a highly uncertain region. Optimizing such\n\n samples can aid **exploration**.\n\n \n\n* The sampled functions must pass through the current max \n\nvalue, as there is no uncertainty at the evaluated locations. Thus, \n\n \n\n![](Exploring%20Bayesian%20Optimization_files/0.png)\n\nThe visualization above uses Thompson sampling for optimization. Again, \n\nwe can reach the global optimum in relatively few iterations.\n\n \n\n### Random\n\n We have been using intelligent acquisition functions until now.\n\n We can create a random acquisition function by sampling xxx\n\n randomly. \n\n![](Exploring%20Bayesian%20Optimization_files/0_014.png)\n\n The visualization above shows that the performance of the random \n\nacquisition function is not that bad! However, if our optimization was \n\nmore complex (more dimensions), then the random acquisition might \n\nperform poorly.\n\n### Summary of Acquisition Functions\n\n \n\n Let us now summarize the core ideas associated with acquisition \n\nfunctions: i) they are heuristics for evaluating the utility of a point;\n\n ii) they are a function of the surrogate posterior; iii) they combine \n\nexploration and exploitation; and iv) they are inexpensive to evaluate.\n\n#### Other Acquisition Functions\n\n \n\nWe have seen various acquisition functions until now. One \n\ntrivial way to come up with acquisition functions is to have a \n\nexplore/exploit combination.\n\n \n\n### Upper Confidence Bound (UCB)\n\n One such trivial acquisition function that combines the \n\nexploration/exploitation tradeoff is a linear combination of the mean \n\nand uncertainty of our surrogate model. The model mean signifies \n\nexploitation (of our model’s knowledge) and model uncertainty signifies \n\nexploration (due to our model’s lack of observations).\n\n α(x)=μ(x)+λ×σ(x)\\alpha(x) = \\mu(x) + \\lambda \\times \\sigma(x)α(x)=μ(x)+λ×σ(x)\n\n The intuition behind the UCB acquisition function is weighing \n\nof the importance between the surrogate’s mean vs. the surrogate’s \n\n \n\n We can further form acquisition functions by combining the \n\nexisting acquisition functions though the physical interpretability of \n\nsuch combinations might not be so straightforward. One reason we might \n\nwant to combine two methods is to overcome the limitations of the \n\nindividual methods.\n\n \n\n One such combination can be a linear combination of PI and EI.\n\n We know PI focuses on the probability of improvement, whereas \n\nEI focuses on the expected improvement. 
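As one concrete (and entirely hand-rolled) way to realize this idea, the sketch below mixes the two scores with a weight λ; the rescaling of EI and the default values are our own choices, not a standard recipe.

    import numpy as np
    from scipy.stats import norm

    def blended_acquisition(mu, sigma, f_best, lam=0.5, eps=0.01):
        # lam * PI + (1 - lam) * EI, computed from the GP posterior (mu, sigma).
        mu = np.asarray(mu, dtype=float)
        sigma = np.maximum(np.asarray(sigma, dtype=float), 1e-12)
        z = (mu - f_best - eps) / sigma
        pi = norm.cdf(z)                                              # probability of improvement
        ei = (mu - f_best - eps) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
        ei = ei / (np.max(ei) + 1e-12)       # crude rescaling so the two terms are comparable
        return lam * pi + (1.0 - lam) * ei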
Such a combination could help in\n\n having a tradeoff between the two based on the value of λ\\lambdaλ.\n\n \n\n### Gaussian Process Upper Confidence Bound (GP-UCB)\n\n a−ba -\n\n GP-UCB’s formulation is given by:\n\n \n\nαGP−UCB(x)=μt(x)+βtσt(x)\n\n \\alpha\\_{GP-UCB}(x) = \\mu\\_t(x) + \\sqrt{\\beta\\_t}\\sigma\\_t(x)\n\n αGP−UCB​(x)=μt​(x)+βt​\n\n​σt​(x)\n\n Where ttt is the timestep.\n\n \n\n \n\n### Comparison\n\n slides from Nando De Freitas. We have used the \n\noptimum hyperparameters for each acquisition function.\n\n We ran the random acquisition function several times with \n\ndifferent seeds and plotted the mean gold sensed at every iteration.\n\n \n\n![](Exploring%20Bayesian%20Optimization_files/comp.svg)\n\n strategy grows slowly. In comparison, the other acquisition functions \n\ncan find a good solution in a small number of iterations. In fact, most \n\nacquisition functions reach fairly close to the global maxima in as few \n\nas three iterations.\n\n \n\nHyperparameter Tuning\n\n---------------------\n\nBefore we talk about Bayesian optimization for hyperparameter tuning,\n\n we will quickly differentiate between hyperparameters and parameters: \n\nhyperparameters are set before learning and the parameters are learned \n\nfrom the data. To illustrate the difference, we take the example of \n\nRidge regression.\n\n \n\nθ^ridge=argminθ ∈ Rp∑i=1n(yi−xiTθ)2+λ∑j=1pθj2\n\n \\hat{\\theta}\\_{ridge} = argmin\\_{\\theta\\ \\in \\ \\mathbb{R}^p} \n\n\\sum\\limits\\_{i=1}^{n} \\left(y\\_i - x\\_i^T\\theta \\right)^2 + \\lambda \n\n\\sum\\limits\\_{j=1}^{p} \\theta^2\\_j\n\n θ^ridge​=argminθ ∈ Rp​i=1∑n​(yi​−xiT​θ)2+λj=1∑p​θj2​\n\nθ^ridge=argminθ ∈ Rp∑i=1n(yi−xiTθ)2+λ∑j=1pθj2\n\n \\begin{aligned}\n\n \\hat{\\theta}\\_{ridge} = & argmin\\_{\\theta\\ \\in \\ \\mathbb{R}^p}\n\n \\sum\\limits\\_{i=1}^{n} \\left(y\\_i - x\\_i^T\\theta \\right)^2 \\\\\n\n & + \\lambda \\sum\\limits\\_{j=1}^{p} \\theta^2\\_j\n\n \\end{aligned}\n\n θ^ridge​=​argminθ ∈ Rp​i=1∑n​(yi​−xiT​θ)2+λj=1∑p​θj2​​\n\n If we solve the above regression problem via gradient descent \n\noptimization, we further introduce another optimization parameter, the \n\nlearning rate α\\alphaα.\n\n \n\nWhen training a model is not expensive and time-consuming, we \n\ncan do a grid search to find the optimum hyperparameters. However, grid \n\nsearch is not feasible if function evaluations are costly, as in the \n\ncase of a large neural network that takes days to train. Further, grid \n\nsearch scales poorly in terms of the number of hyperparameters.\n\n \n\n### Example 1 — Support Vector Machine (SVM)\n\nIn this example, we use an SVM to classify on sklearn’s moons \n\ndataset and use Bayesian Optimization to optimize SVM hyperparameters.\n\n \n\n Let us have a look at the dataset now, which has two classes and two features.\n\n![](Exploring%20Bayesian%20Optimization_files/moons.svg)\n\n the surface plots you see for the Ground Truth Accuracies below were \n\ncalculated for each possible hyperparameter for showcasing purposes \n\nonly. We do not have these values in real applications.\n\n \n\n![](Exploring%20Bayesian%20Optimization_files/0_003.png)\n\n![](Exploring%20Bayesian%20Optimization_files/0_006.png)\n\n### Comparison\n\n \n\n![](Exploring%20Bayesian%20Optimization_files/comp3d.svg)\n\n method seemed to perform much better initially, but it could not reach \n\nthe global optimum, whereas Bayesian Optimization was able to get fairly\n\n close. 
The initial subpar performance of Bayesian Optimization can be \n\nattributed to the initial exploration.\n\n \n\n#### Other Examples\n\n### Example 2 — Random Forest\n\n Using Bayesian Optimization in a Random Forest Classifier.\n\n We will continue now to train a Random Forest on the moons \n\ndataset we had used previously to learn the Support Vector Machine \n\nmodel. The primary hyperparameters of Random Forests we would like to \n\noptimize our accuracy are the **number** of\n\n \n\n \n\n We will be again using Gaussian Processes with Matern kernel \n\nto estimate and predict the accuracy function over the two \n\nhyperparameters.\n\n \n\n![](Exploring%20Bayesian%20Optimization_files/0_007.png)\n\n \n\n![](Exploring%20Bayesian%20Optimization_files/0_015.png)\n\n![](Exploring%20Bayesian%20Optimization_files/0_008.png)\n\n \n\n![](Exploring%20Bayesian%20Optimization_files/0_011.png)\n\nLet us now use the Random acquisition function.\n\n![](Exploring%20Bayesian%20Optimization_files/RFcomp3d.svg)\n\n The optimization strategies seemed to struggle in this \n\nexample. This can be attributed to the non-smooth ground truth. This \n\nshows that the effectiveness of Bayesian Optimization depends on the \n\nsurrogate’s efficiency to model the actual black-box function. It is \n\ninteresting to notice that the Bayesian Optimization framework still \n\nbeats the *random* strategy using various acquisition functions.\n\n \n\n### Example 3 — Neural Networks\n\n Let us take this example to get an idea of how to apply \n\n which also provides us support for optimizing function with a search \n\nspace of categorical, integral, and real variables. We will not be \n\nplotting the ground truth here, as it is extremely costly to do so. \n\nBelow are some code snippets that show the ease of using Bayesian \n\nOptimization packages for hyperparameter tuning.\n\n \n\n The code initially declares a search space for the \n\noptimization problem. We limit the search space to be the following:\n\n \n\n* batch\\_size — This hyperparameter sets the number of \n\ntraining examples to combine to find the gradients for a single step in \n\ngradient descent. \n\n \n\n* learning rate — This hyperparameter sets the stepsize with\n\n which we will perform gradient descent in the neural network. \n\n* activation — We will have one categorical variable, i.e. \n\nthe activation to apply to our neural network layers. This variable can \n\ntake on values in the set {relu, sigmoid}\\{ relu, \\ sigmoid \\}{relu, sigmoid}.\n\n log\\_batch\\_size = Integer(\n\n low=2,\n\n high=7,\n\n name='log\\_batch\\_size'\n\n )\n\n lr = Real(\n\n low=1e-6,\n\n high=1e0,\n\n prior='log-uniform',\n\n name='lr'\n\n )\n\n activation = Categorical(\n\n categories=['relu', 'sigmoid'],\n\n name='activation'\n\n )\n\n dimensions = [\n\n dim\\_num\\_batch\\_size\\_to\\_base,\n\n dim\\_learning\\_rate,\n\n dim\\_activation\n\n ]\n\n \n\n \n\n # initial parameters (1st point)\n\n default\\_parameters = \n\n [4, 1e-1, 'relu']\n\n # bayesian optimization\n\n search\\_result = gp\\_minimize(\n\n func=train,\n\n dimensions=dimensions,\n\n acq\\_func='EI', # Expctd Imprv.\n\n n\\_calls=11,\n\n x0=default\\_parameters\n\n )\n\n \n\n![](Exploring%20Bayesian%20Optimization_files/conv.svg)\n\n \n\n Looking at the above example, we can see that incorporating \n\nBayesian Optimization is not difficult and can save a lot of time. \n\nOptimizing to get an accuracy of nearly one in around seven iterations \n\n Let us get the numbers into perspective. 
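For reference, the snippets above fit together roughly as follows. This is a sketch rather than the authors' exact script: the `train` body is a placeholder for the real network-training objective, we unify the dimension variable names (the original declares `log_batch_size`, `lr`, and `activation` but lists them under different names), and `random_state` is added only for reproducibility.

    import numpy as np
    from skopt import gp_minimize
    from skopt.space import Categorical, Integer, Real

    # Search space for the three hyperparameters described above.
    dimensions = [
        Integer(low=2, high=7, name='log_batch_size'),
        Real(low=1e-6, high=1e0, prior='log-uniform', name='lr'),
        Categorical(categories=['relu', 'sigmoid'], name='activation'),
    ]

    def train(params):
        # Placeholder objective: a real implementation would build the network
        # with these hyperparameters, train it, and return -validation_accuracy
        # (gp_minimize minimizes, so the accuracy is negated).
        log_batch_size, lr, activation = params
        accuracy = np.random.rand()
        return -accuracy

    search_result = gp_minimize(
        func=train,
        dimensions=dimensions,
        acq_func='EI',              # Expected Improvement
        n_calls=11,
        x0=[4, 1e-1, 'relu'],       # initial point, as in the snippet above
        random_state=0,
    )
    best_hyperparameters = search_result.x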
If we had run this \n\n iterations. Whereas Bayesian Optimization only took seven iterations. \n\nEach iteration took around fifteen minutes; this sets the time required \n\nfor the grid search to complete around seventeen hours!\n\n \n\nConclusion and Summary\n\n======================\n\n In this article, we looked at Bayesian Optimization for \n\noptimizing a black-box function. Bayesian Optimization is well suited \n\nwhen the function evaluations are expensive, making grid or exhaustive \n\nsearch impractical. We looked at the key components of Bayesian \n\nOptimization. First, we looked at the notion of using a surrogate \n\nfunction (with a prior over the space of objective functions) to model \n\nour black-box function. Next, we looked at the “Bayes” in Bayesian \n\nOptimization — the function evaluations are used as data to obtain the \n\nsurrogate posterior. We look at acquisition functions, which are \n\nfunctions of the surrogate posterior and are optimized sequentially. \n\nThis new sequential optimization is in-expensive and thus of utility of \n\nus. We also looked at a few acquisition functions and showed how these \n\ndifferent functions balance exploration and exploitation. Finally, we \n\nlooked at some practical examples of Bayesian Optimization for \n\noptimizing hyper-parameters for machine learning models.\n\n \n\n \n\n", "bibliography_bib": [{"title": "A statistical approach to some basic mine valuation problems on the Witwatersrand "}, {"title": "Active Learning Literature Survey"}, {"title": "Active learning: theory and applications"}, {"title": "Taking the Human Out of the Loop: A Review of Bayesian Optimization"}, {"title": "A\n Tutorial on Bayesian Optimization of Expensive Cost Functions, with \nApplication to Active User Modeling and Hierarchical Reinforcement \nLearning"}, {"title": "A Visual Exploration of Gaussian Processes"}, {"title": "Gaussian Processes in Machine Learning"}, {"title": "Bayesian approach to global optimization and application to multiobjective and constrained problems"}, {"title": "On The Likelihood That One Unknown Probability Exceeds Another In View Of The Evidence Of Two Samples"}, {"title": "Using Confidence Bounds for Exploitation-Exploration Trade-Offs"}, {"title": "Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design"}, {"title": "Practical Bayesian Optimization of Machine Learning Algorithms"}, {"title": "Algorithms for Hyper-Parameter Optimization"}, {"title": "Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures"}, {"title": "Scikit-learn: Machine Learning in {P}ython"}, {"title": "Bayesian Optimization with Gradients"}, {"title": "Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization"}, {"title": "Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets"}, {"title": "Safe Exploration for Optimization with Gaussian Processes"}, {"title": "Scalable Bayesian Optimization Using Deep Neural Networks"}, {"title": "Portfolio Allocation for Bayesian Optimization"}, {"title": "Bayesian Optimization for Sensor Set Selection"}, {"title": "Constrained Bayesian Optimization with Noisy Experiments"}, {"title": "Parallel Bayesian Global Optimization of Expensive Functions"}], "filename": "Exploring Bayesian Optimization.html", "id": "0886a498d4ade49a570900e7cec49e9b"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Feature-wise transformations", "authors": 
["Vincent Dumoulin", "Ethan Perez", "Nathan Schucher", "Florian Strub", "Harm de Vries", "Aaron Courville", "Yoshua Bengio"], "date_published": "2018-07-09", "abstract": " Many real-world problems require integrating multiple sources of information. Sometimes these problems involve multiple, distinct modalities of information — vision, language, audio, etc. — as is required to understand a scene in a movie or answer a question about an image. Other times, these problems involve multiple sources of the same kind of input, i.e. when summarizing several documents or drawing one image in the style of another. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00011", "text": "\n\n Many real-world problems require integrating multiple sources of information.\n\n Sometimes these problems involve multiple, distinct modalities of\n\n information — vision, language, audio, etc. — as is required\n\n to understand a scene in a movie or answer a question about an image.\n\n Other times, these problems involve multiple sources of the same\n\n kind of input, i.e. when summarizing several documents or drawing one\n\n image in the style of another.\n\n \n\n When approaching such problems, it often makes sense to process one source\n\n of information *in the context of* another; for instance, in the\n\n right example above, one can extract meaning from the image in the context\n\n of the question. In machine learning, we often refer to this context-based\n\n processing as *conditioning*: the computation carried out by a model\n\n is conditioned or *modulated* by information extracted from an\n\n auxiliary input.\n\n \n\n Finding an effective way to condition on or fuse sources of information\n\n is an open research problem, and\n\n \n\n in this article, we concentrate on a specific family of approaches we call\n\n *feature-wise transformations*.\n\n \n\n We will examine the use of feature-wise transformations in many neural network\n\n architectures to solve a surprisingly large and diverse set of problems;\n\n \n\n their success, we will argue, is due to being flexible enough to learn an\n\n effective representation of the conditioning input in varied settings.\n\n In the language of multi-task learning, where the conditioning signal is\n\n taken to be a task description, feature-wise transformations\n\n learn a task representation which allows them to capture and leverage the\n\n relationship between multiple sources of information, even in remarkably\n\n different problem settings.\n\n \n\n---\n\nFeature-wise transformations\n\n----------------------------\n\n To motivate feature-wise transformations, we start with a basic example,\n\n where the two inputs are images and category labels, respectively. For the\n\n purpose of this example, we are interested in building a generative model of\n\n images of various classes (puppy, boat, airplane, etc.). The model takes as\n\n input a class and a source of random noise (e.g., a vector sampled from a\n\n normal distribution) and outputs an image sample for the requested class.\n\n \n\n Our first instinct might be to build a separate model for each\n\n class. 
For a small number of classes this approach is not too bad a solution,\n\n but for thousands of classes, we quickly run into scaling issues, as the number\n\n of parameters to store and train grows with the number of classes.\n\n We are also missing out on the opportunity to leverage commonalities between\n\n classes; for instance, different types of dogs (puppy, terrier, dalmatian,\n\n etc.) share visual traits and are likely to share computation when\n\n mapping from the abstract noise vector to the output image.\n\n \n\n Now let’s imagine that, in addition to the various classes, we also need to\n\n model attributes like size or color. In this case, we can’t\n\n reasonably expect to train a separate network for *each* possible\n\n conditioning combination! Let’s examine a few simple options.\n\n \n\n A quick fix would be to concatenate a representation of the conditioning\n\n information to the noise vector and treat the result as the model’s input.\n\n This solution is quite parameter-efficient, as we only need to increase\n\n Maybe this assumption is correct, or maybe it’s not; perhaps the\n\n model does not need to incorporate the conditioning information until late\n\n into the generation process (e.g., right before generating the final pixel\n\n carry this information around unaltered for many layers.\n\n \n\n Because this operation is cheap, we might as well avoid making any such\n\n assumptions and concatenate the conditioning representation to the input of\n\n *all* layers in the network. Let’s call this approach\n\n *concatenation-based conditioning*.\n\n \n\n Another efficient way to integrate conditioning information into the network\n\n is via *conditional biasing*, namely, by adding a *bias* to\n\n the hidden layers based on the conditioning representation.\n\n \n\n Interestingly, conditional biasing can be thought of as another way to\n\n implement concatenation-based conditioning. Consider a fully-connected\n\n linear layer applied to the concatenation of an input\n\n x\\mathbf{x}x and a conditioning representation\n\n z\\mathbf{z}z:\n\n \n\n The same argument applies to convolutional networks, provided we ignore\n\n the border effects due to zero-padding.\n\n \n\n Yet another efficient way to integrate class information into the network is\n\n via *conditional scaling*, i.e., scaling hidden layers\n\n based on the conditioning representation.\n\n \n\n A special instance of conditional scaling is feature-wise sigmoidal gating:\n\n we scale each feature by a value between 000 and\n\n 111 (enforced by applying the logistic function), as a\n\n function of the conditioning representation. Intuitively, this gating allows\n\n the conditioning information to select which features are passed forward\n\n and which are zeroed out.\n\n \n\n Given that both additive and multiplicative interactions seem natural and\n\n intuitive, which approach should we pick? One argument in favor of\n\n *multiplicative* interactions is that they are useful in learning\n\n relationships between inputs, as these interactions naturally identify\n\n “matches”: multiplying elements that agree in sign yields larger values than\n\n multiplying elements that disagree. 
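A tiny numerical illustration of this "matching" behaviour (the vectors are arbitrary):

    import numpy as np

    a          = np.array([ 1.0, -0.5,  2.0])
    b_agree    = np.array([ 0.8, -0.7,  1.5])   # signs agree with a
    b_disagree = np.array([-0.8,  0.7, -1.5])   # signs disagree with a

    print(np.sum(a * b_agree))      #  4.15 -- large: the inputs "match"
    print(np.sum(a * b_disagree))   # -4.15 -- small: the inputs disagree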
This property is why dot products are\n\n often used to determine how similar two vectors are.\n\n \n\n Multiplicative interactions alone have had a history of success in various\n\n domains — see [Bibliographic Notes](#bibliographic-notes).\n\n \n\n One argument in favor of *additive* interactions is that they are\n\n more natural for applications that are less strongly dependent on the\n\n joint values of two inputs, like feature aggregation or feature detection\n\n (i.e., checking if a feature is present in either of two inputs).\n\n \n\n In the spirit of making as few assumptions about the problem as possible,\n\n we may as well combine *both* into a\n\n conditional *affine transformation*.\n\n \n\n An affine transformation is a transformation of the form\n\n y=m∗x+by = m \\* x + by=m∗x+b.\n\n \n\n All methods outlined above share the common trait that they act at the\n\n *feature* level; in other words, they leverage *feature-wise*\n\n interactions between the conditioning representation and the conditioned\n\n network. It is certainly possible to use more complex interactions,\n\n but feature-wise interactions often strike a happy compromise between\n\n effectiveness and efficiency: the number of scaling and/or shifting\n\n coefficients to predict scales linearly with the number of features in the\n\n network. Also, in practice, feature-wise transformations (often compounded\n\n across multiple layers) frequently have enough capacity to model complex\n\n phenomenon in various settings.\n\n \n\n Lastly, these transformations only enforce a limited inductive bias and\n\n remain domain-agnostic. This quality can be a downside, as some problems may\n\n be easier to solve with a stronger inductive bias. However, it is this\n\n characteristic which also enables these transformations to be so widely\n\n effective across problem domains, as we will later review.\n\n \n\n### Nomenclature\n\n To continue the discussion on feature-wise transformations we need to\n\n abstract away the distinction between multiplicative and additive\n\n interactions. Without losing generality, let’s focus on feature-wise affine\n\n transformations, and let’s adopt the nomenclature of Perez et al.\n\n , which formalizes conditional affine\n\n transformations under the acronym *FiLM*, for Feature-wise Linear\n\n Modulation.\n\n \n\n Strictly speaking, *linear* is a misnomer, as we allow biasing, but\n\n we hope the more rigorous-minded reader will forgive us for the sake of a\n\n better-sounding acronym.\n\n \n\n We say that a neural network is modulated using FiLM, or *FiLM-ed*,\n\n after inserting *FiLM layers* into its architecture. These layers are\n\n parametrized by some form of conditioning information, and the mapping from\n\n conditioning information to FiLM parameters (i.e., the shifting and scaling\n\n coefficients) is called the *FiLM generator*.\n\n In other words, the FiLM generator predicts the parameters of the FiLM\n\n layers based on some auxiliary input.\n\n Note that the FiLM parameters are parameters in one network but predictions\n\n from another network, so they aren’t learnable parameters with fixed\n\n weights as in the fully traditional sense.\n\n For simplicity, you can assume that the FiLM generator outputs the\n\n concatenation of all FiLM parameters for the network architecture.\n\n \n\n As the name implies, a FiLM layer applies a feature-wise affine\n\n transformation to its input. 
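To make the operation concrete, here is a minimal NumPy sketch of a FiLM layer acting on convolutional feature maps, together with a deliberately trivial linear FiLM generator; the shapes, names, and toy conditioning vector are ours.

    import numpy as np

    def film(x, gamma, beta):
        # x: (N, C, H, W) feature maps; gamma, beta: (N, C), one scale and one
        # shift per feature map, broadcast over the spatial dimensions.
        return gamma[:, :, None, None] * x + beta[:, :, None, None]

    def film_generator(z, w, b):
        # A toy generator: a single linear map from the conditioning input z
        # to the concatenated FiLM parameters (gamma, beta).
        params = z @ w + b                           # shape (N, 2 * C)
        gamma, beta = np.split(params, 2, axis=-1)   # each of shape (N, C)
        return gamma, beta

    rng = np.random.default_rng(0)
    x = rng.normal(size=(2, 8, 16, 16))              # N=2 inputs, C=8 feature maps
    z = rng.normal(size=(2, 4))                      # conditioning input (e.g. a question embedding)
    w, b = rng.normal(size=(4, 16)), np.zeros(16)    # maps 4 conditioning features to 2 * 8 parameters
    modulated = film(x, *film_generator(z, w, b))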
By *feature-wise*, we mean that scaling\n\n and shifting are applied element-wise, or in the case of convolutional\n\n networks, feature map -wise.\n\n \n\n To expand a little more on the convolutional case, feature maps can be\n\n thought of as the same feature detector being evaluated at different\n\n spatial locations, in which case it makes sense to apply the same affine\n\n transformation to all spatial locations.\n\n \n\n In other words, assuming x\\mathbf{x}x is a FiLM layer’s\n\n input, z\\mathbf{z}z is a conditioning input, and\n\n γ\\gammaγ and β\\betaβ are\n\n z\\mathbf{z}z-dependent scaling and shifting vectors,\n\n FiLM(x)=γ(z)⊙x+β(z).\n\n \\textrm{FiLM}(\\mathbf{x}) = \\gamma(\\mathbf{z}) \\odot \\mathbf{x}\n\n + \\beta(\\mathbf{z}).\n\n FiLM(x)=γ(z)⊙x+β(z).\n\n You can interact with the following fully-connected and convolutional FiLM\n\n layers to get an intuition of the sort of modulation they allow:\n\n \n\n In addition to being a good abstraction of conditional feature-wise\n\n transformations, the FiLM nomenclature lends itself well to the notion of a\n\n *task representation*. From the perspective of multi-task learning,\n\n we can view the conditioning signal as the task description. More\n\n specifically, we can view the concatenation of all FiLM scaling and shifting\n\n coefficients as both an instruction on *how to modulate* the\n\n conditioned network and a *representation* of the task at hand. We\n\n will explore and illustrate this idea later on.\n\n \n\n---\n\nFeature-wise transformations in the literature\n\n----------------------------------------------\n\n Feature-wise transformations find their way into methods applied to many\n\n problem settings, but because of their simplicity, their effectiveness is\n\n seldom highlighted in lieu of other novel research contributions. Below are\n\n a few notable examples of feature-wise transformations in the literature,\n\n grouped by application domain. The diversity of these applications\n\n underscores the flexible, general-purpose ability of feature-wise\n\n interactions to learn effective task representations.\n\n \n\nexpand all\n\nVisual question-answering+\n\n Perez et al. use\n\n FiLM layers to build a visual reasoning model\n\n trained on the CLEVR dataset to\n\n answer multi-step, compositional questions about synthetic images.\n\n \n\n The model’s linguistic pipeline is a FiLM generator which\n\n extracts a question representation that is linearly mapped to\n\n FiLM parameter values. Using these values, FiLM layers inserted within each\n\n residual block condition the visual pipeline. The model is trained\n\n end-to-end on image-question-answer triples. Strub et al.\n\n later on improved on the model by\n\n using an attention mechanism to alternate between attending to the language\n\n input and generating FiLM parameters layer by layer. This approach was\n\n better able to scale to settings with longer input sequences such as\n\n dialogue and was evaluated on the GuessWhat?! \n\n and ReferIt datasets.\n\n \n\n de Vries et al. leverage FiLM\n\n to condition a pre-trained network. Their model’s linguistic pipeline\n\n modulates the visual pipeline via conditional batch normalization,\n\n real-world images on the GuessWhat?! \n\n and VQAv1 datasets.\n\n \n\n The visual pipeline consists of a pre-trained residual network that is\n\n fixed throughout training. 
The linguistic pipeline manipulates the visual\n\n pipeline by perturbing the residual network’s batch normalization\n\n parameters, which re-scale and re-shift feature maps after activations\n\n have been normalized to have zero mean and unit variance. As hinted\n\n earlier, conditional batch normalization can be viewed as an instance of\n\n FiLM where the post-normalization feature-wise affine transformation is\n\n replaced with a FiLM layer.\n\n \n\nStyle transfer+\n\n Dumoulin et al. use\n\n feature-wise affine transformations — in the form of conditional\n\n instance normalization layers — to condition a style transfer\n\n network on a chosen style image. Like conditional batch normalization\n\n discussed in the previous subsection,\n\n conditional instance normalization can be seen as an instance of FiLM\n\n where a FiLM layer replaces the post-normalization feature-wise affine\n\n instance normalization parameters, and it applies normalization with these\n\n style-specific parameters.\n\n \n\n Dumoulin et al. use a simple\n\n embedding lookup to produce instance normalization parameters, while\n\n Ghiasi et al. further\n\n introduce a *style prediction network*, trained jointly with the\n\n style transfer network to predict the conditioning parameters directly from\n\n a given style image. In this article we opt to use the FiLM nomenclature\n\n because it is decoupled from normalization operations, but the FiLM\n\n layers used by Perez et al. were\n\n themselves heavily inspired by the conditional normalization layers used\n\n by Dumoulin et al. .\n\n \n\n Yang et al. use a related\n\n architecture for video object segmentation — the task of segmenting a\n\n particular object throughout a video given that object’s segmentation in the\n\n first frame. Their model conditions an image segmentation network over a\n\n video frame on the provided first frame segmentation using feature-wise\n\n scaling factors, as well as on the previous frame using position-wise\n\n biases.\n\n \n\n So far, the models we covered have two sub-networks: a primary\n\n network in which feature-wise transformations are applied and a secondary\n\n network which outputs parameters for these transformations. However, this\n\n distinction between *FiLM-ed network* and *FiLM generator*\n\n is not strictly necessary. As an example, Huang and Belongie\n\n propose an alternative\n\n style transfer network that uses adaptive instance normalization layers,\n\n which compute normalization parameters using a simple heuristic.\n\n \n\n Adaptive instance normalization can be interpreted as inserting a FiLM\n\n layer midway through the model. However, rather than relying\n\n on a secondary network to predict the FiLM parameters from the style\n\n image, the main network itself is used to extract the style features\n\n used to compute FiLM parameters. Therefore, the model can be seen as\n\n *both* the FiLM-ed network and the FiLM generator.\n\n \n\nImage recognition+\n\n neural network’s activations *themselves* as conditioning\n\n information. This idea gives rise to\n\n *self-conditioned* models.\n\n \n\n Highway Networks are a prime\n\n example of applying this self-conditioning principle. 
They take inspiration\n\n from the LSTMs’ heavy use of\n\n feature-wise sigmoidal gating in their input, forget, and output gates to\n\n regulate information flow:\n\n \n\ninputsub-networksigmoidal layer1 - xoutput\n\n The ImageNet 2017 winning model also\n\n employs feature-wise sigmoidal gating in a self-conditioning manner, as a\n\n way to “recalibrate” a layer’s activations conditioned on themselves.\n\n \n\nNatural language processing+\n\n For statistical language modeling (i.e., predicting the next word\n\n in a sentence), the LSTM \n\n constitutes a popular class of recurrent network architectures. The LSTM\n\n relies heavily on feature-wise sigmoidal gating to control the\n\n information flow in and out of the memory or context cell\n\n c\\mathbf{c}c, based on the hidden states\n\n h\\mathbf{h}h and inputs x\\mathbf{x}x at\n\n every timestep t\\mathbf{t}t.\n\n \n\nct-1tanhcthtsigmoidsigmoidtanhsigmoidlinearlinearlinearlinearht-1xtconcatenate\n\n Also in the domain of language modeling, Dauphin et al. use sigmoidal\n\n gating in their proposed *gated linear unit*, which uses half of the\n\n input features to apply feature-wise sigmoidal gating to the other half.\n\n Gehring et al. adopt this\n\n architectural feature, introducing a fast, parallelizable model for machine\n\n translation in the form of a fully convolutional network.\n\n \n\n The Gated-Attention Reader \n\n uses feature-wise scaling, extracting information\n\n from text by conditioning a document-reading network on a query. Its\n\n architecture consists of multiple Gated-Attention modules, which involve\n\n element-wise multiplications between document representation tokens and\n\n token-specific query representations extracted via soft attention on the\n\n query representation tokens.\n\n \n\nReinforcement learning+\n\n The Gated-Attention architecture \n\n uses feature-wise sigmoidal gating to fuse linguistic and visual\n\n information in an agent trained to follow simple “go-to” language\n\n instructions in the VizDoom 3D\n\n environment.\n\n \n\n Bahdanau et al. use FiLM\n\n layers to condition Neural Module Network\n\n and LSTM -based policies to follow\n\n basic, compositional language instructions (arranging objects and going\n\n to particular locations) in a 2D grid world. They train this policy\n\n in an adversarial manner using rewards from another FiLM-based network,\n\n trained to discriminate between ground-truth examples of achieved\n\n instruction states and failed policy trajectories states.\n\n \n\n Outside instruction-following, Kirkpatrick et al.\n\n also use\n\n game-specific scaling and biasing to condition a shared policy network\n\n trained to play 10 different Atari games.\n\n \n\nGenerative modeling+\n\n The conditional variant of DCGAN ,\n\n a well-recognized network architecture for generative adversarial networks\n\n , uses concatenation-based\n\n conditioning. 
The class label is broadcasted as a feature map and then\n\n concatenated to the input of convolutional and transposed convolutional\n\n layers in the discriminator and generator networks.\n\n \n\n For convolutional layers, concatenation-based conditioning requires the\n\n network to learn redundant convolutional parameters to interpret each\n\n constant, conditioning feature map; as a result, directly applying a\n\n conditional bias is more parameter efficient, but the two approaches are\n\n still mathematically equivalent.\n\n \n\n PixelCNN \n\n and WaveNet  — two recent\n\n advances in autoregressive, generative modeling of images and audio,\n\n respectively — use conditional biasing. The simplest form of\n\n conditioning in PixelCNN adds feature-wise biases to all convolutional layer\n\n outputs. In FiLM parlance, this operation is equivalent to inserting FiLM\n\n layers after each convolutional layer and setting the scaling coefficients\n\n to a constant value of 1.\n\n \n\n The authors also describe a location-dependent biasing scheme which\n\n cannot be expressed in terms of FiLM layers due to the absence of the\n\n feature-wise property.\n\n \n\n WaveNet describes two ways in which conditional biasing allows external\n\n information to modulate the audio or speech generation process based on\n\n conditioning input:\n\n \n\n1. **Global conditioning** applies the same conditional bias\n\n to the whole generated sequence and is used e.g. to condition on speaker\n\n identity.\n\n2. **Local conditioning** applies a conditional bias which\n\n varies across time steps of the generated sequence and is used e.g. to\n\n let linguistic features in a text-to-speech model influence which sounds\n\n are produced.\n\n As in PixelCNN, conditioning in WaveNet can be viewed as inserting FiLM\n\n layers after each convolutional layer. The main difference lies in how\n\n the FiLM-generating network is defined: global conditioning\n\n expresses the FiLM-generating network as an embedding lookup which is\n\n broadcasted to the whole time series, whereas local conditioning expresses\n\n it as a mapping from an input sequence of conditioning information to an\n\n output sequence of FiLM parameters.\n\n \n\nSpeech recognition+\n\n Kim et al. modulate a deep\n\n bidirectional LSTM using a form\n\n of conditional normalization. As discussed in the\n\n *Visual question-answering* and *Style transfer* subsections,\n\n conditional normalization can be seen as an instance of FiLM where\n\n the post-normalization feature-wise affine transformation is replaced\n\n with a FiLM layer.\n\n \n\n The key difference here is that the conditioning signal does not come from\n\n an external source but rather from utterance\n\n summarization feature vectors extracted in each layer to adapt the model.\n\n \n\nDomain adaptation and few-shot learning+\n\n For domain adaptation, Li et al. \n\n find it effective to update the per-channel batch normalization\n\n statistics (mean and variance) of a network trained on one domain with that\n\n network’s statistics in a new, target domain. As discussed in the\n\n *Style transfer* subsection, this operation is akin to using the network as\n\n both the FiLM generator and the FiLM-ed network. 
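A small sketch of this statistics-swapping idea, assuming feature maps of shape (N, C, H, W): the learned batch-norm scale and shift (gamma, beta) stay untouched, and only the normalization statistics are recomputed on target-domain data. Function names are ours.

    import numpy as np

    def target_domain_stats(activations):
        # Per-channel mean and variance computed on target-domain activations;
        # these replace the source-domain running statistics.
        mean = activations.mean(axis=(0, 2, 3))
        var = activations.var(axis=(0, 2, 3))
        return mean, var

    def batchnorm_inference(x, mean, var, gamma, beta, eps=1e-5):
        # Standard inference-time batch normalization with whichever statistics
        # we plug in; gamma and beta are the unchanged learned parameters.
        x_hat = (x - mean[None, :, None, None]) / np.sqrt(var[None, :, None, None] + eps)
        return gamma[None, :, None, None] * x_hat + beta[None, :, None, None]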
Notably, this approach,\n\n along with Adaptive Instance Normalization, has the particular advantage of\n\n not requiring any extra trainable parameters.\n\n \n\n For few-shot learning, Oreshkin et al.\n\n explore the use of FiLM layers to\n\n provide more robustness to variations in the input distribution across\n\n few-shot learning episodes. The training set for a given episode is used to\n\n produce FiLM parameters which modulate the feature extractor used in a\n\n Prototypical Networks \n\n meta-training procedure.\n\n \n\n---\n\nRelated ideas\n\n-------------\n\n Aside from methods which make direct use of feature-wise transformations,\n\n the FiLM framework connects more broadly with the following methods and\n\n concepts.\n\n \n\nexpand all\n\nZero-shot learning+\n\n The idea of learning a task representation shares a strong connection with\n\n zero-shot learning approaches. In zero-shot learning, semantic task\n\n embeddings may be learned from external information and then leveraged to\n\n make predictions about classes without training examples. For instance, to\n\n generalize to unseen object categories for image classification, one may\n\n construct semantic task embeddings from text-only descriptions and exploit\n\n objects’ text-based relationships to make predictions for unseen image\n\n categories. Frome et al. , Socher et\n\n al. , and Norouzi et al.\n\n are a few notable exemplars\n\n of this idea.\n\n \n\nHyperNetworks+\n\n The notion of a secondary network predicting the parameters of a primary\n\n (e.g., a recurrent neural network layer). From this perspective, the FiLM\n\n generator is a specialized HyperNetwork that predicts the FiLM parameters of\n\n the FiLM-ed network. The main distinction between the two resides in the\n\n number and specificity of predicted parameters: FiLM requires predicting far\n\n fewer parameters than Hypernetworks, but also has less modulation potential.\n\n The ideal trade-off between a conditioning mechanism’s capacity,\n\n regularization, and computational complexity is still an ongoing area of\n\n investigation, and many proposed approaches lie on the spectrum between FiLM\n\n and HyperNetworks (see [Bibliographic Notes](#bibliographic-notes)).\n\n \n\nAttention+\n\n Some parallels can be drawn between attention and FiLM, but the two operate\n\n in different ways which are important to disambiguate.\n\n \n\n This difference stems from distinct intuitions underlying attention and\n\n FiLM: the former assumes that specific spatial locations or time steps\n\n contain the most useful information, whereas the latter assumes that\n\n specific features or feature maps contain the most useful information.\n\n \n\nBilinear transformations+\n\n With a little bit of stretching, FiLM can be seen as a special case of a\n\n bilinear transformation\n\n with low-rank weight\n\n matrices. 
A bilinear transformation defines the relationship between two\n\n inputs x\\mathbf{x}x and z\\mathbf{z}z and the\n\n kthk^{th}kth output feature yky\\_kyk​ as\n\n yk=xTWkz.\n\n y\\_k = \\mathbf{x}^T W\\_k \\mathbf{z}.\n\n yk​=xTWk​z.\n\n Note that for each output feature yky\\_kyk​ we have a separate\n\n matrix WkW\\_kWk​, so the full set of weights forms a\n\n multi-dimensional array.\n\n \n\n If we view z\\mathbf{z}z as the concatenation of the scaling\n\n and shifting vectors γ\\gammaγ and β\\betaβ and\n\n if we augment the input x\\mathbf{x}x with a 1-valued feature,\n\n \n\n As is commonly done to turn a linear transformation into an affine\n\n transformation.\n\n \n\n we can represent FiLM using a bilinear transformation by zeroing out the\n\n appropriate weight matrix entries:\n\n \n\n For some applications of bilinear transformations,\n\n see the [Bibliographic Notes](#bibliographic-notes).\n\n \n\n---\n\nProperties of the learned task representation\n\n---------------------------------------------\n\n As hinted earlier, in adopting the FiLM perspective we implicitly introduce\n\n a notion of *task representation*: each task — be it a question\n\n about an image or a painting style to imitate — elicits a different\n\n set of FiLM parameters via the FiLM generator which can be understood as its\n\n representation in terms of how to modulate the FiLM-ed network. To help\n\n better understand the properties of this representation, let’s focus on two\n\n FiLM-ed models used in fairly different problem settings:\n\n \n\n* The visual reasoning model of Perez et al.\n\n , which uses FiLM\n\n to modulate a visual processing pipeline based off an input question.\n\n \n\n* The artistic style transfer model of Ghiasi et al.\n\n , which uses FiLM to modulate a\n\n feed-forward style transfer network based off an input style image.\n\n \n\n As a starting point, can we discern any pattern in the FiLM parameters as a\n\n function of the task description? One way to visualize the FiLM parameter\n\n space is to plot γ\\gammaγ against β\\betaβ,\n\n with each point corresponding to a specific task description and a specific\n\n feature map. If we color-code each point according to the feature map it\n\n belongs to we observe the following:\n\n \n\n The plots above allow us to make several interesting observations. First,\n\n FiLM parameters cluster by feature map in parameter space, and the cluster\n\n locations are not uniform across feature maps. The orientation of these\n\n clusters is also not uniform across feature maps: the main axis of variation\n\n can be γ\\gammaγ-aligned, β\\betaβ-aligned, or\n\n diagonal at varying angles. These findings suggest that the affine\n\n transformation in FiLM layers is not modulated in a single, consistent way,\n\n i.e., using γ\\gammaγ only, β\\betaβ only, or\n\n γ\\gammaγ and β\\betaβ together in some specific\n\n way. 
Maybe this is due to the affine transformation being overspecified, or\n\n maybe this shows that FiLM layers can be used to perform modulation\n\n operations in several distinct ways.\n\n \n\n Nevertheless, the fact that these parameter clusters are often somewhat\n\n “dense” may help explain why the style transfer model of Ghiasi et al.\n\n is able to perform style\n\n interpolations: any convex combination of FiLM parameters is likely to\n\n correspond to a meaningful parametrization of the FiLM-ed network.\n\n \n\nStyle 1Style 2 w InterpolationContent Image\n\n To some extent, the notion of interpolating between tasks using FiLM\n\n parameters can be applied even in the visual question-answering setting.\n\n Using the model trained in Perez et al. ,\n\n we interpolated between the model’s FiLM parameters for two pairs of CLEVR\n\n questions. Here we visualize the input locations responsible for\n\n \n\n The network seems to be softly switching where in the image it is looking,\n\n based on the task description. It is quite interesting that these semantically\n\n meaningful interpolation behaviors emerge, as the network has not been\n\n trained to act this way.\n\n \n\n Despite these similarities across problem settings, we also observe\n\n qualitative differences in the way in which FiLM parameters cluster as a\n\n function of the task description. Unlike the style transfer model, the\n\n visual reasoning model sometimes exhibits several FiLM parameter\n\n sub-clusters for a given feature map.\n\n \n\n At the very least, this may indicate that FiLM learns to operate in ways\n\n that are problem-specific, and that we should not expect to find a unified\n\n and problem-independent explanation for FiLM’s success in modulating FiLM-ed\n\n networks. Perhaps the compositional or discrete nature of visual reasoning\n\n requires the model to implement several well-defined modes of operation\n\n which are less necessary for style transfer.\n\n \n\n Focusing on individual feature maps which exhibit sub-clusters, we can try\n\n to infer how questions regroup by color-coding the scatter plots by question\n\n type.\n\n \n\n Sometimes a clear pattern emerges, as in the right plot, where color-related\n\n questions concentrate in the top-right cluster — we observe that\n\n questions either are of type *Query color* or *Equal color*,\n\n or contain concepts related to color. Sometimes it is harder to draw a\n\n conclusion, as in the left plot, where question types are scattered across\n\n the three clusters.\n\n \n\n In cases where question types alone cannot explain the clustering of the\n\n FiLM parameters, we can turn to the conditioning content itself to gain\n\n an understanding of the mechanism at play. Let’s take a look at two more\n\n plots: one for feature map 26 as in the previous figure, and another\n\n for a different feature map, also exhibiting several subclusters. This time\n\n we regroup points by the words which appear in their associated question.\n\n \n\n In the left plot, the left subcluster corresponds to questions involving\n\n objects positioned *in front* of other objects, while the right\n\n subcluster corresponds to questions involving objects positioned\n\n *behind* other objects. 
In the right plot we see some evidence of\n\n separation based on object material: the left subcluster corresponds to\n\n questions involving *matte* and *rubber* objects, while the\n\n right subcluster contains questions about *shiny* or\n\n *metallic* objects.\n\n \n\n The presence of sub-clusters in the visual reasoning model also suggests\n\n that question interpolations may not always work reliably, but these\n\n sub-clusters don’t preclude one from performing arithmetic on the question\n\n representations, as Perez et al. \n\n report.\n\n \n\n Perez et al. report that this sort of\n\n task analogy is not always successful in correcting the model’s answer, but\n\n it does point to an interesting fact about FiLM-ed networks: sometimes the\n\n model makes a mistake not because it is incapable of computing the correct\n\n output, but because it fails to produce the correct FiLM parameters for a\n\n given task description. The reverse can also be true: if the set of tasks\n\n the model was trained on is insufficiently rich, the computational\n\n primitives learned by the FiLM-ed network may be insufficient to ensure good\n\n generalization. For instance, a style transfer model may lack the ability to\n\n produce zebra-like patterns if there are no stripes in the styles it was\n\n trained on. This could explain why Ghiasi et al.\n\n report that their style transfer\n\n model’s ability to produce pastiches for new styles degrades if it has been\n\n trained on an insufficiently large number of styles. Note however that in\n\n that case the FiLM generator’s failure to generalize could also play a role,\n\n and further analysis would be needed to draw a definitive conclusion.\n\n \n\n This points to a separation between the various computational\n\n primitives learned by the FiLM-ed network and the “numerical recipes”\n\n learned by the FiLM generator: the model’s ability to generalize depends\n\n both on its ability to parse new forms of task descriptions and on it having\n\n learned the required computational primitives to solve those tasks. We note\n\n that this multi-faceted notion of generalization is inherited directly from\n\n the multi-task point of view adopted by the FiLM framework.\n\n \n\n Let’s now turn our attention back to the overal structural properties of FiLM\n\n parameters observed thus far. The existence of this structure has already\n\n been explored, albeit more indirectly, by Ghiasi et al.\n\n as well as Perez et al.\n\n , who applied t-SNE\n\n on the FiLM parameter values.\n\n \n\n t-SNE projection of FiLM parameters for many task descriptions.\n\n \n\n The projection on the left is inspired by a similar projection done by Perez\n\n et al. for their visual reasoning\n\n model trained on CLEVR and shows how questions group by question type.\n\n The projection on the right is inspired by a similar projection done by\n\n Ghiasi et al. for their style\n\n transfer network. The projection does not cluster artists as neatly as the\n\n projection on the left, but this is to be expected, given that an artist’s\n\n style may vary widely over time. However, we can still detect interesting\n\n patterns in the projection: note for instance the isolated cluster (circled\n\n in the figure) in which paintings by Ivan Shishkin and Rembrandt are\n\n aggregated. 
While these two painters exhibit fairly different styles, the\n\n cluster is a grouping of their sketches.\n\n \n\n To summarize, the way neural networks learn to use FiLM layers seems to\n\n vary from problem to problem, input to input, and even from feature to\n\n feature; there does not seem to be a single mechanism by which the\n\n network uses FiLM to condition computation. This flexibility may\n\n explain why FiLM-related methods have been successful across such a\n\n wide variety of domains.\n\n \n\n---\n\nDiscussion\n\n----------\n\n Looking forward, there are still many unanswered questions.\n\n Do these experimental observations on FiLM-based architectures generalize to\n\n other related conditioning mechanisms, such as conditional biasing, sigmoidal\n\n gating, HyperNetworks, and bilinear transformations? When do feature-wise\n\n transformations outperform methods with stronger inductive biases and vice\n\n versa? Recent work combines feature-wise transformations with stronger\n\n inductive bias methods\n\n ,\n\n which could be an optimal middle ground. Also, to what extent are FiLM’s\n\n task representation properties\n\n inherent to FiLM, and to what extent do they emerge from other features\n\n of neural networks (i.e. non-linearities, FiLM generator\n\n depth, etc.)? If you are interested in exploring these or other\n\n questions about FiLM, we recommend looking into the code bases for\n\n FiLM models for [visual reasoning](https://github.com/ethanjperez/film)\n\n which we used as a\n\n starting point for our experiments here.\n\n \n\n Finally, the fact that changes on the feature level alone are able to\n\n compound into large and meaningful modulations of the FiLM-ed network is\n\n still very surprising to us, and hopefully future work will uncover deeper\n\n explanations. For now, though, it is a question that\n\n evokes the even grander mystery of how neural networks in general compound\n\n simple operations like matrix multiplications and element-wise\n\n non-linearities into semantically meaningful transformations.\n\n \n\n", "bibliography_bib": [{"title": "FiLM: Visual Reasoning with a General Conditioning Layer"}, {"title": "Learning visual reasoning without strong priors"}, {"title": "CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning"}, {"title": "Visual Reasoning with Multi-hop Feature Modulation"}, {"title": "GuessWhat?! 
Visual object discovery through multi-modal dialogue"}, {"title": "ReferItGame: Referring to objects in photographs of natural scenes"}, {"title": "Modulating early visual processing by language"}, {"title": "VQA: visual question answering"}, {"title": "A learned representation for artistic style"}, {"title": "Exploring the structure of a real-time, arbitrary neural artistic stylization network"}, {"title": "Efficient video object segmentation via network modulation"}, {"title": "Arbitrary style transfer in real-time with adaptive instance normalization"}, {"title": "Highway networks"}, {"title": "Long short-term memory"}, {"title": "Squeeze-and-Excitation networks"}, {"title": "On the state of the art of evaluation in neural language models"}, {"title": "Language modeling with gated convolutional networks"}, {"title": "Convolution sequence-to-sequence learning"}, {"title": "Gated-attention readers for text comprehension"}, {"title": "Gated-attention architectures for task-oriented language grounding"}, {"title": "Vizdoom: A doom-based AI research platform for visual reinforcement learning"}, {"title": "Learning to follow language instructions with adversarial reward induction"}, {"title": "Neural module networks"}, {"title": "Overcoming catastrophic forgetting in neural networks"}, {"title": "Unsupervised representation learning with deep convolutional generative adversarial networks"}, {"title": "Generative adversarial nets"}, {"title": "Conditional image generation with PixelCNN decoders"}, {"title": "WaveNet: A generative model for raw audio"}, {"title": "Dynamic layer normalization for adaptive neural acoustic modeling in speech recognition"}, {"title": "Adaptive batch normalization for practical domain adaptation"}, {"title": "TADAM: Task dependent adaptive metric for improved few-shot learning"}, {"title": "Prototypical networks for few-shot learning"}, {"title": "Devise: A deep visual-semantic embedding model"}, {"title": "Zero-shot learning through cross-modal transfer"}, {"title": "Zero-shot learning by convex combination of semantic embeddings"}, {"title": "HyperNetworks"}, {"title": "Separating style and content with bilinear models"}, {"title": "Visualizing data using t-SNE"}, {"title": "A dataset and architecture for visual reasoning with a working memory"}, {"title": "A parallel computation that assigns canonical object-based frames of reference"}, {"title": "The correlation theory of brain function"}, {"title": "Generating text with recurrent neural networks"}, {"title": "Robust boltzmann machines for recognition and denoising"}, {"title": "Factored conditional restricted Boltzmann machines for modeling motion style"}, {"title": "Combining discriminative features to infer complex trajectories"}, {"title": "Learning where to attend with deep architectures for image tracking"}, {"title": "Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis"}, {"title": "Convolutional learning of spatio-temporal features"}, {"title": "Learning to relate images"}, {"title": "Incorporating side information by adaptive convolution"}, {"title": "Learning multiple visual domains with residual adapters"}, {"title": "Predicting deep zero-shot convolutional neural networks using textual descriptions"}, {"title": "Zero-shot task generalization with multi-task deep reinforcement learning"}, {"title": "Separating style and content"}, {"title": "Facial expression space learning"}, {"title": "Personalized recommendation on dynamic content using 
predictive bilinear models"}, {"title": "Like like alike: joint friendship and interest propagation in social networks"}, {"title": "Matrix factorization techniques for recommender systems"}, {"title": "Bilinear CNN models for fine-grained visual recognition"}, {"title": "Convolutional two-stream network fusion for video action recognition"}, {"title": "Multimodal compact bilinear pooling for visual question answering and visual grounding"}], "filename": "Feature-wise transformations.html", "id": "d25b2a419abe7ad732d092f44d6024a5"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Visualizing the Impact of Feature Attribution Baselines", "authors": ["Pascal Sturmfels", "Scott Lundberg", "Su-In Lee"], "date_published": "2020-01-10", "abstract": "Path attribution methods are a gradient-based way of explaining deep models. These methods require choosing a hyperparameter known as the baseline input. What does this hyperparameter mean, and how important is it? In this article, we investigate these questions using image classification networks as a case study. We discuss several different ways to choose a baseline input and the assumptions that are implicit in each baseline. Although we focus here on path attribution methods, our discussion of baselines is closely connected with the concept of missingness in the feature space - a concept that is critical to interpretability research. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00022", "text": "\n\nPath attribution methods are a gradient-based way\n\n of explaining deep models. These methods require choosing a\n\n hyperparameter known as the *baseline input*.\n\n What does this hyperparameter mean, and how important is it? In this article,\n\n we investigate these questions using image classification networks\n\n as a case study. We discuss several different ways to choose a baseline\n\n input and the assumptions that are implicit in each baseline.\n\n Although we focus here on path attribution methods, our discussion of baselines\n\n is closely connected with the concept of missingness in the feature space -\n\n a concept that is critical to interpretability research.\n\n \n\nIntroduction\n\n------------\n\n If you are in the business of training neural networks,\n\n you might have heard of the integrated gradients method, which\n\n was introduced at \n\n .\n\n The method computes which features are important \n\n to a neural network when making a prediction on a \n\n particular data point. This helps users\n\n understand which features their network relies on.\n\n Since its introduction,\n\n integrated gradients has been used to interpret \n\n networks trained on a variety of data types, \n\n including retinal fundus images \n\n and electrocardiogram recordings .\n\n \n\n If you’ve ever used integrated gradients,\n\n you know that you need to define a baseline input x’x’x’ before\n\n using the method. Although the original paper discusses the need for a baseline\n\n and even proposes several different baselines for image data - including \n\n the constant black image and an image of random noise - there is\n\n little existing research about the impact of this baseline. \n\n Is integrated gradients sensitive to the \n\n hyperparameter choice? Why is the constant black image \n\n a “natural baseline” for image data? 
Are there any alternative choices?\n\n \n\n In this article, we will delve into how this hyperparameter choice arises,\n\n and why understanding it is important when you are doing model interpretation.\n\n As a case-study, we will focus on image classification models in order \n\n to visualize the effects of the baseline input. We will explore several \n\n notions of missingness, including both constant baselines and baselines\n\n defined by distributions. Finally, we will discuss different ways to compare\n\n baseline choices and talk about why quantitative evaluation\n\n remains a difficult problem.\n\n \n\nImage Classification\n\n--------------------\n\n We focus on image classification as a task, as it will allow us to visually\n\n plot integrated gradients attributions, and compare them with our intuition\n\n , a convolutional \n\n neural network designed for the ImageNet dataset ,\n\n On the ImageNet validation set, Inception V4 has a top-1 accuracy of over 80%.\n\n We download weights from TensorFlow-Slim ,\n\n and visualize the predictions of the network on four different images from the \n\n validation set.\n\n \n\n \n\n Right: The predicted logits of the network on the original image. The\n\n network correctly classifies all images with high confidence.\n\n relative to the true class label.\n\n \n\n Although state of the art models perform well on unseen data,\n\n users may still be left wondering: *how* did the model figure\n\n out which object was in the image? There are a myriad of methods to\n\n interpret machine learning models, including methods to\n\n visualize and understand how the network represents inputs internally , \n\n feature attribution methods that assign an importance score to each feature \n\n for a specific input ,\n\n and saliency methods that aim to highlight which regions of an image\n\n the model was looking at when making a decision\n\n .\n\n visualized as a saliency method, and a saliency method can assign importance\n\n scores to each individual pixel. In this article, we will focus\n\n on the feature attribution method integrated gradients.\n\n \n\n Formally, given a target input xxx and a network function fff, \n\n to the iiith feature value representing how much that feature\n\n indicates that feature strongly increases or decreases the network output \n\n the feature in question did not influence f(x)f(x)f(x).\n\n \n\n prediction using integrated gradients. \n\n The pixels in white indicate more important pixels. In order to plot\n\n attributions, we follow the same design choices as .\n\n That is, we plot the absolute value of the sum of feature attributions\n\n high-magnitude attributions dominating the color scheme.\n\n \n\nA Better Understanding of Integrated Gradients\n\n----------------------------------------------\n\n As you look through the attribution maps, you might find some of them\n\n To better understand this behavior, we need to explore how\n\n we generated feature attributions. Formally, integrated gradients\n\n defines the importance value for the iiith feature value as follows:\n\n \\times \\underbrace{\\int\\_{\\alpha = 0}^ 1}\\_{\\text{From baseline to input…}}\n\n where xxx is the current input,\n\n “absence” of feature input. The subscript iii is used\n\n to denote indexing into the iiith feature.\n\n \n\n As the formula above states, integrated gradients gets importance scores\n\n But why would doing this make sense? Recall that the gradient of\n\n a function represents the direction of maximum increase. 
The gradient\n\n is telling us which pixels have the steepest local slope with respect\n\n to the output. For this reason, the gradient of a network at the input\n\n was one of the earliest saliency methods.\n\n \n\n Unfortunately, there are many problems with using gradients to interpret\n\n deep neural networks . \n\n One specific issue is that neural networks are prone to a problem\n\n sample even if the network depends heavily on those features. This can happen\n\n Intuitively, shifting the pixels in an image by a small amount typically\n\n doesn’t change what the network sees in the image. We can illustrate\n\n saturation by plotting the network output at all\n\n images between the baseline x’x’x’ and the current image. The figure\n\n below displays that the network\n\n output for the correct class increases initially, but then quickly flattens.\n\n \n\n A plot of network outputs at x’+α(x−x’)x’ + \\alpha (x - x’)x’+α(x−x’).\n\n Notice that the network output saturates the correct class\n\n at small values of α\\alphaα. By the time α=1\\alpha = 1α=1,\n\n the network output barely changes.\n\n \n\n What we really want to know is how our network got from \n\n predicting essentially nothing at x’x’x’ to being \n\n completely saturated towards the correct output class at xxx.\n\n Which pixels, when scaled along this path, most\n\n increased the network output for the correct class? This is\n\n exactly what the formula for integrated gradients gives us.\n\n \n\n By integrating over a path, \n\n integrated gradients avoids problems with local gradients being\n\n saturated. We can break the original equation\n\n down and visualize it in three separate parts: the interpolated image between\n\n the baseline image and the target image, the gradients at the interpolated\n\n image, and accumulating many such gradients over α\\alphaα.\n\n \\int\\_{\\alpha’ = 0}^{\\alpha} \\underbrace{(x\\_i - x’\\_i) \\times \n\n {\\delta x\\_i} d \\alpha’}\\_{\\text{(2): Gradients at Interpolation}} \n\n approximation of the integral with 500 linearly-spaced points between 0 and 1.\n\n Integrated gradients, visualized. In the line chart, the red line refers to\n\n accumulate at small values of α\\alphaα.\n\n \n\n We have casually omitted one part of the formula: the fact\n\n that we multiply by a difference from a baseline. Although\n\n we won’t go into detail here, this term falls out because we\n\n care about the derivative of the network\n\n function fff with respect to the path we are integrating over.\n\n That is, if we integrate over the\n\n straight-line between x’x’x’ and xxx, which\n\n we can represent as γ(α)=x’+α(x−x’)\\gamma(\\alpha) =\n\n x’ + \\alpha(x - x’)γ(α)=x’+α(x−x’), then:\n\n δf(γ(α))δα=δf(γ(α))δγ(α)×δγ(α)δα=δf(x’+α’(x−x’))δxi×(xi−x’i)\n\n \\frac{\\delta f(\\gamma(\\alpha))}{\\delta \\alpha} =\n\n \\frac{\\delta f(\\gamma(\\alpha))}{\\delta \\gamma(\\alpha)} \\times \n\n \\frac{\\delta \\gamma(\\alpha)}{\\delta \\alpha} = \n\n \\frac{\\delta f(x’ + \\alpha’ (x - x’))}{\\delta x\\_i} \\times (x\\_i - x’\\_i) \n\n δαδf(γ(α))​=δγ(α)δf(γ(α))​×δαδγ(α)​=δxi​δf(x’+α’(x−x’))​×(xi​−x’i​)\n\n The difference from baseline term is the derivative of the \n\n path function γ\\gammaγ with respect to α\\alphaα.\n\n The theory behind integrated gradients is discussed\n\n in more detail in the original paper. 
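As a concrete reference point, here is a minimal numpy sketch of the discrete approximation of integrated gradients, with a toy analytic model standing in for a real network; the function and variable names are illustrative, not the implementation used for the figures in this article:

```python
import numpy as np

def integrated_gradients(grad_f, x, x_baseline, k=500):
    """Riemann-sum approximation of integrated gradients.

    grad_f: maps an input to the gradient of the scalar network output.
    x, x_baseline: the input being explained and the baseline input.
    k: number of linearly spaced interpolation points between 0 and 1.
    """
    total = np.zeros_like(x)
    for alpha in np.linspace(0.0, 1.0, k):
        total += grad_f(x_baseline + alpha * (x - x_baseline))
    return (x - x_baseline) * total / k

# Toy model with an analytic gradient: f(x) = sigmoid(w . x).
w = np.array([2.0, -1.0, 0.5])
f = lambda z: 1.0 / (1.0 + np.exp(-w @ z))
grad_f = lambda z: f(z) * (1.0 - f(z)) * w

x, x_baseline = np.array([1.0, 2.0, -1.0]), np.zeros(3)
attributions = integrated_gradients(grad_f, x, x_baseline)
# The attributions approximately sum to f(x) - f(x'), i.e. completeness holds.
print(attributions, attributions.sum(), f(x) - f(x_baseline))
```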
In particular, the authors\n\n show that integrated gradients satisfies several desirable\n\n properties, including the completeness axiom:\n\n Axiom 1: Completeness∑iϕiIG(f,x,x’)=f(x)−f(x’)\n\n \\textrm{Axiom 1: Completeness}\\\\\n\n \\sum\\_i \\phi\\_i^{IG}(f, x, x’) = f(x) - f(x’)\n\n Axiom 1: Completenessi∑​ϕiIG​(f,x,x’)=f(x)−f(x’)\n\n Note that this theorem holds for any baseline x’x’x’.\n\n Completeness is a desirable property because it states that the \n\n importance scores for each feature break down the output of the network:\n\n each importance score represents that feature’s individual contribution to\n\n Although it’s not essential to our discussion here, we can prove \n\n that integrated gradients satisfies this axiom using the\n\n [fundamental\n\n full discussion of all of the properties that integrated \n\n gradients satisfies to the original paper, since they hold\n\n independent of the choice of baseline. The completeness \n\n axiom also provides a way to measure convergence.\n\n \n\n In practice, we can’t compute the exact value of the integral. Instead,\n\n we use a discrete sum approximation with kkk linearly-spaced points between\n\n 0 and 1 for some value of kkk. If we only chose 1 point to \n\n approximate the integral, that feels like too few. Is 10 enough? 100?\n\n Intuitively 1,000 may seem like enough, but can we be certain?\n\n As proposed in the original paper, we can use the completeness axiom\n\n as a sanity check on convergence: run integrated gradients with kkk\n\n and if the difference is large, re-run with a larger kkk \n\n Of course, this brings up a new question: what is “large” in this context?\n\n One heuristic is to compare the difference with the magnitude of the\n\n output itself.\n\n .\n\n \n\n The line chart above plots the following equation in red:\n\n ∑iϕiIG(f,x,x’;α)⏟(4): Sum of Cumulative Gradients up to α\n\n (4): Sum of Cumulative Gradients up to αi∑​ϕiIG​(f,x,x’;α)​​\n\n That is, it sums all of the pixel attributions in the saliency map.\n\n We can see that with 500 samples, we seem (at least intuitively) to\n\n have converged. But this article isn’t about how \n\n to get good convergence - it’s about baselines! In order\n\n to advance our understanding of the baseline, we will need a brief excursion\n\n into the world of game theory.\n\n \n\nGame Theory and Missingness\n\n---------------------------\n\n Integrated gradients is inspired by work\n\n from cooperative game theory, specifically the Aumann-Shapley value\n\n . In cooperative game theory,\n\n a non-atomic game is a construction used to model large-scale economic systems\n\n Aumann-Shapley values provide a theoretically grounded way to\n\n determine how much different groups of participants contribute to the system.\n\n \n\n In game theory, a notion of missingness is well-defined. Games are defined\n\n on coalitions - sets of participants - and for any specific coalition,\n\n a participant of the system can be in or out of that coalition. The fact\n\n that games can be evaluated on coalitions is the foundation of\n\n the Aumann-Shapley value. Intuitively, it computes how\n\n much value a group of participants adds to the game \n\n by computing how much the value of the game would increase\n\n if we added more of that group to any given coalition.\n\n \n\n Unfortunately, missingness is a more difficult notion when\n\n we are speaking about machine learning models. 
In order\n\n to evaluate how important the iiith feature is, we\n\n want to be able to compute how much the output of\n\n the network would increase if we successively increased\n\n the “presence” of the iiith feature. But what does this mean, exactly?\n\n In order to increase the presence of a feature, we would need to start\n\n with the feature being “missing” and have a way of interpolating \n\n between that missingness and its current, known value.\n\n \n\n Hopefully, this is sounding awfully familiar. Integrated gradients\n\n has a baseline input x’x’x’ for exactly this reason: to model a\n\n feature being absent. But how should you choose\n\n x’x’x’ in order to best represent this? It seems to be common practice\n\n to choose a baseline input x’x’x’ to be the vector of\n\n all zeros. But consider the following scenario: you’ve learned a model\n\n on a healthcare dataset, and one of the features is blood sugar level.\n\n The model has correctly learned that excessively low levels of blood sugar,\n\n which correspond to hypoglycemia, is dangerous. Does\n\n a blood sugar level of 000 seem like a good choice to represent missingness?\n\n \n\n The point here is that fixed feature values may have unintended meaning.\n\n The problem compounds further when you consider the difference from\n\n baseline term xi−x’ix\\_i - x’\\_ixi​−x’i​.\n\n To understand why our machine learning model thinks this patient\n\n is at high risk, you run integrated gradients on this data point with a\n\n because xi−x’i=0x\\_i - x’\\_i = 0xi​−x’i​=0. This is despite the fact that \n\n a blood sugar level of 000 would be fatal!\n\n \n\n We find similar problems when we move to the image domain.\n\n If you use a constant black image as a baseline, integrated gradients will\n\n not highlight black pixels as important even if black pixels make up\n\n authors in , and is in fact\n\n central to the definition of a baseline: we wouldn’t want integrated gradients\n\n to highlight missing features as important! But then how do we avoid\n\n giving zero importance to the baseline color?\n\n \n\n Mouse over the segmented image to choose a different color\n\n as a baseline input x’x’x’. Notice that pixels\n\n of the baseline color are not highlighted as important, \n\n even if they make up part of the main object in the image.\n\n \n\nAlternative Baseline Choices\n\n----------------------------\n\n It’s clear that any constant color baseline will have this problem.\n\n Are there any alternatives? In this section, we\n\n compare four alternative choices for a baseline in the image domain.\n\n Before proceeding, it’s important to note that this article isn’t\n\n the first article to point out the difficulty of choosing a baselines.\n\n Several articles, including the original paper, discuss and compare\n\n several notions of “missingness”, both in the\n\n context of integrated gradients and more generally \n\n .\n\n Nonetheless, choosing the right baseline remains a challenge. Here we will\n\n present several choices for baselines: some based on existing literature,\n\n others inspired by the problems discussed above. The figure at the end \n\n of the section visualizes the four baselines presented here.\n\n \n\n### The Maximum Distance Baseline\n\n If we are worried about constant baselines that are blind to the baseline\n\n color, can we explicitly construct a baseline that doesn’t suffer from this\n\n problem? 
One obvious way to construct such a baseline is to take the \n\n farthest image in L1 distance from the current image such that the\n\n baseline is still in the valid pixel range. This baseline, which\n\n we will refer to as the maximum distance baseline (denoted\n\n *max dist.* in the figure below),\n\n avoids the difference from baseline issue directly. \n\n \n\n### The Blurred Baseline\n\n The issue with the maximum distance baseline is that it doesn’t \n\n really represent *missingness*. It actually contains a lot of\n\n information about the original image, which means we are no longer\n\n explaining our prediction relative to a lack of information. To better\n\n preserve the notion of missingness, we take inspiration from \n\n . In their paper,\n\n Fong and Vedaldi use a blurred version of the image as a \n\n domain-specific way to represent missing information. This baseline\n\n is attractive because it captures the notion of missingness in images\n\n in a very human intuitive way. In the figure below, this baseline is\n\n denoted *blur*. The figure lets you play with the smoothing constant\n\n used to define the baseline.\n\n \n\n### The Uniform Baseline\n\n One potential drawback with the blurred baseline is that it is biased\n\n to highlight high-frequency information. Pixels that are very similar\n\n to their neighbors may get less importance than pixels that are very \n\n different than their neighbors, because the baseline is defined as a weighted\n\n from both and the original integrated\n\n gradients paper. Another way to define missingness is to simply sample a random\n\n uniform image in the valid pixel range and call that the baseline. \n\n We refer to this baseline as the *uniform* baseline in the figure below.\n\n \n\n### The Gaussian Baseline\n\n Of course, the uniform distribution is not the only distribution we can\n\n touch on in the next section), Smilkov et al. \n\n variance σ\\sigmaσ. We can use the same distribution as a baseline for \n\n range, which means that as σ\\sigmaσ approaches ∞\\infty∞, the gaussian\n\n baseline approaches the uniform baseline.\n\n \n\n Comparing alternative baseline choices. For the blur and gaussian\n\n baselines, you can vary the parameter σ\\sigmaσ, which refers\n\n to the width of the smoothing kernel and the standard deviation of\n\n noise respectively.\n\n \n\nAveraging Over Multiple Baselines\n\n---------------------------------\n\n You may have nagging doubts about those last two baselines, and you\n\n would be right to have them. A randomly generated baseline\n\n can suffer from the same blindness problem that a constant image can. If \n\n we draw a uniform random image as a baseline, there is a small chance\n\n that a baseline pixel will be very close to its corresponding input pixel\n\n in value. Those pixels will not be highlighted as important. The resulting\n\n saliency map may have artifacts due to the randomly drawn baseline. Is there\n\n any way we can fix this problem?\n\n \n\n Perhaps the most natural way to do so is to average over multiple\n\n different baselines, as discussed in \n\n .\n\n Although doing so may not be particularly natural for constant color images\n\n (which colors do you choose to average over and why?), it is a\n\n very natural notion for baselines drawn from distributions. 
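Before moving on to averaging, note that the four baselines above are easy to construct explicitly. The sketch below assumes pixel values scaled to [0, 1] and uses scipy for the blur; it is an illustration, not the exact preprocessing behind the figure:

```python
import numpy as np
from scipy.ndimage import gaussian_filter  # assumed available for the blur baseline

rng = np.random.default_rng(0)
image = rng.uniform(size=(224, 224, 3))  # stand-in for an input image in [0, 1]

# Constant black baseline, for comparison.
black = np.zeros_like(image)

# Maximum distance baseline: the farthest valid image in L1 distance, obtained
# per pixel by jumping to whichever end of [0, 1] is farther from the input.
max_dist = np.where(image < 0.5, 1.0, 0.0)

# Blurred baseline: a smoothed copy of the input; sigma controls the blur width.
blur = gaussian_filter(image, sigma=(20, 20, 0))

# Uniform baseline: a random image drawn uniformly from the valid pixel range.
uniform = rng.uniform(size=image.shape)

# Gaussian baseline: the input plus gaussian noise, clipped back into range.
gaussian = np.clip(image + rng.normal(scale=1.0, size=image.shape), 0.0, 1.0)
```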
Simply\n\n draw more samples from the same distribution and average the\n\n importance scores from each sample.\n\n \n\n### Assuming a Distribution\n\n At this point, it’s worth connecting the idea of averaging over multiple\n\n baselines back to the original definition of integrated gradients. When\n\n we average over multiple baselines from the same distribution DDD,\n\n we are attempting to use the distribution itself as our baseline. \n\n We use the distribution to define the notion of missingness: \n\n if we don’t know a pixel value, we don’t assume its value to be 0 - instead\n\n we assume that it has some underlying distribution DDD. Formally, given\n\n a baseline distribution DDD, we integrate over all possible baselines\n\n x’∈Dx’ \\in Dx’∈D weighted by the density function pDp\\_DpD​:\n\n )}^{\\text{integrated gradients \n\n with baseline } x’\n\n } \\times \\underbrace{p\\_D(x’) dx’}\\_{\\text{…and weight by the density}} \\bigg)\n\n In terms of missingness, assuming a distribution might intuitively feel \n\n like a more reasonable assumption to make than assuming a constant value.\n\n But this doesn’t quite solve the issue: instead of having to choose a baseline\n\n x’x’x’, now we have to choose a baseline distribution DDD. Have we simply\n\n postponed the problem? We will discuss one theoretically motivated\n\n way to choose DDD in an upcoming section, but before we do, we’ll take\n\n a brief aside to talk about how we compute the formula above in practice,\n\n and a connection to an existing method that arises as a result.\n\n \n\n### Expectations, and Connections to SmoothGrad\n\n Now that we’ve introduced a second integral into our formula,\n\n we need to do a second discrete sum to approximate it, which\n\n requires an additional hyperparameter: the number of baselines to sample. \n\n In , Erion et al. make the \n\n observation that both integrals can be thought of as expectations:\n\n the first integral as an expectation over DDD, and the second integral \n\n as an expectation over the path between x’x’x’ and xxx. This formulation,\n\n called *expected gradients*, is defined formally as:\n\n {\\text{Expectation over \\(D\\) and the path…}} \n\n \\bigg[ \\overbrace{(x\\_i - x’\\_i) \\times \n\n \\frac{\\delta f(x’ + \\alpha (x - x’))}{\\delta x\\_i}}^{\\text{…of the \n\n importance of the } i\\text{th pixel}} \\bigg]\n\n Expected gradients and integrated gradients belong to a family of methods\n\n known as “path attribution methods” because they integrate gradients\n\n over one or more paths between two valid inputs. \n\n Both expected gradients and integrated gradients use straight-line paths,\n\n in more detail in the original paper. To compute expected gradients in\n\n practice, we use the following formula:\n\n ϕ^iEG(f,x;D)=1k∑j=1k(xi−x’ij)×δf(x’j+αj(x−x’j))δxi\n\n \\frac{\\delta f(x’^j + \\alpha^{j} (x - x’^j))}{\\delta x\\_i}\n\n ϕ^​iEG​(f,x;D)=k1​j=1∑k​(xi​−x’ij​)×δxi​δf(x’j+αj(x−x’j))​\n\n where x’jx’^jx’j is the jjjth sample from DDD and \n\n αj\\alpha^jαj is the jjjth sample from the uniform distribution between\n\n 0 and 1. 
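A minimal Monte Carlo sketch of this estimator, again with a toy analytic model and illustrative names rather than the article's actual code, looks like this:

```python
import numpy as np

def expected_gradients(grad_f, x, baselines, k=200, rng=None):
    """Monte Carlo estimate of expected gradients.

    grad_f: maps an input to the gradient of the scalar network output.
    baselines: array of reference inputs, treated as samples from D.
    k: number of (baseline, alpha) pairs to sample.
    """
    rng = rng or np.random.default_rng()
    total = np.zeros_like(x)
    for _ in range(k):
        x_ref = baselines[rng.integers(len(baselines))]  # x'^j drawn from D
        alpha = rng.uniform()                            # alpha^j drawn from U(0, 1)
        total += (x - x_ref) * grad_f(x_ref + alpha * (x - x_ref))
    return total / k

# Toy usage: f(x) = tanh(w . x), with the reference set replaced by random samples.
w = np.array([2.0, -1.0, 0.5])
grad_f = lambda z: (1.0 - np.tanh(w @ z) ** 2) * w
x = np.array([1.0, 2.0, -1.0])
baselines = np.random.default_rng(0).normal(size=(50, 3))
print(expected_gradients(grad_f, x, baselines, rng=np.random.default_rng(1)))
```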
Now suppose that we use the gaussian baseline with variance\n\n \n\n ϕ^iEG(f,x;N(x,σ2I))=1k∑j=1kϵσj×δf(x+(1−αj)ϵσj)δxi\n\n \\hat{\\phi}\\_i^{EG}(f, x; N(x, \\sigma^2 I)) \n\n = \\frac{1}{k} \\sum\\_{j=1}^k \n\n \\epsilon\\_{\\sigma}^{j} \\times \n\n \\frac{\\delta f(x + (1 - \\alpha^j)\\epsilon\\_{\\sigma}^{j})}{\\delta x\\_i}\n\n ϕ^​iEG​(f,x;N(x,σ2I))=k1​j=1∑k​ϵσj​×δxi​δf(x+(1−αj)ϵσj​)​\n\n \n\n To see how we arrived\n\n at the above formula, first observe that \n\n x’∼N(x,σ2I)=x+ϵσx’−x=ϵσ \n\n \\begin{aligned}\n\n x’ \\sim N(x, \\sigma^2 I) &= x + \\epsilon\\_{\\sigma} \\\\\n\n x’- x &= \\epsilon\\_{\\sigma} \\\\\n\n \\end{aligned}\n\n x’∼N(x,σ2I)x’−x​=x+ϵσ​=ϵσ​​\n\n by definition of the gaussian baseline. Now we have: \n\n x’+α(x−x’)=x+ϵσ+α(x−(x+ϵσ))=x+(1−α)ϵσ\n\n \\begin{aligned}\n\n x’ + \\alpha(x - x’) &= \\\\\n\n x + \\epsilon\\_{\\sigma} + \\alpha(x - (x + \\epsilon\\_{\\sigma})) &= \\\\\n\n x + (1 - \\alpha)\\epsilon\\_{\\sigma}\n\n \\end{aligned}\n\n x’+α(x−x’)x+ϵσ​+α(x−(x+ϵσ​))x+(1−α)ϵσ​​==​\n\n The above formula simply substitues the last line\n\n of each equation block back into the formula.\n\n . \n\n \n\n This looks awfully familiar to an existing method called SmoothGrad\n\n . If we use the (gradients ×\\times× input image)\n\n variant of SmoothGrad SmoothGrad is\n\n was a method designed to sharpen saliency maps and was meant to be run\n\n on top of an existing saliency method. The idea is simple:\n\n instead of running a saliency method once on an image, first\n\n add some gaussian noise to an image, then run the saliency method.\n\n Do this several times with different draws of gaussian noise, then\n\n is discussed in more detail in the original SmoothGrad paper., \n\n then we have the following formula:\n\n ϕiSG(f,x;N(0ˉ,σ2I))=1k∑j=1k(x+ϵσj)×δf(x+ϵσj)δxi\n\n \\phi\\_i^{SG}(f, x; N(\\bar{0}, \\sigma^2 I)) \n\n = \\frac{1}{k} \\sum\\_{j=1}^k \n\n (x + \\epsilon\\_{\\sigma}^{j}) \\times \n\n \\frac{\\delta f(x + \\epsilon\\_{\\sigma}^{j})}{\\delta x\\_i}\n\n ϕiSG​(f,x;N(0ˉ,σ2I))=k1​j=1∑k​(x+ϵσj​)×δxi​δf(x+ϵσj​)​\n\n We can see that SmoothGrad and expected gradients with a\n\n gaussian baseline are quite similar, with two key differences:\n\n gradients multiplies by just ϵσ\\epsilon\\_{\\sigma}ϵσ​, and while expected\n\n gradients samples uniformly along the path, SmoothGrad always\n\n samples the endpoint α=0\\alpha = 0α=0.\n\n \n\n Can this connection help us understand why SmoothGrad creates\n\n assuming that each of our pixel values is drawn from a\n\n gaussian *independently* of the other pixel values. But we know\n\n this is far from true: in images, there is a rich correlation structure\n\n between nearby pixels. Once your network knows the value of a pixel, \n\n it doesn’t really need to use its immediate neighbors because\n\n it’s likely that those immediate neighbors have very similar intensities.\n\n \n\n \n\n Assuming each pixel is drawn from an independent gaussian\n\n breaks this correlation structure. It means that expected gradients\n\n tabulates the importance of each pixel independently of\n\n the other pixel values. The generated saliency maps\n\n will be less noisy and better highlight the object of interest\n\n because we are no longer allowing the network to rely \n\n on only pixel in a group of correlated pixels. This may be\n\n why SmoothGrad is smooth: because it is implicitly assuming\n\n independence among pixels. 
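The two estimators are close enough that they can be written side by side. The sketch below follows the formulas above, with illustrative names and a toy gradient; it is not meant as a faithful reimplementation of either method's reference code:

```python
import numpy as np

def eg_gaussian(grad_f, x, sigma, k=500, rng=None):
    """Expected gradients with a gaussian baseline N(x, sigma^2 I):
    multiply by the noise and sample alpha uniformly along the path."""
    rng = rng or np.random.default_rng()
    total = np.zeros_like(x)
    for _ in range(k):
        eps = rng.normal(scale=sigma, size=x.shape)
        alpha = rng.uniform()
        total += eps * grad_f(x + (1.0 - alpha) * eps)
    return total / k

def smoothgrad_times_input(grad_f, x, sigma, k=500, rng=None):
    """SmoothGrad (gradients x input): multiply by the noisy input and always
    evaluate the gradient at the path endpoint alpha = 0."""
    rng = rng or np.random.default_rng()
    total = np.zeros_like(x)
    for _ in range(k):
        eps = rng.normal(scale=sigma, size=x.shape)
        total += (x + eps) * grad_f(x + eps)
    return total / k

# Toy comparison with f(x) = sigmoid(w . x).
w = np.array([2.0, -1.0, 0.5])
f = lambda z: 1.0 / (1.0 + np.exp(-w @ z))
grad_f = lambda z: f(z) * (1.0 - f(z)) * w
x = np.array([1.0, 2.0, -1.0])
print(eg_gaussian(grad_f, x, sigma=0.5, rng=np.random.default_rng(0)))
print(smoothgrad_times_input(grad_f, x, sigma=0.5, rng=np.random.default_rng(0)))
```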
In the figure below, you can compare\n\n integrated gradients with a single randomly drawn baseline\n\n to expected gradients sampled over a distribution. For\n\n the gaussian baseline, you can also toggle the SmoothGrad\n\n option to use the SmoothGrad formula above. For all figures,\n\n k=500k=500k=500.\n\n \n\n The difference between a single baseline and multiple\n\n baselines from the same distribution. Use the \n\n “Multi-Reference” button to toggle between the two. For the gaussian\n\n baseline, you can also toggle the “Smooth Grad” button\n\n to toggle between expected gradients and SmoothGrad\n\n with gradients \\* inputs.\n\n \n\n### Using the Training Distribution\n\n Is it really reasonable to assume independence among\n\n pixels while generating saliency maps? In supervised learning, \n\n we make the assumption that the data is drawn\n\n share a common, underlying distribution is what allows us to \n\n do supervised learning and make claims about generalizability. Given\n\n this assumption, we don’t need to\n\n model missingness using a gaussian or a uniform distribution:\n\n we can use DdataD\\_{\\text{data}}Ddata​ to model missingness directly.\n\n \n\n The only problem is that we do not have access to the underlying distribution.\n\n But because this is a supervised learning task, we do have access to many \n\n independent draws from the underlying distribution: the training data!\n\n We can simply use samples from the training data as random draws\n\n from DdataD\\_{\\text{data}}Ddata​. This brings us to the variant\n\n of expected gradients used in ,\n\n which we again visualize in three parts:\n\n \\frac{1}{k} \\sum\\_{j=1}^k \n\n \\underbrace{(x\\_i - x’^j\\_i) \\times \n\n \\frac{\\delta f(\\text{ } \n\n \\overbrace{x’^j + \\alpha^{j} (x - x’^j)}^{\\text{(1): Interpolated Image}}\n\n \\text{ })}{\\delta x\\_i}}\\_{\\text{ (2): Gradients at Interpolation}}\n\n = \\overbrace{\\hat{\\phi\\_i}^{EG}(f, x, k; D\\_{\\text{data}})}\n\n ^{\\text{(3): Cumulative Gradients up to }\\alpha}\n\n A visual representation of expected gradients. Instead of taking contributions\n\n from a single path, expected gradients averages contributions from \n\n all paths defined by the underlying data distribution. Note that\n\n this figure only displays every 10th sample to avoid loading many images.\n\n \n\n In (4) we again plot the sum of the importance scores over pixels. As mentioned\n\n gradients, satisfy the completeness axiom. We can definitely see that\n\n completeness is harder to satisfy when we integrate over both a path\n\n and a distribution: that is, with the same number\n\n of samples, expected gradients doesn’t converge as quickly as \n\n integrated gradients does. Whether or not this is an acceptable price to\n\n pay to avoid color-blindness in attributions seems subjective.\n\n \n\nComparing Saliency Methods\n\n--------------------------\n\n So we now have many different choices for a baseline. How do we choose\n\n which one we should use? The different choices of distributions and constant\n\n baselines have different theoretical motivations and practical concerns.\n\n Do we have any way of comparing the different baselines? 
In this section,\n\n we will touch on several different ideas about how to compare\n\n of all of the existing evaluation metrics, but is instead meant to \n\n emphasize that evaluating interpretability methods remains a difficult problem.\n\n \n\n### The Dangers of Qualitative Assessment\n\n One naive way to evaluate our baselines is to look at the saliency maps \n\n they produce and see which ones best highlight the object in the image. \n\n reasonable results, as does using a gaussian baseline or the blurred baseline.\n\n But is visual inspection really a good way judge our baselines? For one thing,\n\n we’ve only presented four images from the test set here. We would need to\n\n conduct user studies on a much larger scale with more images from the test\n\n set to be confident in our results. But even with large-scale user studies,\n\n qualitative assessment of saliency maps has other drawbacks.\n\n \n\n When we rely on qualitative assessment, we are assuming that humans\n\n know what an “accurate” saliency map is. When we look at saliency maps\n\n on data like ImageNet, we often check whether or not the saliency map\n\n highlights the object that we see as representing the true class in the image.\n\n We make an assumption between the data and the label, and then further assume\n\n that a good saliency map should reflect that assumption. But doing so\n\n has no real justification. Consider the figure below, which compares \n\n two saliency methods on a network that gets above 99% accuracy\n\n on (an altered version of) MNIST.\n\n The first saliency method is just an edge detector plus gaussian smoothing,\n\n while the second saliency method is expected gradients using the training\n\n data as a distribution. Edge detection better reflects what we humans\n\n think is the relationship between the image and the label.\n\n \n\nOriginal Image:Edge Detection:Expected Gradients:\n\n Qualitative assessment can be dangerous because we rely\n\n on our human knowledge of the relationship between\n\n the data and the labels, and then we assume\n\n that an accurate model has learned that very relationship.\n\n \n\n Unfortunately, the edge detection method here does not highlight \n\n what the network has learned. This dataset is a variant of \n\n decoy MNIST, in which the top left corner of the image has\n\n been altered to directly encode the image’s class\n\n . That is, the intensity\n\n of the top left corner of each image has been altered to \n\n be 255×y9255 \\times \\frac{y}{9} 255×9y​ where yyy is the class\n\n the image belongs to. We can verify by removing this\n\n patch in the test set that the network heavily relies on it to make\n\n predictions, which is what the expected gradients saliency maps show.\n\n \n\n This is obviously a contrived example. Nonetheless, the fact that\n\n visual assessment is not necessarily a useful way to evaluate \n\n saliency maps and attribution methods has been extensively\n\n discussed in recent literature, with many proposed qualitative\n\n tests as replacements \n\n .\n\n At the heart of the issue is that we don’t have ground truth explanations:\n\n we are trying to evaluate which methods best explain our network without\n\n actually knowing what our networks are doing.\n\n \n\n### Top K Ablation Tests\n\n One simple way to evaluate the importance scores that \n\n expected/integrated gradients produces is to see whether \n\n ablating the top k features as ranked by their importance\n\n decreases the predicted output logit. 
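A minimal version of this test, written against a toy linear "model" so that it stays self-contained (the helper names are ours, not an established benchmark implementation), might look like:

```python
import numpy as np

def topk_ablation_curve(predict, image, attributions, fractions, fill_value):
    """Ablate the top-k pixels by attribution and record the relative logit.

    predict: maps an image to the scalar logit of the true class.
    attributions: per-pixel importance scores of shape (height, width).
    fractions: fractions of pixels to ablate, e.g. [0.0, 0.25, 0.5, 1.0].
    fill_value: what an ablated pixel is replaced with (e.g. the channel means).
    """
    original_logit = predict(image)
    ranking = np.argsort(attributions.ravel())[::-1]  # most important pixels first
    curve = []
    for frac in fractions:
        flat = image.copy().reshape(-1, image.shape[-1])
        flat[ranking[:int(frac * len(ranking))]] = fill_value
        curve.append(predict(flat.reshape(image.shape)) / original_logit)
    return curve

# Toy "model": the logit is a weighted sum of pixel values, so exact per-pixel
# contributions are available to rank by.
rng = np.random.default_rng(0)
weights = rng.uniform(size=(8, 8, 3))
predict = lambda img: float((weights * img).sum())
image = rng.uniform(size=(8, 8, 3))
attributions = (weights * image).sum(axis=-1)
print(topk_ablation_curve(predict, image, attributions,
                          fractions=[0.0, 0.25, 0.5, 1.0],
                          fill_value=image.mean(axis=(0, 1))))
```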
In the figure below, we\n\n ablate either by mean-imputation or by replacing each pixel\n\n for 1000 different correctly classified test-set images using each\n\n of the baselines proposed above \n\n For the blur baseline and the blur\n\n ablation test, we use σ=20\\sigma = 20σ=20.\n\n For the gaussian baseline, we use σ=1\\sigma = 1σ=1. These choices\n\n are somewhat arbitrary - a more comprehensive evaluation\n\n would compare across many values of σ\\sigmaσ.\n\n . As a\n\n control, we also include ranking features randomly\n\n (*Random Noise* in the plot). \n\n \n\n We plot, as a fraction of the original logit, the output logit\n\n of the network at the true class. That is, suppose the original\n\n image is a goldfinch and the network predicts the goldfinch class correctly\n\n with 95% confidence. If the confidence of class goldfinch drops\n\n to 60% after ablating the top 10% of pixels as ranked by \n\n feature importance, then we plot a curve that goes through\n\n that best highlights which pixels the network \n\n should exhibit the fastest drop in logit magnitude, because\n\n it highlights the pixels that most increase the confidence of the network.\n\n That is, the lower the curve, the better the baseline.\n\n \n\n### Mass Center Ablation Tests\n\n One problem with ablating the top k features in an image\n\n is related to an issue we already brought up: feature correlation.\n\n No matter how we ablate a pixel, that pixel’s neighbors \n\n provide a lot of information about the pixel’s original value.\n\n With this in mind, one could argue that progressively ablating \n\n pixels one by one is a rather meaningless thing to do. Can\n\n we instead perform ablations with feature correlation in mind?\n\n \n\n One straightforward way to do this is simply compute the \n\n center of mass \n\n of the saliency map, and ablate a boxed region centered on\n\n the center of mass. This tests whether or not the saliency map\n\n is generally highlighting an important region in the image. We plot\n\n replacing the boxed region around the saliency map using mean-imputation\n\n and blurring below as well (*Mean Center* and *Blur Center*, respectively).\n\n As a control, we compare against a saliency map generated from random gaussian\n\n noise (*Random Noise* in the plot).\n\n \n\n A variety of ablation tests on a variety of baselines.\n\n Using the training distribution and using the uniform distribution\n\n outperform most other methods on the top k ablation tests. The\n\n blur baseline inspired by \n\n does equally well on the blur top-k test. All methods\n\n perform similarly on the mass center ablation tests. Mouse\n\n over the legend to highlight a single curve.\n\n \n\n The ablation tests seem to indicate some interesting trends. \n\n All methods do similarly on the mass center ablation tests, and\n\n only slightly better than random noise. This may be because the \n\n object of interest generally lies in the center of the image - it\n\n isn’t hard for random noise to be centered at the image. In contrast,\n\n using the training data or a uniform distribution seems to do quite well\n\n on the top-k ablation tests. Interestingly, the blur baseline\n\n inspired by also\n\n does quite well on the top k baseline tests, especially when\n\n we ablate pixels by blurring them! Would the uniform\n\n baseline do better if you ablate the image with uniform random noise?\n\n by progressively replacing it with a different image. 
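For completeness, the mass-center ablation described above is also simple to sketch; scipy's center-of-mass helper is assumed, and the function name is ours:

```python
import numpy as np
from scipy.ndimage import center_of_mass  # assumed available

def mass_center_ablation(image, saliency, box_size, fill_value):
    """Ablate a square region centered on the saliency map's center of mass."""
    row, col = center_of_mass(np.abs(saliency))
    top = max(int(round(row)) - box_size // 2, 0)
    left = max(int(round(col)) - box_size // 2, 0)
    ablated = image.copy()
    ablated[top:top + box_size, left:left + box_size] = fill_value
    return ablated

# Toy usage: ablate a 4x4 box around the most salient region of an 8x8 image.
rng = np.random.default_rng(0)
image, saliency = rng.uniform(size=(8, 8, 3)), rng.uniform(size=(8, 8))
print(mass_center_ablation(image, saliency, box_size=4,
                           fill_value=image.mean(axis=(0, 1))).shape)
```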
We leave\n\n these experiments as future work, as there is a more pressing question\n\n we need to discuss.\n\n \n\n### The Pitfalls of Ablation Tests\n\n Constant baselines tend to not need as many samples\n\n comparing not only across baselines but also across number of samples drawn, \n\n and for the blur and gaussian baselines, the parameter σ\\sigmaσ.\n\n As mentioned above, we have defined many notions of missingness other than \n\n mean-imputation or blurring: more extensive comparisons would also compare\n\n all of our baselines across all of the corresponding notions of missing data.\n\n \n\n But even with all of these added comparisons, do ablation\n\n tests really provide a well-founded metric to judge attribution methods? \n\n The authors of argue\n\n against ablation tests. They point out that once we artificially ablate\n\n pixels an image, we have created inputs that do not come from\n\n the original data distribution. Our trained model has never seen such \n\n inputs. Why should we expect to extract any reasonable information\n\n from evaluating our model on them?\n\n \n\n On the other hand, integrated gradients and expected gradients\n\n rely on presenting interpolated images to your model, and unless\n\n you make some strange convexity assumption, those interpolated images \n\n don’t belong to the original training distribution either. \n\n In general, whether or not users should present\n\n is a subject of ongoing debate\n\n . Nonetheless, \n\n the point raised in is still an\n\n important one: “it is unclear whether the degradation in model \n\n performance comes from the distribution shift or because the \n\n features that were removed are truly informative.”\n\n \n\n### Alternative Evaluation Metrics\n\n So what about other evaluation metrics proposed in recent literature? In\n\n , Hooker et al. propose a variant of\n\n an ablation test where we first ablate pixels in the training and\n\n test sets. Then, we re-train a model on the ablated data and measure\n\n by how much the test-set performance degrades. This approach has the advantage\n\n of better capturing whether or not the saliency method\n\n highlights the pixels that are most important for predicting the output class.\n\n Unfortunately, it has the drawback of needing to re-train the model several\n\n times. This metric may also get confused by feature correlation.\n\n \n\n Consider the following scenario: our dataset has two features \n\n that are highly correlated. We train a model which learns to only\n\n use the first feature, and completely ignore the second feature.\n\n A feature attribution method might accurately reveal what the model is doing:\n\n re-train the model and get similar performance because similar information \n\n is stored in the second feature. We might conclude that our feature\n\n attribution method is lousy - is it? This problem fits into a larger discussion\n\n about whether or not your attribution method\n\n should be “true to the model” or “true to the data”\n\n which has been discussed in several recent articles\n\n .\n\n \n\n In , the authors propose several\n\n sanity checks that saliency methods should pass. One is the “Model Parameter\n\n Randomization Test”. Essentially, it states that a feature attribution\n\n method should produce different attributions when evaluated on a trained\n\n model (assumedly a trained model that performs well) and a randomly initialized\n\n model. 
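One simple way to quantify this comparison is sketched below: rank-correlate the two saliency maps. The use of Spearman correlation here is our illustrative choice and is not necessarily the exact metric used in the cited work:

```python
import numpy as np
from scipy.stats import spearmanr  # assumed available

def randomization_check(saliency_trained, saliency_random):
    """Model parameter randomization test: a low rank correlation between the
    two maps suggests the attribution method really depends on the model."""
    rho, _ = spearmanr(np.abs(saliency_trained).ravel(),
                       np.abs(saliency_random).ravel())
    return rho

# Toy usage with two unrelated random maps; the correlation should be near zero.
rng = np.random.default_rng(0)
print(randomization_check(rng.normal(size=(28, 28)), rng.normal(size=(28, 28))))
```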
This metric is intuitive: if a feature attribution method produces\n\n similar attributions for random and trained models, is the feature\n\n attribution really using information from the model? It might just\n\n be relying entirely on information from the input image.\n\n \n\n But consider the following figure, which is another (modified) version\n\n of MNIST. We’ve generated expected gradients attributions using the training\n\n distribution as a baseline for two different networks. One of the networks\n\n is a trained model that gets over 99% accuracy on the test set. The other\n\n Should we now conclude that expected gradients is an unreliable method?\n\n \n\nOriginal Image:Network 1 Saliency:Network 2 Saliency:\n\n A comparison of two network’s saliency maps using expected gradients. One\n\n network has randomly initialized weights, the other gets >99% accuracy\n\n on the test set.\n\n \n\n you would run these kinds of saliency method sanity checks on un-modified data.\n\n \n\n But the truth is, even for natural images, we don’t actually\n\n know what an accurate model’s saliency maps should look like. \n\n Different architectures trained on ImageNet can all get good performance\n\n and have very different saliency maps. Can we really say that \n\n trained models should have saliency maps that don’t look like \n\n saliency maps generated on randomly initialized models? That isn’t\n\n to say that the model randomization test doesn’t have merit: it\n\n does reveal interesting things about what saliency methods are are doing.\n\n It just doesn’t tell the whole story.\n\n \n\n .\n\n Each proposed metric comes with their various pros and cons. \n\n we don’t know what our model is doing and have no ground truth to compare\n\n against.\n\n \n\nConclusion\n\n----------\n\n So what should be done? We have many baselines and \n\n no conclusion about which one is the “best.” Although\n\n we don’t provide extensive quantitative results\n\n comparing each baseline, we do provide a foundation\n\n for understanding them further. At the heart of\n\n each baseline is an assumption about missingness \n\n in our model and the distribution of our data. In this article,\n\n we shed light on some of those assumptions, and their impact\n\n on the corresponding path attribution. We lay\n\n groundwork for future discussion about baselines in the\n\n context of path attributions, and more generally about\n\n the relationship between representations of missingness \n\n and how we explain machine learning models.\n\n \n\n \n\n A side-by-side comparison of integrated gradients\n\n using a black baseline \n\n and expected gradients using the training data\n\n as a baseline.\n\n \n\nRelated Methods\n\n---------------\n\n This work focuses on a specific interpretability method: integrated gradients\n\n and its extension, expected gradients. We refer to these\n\n methods as path attribution methods because they integrate \n\n importances over a path. However, path attribution methods\n\n represent only a tiny fraction of existing interpretability methods. 
We focus\n\n on them here both because they are amenable to interesting visualizations,\n\n and because they provide a springboard for talking about missingness.\n\n We briefly cited several other methods at the beginning of this article.\n\n Many of those methods use some notion of baseline and have contributed to\n\n the discussion surrounding baseline choices.\n\n \n\n In , Fong and Vedaldi propose\n\n a model-agnostic method to explain neural networks that is based\n\n on learning the minimal deletion to an image that changes the model\n\n prediction. In section 4, their work contains an extended discussion on \n\n how to represent deletions: that is, how to represent missing pixels. They\n\n argue that one natural way to delete pixels in an image is to blur them.\n\n This discussion inspired the blurred baseline that we presented in our article.\n\n They also discuss how noise can be used to represent missingness, which\n\n was part of the inspiration for our uniform and gaussian noise baselines.\n\n \n\n In , Shrikumar et al. \n\n propose a feature attribution method called deepLIFT. It assigns\n\n importance scores to features by propagating scores from the output\n\n of the model back to the input. Similar to integrated gradients,\n\n deepLIFT also defines importance scores relative to a baseline, which\n\n they call the “reference”. Their paper has an extended discussion on\n\n why explaining relative to a baseline is meaningful. They also discuss\n\n a few different baselines, including “using a blurred version of the original\n\n image”. \n\n \n\n The list of other related methods that we didn’t discuss\n\n in this article goes on: SHAP and DeepSHAP\n\n ,\n\n layer-wise relevance propagation ,\n\n LIME ,\n\n RISE and \n\n Grad-CAM \n\n among others. Many methods for explaining machine learning models\n\n define some notion of baseline or missingness, \n\n because missingness and explanations are closely related. When we explain\n\n a model, we often want to know which features, when missing, would most\n\n change model output. But in order to do so, we need to define \n\n what missing means because most machine learning models cannot\n\n handle arbitrary patterns of missing inputs. 
This article\n\n does not discuss all of the nuances presented alongside\n\n each existing method, but it is important to note that these methods were\n\n points of inspiration for a larger discussion about missingness.\n\n \n\n", "bibliography_bib": [{"title": "Axiomatic attribution for deep networks"}, {"title": "Using a deep learning algorithm and integrated gradients explanation to assist grading for diabetic retinopathy"}, {"title": "Ensembling convolutional and long short-term memory networks for electrocardiogram arrhythmia detection"}, {"title": "Inception-v4, inception-resnet and the impact of residual connections on learning"}, {"title": "Imagenet: A large-scale hierarchical image database"}, {"title": "Tensorflow-slim image classification model library"}, {"title": "The Building Blocks of Interpretability"}, {"title": "Feature Visualization"}, {"title": "Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav)"}, {"title": "Visualizing and understanding convolutional networks"}, {"title": "Visualizing higher-layer features of a deep network"}, {"title": "Understanding deep image representations by inverting them"}, {"title": "Net2vec: Quantifying and explaining how concepts are encoded by filters in deep neural networks"}, {"title": "\"Why Should I Trust You?\": Explaining the Predictions of Any Classifier"}, {"title": "A unified approach to interpreting model predictions"}, {"title": "Layer-wise relevance propagation for neural networks with local renormalization layers"}, {"title": "Learning important features through propagating activation differences"}, {"title": "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps"}, {"title": "Interpretable explanations of black boxes by meaningful perturbation"}, {"title": "Learning deep features for discriminative localization"}, {"title": "Grad-cam: Visual explanations from deep networks via gradient-based localization"}, {"title": "Smoothgrad: removing noise by adding noise"}, {"title": "Rise: Randomized input sampling for explanation of black-box models"}, {"title": "Understanding the difficulty of training deep feedforward neural networks"}, {"title": "Gradients of counterfactuals"}, {"title": "Values of non-atomic games"}, {"title": "A note about: Local explanation methods for deep neural networks lack sensitivity to parameter values"}, {"title": "The (Un)reliability of saliency methods"}, {"title": "Towards better understanding of gradient-based attribution methods for Deep Neural Networks"}, {"title": "Learning Explainable Models Using Attribution Priors"}, {"title": "XRAI: Better Attributions Through Regions"}, {"title": "Right for the right reasons: Training differentiable models by constraining their explanations"}, {"title": "A Benchmark for Interpretability Methods in Deep Neural Networks"}, {"title": "On the (In)fidelity and Sensitivity for Explanations"}, {"title": "Sanity Checks for Saliency Maps"}, {"title": "Benchmarking Attribution Methods with Relative Feature Importance"}, {"title": "Do Explanations Reflect Decisions? A Machine-centric Strategy to Quantify the Performance of Explainability Algorithms"}, {"title": "How do Humans Understand Explanations from Machine Learning Systems? 
An Evaluation of the Human-Interpretability of Explanation"}, {"title": "Interpretation of neural networks is fragile"}, {"title": "The many Shapley values for model explanation"}, {"title": "Feature relevance quantification in explainable AI: A causality problem"}, {"title": "Explaining Models by Propagating Shapley Values of Local Components"}], "filename": "Visualizing the Impact of Feature Attribution Baselines.html", "id": "799948a526f1f43ead1c506661b89dd6"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "AI Safety Needs Social Scientists", "authors": ["Geoffrey Irving", "Amanda Askell"], "date_published": "2019-02-19", "abstract": " The goal of long-term artificial intelligence (AI) safety is to ensure that advanced AI systems are reliably aligned with human values — that they reliably do things that people want them to do.Roughly by human values we mean whatever it is that causes people to choose one option over another in each case, suitably corrected by reflection, with differences between groups of people taken into account. There are a lot of subtleties in this notion, some of which we will discuss in later sections and others of which are beyond the scope of this paper. Since it is difficult to write down precise rules describing human values, one approach is to treat aligning with human values as another learning problem. We ask humans a large number of questions about what they want, train an ML model of their values, and optimize the AI system to do well according to the learned values. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00014", "text": "\n\n The goal of long-term artificial intelligence (AI) safety is to \n\nensure that advanced AI systems are reliably aligned with human \n\nvalues — that they reliably do things that people want them to do.Roughly\n\n by human values we mean whatever it is that causes people to choose one\n\n option over another in each case, suitably corrected by reflection, \n\nwith differences between groups of people taken into account. There are\n\n a lot of subtleties in this notion, some of which we will discuss in \n\nlater sections and others of which are beyond the scope of this paper.\n\n Since it is difficult to write down precise rules describing human \n\nvalues, one approach is to treat aligning with human values as another \n\nlearning problem. We ask humans a large number of questions about what \n\nthey want, train an ML model of their values, and optimize the AI system\n\n to do well according to the learned values.\n\n \n\n If humans reliably and accurately answered all questions about their\n\n values, the only uncertainties in this scheme would be on the machine \n\nlearning (ML) side. If the ML works, our model of human values would \n\nimprove as data is gathered, and broaden to cover all the decisions \n\nrelevant to our AI system as it learns. Unfortunately, humans have \n\nlimited knowledge and reasoning ability, and exhibit a variety of \n\ncognitive and ethical biases.\n\n If we learn values by asking humans questions, we expect different ways\n\n of asking questions to interact with human biases in different ways, \n\nproducing higher or lower quality answers. Direct questions about \n\n Different people may vary significantly in their ability to answer \n\nquestions well, and disagreements will persist across people even \n\nsetting aside answer quality. 
Although we have candidates for ML \n\n \n\n We believe the AI safety community needs to invest research effort \n\nin the human side of AI alignment. Many of the uncertainties involved \n\nare empirical, and can only be answered by experiment. They relate to \n\nthe psychology of human rationality, emotion, and biases. Critically, \n\nwe believe investigations into how people interact with AI alignment \n\nalgorithms should not be held back by the limitations of existing \n\nmachine learning. Current AI safety research is often limited to simple\n\n tasks in video games, robotics, or gridworlds,\n\n but problems on the human side may only appear in more realistic \n\nscenarios such as natural language discussion of value-laden questions. \n\n This is particularly important since many aspects of AI alignment \n\nchange as ML systems [increase in capability](#harder).\n\n \n\n To avoid the limitations of ML, we can instead conduct experiments \n\nconsisting entirely of people, replacing ML agents with people playing \n\nthe role of those agents. This is a variant of the “Wizard of Oz” \n\ntechnique from the human-computer interaction (HCI) community,\n\n though in our case the replacements will not be secret. These \n\nexperiments will be motivated by ML algorithms but will not involve any \n\nML systems or require an ML background. In all cases, they will require\n\n careful experimental design to build constructively on existing \n\nknowledge about how humans think. Most AI safety researchers are \n\nfocused on machine learning, which we do not believe is sufficient \n\nbackground to carry out these experiments. To fill the gap, we need \n\nsocial scientists with experience in human cognition, behavior, and \n\nethics, and in the careful design of rigorous experiments. Since the \n\nquestions we need to answer are interdisciplinary and somewhat unusual \n\nrelative to existing research, we believe many fields of social science \n\nare applicable, including experimental psychology, cognitive science, \n\neconomics, political science, and social psychology, as well as adjacent\n\n fields like neuroscience and law.\n\n \n\n This paper is a call for social scientists in AI safety. We believe\n\n close collaborations between social scientists and ML researchers will \n\nbe necessary to improve our understanding of the human side of AI \n\nalignment, and hope this paper sparks both conversation and \n\ncollaboration. We do not claim novelty: previous work mixing AI safety \n\n Our main goal is to enlarge these collaborations and emphasize their \n\nimportance to long-term AI safety, particularly for tasks which current \n\nML cannot reach.\n\n \n\nAn overview of AI alignment\n\n---------------------------\n\n Before discussing how social scientists can help with AI safety and \n\nthe AI alignment problem, we provide some background. We do not attempt\n\n to be exhaustive: the goal is to provide sufficient background for the \n\nremaining sections on social science experiments. Throughout, we will \n\nspeak primarily about aligning to the values of an individual human \n\nrather than a group: this is because the problem is already hard for a \n\nsingle person, not because the group case is unimportant.\n\n \n\n distinguish between training AI systems to identify actions that humans\n\n consider good and training AI systems to identify actions that are \n\n“good” in some objective and universal sense, even if most current \n\nhumans do not consider them so. 
Whether there are actions that are good\n\n in this latter sense is a subject of debate.\n\n Regardless of what position one takes on this philosophical question, \n\nthis sense of good is not yet available as a target for AI training.\n\n Here we focus on the machine learning approach to AI: gathering a \n\nlarge amount of data about what a system should do and using learning \n\nalgorithms to infer patterns from that data that generalize to other \n\nsituations. Since we are trying to behave in accord with people’s \n\nvalues, the most important data will be data from humans about their \n\nvalues. Within this frame, the AI alignment problem breaks down into a \n\nfew interrelated subproblems:\n\n \n\n1. Have a satisfactory definition of human values.\n\n2. Gather data about human values, in a manner compatible with the definition.\n\n3. Find reliable ML algorithms that can learn and generalize from this data.\n\n We have significant uncertainty about all three of these problems. \n\nWe will leave the third problem to other ML papers and focus on the \n\nfirst two, which concern uncertainties about people.\n\n \n\n### Learning values by asking humans questions\n\n We start with the premise that human values are too complex to \n\ndescribe with simple rules. By “human values” we mean our full set of \n\ndetailed preferences, not general goals such as “happiness” or \n\n“loyalty”. One source of complexity is that values are entangled with a\n\n large number of facts about the world, and we cannot cleanly separate \n\nfacts from values when building ML models. For example, a rule that \n\nrefers to “gender” would require an ML model that accurately recognizes \n\nthis concept, but Buolamwini and Gebru found that several commercial \n\ngender classifiers with a 1% error rate on white men failed to recognize\n\n Finally, our values may vary across cultures, legal systems, or \n\nsituations: no learned model of human values will be universally \n\napplicable.\n\n \n\n If humans can’t reliably report the reasoning behind their \n\nintuitions about values, perhaps we can make value judgements in \n\nspecific cases. To realize this approach in an ML context, we ask \n\nhumans a large number of questions about whether an action or outcome is\n\n better or worse, then train on this data. “Better or worse” will \n\ninclude both factual and value-laden components: for an AI system \n\ntrained to say things, “better” statements might include “rain falls \n\nfrom clouds”, “rain is good for plants”, “many people dislike rain”, \n\netc. If the training works, the resulting ML system will be able to \n\nreplicate human judgement about particular situations, and thus have the\n\n same “fuzzy access to approximate rules” about values as humans. We \n\nalso train the ML system to come up with proposed actions, so that it \n\nknows both how to perform a task and how to judge its performance. This\n\n approach works at least in simple cases, such as Atari games and simple\n\n robotics tasks and language-specified goals in gridworlds.\n\n The questions we ask change as the system learns to perform different \n\ntypes of actions, which is necessary as the model of what is better or \n\nworse will only be accurate if we have applicable data to generalize \n\nfrom.\n\n \n\n In practice, data in the form of interactive human questions may be \n\nquite limited, since people are slow and expensive relative to computers\n\n on many tasks. 
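To make the "better or worse" questions concrete, here is a minimal sketch of learning a scalar reward model from pairwise human comparisons, in the spirit of the preference-learning work referenced above. Everything in it, including the linear feature model, the training loop, and the synthetic "human" answers, is an illustrative assumption rather than the setup used in that work.

```python
import numpy as np

# Minimal sketch: learn a scalar "reward" r(x) = w . phi(x) from pairwise
# human judgments of the form "outcome A is better than outcome B", using a
# Bradley-Terry / logistic model. phi, w, and the data are all illustrative.

rng = np.random.default_rng(0)
n_features = 8

def reward(w, phi):
    return phi @ w

def train_reward_model(comparisons, n_steps=2000, lr=0.1):
    """comparisons: list of (phi_better, phi_worse) feature pairs."""
    w = np.zeros(n_features)
    for _ in range(n_steps):
        grad = np.zeros(n_features)
        for phi_a, phi_b in comparisons:
            # P(A preferred over B) under the current model
            p = 1.0 / (1.0 + np.exp(-(reward(w, phi_a) - reward(w, phi_b))))
            # Gradient of the log-likelihood of the human's choice (A preferred)
            grad += (1.0 - p) * (phi_a - phi_b)
        w += lr * grad / len(comparisons)
    return w

# Fake "human" preferences generated from a hidden ground-truth weight vector,
# standing in for answers to "which outcome is better?"
true_w = rng.normal(size=n_features)
comparisons = []
for _ in range(200):
    a, b = rng.normal(size=(2, n_features))
    better, worse = (a, b) if a @ true_w > b @ true_w else (b, a)
    comparisons.append((better, worse))

w = train_reward_model(comparisons)
print("agreement with the comparisons:",
      np.mean([reward(w, x) > reward(w, y) for x, y in comparisons]))
```

In a real system the reward model would be a neural network and the comparisons would come from people; the cost of collecting those comparisons is exactly why the amount of human data matters.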
Therefore, we can augment the “train from human \n\nquestions” approach with static data from other sources, such as books \n\nor the internet. Ideally, \n\nthe static data can be treated only as information about the world \n\ndevoid of normative content: we can use it to learn patterns about the \n\nworld, but the human data is needed to distinguish good patterns from \n\nbad.\n\n \n\n### Definitions of alignment: reasoning and reflective equilibrium\n\n So far we have discussed asking humans direct questions about \n\nwhether something is better or worse. Unfortunately, we do not expect \n\npeople to provide reliably correct answers in all cases, for several \n\nreasons:\n\n \n\n1. **Cognitive and ethical biases:**\n\n In general, we expect direct answers to questions to reflect primarily\n\n Type 1 thinking (fast heuristic judgment), while we would like to \n\ntarget a combination of Type 1 and Type 2 thinking (slow, deliberative \n\njudgment).\n\n2. **Lack of domain knowledge:**\n\n We may be interested in questions that require domain knowledge \n\nunavailable to people answering the questions. For example, a correct \n\nanswer to whether a particular injury constitutes medical malpractice \n\nmay require detailed knowledge of medicine and law. In some cases, a \n\nquestion might require so many areas of specialized expertise that no \n\none person is sufficient, or (if AI is sufficiently advanced) deeper \n\nexpertise than any human possesses.\n\n3. **Limited cognitive capacity:**\n\n Some questions may require too much computation for a human to \n\nreasonably evaluate, especially in a short period of time. This \n\nincludes synthetic tasks such as chess and Go (where AIs already surpass\n\n4. **“Correctness” may be local:**\n\n For questions involving a community of people, “correct” may be a\n\n function of complex processes or systems. For example, in a trust game,\n\n the correct action for a trustee in one community may be to return at \n\nleast half of the money handed over by the investor, and the \n\n“correctness” of this answer could be determined by asking a group of \n\nparticipants in a previous game “how much should the trustee return to \n\nthe investor” but not by asking them “how much do most trustees return?”\n\n The answer may be different in other communities or cultures.\n\n In these cases, a human may be unable to provide the right answer, \n\nbut we still believe the right answer exists as a meaningful concept. \n\nWe have many conceptual biases: imagine we point out these biases in a \n\nway that helps the human to avoid them. Imagine the human has access to\n\n all the knowledge in the world, and is able to think for an arbitrarily\n\n long time. We could define alignment as “the answer they give then, \n\nafter these limitations have been removed”; in philosophy this is known \n\n \n\n However, the behavior of reflective equilibrium with actual humans \n\nis subtle; as Sugden states, a human is not “a neoclassically rational \n\nentity encased in, and able to interact with the world only through, an \n\nerror-prone psychological shell.”\n\n Our actual moral judgments are made via a messy combination of many \n\ndifferent brain areas, where reasoning plays a “restricted but \n\nsignificant role”. 
A reliable \n\nsolution to the alignment problem that uses human judgment as input will\n\n need to engage with this complexity, and ask how specific alignment \n\ntechniques interact with actual humans.\n\n \n\n### Disagreements, uncertainty, and inaction: a hopeful note\n\n A solution to alignment does not mean knowing the answer to every \n\nquestion. Even at reflective equilibrium, we expect disagreements will \n\npersist about which actions are good or bad, across both different \n\nindividuals and different cultures. Since we lack perfect knowledge \n\nabout the world, reflective equilibrium will not eliminate uncertainty \n\nabout either future predictions or values, and any real ML system will \n\nbe at best an approximation of reflective equilibrium. In these cases, \n\nwe consider an AI aligned if it recognizes what it does not know and \n\nchooses actions which work however that uncertainty plays out.\n\n \n\n Admitting uncertainty is not always enough. If our brakes fail \n\nwhile driving a car, we may be uncertain whether to dodge left or right \n\naround an obstacle, but we have to pick one — and fast. For long-term \n\nsafety, however, we believe a safe fallback usually exists: inaction. \n\nIf an ML system recognizes that a question hinges on disagreements \n\nbetween people, it can either choose an action which is reasonable \n\nregardless of the disagreement or fall back to further human \n\ndeliberation. If we are about to make a decision that might be \n\ncatastrophic, we can delay and gather more data. Inaction or indecision\n\n may not be optimal, but it is hopefully safe, and matches the default \n\nscenario of not having any powerful AI system.\n\n \n\n### Alignment gets harder as ML systems get smarter\n\n and mismatch between human values and easily available data sources \n\n(such as training news feeds based on clicks and likes instead of \n\ndeliberate human preferences). However, we expect the alignment problem\n\n to get harder as AI systems grow more advanced, for two reasons. \n\nFirst, advanced systems will apply to increasingly consequential tasks: \n\nhiring, medicine, scientific analysis, public policy, etc. Besides \n\nraising the stakes, these tasks require more reasoning, leading to more \n\ncomplex alignment algorithms.\n\n \n\n Second, advanced systems may be capable of answers that sound \n\nplausible but are wrong in nonobvious ways, even if an AI is better than\n\n humans only in a limited domain (examples of which already exist).\n\n This type of misleading behavior is not the same as intentional \n\ndeception: an AI system trained from human data might have no notion of \n\ntruth separate from what answers humans say are best. Ideally, we want \n\nAI alignment algorithms to reveal misleading behavior as part of the \n\ntraining process, surfacing failures to humans and helping us provide \n\nmore accurate data. As with human-to-human deception, misleading \n\nbehavior might take advantage of our biases in complicated ways, such as\n\n learning to express policy arguments in coded racial language to sound \n\nmore convincing.\n\n \n\nDebate: learning human reasoning\n\n--------------------------------\n\n Before we discuss social science experiments for AI alignment in \n\ndetail, we need to describe a particular method for AI alignment. \n\nAlthough the need for social science experiments applies even to direct \n\nquestioning, this need intensifies for methods which try to get at \n\nreasoning and reflective equilibrium. 
As discussed above, it is unclear\n\n whether reflective equilibrium is a well defined concept when applied \n\nto humans, and at a minimum we expect it to interact with cognitive and \n\nethical biases in complex ways. Thus, for the remainder of this paper \n\nwe focus on a specific proposal for learning reasoning-oriented \n\n \n\n We describe the debate approach to AI alignment in the question \n\nanswering setting. Given a question, we have two AI agents engage in a \n\ndebate about the correct answer, then show the transcript of the debate \n\nto a human to judge. The judge decides which debater gave the most \n\ntrue, useful information, and declares that debater the winner.We\n\n can also allow ties. Indeed, if telling the truth is the winning \n\nstrategy ties will be common with strong play, as disagreeing with a \n\ntrue statement would lose. This defines a two player zero \n\nsum game between the debaters, where the goal is to convince the human \n\nthat one’s answer is correct. Arguments in a debate can consist of \n\nanything: reasons for an answer, rebuttals of reasons for the alternate \n\nanswer, subtleties the judge might miss, or pointing out biases which \n\nmight mislead the judge. Once we have defined this game, we can train \n\nAI systems to play it similarly to how we train AIs to play other games \n\nsuch as Go or Dota 2. Our hope is that the following hypothesis holds:\n\n \n\n \n\n### An example of debate\n\n Imagine we’re building a personal assistant that helps people decide\n\n where to go on vacation. The assistant has knowledge of people’s \n\nvalues, and is trained via debate to come up with convincing arguments \n\nthat back up vacation decisions. As the human judge, you know what \n\ndestinations you intuitively think are better, but have limited \n\nknowledge about the wide variety of possible vacation destinations and \n\ntheir advantages and disadvantages. A debate about the question “Where \n\nshould I go on vacation?” might open as follows:\n\n \n\n1. Where should I go on vacation?\n\n2. Alaska.\n\n3. Bali.\n\n If you are able to reliably decide between these two destinations, \n\nwe could end here. Unfortunately, Bali has a hidden flaw:\n\n \n\n3. Bali is out since your passport won’t arrive in time.\n\n At this point it looks like Red wins, but Blue has one more countermove:\n\n \n\n4. Expedited passport service only takes two weeks.\n\n Here Red fails to think of additional points, and loses to Blue and \n\nBali. Note that a debate does not need to cover all possible arguments.\n\n There are many other ways the debate could have gone, such as:\n\n \n\n1. Alaska.\n\n2. Bali.\n\n3. Bali is way too hot.\n\n4. You prefer too hot to too cold.\n\n5. Alaska is pleasantly warm in the summer.\n\n6. It's January.\n\n This debate is also a loss for Red (arguably a worse loss). Say we \n\nbelieve Red is very good at debate, and is able to predict in advance \n\nwhich debates are more likely to win. If we see only the first debate \n\nabout passports and decide in favor of Bali, we can take that as \n\nevidence that any other debate would have also gone for Bali, and thus \n\nthat Bali is the correct answer. 
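Since debate defines a two-player zero-sum game, the claim that a single well-played debate is evidence about the whole tree can be phrased in game-tree terms: the value of the tree under optimal play determines which answer wins, and strong debaters following one line of play are, in effect, tracing a path consistent with that value. The following is a minimal sketch of that idea; the toy tree and the judge verdicts at its leaves are purely illustrative.

```python
# Minimal sketch: a debate as a two-player zero-sum game over a tree of
# arguments. Leaves hold the judge's verdict (+1 if the judge would side with
# Blue, -1 for Red). Under optimal play, the value at the root tells us which
# answer wins, even though any single debate explores only one path.

def value(node, blue_to_move=True):
    """Game value of a debate subtree under optimal play by both debaters."""
    if isinstance(node, (int, float)):   # leaf: judge's verdict
        return node
    children = [value(child, not blue_to_move) for child in node]
    return max(children) if blue_to_move else min(children)

# A toy tree for "Alaska vs. Bali": Blue chooses an opening argument,
# then Red chooses the most damaging rebuttal, and the leaf is how the
# judge would rule on that exchange.
debate_tree = [
    [+1, -1],
    [+1, +1],
]

print(value(debate_tree))  # +1: Blue's answer wins under optimal play
```

A real debate tree is far too large to enumerate, which is why we rely on strong debaters to select the path instead of exploring the tree exhaustively.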
A larger portion of this hypothetical \n\ndebate tree is shown below:\n\n \n\n[1](#figure-debate-tree)\n\n A hypothetical partial debate tree for the question “Where \n\nshould I go on vacation?” A single debate would explore only one of \n\nthese paths, but a single path chosen by good debaters is evidence that \n\nother paths would not change the result of the game.\n\n \n\n If trained debaters are bad at predicting which debates will win, \n\nanswer quality will degrade since debaters will be unable to think of \n\nimportant arguments and counterarguments. However, as long as the two \n\nsides are reasonably well matched, we can hope that at least the results\n\n are not malicious: that misleading behavior is still a losing strategy.\n\n Let’s set aside the ability of the debaters for now, and turn to the \n\nability of the judge.\n\n \n\n### Are people good enough as judges?\n\n> \n\n> “In fact, almost everything written at a practical level about the\n\n> Turing test is about how to make good bots, with a small remaining \n\n> fraction about how to be a good judge.”\n\n> Brian Christian, The Most Human Human\n\n> \n\n As with learning by asking humans direct questions, whether debate \n\nproduces aligned behavior depends on the reasoning abilities of the \n\nhuman judge. Unlike direct questioning, debate has the potential to \n\ngive correct answers beyond what the judge could provide without \n\nassistance. This is because a sufficiently strong judge could follow \n\nalong with arguments the judge could not come up with on their own, \n\nchecking complex reasoning for both self consistency and consistency \n\nwith human-checkable facts. A judge who is biased but willing to adjust\n\n once those biases are revealed could result in unbiased debates, or a \n\njudge who is able to check facts but does not know where to look could \n\nbe helped along by honest debaters. If the hypothesis holds, a \n\nmisleading debater would not be able to counter the points of an honest \n\ndebater, since the honest points would appear more consistent to the \n\njudge.\n\n \n\n On the other hand, we can also imagine debate going the other way: \n\namplifying biases and failures of reason. A judge with an ethical bias \n\nwho is happy to accept statements reinforcing that bias could result in \n\neven more biased debates. A judge with too much confirmation bias might\n\n happily accept misleading sources of evidence, and be unwilling to \n\naccept arguments showing why that evidence is wrong. In this case, an \n\noptimal debate agent might be quite malicious, taking advantage of \n\nbiases and weakness in the judge to win with convincing but wrong \n\narguments.The difficulties that cognitive \n\nbiases, prejudice, and social influence introduce to persuasion ‒ as \n\nwell as methods for reducing these factors ‒ are being increasingly \n\nexplored in psychology, communication science, and neuroscience.\n\n In both these cases, debate acts as an amplifier. For strong \n\njudges, this amplification is positive, removing biases and simulating \n\nextra reasoning abilities for the judge. For weak judges, the biases \n\nand weaknesses would themselves be amplified. 
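One way to make the amplifier intuition concrete is a toy model in the spirit of Condorcet's jury theorem; the independence assumption and the majority rule below are illustrative simplifications, not a claim about how real judges behave. Suppose a debate surfaces k checkable claims, the judge gets each one right with probability p, and the verdict follows the majority of those checks:

```python
# Toy amplification model (an illustrative assumption, not the paper's model):
# the verdict is correct when the judge gets a majority of k independent
# checkable claims right, each with probability p.

from math import comb

def verdict_accuracy(p, k):
    """Probability the final verdict is correct under the majority model (odd k)."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

for p in (0.4, 0.5, 0.6, 0.7):
    print(p, [round(verdict_accuracy(p, k), 3) for k in (1, 5, 15)])
# p = 0.6 -> 0.600, 0.683, 0.787  (amplified up)
# p = 0.4 -> 0.400, 0.317, 0.213  (amplified down)
```

In this toy model the amplification flips sharply at p = 0.5: per-claim accuracy above one half is amplified toward reliable verdicts, and accuracy below one half toward reliably wrong ones.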
If this model holds, \n\ndebate would have threshold behavior: it would work for judges above \n\nsome threshold of ability and fail below the threshold.The\n\n threshold model is only intuition, and could fail for a variety of \n\nreasons: the intermediate region could be very large, or the threshold \n\ncould differ widely per question so that even quite strong judges are \n\ninsufficient for many questions. Assuming the threshold \n\nexists, it is unclear whether people are above or below it. People are \n\ncapable of general reasoning, but our ability is limited and riddled \n\nwith cognitive biases. People are capable of advanced ethical sentiment\n\n but also full of biases, both conscious and unconscious.\n\n \n\n Thus, if debate is the method we use to align an AI, we need to know\n\n if people are strong enough as judges. In other words, whether the \n\nhuman judges are sufficiently good at discerning whether a debater is \n\ntelling the truth or not. This question depends on many details: the \n\ntype of questions under consideration, whether judges are trained or \n\nnot, and restrictions on what debaters can say. We believe experiment \n\nwill be necessary to determine whether people are sufficient judges, and\n\n which form of debate is most truth-seeking.\n\n \n\n### From superforecasters to superjudges\n\n An analogy with the task of probabilistic forecasting is useful \n\nhere. Tetlock’s “Good Judgment Project” showed that some amateurs were \n\nsignificantly better at forecasting world events than both their peers \n\nand many professional forecasters. These “superforecasters” maintained \n\ntheir prediction accuracy over years (without regression to the mean), \n\n p. 234-236). The superforecasting trait was not immutable: it was \n\ntraceable to particular methods and thought processes, improved with \n\ncareful practice, and could be amplified if superforecasters were \n\ncollected into teams. For forecasters in general, brief probabilistic \n\ntraining significantly improved forecasting ability even 1-2 years after\n\n the training. We believe a similar research program is possible for \n\ndebate and other AI alignment algorithms. In the best case, we would be\n\n able to find, train, or assemble “superjudges”, and have high \n\nconfidence that optimal debate with them as judges would produce aligned\n\n behavior.\n\n \n\n In the forecasting case, much of the research difficulty lay in \n\nassembling a large corpus of high quality forecasting questions. \n\nSimilarly, measuring how good people are as debate judges will not be \n\neasy. We would like to apply debate to problems where there is no other\n\n source of truth: if we had that source of truth, we would train ML \n\nmodels on it directly. But if there is no source of truth, there is no \n\nway to measure whether debate produced the correct answer. This problem\n\n can be avoided by starting with simple, verifiable domains, where the \n\nexperimenters know the answer but the judge would not. “Success” then \n\nmeans that the winning debate argument is telling the externally known \n\ntruth. The challenge gets harder as we scale up to more complex, \n\nvalue-laden questions, as we discuss in detail later.\n\n \n\n### Debate is only one possible approach\n\n As mentioned, debate is not the only scheme trying to learn human \n\nreasoning. 
Debate is a modified version of iterated amplification,\n\n which uses humans to break down hard questions into easier questions \n\nand trains ML models to be consistent with this decomposition. \n\nRecursive reward modeling is a further variant.\n\n Inverse reinforcement learning, inverse reward design, and variants \n\ntry to back out goals from human actions, taking into account \n\nlimitations and biases that might affect this reasoning.\n\n The need to study how humans interact with AI alignment applies to any\n\n of these approaches. Some of this work has already begun: Ought’s \n\nFactored Cognition project uses teams of humans to decompose questions \n\nand reassemble answers, mimicking iterated amplification.\n\n We believe knowledge gained about how humans perform with one approach\n\n is likely to partially generalize to other approaches: knowledge about \n\nhow to structure truth-seeking debates could inform how to structure \n\ntruth-seeking amplification, and vice versa.\n\n \n\n### Experiments needed for debate\n\n To recap, in debate we have two AI agents engaged in debate, trying \n\nto convince a human judge. The debaters are trained only to win the \n\ngame, and are not motivated by truth separate from the human’s \n\njudgments. On the human side, we would like to know whether people are \n\nstrong enough as judges in debate to make this scheme work, or how to \n\nmodify debate to fix it if it doesn’t. Unfortunately, actual debates in\n\n natural language are well beyond the capabilities of present AI \n\nsystems, so previous work on debate and similar schemes has been \n\nrestricted to synthetic or toy tasks.\n\n \n\n Rather than waiting for ML to catch up to natural language debate, \n\nwe propose simulating our eventual setting (two AI debaters and one \n\nhuman judge) with all human debates: two human debaters and one human \n\njudge. Since an all human debate doesn’t involve any machine learning, \n\nit becomes a pure social science experiment: motivated by ML \n\nconsiderations but not requiring ML expertise to run. This lets us \n\nfocus on the component of AI alignment uncertainty specific to humans.\n\n \n\n Helvetica \n\n[2](#figure-debate-experiments)\n\n Our goal is ML+ML+human debates, but ML is currently too \n\nprimitive to do many interesting tasks.\n\n Therefore, we propose replacing ML debaters with human \n\ndebaters, learning how to best conduct debates in this human-only \n\nsetting, and eventually applying what we learn to the ML+ML+human case.\n\n \n\n To make human+human+human debate experiments concrete, we must \n\nchoose who to use as judges and debaters and which tasks to consider. \n\nWe also can choose to structure the debate in various ways, some of \n\nwhich overlaps with the choice of judge since we can instruct a judge to\n\n penalize deviations from a given format. By task we mean the questions\n\n our debates will try to resolve, together with any information provided\n\n to the debaters or to the judge. Such an experiment would then try to \n\nanswer the following question:\n\n \n\n**Question:** For a given task and judge, is the winning debate strategy honest?\n\n \n\n The “winning strategy” proviso is important: an experiment that \n\npicked debaters at random might conclude that honest behavior won, \n\nmissing the fact that more practiced debaters would learn to \n\nsuccessfully lie. 
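As a sketch of how the answer to this question might be quantified, suppose each judged debate is recorded with which side was secretly assigned the true answer and how much the dishonest debater had practiced. The field names and toy data below are assumptions for illustration only.

```python
# Illustrative analysis sketch (assumed record format, not a protocol from the
# paper): estimate how often the honest side wins, overall and specifically
# against the most practiced liars, since a high win rate against novice liars
# could merely reflect unskilled lying.

import math

def honest_win_rate(records):
    wins = sum(r["honest_won"] for r in records)
    n = len(records)
    p = wins / n
    # Normal-approximation 95% confidence interval
    half = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, (p - half, p + half)

# Toy data: each record stores the dishonest debater's practice level and
# whether the honest side won the judged debate.
records = [{"liar_practice": i % 10, "honest_won": i % 10 < 7} for i in range(200)]

print("overall:", honest_win_rate(records))
print("vs practiced liars:",
      honest_win_rate([r for r in records if r["liar_practice"] >= 7]))
```

Stratifying by the liar's practice is one crude check on whether an apparent win for honesty merely reflects inexperienced liars.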
We can try to solve this by training debaters, \n\nletting them practice against each other, filtering out debaters that \n\nwin more often, and so on, but we will still be left with uncertainty \n\nabout whether a better strategy exists. Even assuming we can find or \n\ntrain strong debaters, the choice of task and judge is quite tricky if \n\nwe want an informative proxy for our eventual AI+AI+human setting. Here\n\n are some desiderata constraining our choice of task:\n\n \n\n1. **True answers are known:**\n\n Although our eventual goal is debates where no source of truth \n\nis available, to conduct a useful experiment we must be able to measure \n\nsuccess. This means we must know what the correct answer is separate \n\nfrom debate, in order to compare with the results of debate.\n\n2. **False answers are plausible:**\n\n If the truth is obvious or no plausible counternarrative exists,\n\n honest debates will win for uninteresting and uninformative reasons \n\nregardless of judge ability. In particular, the judge shouldn’t know \n\nthe answer upfront.\n\n3. **Debaters know more than the judge:**\n\n Debate can produce interesting results only when the debaters \n\nknow more than the judge; otherwise asking direct questions is enough.\n\n4. **Definitive argument longer than debate limit:**\n\n If one debater can write out a full proof of their answer \n\n(ignoring their opponent’s moves), the task won’t be a good test of \n\ninteractive debate.\n\n5. **Some checkable facts:**\n\n There must be some facts which the judge is able to check, \n\neither because they can recognize them as true once presented or look \n\nthem up.It is impossible to usefully \n\ndebate a question where the judge has nothing to check: consider \n\ndebating the result of a coin flip shown to the two debaters but not the\n\n judge.\n\n6. **No “tells”:**\n\n Human tells of deception could result in honest debaters winning\n\n for reasons that wouldn’t apply to an AI. These tells include tone of \n\nvoice, eye contact, or additional time required to construct plausible \n\nlies. These tells can be reduced by showing judges completed debate \n\ntranscripts instead of engaging in interactive debates, but others might\n\n remain.\n\n7. **Available data:**\n\n We need a large enough pool of questions, judges, and debaters \n\nto achieve statistical significance. This is made more difficult \n\nbecause we may have a large number of hypotheses to test, in the form of\n\n many variations on debate or interventions to improve judging.\n\n8. **Known biases (optional):**\n\n We are specifically interested in debate tasks which test \n\nspecific types of cognitive or ethical biases.\n\n Are judges with some racial or gender bias able to set those \n\nbiases aside if they are highlighted, or does debate amplify bias? Do \n\ndebates about statistical or probabilistic questions make it too easy to\n\n lie with statistics?\n\n9. **Realistic tasks (ideally):**\n\n If possible, we would like to try debate with interesting, real \n\nworld tasks that reflect the types of questions we would like to apply \n\nAI to in the future, including science, mathematics, ethics, etc.\n\n It may not be possible to meet all of these criteria with a single \n\nexperiment. Several of the criteria are in tension: (1) and (2) are \n\nessentially “not too hard” and “not too easy”, and any restriction on \n\nthe types of questions may make it difficult to find large numbers of \n\nquestions, judges, or debaters. 
Realistic tasks are much harder than \n\nsynthetic tasks, which easily fulfill many of the criteria as discussed \n\nbelow. Thus, we may need to begin with synthetic tasks and move up \n\ntowards realistic tasks over time.\n\n We turn next to a few examples of experiments to see how many \n\ncriteria we can meet simultaneously.\n\n \n\n### Synthetic experiments: single pixel image debate\n\n We choose an image of a cat or dog, and show the image to the two \n\ndebaters but not the judge. One debater is honest and argues for the \n\ntrue contents of the image; the other debater lies. The debaters can \n\ntalk to the judge and illustrate their points by drawing rectangles on \n\nthe image, but the judge sees only the rectangles. At the end of the \n\ndebate, each debater is allowed to reveal a single pixel to the judge, \n\nwhich is the only part of the debate which cannot be a lie.\n\n \n\n[3](#dog-debate)\n\n An example debate with two human debaters and a human judge. \n\nOnly the debaters can see the image. Red is arguing that the image is a \n\n In informal play, the honest debater wins most of the time. \n\nAlthough the judge only gets to check a single pixel, the honest player \n\ncan try to pin down the liar towards disagreements on smaller and \n\nsmaller regions of the image until one pixel is enough to reveal the \n\nlie. However, it is hard to know whether this is a real result or an \n\nartifact that would not carry across to training an AI. There are many \n\nways to ruin the game, such as the judge asking each player to spend 15 \n\nseconds naming as many details about the image as possible. Worse, this\n\n trick ruins the game in favor of the honest player, who can easily name\n\n many true details while a human liar may have difficulty lying fluently\n\n at speed. Therefore, we are wary of trying to perform a full version \n\nof this experiment without more careful experimental design building on \n\nthe experimental knowledge base of existing social science research.\n\n \n\n As a synthetic experiment, single pixel debate satisfies many of our\n\n experimental criteria for all human debate. The true answer is obvious\n\n to the debaters, and they know more than the judge since only the \n\ndebaters see the image. Anything is plausible to the judge who sees \n\nonly one or two pixels over the course of the debate. Since the \n\nargument bottoms out at single pixels, the “full argument” would be a \n\nhuge list of pixels and how they relate, forcing the debaters to pin \n\neach other down and focus in on disagreements. The single pixels \n\nconstitute the checkable facts, and we have an endless supply of \n\nquestions in the form of images chosen at random. Less fortunately, it \n\nis easy for the judge to force “tells” which reveal who is lying, the \n\ntask has no obvious relation to biases, and is quite unrealistic.\n\n \n\n### Realistic experiments: domain expert debate\n\n For a more interesting task, we can find two debaters who are \n\nexperts in a domain, pick a question in their area of expertise, and use\n\n a layperson as the judge. The debaters could be experts in some area \n\nof science, law, or ethics, but “domain expertise” could also mean \n\nknowledge about hobbies, local customs, sports, or any other subject the\n\n judge does not know. We again choose one of the debaters to lie and \n\none to tell the truth. 
To guarantee a source of truth, we can choose a question with an agreed upon answer, either between the two debaters or more broadly in their field. This is only approximate truth, but is good enough for informative experiments. We also specify what facts the judge can access: a debate about physics might allow the debaters to quote a sentence or paragraph from Wikipedia, perhaps with restrictions on what pages are allowed.\n\n Expert debate satisfies most of our desiderata, and it is likely possible to target specific biases (such as race or gender bias) by picking domain areas that overlap with these biases. It may be quite difficult or expensive to find suitable debaters, but this may be solvable either by throwing resources at the problem (ML is a well funded field), enlarging the kinds of domain expertise considered (soccer, football, cricket), or by making the experiments interesting enough that volunteers are available. However, even if domain experts can be found, there is no guarantee that they will be experts in debate viewed as a game. With the possible exception of law, politics, or philosophy, domain experts may not be trained to construct intentionally misleading but self consistent narratives: they may be experts only in trying to tell the truth.\n\n We've tried a few informal expert debates using theoretical computer science questions, and the main lesson is that the structure of the debate matters a great deal. The debaters were allowed to point to a small snippet of a mathematical definition on Wikipedia, but not to any page that directly answered the question. To reduce tells, we first tried to write a full debate transcript with only minimal interaction with a layperson, then showed the completed transcript to several more laypeople judges. Unfortunately, even the layperson present when the debate was conducted picked the lying debater as honest, due to a misunderstanding of the question (which was whether the complexity classes P and BPP are probably equal). As a result, throughout the debate the honest debater did not understand what the judge was thinking, and failed to correct an easy but important misunderstanding. We fixed this in a second debate by letting a judge ask questions throughout, but still showing the completed transcript to a second set of judges to reduce tells. See [the appendix](#quantum) for the transcript of this second debate.\n\n### Other tasks: bias tests, probability puzzles, etc.\n\n Synthetic image debates and expert debates are just two examples of possible tasks. More thought will be required to find tasks that satisfy all our criteria, and these criteria will change as experiments progress. Pulling from existing social science research will be useful, as there are many cognitive tasks with existing research results. If we can map these tasks to debate, we can compare debate directly against baselines in psychology and other fields.\n\n For example, Bertrand and Mullainathan sent around 5000 resumes in response to real employment ads, randomizing the resumes between White and African American sounding names.\n\n With otherwise identical resumes, the choice of name significantly changed the probability of a response.

This experiment corresponds to \n\nthe direct question “Should we call back given this resume?” What if we\n\n introduce a few steps of debate? An argument against a candidate based\n\n on name or implicit inferences from that name might come across as \n\nobviously racist, and convince at least some judges away from \n\ndiscrimination. Unfortunately, such an experiment would necessarily \n\ndiffer from Bertrand et al.’s original, where employers did not realize \n\nthey were part of an experiment. Note that this experiment works even \n\nthough the source of truth is partial: we do not know whether a \n\nparticular resume should be hired or not, but most would agree that the \n\nanswer should not depend on the candidate’s name.\n\n \n\n For biases affecting probabilistic reasoning and decision making, \n\nthere is a long literature exploring how people decide between gambles \n\n For example, Erev et al. constructed an 11-dimensional space of gambles\n\n sufficient to reproduce 14 known cognitive biases, from which new \n\ninstances can be algorithmically generated.\n\n Would debates about gambles reduce cognitive biases? One difficulty \n\nhere is that simple gambles might fail the “definitive argument longer \n\nthan debate limit” criteria if an expected utility calculation is \n\nsufficient to prove the answer, making it difficult for a lying debater \n\nto meaningfully compete.\n\n \n\n Interestingly, Chen et al. used a similar setup to human+human+human\n\n debate to improve the quality of human data collected in a synthetic \n\n“Relation Extraction” task. \n\nPeople were first asked for direct answers, then pairs of people who \n\ndisagreed were asked to discuss and possibly update their answers. Here\n\n the debaters and judges are the same, but the overall goal of \n\nextracting higher quality information from humans is shared with debate.\n\n \n\nQuestions social science can help us answer\n\n-------------------------------------------\n\n We’ve laid out the general program for learning AI goals by asking \n\nhumans questions, and discussed how to use debate to strengthen what we \n\ncan learn by targeting the reasoning behind conclusions. Whether we use\n\n direct questions or something like debate, any intervention that gives \n\nus higher quality answers is more likely to produce aligned AI. The \n\nquality of those answers depends on the human judges, and social science\n\n research can help to measure answer quality and improve it. Let’s go \n\ninto more detail about what types of questions we want to answer, and \n\nwhat we hope to do with that information. Although we will frame these \n\nquestions as they apply to debate, most of them apply to any other \n\nmethod which learns goals from humans.\n\n \n\n1. **How skilled are people as judges by default?**\n\n If we ran debate using a person chosen at random as the judge, and \n\ngave them no training, would the result be aligned? A person picked at \n\nrandom might be vulnerable to convincing fallacious reasoning,\n\n leading AI to employ such reasoning. Note that the debaters are not \n\nchosen at random: once the judge is fixed, we care about debaters who \n\neither learn to help the judge (in the good case) or to exploit the \n\njudge’s weaknesses (in the bad case).\n\n2. **Can we distinguish good judges from bad judges?**\n\n People likely differ in the ability to judge debates. 
There are \n\nmany filters we could use to identify good judges: comparing their \n\nverdicts to those of other judges, to people given more time to think, \n\nor to known expert judgmentNote that \n\ndomain expertise may be quite different from what makes a good judge of \n\n Ideally we would like filters that do not require an independent \n\nsource of truth, though at experiment time we will need a source of \n\ntruth to know whether a filter works. It is not obvious a priori that \n\ngood filters exist, and any filter would need careful scrutiny to ensure\n\n it does not introduce bias into our choice of judges.\n\n3. **Does judge ability generalize across domains?**\n\n If judge ability in one domain fails to transfer to other domains, \n\nwe will have low confidence that it transfers to new questions and \n\narguments arising from highly capable AI debaters. This generalization \n\nis necessary to trust debate as a method for alignment, especially once \n\nwe move to questions where no independent source of truth is available. \n\n We emphasize that judge ability is not the same as knowledge: there is \n\n4. **Can we train people to be better judges?**\n\n5. **What questions are people better at answering?**\n\n If we know that humans are bad at answering certain types of \n\nquestions, we can switch to reliable formulations. For example, \n\nphrasing questions in frequentist terms may reduce known cognitive \n\nbiases.\n\n Graham et al. argue that different political views follow from \n\ndifferent weights placed on fundamental moral considerations, and \n\nsimilar analysis could help understand where we can expect moral \n\ndisagreements to persist after reflective equilibrium.\n\n In cases where reliable answers are unavailable, we need to ensure \n\nthat trained models know their own limits, and express uncertainty or \n\ndisagreement as required.\n\n6. **Are there ways to restrict debate to make it easier to judge?**\n\n People might be better at judging debates formulated in terms of \n\ncalm, factual statements, and worse at judging debates designed to \n\ntrigger strong emotions. Or, counterintuitively, it could be the other \n\nway around. If we know which styles of debates that people are\n\n better at judging, we may be able to restrict AI debaters to these styles.\n\n7. **How can people work together to improve quality?**\n\n If individuals are insufficient judges, are teams of judges better? \n\n Majority vote is the simplest option, but perhaps several people \n\ntalking through an answer together is stronger, either actively or after\n\n the fact through peer review. Condorcet’s jury theorem implies that \n\nmajority votes can amplify weakly good judgments to strong judgments (or\n\n \n\n Given our lack of experience outside of ML, we are not able to \n\nprecisely articulate all of the different experiments we need. The only\n\n way to fix this is to talk to more people with different backgrounds \n\nand expertise. We have started this process, but are eager for more \n\nconversations with social scientists about what experiments could be \n\nrun, and encourage other AI safety efforts to engage similarly.\n\n \n\nReasons for optimism\n\n--------------------\n\n We believe that understanding how humans interact with long-term AI \n\nalignment is difficult but possible. However, this would be a new \n\nresearch area, and we want to be upfront about the uncertainties \n\ninvolved. 
In this section and the next, we discuss some reasons for \n\noptimism and pessimism about whether this research will succeed. We \n\nfocus on issues specific to human uncertainty and associated social \n\nscience research; for similar discussion on ML uncertainty in the case \n\nof debate we refer to our previous work.\n\n \n\n### Engineering vs. science\n\n Most social science seeks to understand humans “in the wild”: \n\nresults that generalize to people going about their everyday lives. \n\nWith limited control over these lives, differences between laboratory \n\nand real life are bad from the scientific perspective. In contrast, AI \n\nalignment seeks to extract the best version of what humans want: our \n\ngoal is engineering rather than science, and we have more freedom to \n\nintervene. If judges in debate need training to perform well, we can \n\nprovide that training. If some people still do not provide good data, \n\nwe can remove them from experiments (as long as this filter does not \n\ncreate too much bias). This freedom to intervene means that some of the\n\n difficulty in understanding and improving human reasoning may not \n\napply. However, science is still required: once our interventions are \n\nin place, we need to correctly know whether our methods work. Since our\n\n experiments will be an imperfect model of the final goal, careful \n\ndesign will be necessary to minimize this mismatch, just as is required \n\nby existing social science.\n\n \n\n### We don’t need to answer all questions\n\n Our most powerful intervention is to give up: to recognize that we \n\nare unable to answer some types of questions, and instead prevent AI \n\nsystems from pretending to answer. Humans might be good judges on some \n\ntopics but not others, or with some types of reasoning but not others; \n\nif we discover that we can adjust our goals appropriately. Giving up on\n\n some types of questions is achievable either on the ML side, using \n\ncareful uncertainty modeling to know when we do not know, or on the \n\nhuman side by training judges to understand their own areas of \n\nuncertainty. Although we will attempt to formulate ML systems that \n\nautomatically detect areas of uncertainty, any information we can gain \n\non the social science side about human uncertainty can be used both to \n\naugment ML uncertainty modeling and to test whether ML uncertainty \n\nmodeling works.\n\n \n\n### Relative accuracy may be enough\n\n Say we have a variety of different ways to structure debate with \n\nhumans. Ideally, we would like to achieve results of the form “debate \n\nstructure AAA\n\n is truth-seeking with 90% confidence”. Unfortunately, we may be \n\nunconfident that an absolute result of this form will generalize to \n\nadvanced AI systems: it may hold for an experiment with simple tasks but\n\n break down later on. However, even if we can’t achieve such absolute \n\nresults, we can still hope for relative results of the form “debate \n\n \n\n### We don’t need to pin down the best alignment scheme\n\n As the AI safety field progresses to increasingly advanced ML \n\nsystems, we expect research on the ML side and the human side to merge. \n\n Starting social science experiments prior to this merging will give the\n\n field a head start, but we can also take advantage of the expected \n\nmerging to make our goals easier. 
If social science research narrows \n\nthe design space of human-friendly AI alignment algorithms but does not \n\nproduce a single best scheme, we can test the smaller design space once \n\nthe machines are ready.\n\n \n\n### A negative result would be important!\n\n If we test an AI alignment scheme from the social science \n\nperspective and it fails, we’ve learned valuable information. There are\n\n a variety of proposed alignment schemes, and learning which don’t work \n\nearly gives us more time to switch to others, or to intervene on a \n\npolicy level to slow down dangerous development. In fact, given our \n\nbelief that AI alignment is harder for more advanced agents, a negative \n\nresult might be easier to believe and thus more valuable that a less \n\ntrustworthy positive result.\n\n \n\nReasons to worry\n\n----------------\n\n We turn next to reasons social science experiments about AI \n\nalignment might fail to produce useful results. We emphasize that \n\nuseful results might be both positive and negative, so these are not \n\nreasons why alignment schemes might fail. Our primary worry is one \n\nsided, that experiments would say an alignment scheme works when in fact\n\n it does not, though errors in the other direction are also undesirable.\n\n \n\n### Our desiderata are conflicting\n\n As mentioned before, some of our criteria when picking experimental \n\ntasks are in conflict. We want tasks that are sufficiently interesting \n\n(not too easy), with a source of verifiable ground truth, are not too \n\nhard, etc. “Not too easy” and “not too hard” are in obvious conflict, \n\nbut there are other more subtle difficulties. Domain experts with the \n\nknowledge to debate interesting tasks may not be the same people capable\n\n of lying effectively, and both restrictions make it hard to gather \n\nlarge volumes of data. Lying effectively is required for a meaningful \n\nexperiment, since a trained AI may have no trouble lying unless lying is\n\n a poor strategy to win debates. Experiments to test whether ethical \n\nbiases interfere with judgment may make it more difficult to find tasks \n\nwith reliable ground truth, especially on subjects with significant \n\ndisagreement across people. The natural way out is to use many \n\ndifferent experiments to cover different aspects of our uncertainty, but\n\n this would take more time and might fail to notice interactions between\n\n desiderata.\n\n \n\n### We want to measure judge quality given optimal debaters\n\n For debate, our end goal is to understand if the judge is capable of\n\n determining who is telling the truth. However, we specifically care \n\nwhether the judge performs well given that the debaters are performing \n\nwell. Thus our experiments have an inner/outer optimization structure: \n\nwe first train the debaters to debate well, then measure how well the \n\njudges perform. This increases time and cost: if we change the task, we\n\n may need to find new debaters or retrain existing debaters. Worse, the\n\n human debaters may be bad at performing the task, either out of \n\ninclination or ability. 
Poor performance is particularly bad if it is \n\none sided and applies only to lying: a debater might be worse at lying \n\nout of inclination or lack of practice, and thus a win for the honest \n\ndebater might be misleading.\n\n \n\n### ML algorithms will change\n\n It is unclear when or if ML systems will reach various levels of \n\ncapability, and the algorithms used to train them will evolve over time.\n\n The AI alignment algorithms of the future may be similar to the \n\nproposed algorithms of today, or they may be very different. However, \n\nwe believe that knowledge gained on the human side will partially \n\ntransfer: results about debate will teach us about how to gather data \n\nfrom humans even if debate is superseded. The algorithms may change; \n\nhumans will not.\n\n \n\n### Need strong out-of-domain generalization\n\n Regardless of how carefully designed our experiments are, \n\nhuman+human+human debate will not be a perfect match to AI+AI+human \n\ndebate. We are seeking research results that generalize to the setting \n\nwhere we replace the human debaters (or similar) with AIs of the future,\n\n which is a hard ask. This problem is fundamental: we do not have the \n\nadvanced AI systems of the future to play with, and want to learn about \n\nhuman uncertainty starting now.\n\n \n\n### Lack of philosophical clarity\n\n Any AI alignment scheme will be both an algorithm for training ML \n\nsystems and a proposed definition of what it means to be aligned. \n\nHowever, we do not expect humans to conform to any philosophically \n\nconsistent notion of values, and concepts like reflective equilibrium \n\nmust be treated with caution in case they break down when applied to \n\nreal human judgement. Fortunately, algorithms like debate need not \n\npresuppose philosophical consistency: a back and forth conversation to \n\nconvince a human judge makes sense even if the human is leaning on \n\nheuristics, intuition, and emotion. It is not obvious that debate works\n\n in this messy setting, but there is hope if we take advantage of \n\ninaction bias, uncertainty modeling, and other escape hatches. We \n\nbelieve lack of philosophical clarity is an argument for investing in \n\nsocial science research: if humans are not simple, we must engage with \n\ntheir complexity.\n\n \n\nThe scale of the challenge\n\n--------------------------\n\n Long-term AI safety is particularly important if we develop \n\nartificial general intelligence (AGI), which the OpenAI Charter defines \n\nas highly autonomous systems that outperform humans at most economically\n\n valuable work. If we want to \n\ntrain an AGI with reward learning from humans, it is unclear how many \n\nsamples will be required to align it. As much as possible, we can try \n\nto replace human samples with knowledge about the world gained by \n\nreading language, the internet, and other sources of information. But \n\nit is likely that a fairly large number of samples from people will \n\nstill be required. Since more samples means less noise and more safety,\n\n if we are uncertain about how many samples we need then we will want a \n\nlot of samples.\n\n \n\n A lot of samples would mean recruiting a lot of people. We cannot \n\nrule out needing to involve thousands to tens of thousands of people for\n\n millions to tens of millions of short interactions: answering \n\nquestions, judging debates, etc. 
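A rough back-of-envelope calculation shows why the headcount ends up in this range; every number below is an illustrative assumption rather than an estimate from this paper.

```python
# Back-of-envelope sketch of the scale involved (all numbers are assumptions).
interactions_needed = 10_000_000     # "millions to tens of millions"
minutes_per_interaction = 5          # one short judgment or answer
hours_per_participant = 40           # total time each person contributes

interactions_per_person = hours_per_participant * 60 // minutes_per_interaction
people_needed = interactions_needed / interactions_per_person
print(interactions_per_person, people_needed)   # 480 interactions -> ~20,800 people
```

Even with generous assumptions about how much time each participant can give, the required pool lands in the tens of thousands of people.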
We may need to train these people to \n\nbe better judges, arrange for peers to judge each other’s reasoning, \n\ndetermine who is doing better at judging and give them more weight or a \n\nmore supervisory role, and so on. Many researchers would be required on\n\n the social science side to extract the highest quality information from\n\n the judges.\n\n \n\n A task of this scale would be a large interdisciplinary project, \n\nrequiring close collaborations in which people of different backgrounds \n\nfill in each other’s missing knowledge. If machine learning reaches \n\nthis scale, it is important to get a head start on the collaborations \n\nsoon.\n\n \n\nConclusion: how you can help\n\n----------------------------\n\n We have argued that the AI safety community needs social scientists \n\nto tackle a major source of uncertainty about AI alignment algorithms: \n\nwill humans give good answers to questions? This uncertainty is \n\ndifficult to tackle with conventional machine learning experiments, \n\nsince machine learning is primitive. We are still in the early days of \n\nperformance on natural language and other tasks, and problems with human\n\n reward learning may only show up on tasks we cannot yet tackle.\n\n \n\n Our proposed solution is to replace machine learning with people, at\n\n least until ML systems can participate in the complexity of debates we \n\nare interested in. If we want to understand a game played with ML and \n\nhuman participants, we replace the ML participants with people, and see \n\nhow the all human game plays out. For the specific example of debate, \n\nwe start with debates with two ML debaters and a human judge, then \n\nswitch to two human debaters and a human judge. The result is a pure \n\nhuman experiment, motivated by machine learning but available to anyone \n\nwith a solid background in experimental social science. It won’t be an \n\neasy experiment, which is all the more reason to start soon.\n\n \n\n If you are a social scientist interested in these questions, please \n\ntalk to AI safety researchers! We are interested in both conversation \n\nand close collaboration. There are many institutions engaged with \n\n \n\n If you are a machine learning researcher interested in or already \n\nworking on safety, please think about how alignment algorithms will work\n\n once we advance to tasks beyond the abilities of current machine \n\nlearning. If your preferred alignment scheme uses humans in an \n\nimportant way, can you simulate the future by replacing some or all ML \n\ncomponents with people? 
If you can imagine these experiments but don’t \n\nfeel you have the expertise to perform them, find someone who does.\n\n \n\n", "bibliography_bib": [{"title": "Deep reinforcement learning from human preferences"}, {"title": "Judgment under uncertainty: heuristics and biases"}, {"title": "Intergroup bias"}, {"title": "AI safety via debate"}, {"title": "Supervising strong learners by amplifying weak experts"}, {"title": "Reward learning from human preferences and demonstrations in Atari"}, {"title": "AI safety gridworlds"}, {"title": "An empirical methodology for writing user-friendly natural language computer applications"}, {"title": "Factored Cognition"}, {"title": "Learning the Preferences of Ignorant, Inconsistent Agents"}, {"title": "Comparing human-centric and robot-centric sampling for robot deep learning from demonstrations"}, {"title": "Computational Social Science: Towards a collaborative future"}, {"title": "Mirror Mirror: Reflections on Quantitative Fairness"}, {"title": "Moral Anti-Realism"}, {"title": "Gender shades: Intersectional accuracy disparities in commercial gender classification"}, {"title": "Moral dumbfounding: When intuition finds no reason"}, {"title": "Batch active preference-based learning of reward functions"}, {"title": "Learning to understand goal specifications by modelling reward"}, {"title": "Improving language understanding by generative pre-training"}, {"title": "Thinking, fast and slow"}, {"title": "Deep Blue"}, {"title": "Mastering chess and shogi by self-play with a general reinforcement learning algorithm"}, {"title": "Deviant or Wrong? The Effects of Norm Information on the Efficacy of Punishment"}, {"title": "The weirdest people in the world?"}, {"title": "Fact, fiction, and forecast"}, {"title": "A theory of justice"}, {"title": "Looking for a psychology for the inner rational agent"}, {"title": "How (and where) does moral judgment work?"}, {"title": "Scalable agent alignment via reward modeling: a research direction"}, {"title": "OpenAI Five"}, {"title": "The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive"}, {"title": "How to overcome prejudice"}, {"title": "The nature and origins of misperceptions: Understanding false and unsupported beliefs about politics"}, {"title": "Persuasion, influence, and value: Perspectives from communication and social neuroscience"}, {"title": "Identifying and cultivating superforecasters as a method of improving probabilistic predictions"}, {"title": "Superforecasting: The art and science of prediction"}, {"title": "Cooperative inverse reinforcement learning"}, {"title": "Inverse reward design"}, {"title": "The art of being right"}, {"title": "Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination"}, {"title": "Prospect theory: An analysis of decisions under risk"}, {"title": "Advances in prospect theory: Cumulative representation of uncertainty"}, {"title": "From anomalies to forecasts: Toward a descriptive model of decisions under risk, under ambiguity, and from experience"}, {"title": "Cicero: Multi-Turn, Contextual Argumentation for Accurate Crowdsourcing"}, {"title": "The rationality of informal argumentation: A Bayesian approach to reasoning fallacies"}, {"title": "Rationality in medical decision making: a review of the literature on doctors’ decision-making biases"}, {"title": "Expert political judgment: How good is it? 
How can we know?"}, {"title": "Two approaches to the study of experts’ characteristics"}, {"title": "Debiasing"}, {"title": "An evaluation of argument mapping as a method of enhancing critical thinking performance in e-learning environments"}, {"title": "Forecasting tournaments: Tools for increasing transparency and improving the quality of debate"}, {"title": "How to make cognitive illusions disappear: Beyond \"heuristics and biases\""}, {"title": "Liberals and conservatives rely on different sets of moral foundations"}, {"title": "Negative emotions can attenuate the influence of beliefs on logical reasoning"}, {"title": "Epistemic democracy: Generalizing the Condorcet jury theorem"}, {"title": "Aggregating sets of judgments: An impossibility result"}, {"title": "The Delphi technique as a forecasting tool: issues and analysis"}, {"title": "OpenAI Charter"}], "filename": "AI Safety Needs Social Scientists.html", "id": "79090ca6e149ebc3be48ae24351d62a9"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Visualizing memorization in RNNs", "authors": ["Andreas Madsen"], "date_published": "2019-03-25", "abstract": " Memorization in Recurrent Neural Networks (RNNs) continues to pose a challenge in many applications. We’d like RNNs to be able to store information over many timesteps and retrieve it when it becomes relevant — but vanilla RNNs often struggle to do this. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00016", "text": "\n\n Memorization in Recurrent Neural Networks (RNNs) continues to pose a challenge\n\n in many applications. We’d like RNNs to be able to store information over many\n\n \n\n as Long-Short-Term Memory (LSTM)\n\n units and Gated Recurrent Units (GRU).\n\n However, the practical problem of memorization still poses a challenge.\n\n As such, developing new recurrent units that are better at memorization\n\n continues to be an active field of research.\n\n \n\n To compare a recurrent unit against its alternatives, both past and recent\n\n papers, such as the Nested LSTM paper by Monzi et al.\n\n , heavily rely on quantitative\n\n comparisons. These comparisons often measure accuracy or\n\n cross entropy loss on standard problems such as Penn Treebank, Chinese\n\n Poetry Generation, or text8, where the task is to predict the\n\n next character given existing input.\n\n \n\n While quantitative comparisons are useful, they only provide partial\n\n insight into the how a recurrent unit memorizes. 
A model can, for example, achieve high accuracy and low cross entropy loss by just providing highly accurate predictions in cases that only require short-term memorization, while being inaccurate at predictions that require long-term memorization. For example, when autocompleting words in a sentence, a model with only short-term understanding could still exhibit high accuracy completing the ends of words once most of the characters are present. However, without longer-term contextual understanding it won't be able to predict words when only a few characters are known.

This article presents a qualitative visualization method for comparing recurrent units with regards to memorization and contextual understanding.

Recurrent Units
---------------

The networks that will be analyzed all use a simple RNN structure:

h_\ell^t = \mathrm{Unit}(h_{\ell-1}^t,\; h_\ell^{t-1}), \qquad y^t = \mathrm{Softmax}(h_L^t)

where h_\ell^t is the output for layer \ell at time t, \mathrm{Unit} is the recurrent unit of choice, and h_0^t is the input at time t.

In theory, the time dependency allows the network, at each iteration, to know about every part of the sequence that came before. However, this time dependency typically causes a vanishing gradient problem that results in long-term dependencies being ignored during training.

**Vanishing Gradient:** where the contribution from the earlier steps becomes insignificant in the gradient for the vanilla RNN unit. (Figure: a stack consisting of an input layer, two recurrent layers, and a softmax layer.)

Several solutions to the vanishing gradient problem have been proposed over the years. The most popular are the aforementioned LSTM and GRU units, but this is still an area of active research. Both LSTM and GRU are well known — an explanation of Nested LSTMs can be found [in the appendix](#appendix-nestedlstm).

* Nested LSTM
* LSTM
* GRU

**Recurrent Unit, Nested LSTM:** makes the cell update depend on another LSTM unit; supposedly this allows more long-term memory compared to stacking LSTM layers.

**Recurrent Unit, LSTM:** allows for long-term memorization by gating its update, thereby solving the vanishing gradient problem.

**Recurrent Unit, GRU:** solves the vanishing gradient problem without depending on an internal memory state.

It is not entirely clear why one recurrent unit performs better than another in some applications, while in other applications a different type of recurrent unit performs better. Theoretically they all solve the vanishing gradient problem, but in practice their performance is highly application dependent.

Understanding why these differences occur is likely an opaque and challenging problem. The purpose of this article is to demonstrate a visualization technique that better highlights what these differences are, in the hope that it leads to a deeper understanding of the models themselves.

Comparing Recurrent Units
-------------------------

Recurrent units are typically compared by looking at overall accuracy or cross entropy loss.
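Before turning to those comparisons, here is a minimal sketch of the stacked two-layer recurrent structure described above, written in Keras-style TensorFlow. This is only an illustration, not the article's reference implementation: the embedding size is an arbitrary placeholder, while the two layers of 600 units and the GRU/LSTM choices match the setups described later.

import tensorflow as tf

def build_stacked_rnn(input_vocab, output_vocab, units=600,
                      cell=tf.keras.layers.GRU):
    # Two stacked recurrent layers followed by a per-step softmax,
    # mirroring h_l^t = Unit(h_{l-1}^t, h_l^{t-1}) and y^t = Softmax(h_L^t).
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(input_vocab, 64),                  # x^t
        cell(units, return_sequences=True),                          # h_1^t
        cell(units, return_sequences=True),                          # h_2^t
        tf.keras.layers.Dense(output_vocab, activation="softmax"),   # y^t
    ])

# For the autocomplete task: characters in, a distribution over words out.
# gru_model  = build_stacked_rnn(input_vocab=128, output_vocab=10000)
# lstm_model = build_stacked_rnn(input_vocab=128, output_vocab=10000,
#                                cell=tf.keras.layers.LSTM)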
Differences in these high-level quantitative measures\n\n can have many explanations and may only be because of some small improvement\n\n in predictions that only requires short-term contextual understanding,\n\n while it is often the long-term contextual understanding that is of interest.\n\n \n\n### A problem for qualitative analysis\n\n Therefore a good problem for qualitatively analyzing contextual\n\n understanding should have a human-interpretable output and depend both on\n\n long-term and short-term contextual understanding. The typical problems\n\n that are often used, such as Penn Treebank, Chinese Poetry Generation, or\n\n text8 generation do not have outputs that are easy to reason about, as they\n\n require an extensive understanding of either grammar, Chinese poetry, or\n\n only output a single letter.\n\n \n\n \n\n The autocomplete problem is quite similar to the text8 generation\n\n problem: the only difference is that instead of predicting the next letter,\n\n the model predicts an entire word. This makes the output much more\n\n interpretable. Finally, because of its close relation to text8 generation,\n\n existing literature on text8 generation is relevant and comparable,\n\n in the sense that models that work well on text8 generation should work\n\n well on the autocomplete problem.\n\n \n\nUser types input sequence.\n\nRecurrent neural network processes the sequence.\n\nThe output for the last character is used.\n\nThe most likely suggestions are extracted.\n\n parts of north af \n\ncustom input, loading ...\n\nafrica(85.30%)africans(1.40%)african(8.90%)\n\n**Autocomplete:** An application that has a humanly\n\n interpretable output, while depending on both short and long-term\n\n contextual understanding. In this case, the network uses past information\n\n and understands the next word should be a country.\n\n \n\n The output in this figure was produced by the GRU model;\n\n all model setups are [described in the appendix](#appendix-autocomplete).\n\n \n\n Try [removing the last letters](javascript:arDemoShort();) to see\n\n that the network continues to give meaningful suggestions.\n\n \n\nYou can also type in your own text.\n\n ([reset](javascript:arDemoReset();)).\n\n \n\n The autocomplete dataset is constructed from the full\n\n [text8](http://mattmahoney.net/dc/textdata.html) dataset. The\n\n recurrent neural networks used to solve the problem have two layers, each\n\n with 600 units. There are three models, using GRU, LSTM, and Nested LSTM.\n\n See [the appendix](#appendix-autocomplete) for more details.\n\n \n\n### Connectivity in the Autocomplete Problem\n\n In the recently published Nested LSTM paper\n\n , they qualitatively compared their\n\n Nested LSTM unit to other recurrent units, to show how it memorizes in\n\n comparison, by visualizing individual cell activations.\n\n \n\n This visualization was inspired by Karpathy et al.\n\n where they identify cells\n\n that capture a specific feature. To identify a specific\n\n feature, this visualization approach works well. However, it is not a useful\n\n argument for memorization in general as the output is entirely dependent\n\n on what feature the specific cell captures.\n\n \n\n Instead, to get a better idea of how well each model memorizes and uses\n\n memory for contextual understanding, the connectivity between the desired\n\n output and the input is analyzed. 
This is calculated as:\n\n \n\nconnectivity(\\textrm{connectivity}(connectivity(\n\nttt\n\n Input time index.\n\n \n\n,,,\n\nt~\\tilde{t}t~\n\n Output time index.\n\n \n\n)=) =)=\n\n xtx^txt.\n\n \n\n Exploring the connectivity gives a surprising amount of insight into the\n\n different models’ ability for long-term contextual understanding. Try and\n\n interact with the figure below yourself to see what information the\n\n different models use for their predictions.\n\n \n\n**Connectivity:** the connection strength between\n\n ([reset](javascript:connectivitySetIndex(null);)).\n\n *Hover over or tap the text to change the selected character.*\n\n Let’s highlight three specific situations:\n\n \n\n1\n\n Observe how the models predict the word “learning” with [only the first two\n\n information and thus only suggests common words starting with the letter “l”.\n\n \n\n In contrast, the LSTM and GRU models both suggests the word “learning”.\n\n The GRU model shows stronger connectivity with the word “advanced”,\n\n \n\n2\n\n Examine how the models predict the word “grammar”.\n\n Thus, no model suggests “grammar” until it has\n\n [seen at least 4 characters](javascript:connectivitySetIndex(32);).\n\n \n\n When “grammar” appears for the second time, the models have more context.\n\n need [at least 4 characters](javascript:connectivitySetIndex(162);).\n\n \n\n3\n\n Finally, let’s look at predicting the word “schools”\n\n the GRU model seems better at using past information for\n\n contextual understanding.\n\n \n\n What makes this case noteworthy is how the LSTM model appears to\n\n use words from almost the entire sentence as context. However,\n\n its suggestions are far from correct and have little to do\n\n with the previous words it seems to use in its prediction.\n\n This suggests that the LSTM model in this setup is capable of\n\n long-term memorization, but not long-term contextual understanding.\n\n \n\n1\n\n2\n\n3\n\n These observations show that the connectivity visualization is a powerful tool\n\n However, it is only possible to compare models on the same dataset, and\n\n on a specific example. As such, while these observations may show that\n\n these observations may not generalize to other datasets or hyperparameters.\n\n \n\n### Future work; quantitative metric\n\n From the above observations it appears that short-term contextual understanding\n\n using previously seen letters from the word itself, as more letters become\n\n GRU network — use previously seen words as context for the prediction.\n\n \n\n This observation suggests a quantitative metric: measure the accuracy given\n\n how many letters from the word being predicted are already known.\n\n \n\n**Accuracy Graph**: shows the accuracy\n\n given a fixed number of characters in a word that the RNN has seen.\n\n 0 characters mean that the RNN has only seen the space leading up\n\n to the word, including all the previous text which should provide context.\n\n The different line styles, indicates if the correct word should appear\n\n among the top 1, 2, or 3 suggestions.\n\n \n\n These results suggest that the GRU model is better at long-term contextual\n\n understanding, while the LSTM model is better at short-term contextual\n\n understanding. 
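As an aside, the connectivity measure used above can be sketched as a gradient computation. The snippet below is an assumption-laden illustration rather than the article's exact definition: it assumes the model consumes one-hot encoded characters (so the input gradient is defined) and differentiates the score of the top suggestion at the chosen output step; the precise output quantity and norm used in the article may differ.

import tensorflow as tf

def connectivity_profile(model, one_hot_text, t_out):
    # How strongly the prediction at output step t_out depends on the
    # input at every time step, measured as per-step gradient magnitude.
    x = tf.convert_to_tensor(one_hot_text)      # shape [1, time, alphabet]
    with tf.GradientTape() as tape:
        tape.watch(x)
        y = model(x)                            # shape [1, time, num_words]
        score = tf.reduce_max(y[0, t_out])      # score of the top suggestion
    grad = tape.gradient(score, x)              # shape [1, time, alphabet]
    strength = tf.norm(grad[0], axis=-1)        # one value per input step
    return strength / tf.reduce_max(strength)   # normalized for display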
These observations are valuable, as it justifies why the\n\n the GRU model is far better at long-term contextual understanding.\n\n \n\n While more detailed quantitative metrics like this provides new insight,\n\n qualitative analysis like the connectivity figure presented\n\n intuitive understanding of how the model works, which a quantitative metric\n\n cannot. It also shows that a wrong prediction can still be considered a\n\n useful prediction, such as a synonym or a contextually reasonable\n\n prediction.\n\n \n\nConclusion\n\n----------\n\n Looking at overall accuracy and cross entropy loss in itself is not that\n\n interesting. Different models may prioritize either long-term or\n\n short-term contextual understanding, while both models can have similar\n\n accuracy and cross entropy.\n\n \n\n A qualitative analysis, where one looks at how previous input is used in\n\n the prediction is therefore also important when judging models. In this\n\n case, the connectivity visualization together with the autocomplete\n\n predictions, reveals that the GRU model is much more capable of long-term\n\n contextual understanding, compared to LSTM and Nested LSTM. In the case of\n\n LSTM, the difference is much higher than one would guess from just looking\n\n at the overall accuracy and cross entropy loss alone. This observation is\n\n not that interesting in itself as it is likely very dependent on the\n\n hyperparameters, and the specific application.\n\n \n\n Much more valuable is that this visualization method makes it possible\n\n to intuitively understand how the models are different, to a much higher\n\n degree than just looking at accuracy and cross entropy. For this application,\n\n it is clear that the GRU model uses repeating words and semantic meaning\n\n of past words to make its prediction, to a much higher degree than the LSTM\n\n and Nested LSTM models. This is both a valuable insight when choosing the\n\n final model, but also essential knowledge when developing better models\n\n in the future.\n\n \n\n", "bibliography_bib": [{"title": "Long short-term memory"}, {"title": "Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation"}, {"title": "Nested LSTMs"}, {"title": "The Penn Treebank: Annotating Predicate Argument Structure"}, {"title": "text8 Dataset"}, {"title": "On the difficulty of training recurrent neural networks"}, {"title": "Visualizing and Understanding Recurrent Networks"}], "filename": "Visualizing memorization in RNNs.html", "id": "2605283a8fc0b2e0cb8d1ac493ae0f18"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Branch Specialization", "authors": ["Chelsea Voss", "Gabriel Goh", "Nick Cammarata", "Michael Petrov", "Ludwig Schubert", "Chris Olah"], "date_published": "2021-04-05", "abstract": " This article is part of the Circuits thread, an experimental format collecting invited short articles and critical commentary delving into the inner workings of neural networks. 
", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00024.008", "text": "\n\n![](Branch%20Specialization_files/multiple-pages.svg)\n\n an experimental format collecting invited short articles and critical \n\ncommentary delving into the inner workings of neural networks.\n\n \n\n[Visualizing Weights](https://distill.pub/2020/circuits/visualizing-weights/)\n\n[Weight Banding](https://distill.pub/2020/circuits/weight-banding/)\n\nIntroduction\n\n------------\n\n If we think of interpretability as a kind of “anatomy of neural \n\nnetworks,” most of the circuits thread has involved studying tiny little\n\n veins – looking at the small-scale, at individual neurons and how they \n\nconnect. However, there are many natural questions that the small-scale \n\napproach doesn’t address.\n\n \n\n In contrast, the most prominent abstractions in biological anatomy\n\n involve larger-scale structures: individual organs like the heart, or \n\nentire organ systems like the respiratory system. And so we wonder: is \n\nthere a “respiratory system” or “heart” or “brain region” of an \n\nartificial neural network? Do neural networks have any emergent \n\nstructures that we could study that are larger-scale than circuits?\n\n \n\n have separate dedicated articles.) Branch specialization occurs when \n\nneural network layers are split up into branches. The neurons and \n\ncircuits tend to self-organize, clumping related functions into each \n\nbranch and forming larger functional units – a kind of “neural network \n\nbrain region.” We find evidence that these structures implicitly exist \n\nin neural networks without branches, and that branches are simply \n\nreifying structures that otherwise exist.\n\n \n\n AlexNet is famous as a jump in computer vision, arguably starting the \n\ndeep learning revolution, but buried in the paper is a fascinating, \n\nrarely-discussed detail.\n\n The first two layers of AlexNet are split into two branches which \n\ncan’t communicate until they rejoin after the second layer. This \n\nstructure was used to maximize the efficiency of training the model on \n\ntwo GPUs, but the authors noticed something very curious happened as a \n\nresult. The neurons in the first layer organized themselves into two \n\ngroups: black-and-white Gabor filters formed on one branch and \n\nlow-frequency color detectors formed on the other branch.\n\n \n\n![](Branch%20Specialization_files/Figure_1.png)\n\n observed the phenomenon we call branch specialization in the first \n\nlayer of AlexNet by visualizing their weights to RGB channels; here, we \n\n \n\n Although the first layer of AlexNet is the only example of branch \n\nspecialization we’re aware of being discussed in the literature, it \n\nseems to be a common phenomenon. We find that branch specialization \n\nhappens in later hidden layers, not just the first layer. It occurs in \n\nboth low-level and high-level features. It occurs in a wide range of \n\nmodels, including places you might not expect it – for example, residual\n\n blocks in resnets can functionally be branches and specialize. Finally,\n\n branch specialization appears to surface as a structural phenomenon in \n\nplain convolutional nets, even without any particular structure causing \n\nit.\n\n \n\n Is there a large-scale structure to how neural networks operate? \n\nHow are features and circuits organized within the model? Does network \n\narchitecture influence the features and circuits that form? 
Branch \n\nspecialization hints at an exciting story related to all of these \n\nquestions.\n\n \n\nWhat is a branch?\n\n-----------------\n\n \n\nInceptionV1\n\n has nine sets of four-way branches called “Inception blocks.”\n\n has several two-way branches.\n\nAlexNet\n\nResidual Networks\n\n \n\n In more recent years, these have become less common, but residual \n\nnetworks – which can be seen as implicitly having branches in their \n\nresidual blocks – have become very common. We also sometimes see \n\nbranched architectures develop automatically in neural architecture \n\nsearch, an approach where the network architecture is learned.\n\n \n\n The implicit branching of residual networks has some important \n\nnuances. At first glance, every layer is a two-way branch. But because \n\nthe branches are combined together by addition, we can actually rewrite \n\nthe model to reveal that the residual blocks can be understood as \n\nbranches in parallel:\n\n \n\n+\n\n+\n\n+\n\n+\n\n+\n\n[3](#figure-3). Residual blocks as branches in parallel.\n\n \n\n One hypothesis for why is that, in these models, the exact depth of a \n\nlayer doesn’t matter and the branching aspect becomes more important \n\nthan the sequential aspect.\n\n \n\n One of the conceptual weaknesses of normal branching models is \n\nthat although branches can save parameters, it still requires a lot of \n\nparameters to mix values between branches. However, if you buy the \n\nbranch interpretation of residual networks, you can see them as a \n\nstrategy to sidestep this: residual networks intermix branches (e.g. \n\nblock sparse weights) with low-rank connections (projecting all the \n\nblocks into the same sum and then back up). This seems like a really \n\nelegant way to handle branching. More practically, it suggests that \n\nanalysis of residual networks might be well-served by paying close \n\nattention to the units in the blocks, and that we might expect the \n\nresidual stream to be unusually polysemantic.\n\n \n\nWhy does branch specialization occur?\n\n-------------------------------------\n\n Branch specialization is defined by features organizing between \n\nbranches. In a normal layer, features are organized randomly: a given \n\nfeature is just as likely to be any neuron in a layer. But in a branched\n\n layer, we often see features of a given type cluster to one branch. The\n\n branch has specialized on that type of feature.\n\n \n\n \n\nA1\n\nB1\n\nC1\n\nD1\n\nA2\n\nB2\n\nD2\n\nD2\n\n \n\n Another way to think about this is that if you need to cut a \n\nneural network into pieces that have limited ability to communicate with\n\n each other, it makes sense to organize similar features close together,\n\n because they probably need to share more information.\n\n \n\nBranch specialization beyond the first layer\n\n--------------------------------------------\n\n So far, the only concrete example we’ve shown of branch \n\nspecialization is the first and second layer of AlexNet. What about \n\nlater layers? AlexNet also splits its later layers into branches, after \n\nall. This seems to be unexplored, since studying features after the \n\n Unfortunately, branch specialization in the later layers of \n\nAlexNet is also very subtle. Instead of one overall split, it’s more \n\nlike there’s dozens of small clusters of neurons, each cluster being \n\nassigned to a branch. 
It’s hard to be confident that one isn’t just \n\nseeing patterns in noise.\n\n \n\n But other models have very clear branch specialization in later \n\nlayers. This tends to happen when a branch constitutes only a very small\n\n fraction of a layer, either because there are many branches or because \n\none is much smaller than others. In these cases, the branch can \n\nspecialize on a very small subset of the features that exist in a layer \n\nand reveal a clear pattern.\n\n \n\n For example, most of InceptionV1′s layers have a branched \n\nstructure. The branches have varying numbers of units, and varying \n\nconvolution sizes. The 5x5 branch is the smallest branch, and also has \n\nthe largest convolution size. It’s often very specialized:\n\n \n\nmixed3a\\_5x5: \n\nmixed3b\\_5x5: \n\nmixed4a\\_5x5: \n\n3D Geometry / Complex Shapes\n\nCurve Related\n\nBW vs Color\n\nFur/Eye/Face Related\n\nOther\n\nBoundary Detectors\n\nOther\n\nOther\n\nBrightness\n\nOther Color Contrast\n\n \n\n This is exceptionally unlikely to have occurred by chance.\n\n \n\n It’s worth noting one confounding factor which might be \n\ninfluencing the specialization. The 5x5 branches are the smallest \n\nbranches, but also have larger convolutions (5x5 instead of 3x3 or 1x1) \n\nthan their neighbors.There is something \n\nwhich suggests that the branching plays an essential role: mixed3a and \n\nmixed3b are adjacent layers which contain relatively similar features \n\nand are at the same scale. If it was only about convolution size, why \n\nWhy is branch specialization consistent?\n\n----------------------------------------\n\n Perhaps the most surprising thing about branch specialization is \n\nthat the same branch specializations seem to occur again and again, \n\nacross different architectures and tasks.\n\n \n\n For example, the branch specialization we observed in AlexNet – \n\nthe first layer specializing into a black-and-white Gabor branch vs. a \n\nlow-frequency color branch – is a surprisingly robust phenomenon. It \n\noccurs consistently if you retrain AlexNet. It also occurs if you train \n\nother architectures with the first few layers split into two branches. \n\nIt even occurs if you train those models on other natural image \n\ndatasets, like Places instead of ImageNet. Anecdotally, we also seem to \n\nsee other types of branch specialization recur. For example, finding \n\nbranches that seem to specialize in curve detection seems to be quite \n\n \n\n So, why do the same branch specializations occur again and again?\n\n \n\n One hypothesis seems very tempting. Notice that many of the same \n\nfeatures that form in normal, non-branched models also seem to form in \n\nbranched models. For example, the first layer of both branched and \n\nnon-branched models contain Gabor filters and color features. If the \n\nsame features exist, presumably the same weights exist between them.\n\n \n\n Could it be that branching is just surfacing a structure that \n\nalready exists? Perhaps there’s two different subgraphs between the \n\nweights of the first and second conv layer in a normal model, with \n\nrelatively small weights between them, and when you train a branched \n\nmodel these two subgraphs latch onto the branches.\n\n (This would be directionally similar to work finding modular \n\nsubstructures within neural networks.)\n\n \n\n To test this, let’s look at models which have non-branched first \n\nand second convolutional layers. 
Let’s take the weights between them and\n\n perform a singular value decomposition (SVD) on the absolute values of \n\nthe weights. This will show us the main factors of variation in which \n\nneurons connect to different neurons in the next layer (irrespective of \n\nwhether those connections are excitatory or inhibitory).\n\n \n\n Sure enough, the singular vector (the largest factor of variation)\n\n of the weights between the first two convolutional layers of \n\nInceptionV1 is color.\n\n \n\nfirst convolutional layer\n\nNeurons in the \n\norganized by the left singular vectors of |W|.\n\nInceptionV1 (tf-slim version) trained on ImageNet.\n\nSingular Vector 1 (frequency?)\n\nSingular Vector 0 (color?)\n\nInceptionV1 trained on Places365\n\nSingular Vector 1 (frequency?)\n\nSingular Vector 0 (color?)\n\nSingular Vector 1 (frequency?)\n\nSingular Vector 0 (color?)\n\nSingular Vector 1 (frequency?)\n\nSingular Vector 0 (color?)\n\nsecond convolutional layer\n\nNeurons in the organized by the right singular vectors of |W|.\n\n[6](#figure-6). Singular \n\nvectors for the first and second convolutional layers of InceptionV1, \n\ntrained on ImageNet (above) or Places365 (below). One can think of \n\nneurons being plotted closer together in this diagram as meaning they \n\nlikely tend to connect to similar neurons.\n\n \n\n This suggests an interesting prediction: perhaps if we were to split \n\nthe layer into more than two branches, we’d also observe specialization \n\nin frequency in addition to color.\n\n \n\n This seems like it may be true. For example, here we see a \n\nhigh-frequency black-and-white branch, a mid-frequency mostly \n\nblack-and-white branch, a mid-frequency color branch, and a \n\nlow-frequency color branch.\n\n \n\n![](Branch%20Specialization_files/Figure_7.png)\n\n[7](#figure-7). We \n\nconstructed a small ImageNet model with the first layer split into four \n\nbranches. The rest of the model is roughly an InceptionV1 architecture.\n\n \n\nParallels to neuroscience\n\n-------------------------\n\n We’ve shown that branch specialization is one example of a \n\nstructural phenomenon — a larger-scale structure in a neural network. It\n\n happens in a variety of situations and neural network architectures, \n\nand it happens with *consistency* – certain motifs of \n\nspecialization, such as color, frequency, and curves, happen \n\nconsistently across different architectures and tasks.\n\n \n\n Returning to our comparison with anatomy, although we hesitate to \n\nclaim explicit parallels to neuroscience, it’s tempting to draw \n\nanalogies between branch specialization and the existence of regions of \n\nthe brain focused on particular tasks.\n\n The visual cortex, the auditory cortex, Broca’s area and \n\nWernicke’s area\n\n \n\n The subspecialization within the V2 area of the primate visual \n\ncortex is another strong example from neuroscience. 
One type of stripe within V2 is sensitive to orientation or luminance, whereas the other type of stripe contains color-selective neurons. We are grateful to Patrick Mineault for noting this analogy, and for further noting that the high-frequency features are consistent with some of the known representations of high-level features in the primate V2 area. All of these are examples of brain areas with such consistent specialization across wide populations of people that neuroscientists and psychologists have been able to characterize them as having remarkably consistent functions. As researchers without expertise in neuroscience, we're uncertain how useful this connection is, but it may be worth considering whether branch specialization can be a useful model of how specialization might emerge in biological neural networks.

### Comment

The commenter is Professor of Neural Circuits and Computation at the Centre for Discovery Brain Sciences and Simons Initiative for the Developing Brain, University of Edinburgh.

As neuroscientists we're excited by this work as it offers fresh theoretical perspectives on long-standing questions about how brains are organised and how they develop. Branching and specialisation are found throughout the brain. A well studied example is the dorsal and ventral visual streams, which are associated with spatial and non-spatial visual processing. At the microcircuit level neurons in each pathway are similar. However, recordings of neural activity demonstrate remarkable specialisation; classic experiments from the 1970s and 80s established the idea that the ventral stream enables identification of objects whereas the dorsal stream represents their location. Since then, much has been learned about signal processing in these pathways, but fundamental questions such as why there are multiple streams and how they are established remain unanswered.

From the perspective of a neuroscientist, a striking result from the investigation of branch specialization by Voss and her colleagues is that robust branch specialisation emerges in the absence of any complex branch-specific design rules. Their analyses show that specialisation is similar within and across architectures, and across different training tasks. The implication here is that no specific instructions are required for branch specialisation to emerge. 
Indeed, \n\ntheir analyses suggest that it even emerges in the absence of \n\npredetermined branches. By contrast, the intuition of many \n\nneuroscientists would be that specialisation of different areas of the \n\nneocortex requires developmental mechanisms that are specific to each \n\narea. For neuroscientists aiming to understand how perceptual and \n\ncognitive functions of the brain arise, an important idea here is that \n\ndevelopmental mechanisms that drive the separation of cortical pathways,\n\n such as the dorsal and ventral visual streams, may be absolutely \n\ncritical.\n\nWhile the parallels between branch specialization in \n\nartificial neural networks and neural circuits in the brain are \n\nstriking, there are clearly major differences and many outstanding \n\nquestions. From the perspective of building artificial neural networks, \n\nwe wonder if branch specific tuning of individual units and their \n\nconnectivity rules would enhance performance? In the brain, there is \n\ngood evidence that the activation functions of individual neurons are \n\nfine-tuned between and even within distinct neural circuits. If this \n\nfine tuning confers benefits to the brain then we might expect similar \n\nbenefits in artificial networks. From the perspective of understanding \n\nthe brain, we wonder whether branch specialisation could help make \n\nexperimentally testable predictions? If artificial networks can be \n\nengineered with branches that have organisation similar to branching \n\npathways in the brain, then manipulations to these networks could be \n\ncompared to experimental manipulations achieved with optogenetic and \n\nchemogenetic strategies. Given that many brain disorders involve changes\n\n to specific neural populations, similar strategies could give insights \n\ninto how these pathological changes alter brain functions. For example, \n\nvery specific populations of neurons are disrupted in early stages of \n\nAlzheimer’s disease. By disrupting corresponding units in neural network\n\n models one could explore the resulting computational deficits and \n\npossible strategies for restoration of cognitive functions.\n\n", "bibliography_bib": [{"title": "Imagenet classification with deep convolutional neural networks"}, {"title": "Visualizing higher-layer features of a deep network"}, {"title": "Deep inside convolutional networks: Visualising image classification models and saliency maps"}, {"title": "Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks"}, {"title": "Feature Visualization"}, {"title": "Going deeper with convolutions"}, {"title": "Neural architecture search with reinforcement learning"}, {"title": "Neural networks are surprisingly modular"}, {"title": "Are Neural Nets Modular? 
Inspecting Functional Modularity Through Differentiable Weight Masks"}, {"title": "Segregation of form, color, and stereopsis in primate area 18"}, {"title": "Representation of Angles Embedded within Contour Stimuli in Area V2 of Macaque Monkeys"}], "filename": "Branch Specialization.html", "id": "076701aef9db76e7a5761b0b3af43bba"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Growing Neural Cellular Automata", "authors": ["Alexander Mordvintsev", "Ettore Randazzo", "Eyvind Niklasson", "Michael Levin"], "date_published": "2020-02-11", "abstract": " This article is part of the Differentiable Self-organizing Systems Thread, an experimental format collecting invited short articles delving into differentiable self-organizing systems, interspersed with critical commentary from several experts in adjacent fields. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00023", "text": "\n\n### Contents\n\n[Model](#model)\n\n[Experiments](#experiment-1)\n\n* [Learning to Grow](#experiment-1)\n\n* [What persists, exists](#experiment-2)\n\n* [Learning to regenerate](#experiment-3)\n\n* [Rotating the perceptive field](#experiment-4)\n\n[Related Work](#related-work)\n\n[Discussion](#discussion)\n\n![](Growing%20Neural%20Cellular%20Automata_files/multiple-pages.svg)\n\n This article is part of the\n\n an experimental format collecting invited short articles delving into\n\n differentiable self-organizing systems, interspersed with critical\n\n commentary from several experts in adjacent fields.\n\n \n\n[Self-classifying MNIST Digits](https://distill.pub/2020/selforg/mnist/)\n\n Most multicellular organisms begin their life as a single egg cell - a\n\n single cell whose progeny reliably self-assemble into highly complex\n\n anatomies with many organs and tissues in precisely the same arrangement\n\n each time. The ability to build their own bodies is probably the most\n\n fundamental skill every living creature possesses. Morphogenesis (the\n\n process of an organism’s shape development) is one of the most striking\n\n examples of a phenomenon called *self-organisation*. Cells, the tiny\n\n building blocks of bodies, communicate with their neighbors to decide the\n\n shape of organs and body plans, where to grow each organ, how to\n\n interconnect them, and when to eventually stop. Understanding the interplay\n\n of the emergence of complex outcomes from simple rules and\n\n homeostatic\n\n Self-regulatory feedback loops trying maintain the body in a stable state\n\n or preserve its correct overall morphology under external\n\n perturbations\n\n feedback loops is an active area of research\n\n . What is clear\n\n is that evolution has learned to exploit the laws of physics and computation\n\n to implement the highly robust morphogenetic software that runs on\n\n genome-encoded cellular hardware.\n\n \n\n This process is extremely robust to perturbations. Even when the organism is\n\n fully developed, some species still have the capability to repair damage - a\n\n process known as regeneration. Some creatures, such as salamanders, can\n\n fully regenerate vital organs, limbs, eyes, or even parts of the brain!\n\n Morphogenesis is a surprisingly adaptive process. 
Sometimes even a very\n\n atypical development process can result in a viable organism - for example,\n\n when an early mammalian embryo is cut in two, each half will form a complete\n\n individual - monozygotic twins!\n\n \n\n The biggest puzzle in this field is the question of how the cell collective\n\n knows what to build and when to stop. The sciences of genomics and stem cell\n\n biology are only part of the puzzle, as they explain the distribution of\n\n specific components in each cell, and the establishment of different types\n\n of cells. While we know of many genes that are *required* for the\n\n process of regeneration, we still do not know the algorithm that is\n\n *sufficient* for cells to know how to build or remodel complex organs\n\n to a very specific anatomical end-goal. Thus, one major lynch-pin of future\n\n work in biomedicine is the discovery of the process by which large-scale\n\n anatomy is specified within cell collectives, and how we can rewrite this\n\n information to have rational control of growth and form. It is also becoming\n\n clear that the software of life possesses numerous modules or subroutines,\n\n such as “build an eye here”, which can be activated with simple signal\n\n triggers. Discovery of such subroutines and a\n\n mapping out of the developmental logic is a new field at the intersection of\n\n developmental biology and computer science. An important next step is to try\n\n to formulate computational models of this process, both to enrich the\n\n conceptual toolkit of biologists and to help translate the discoveries of\n\n biology into better robotics and computational technology.\n\n \n\n Imagine if we could design systems of the same plasticity and robustness as\n\n biological life: structures and machines that could grow and repair\n\n themselves. Such technology would transform the current efforts in\n\n regenerative medicine, where scientists and clinicians seek to discover the\n\n inputs or stimuli that could cause cells in the body to build structures on\n\n demand as needed. To help crack the puzzle of the morphogenetic code, and\n\n also exploit the insights of biology to create self-repairing systems in\n\n real life, we try to replicate some of the desired properties in an\n\n *in silico* experiment.\n\n \n\nModel\n\n-----\n\n Those in engineering disciplines and researchers often use many kinds of\n\n simulations incorporating local interaction, including systems of partial\n\n derivative equation (PDEs), particle systems, and various kinds of Cellular\n\n Automata (CA). We will focus on Cellular Automata models as a roadmap for\n\n the effort of identifying cell-level rules which give rise to complex,\n\n regenerative behavior of the collective. CAs typically consist of a grid of\n\n cells being iteratively updated, with the same set of rules being applied to\n\n each cell at every step. The new state of a cell depends only on the states\n\n of the few cells in its immediate neighborhood. Despite their apparent\n\n simplicity, CAs often demonstrate rich, interesting behaviours, and have a\n\n long history of being applied to modeling biological phenomena.\n\n \n\n Let’s try to develop a cellular automata update rule that, starting from a\n\n single cell, will produce a predefined multicellular pattern on a 2D grid.\n\n This is our analogous toy model of organism development. To design the CA,\n\n we must specify the possible cell states, and their update function. 
Typical\n\n CA models represent cell states with a set of discrete values, although\n\n variants using vectors of continuous values exist. The use of continuous\n\n values has the virtue of allowing the update rule to be a differentiable\n\n function of the cell’s neighbourhood’s states. The rules that guide\n\n individual cell behavior based on the local environment are analogous to the\n\n low-level hardware specification encoded by the genome of an organism.\n\n Running our model for a set amount of steps from a starting configuration\n\n will reveal the patterning behavior that is enabled by such hardware.\n\n \n\n So - what is so special about differentiable update rules? They will allow\n\n us to use the powerful language of loss functions to express our wishes, and\n\n the extensive existing machinery around gradient-based numerical\n\n optimization to fulfill them. The art of stacking together differentiable\n\n functions, and optimizing their parameters to perform various tasks has a\n\n long history. In recent years it has flourished under various names, such as\n\n (Deep) Neural Networks, Deep Learning or Differentiable Programming.\n\n \n\nA single update step of the model.\n\n### Cell State\n\n We will represent each cell state as a vector of 16 real values (see the\n\n figure above). The first three channels represent the cell color visible to\n\n and an α\\alphaα equal to 1.0 for foreground pixels, and 0.0 for background.\n\n \n\n The alpha channel (α\\alphaα) has a special meaning: it demarcates living\n\n cells, those belonging to the pattern being grown. In particular, cells\n\n cells are “dead” or empty and have their state vector values explicitly set\n\n can become mature if their alpha passes the 0.1 threshold.\n\n \n\n![](Growing%20Neural%20Cellular%20Automata_files/alive2.svg)\n\n Hidden channels don’t have a predefined meaning, and it’s up to the update\n\n rule to decide what to use them for. They can be interpreted as\n\n concentrations of some chemicals, electric potentials or some other\n\n signaling mechanism that are used by cells to orchestrate the growth. In\n\n terms of our biological analogy - all our cells share the same genome\n\n (update rule) and are only differentiated by the information encoded the\n\n chemical signalling they receive, emit, and store internally (their state\n\n vectors).\n\n \n\n### Cellular Automaton rule\n\n Now it’s time to define the update rule. Our CA runs on a regular 2D grid of\n\n 16-dimensional vectors, essentially a 3D array of shape [height, width, 16].\n\n We want to apply the same operation to each cell, and the result of this\n\n operation can only depend on the small (3x3) neighborhood of the cell. This\n\n is heavily reminiscent of the convolution operation, one of the cornerstones\n\n of signal processing and differential programming. Convolution is a linear\n\n operation, but it can be combined with other per-cell operations to produce\n\n a complex update rule, capable of learning the desired behaviour. Our cell\n\n update rule can be split into the following phases, applied in order:\n\n \n\n**Perception.** This step defines what each cell perceives of\n\n the environment surrounding it. We implement this via a 3x3 convolution with\n\n a fixed kernel. One may argue that defining this kernel is superfluous -\n\n after all we could simply have the cell learn the requisite perception\n\n kernel coefficients. 
Our choice of fixed operations are motivated by the\n\n fact that real life cells often rely only on chemical gradients to guide the\n\n organism development. Thus, we are using classical Sobel filters to estimate\n\n the partial derivatives of cell state channels in the x⃗\\vec{x}x⃗ and\n\n y⃗\\vec{y}y⃗​ directions, forming a 2D gradient vector in each direction, for\n\n each state channel. We concatenate those gradients with the cells own\n\n rather *percepted vector,* for each cell.\n\n \n\ndef perceive(state\\_grid):\n\nsobel\\_x = [[-1, 0, +1],\n\n[-2, 0, +2],\n\n[-1, 0, +1]]\n\nsobel\\_y = transpose(sobel\\_x)\n\n# Convolve sobel filters with states\n\n# in x, y and channel dimension.\n\ngrad\\_x = conv2d(sobel\\_x, state\\_grid)\n\ngrad\\_y = conv2d(sobel\\_y, state\\_grid)\n\n# Concatenate the cell’s state channels,\n\n# the gradients of channels in x and\n\n# the gradient of channels in y.\n\nperception\\_grid = concat(\n\nstate\\_grid, grad\\_x, grad\\_y, axis=2)\n\nreturn perception\\_grid\n\n**Update rule.** Each cell now applies a series of operations\n\n to the perception vector, consisting of typical differentiable programming\n\n building blocks, such as 1x1-convolutions and ReLU nonlinearities, which we\n\n call the cell’s “update rule”. Recall that the update rule is learned, but\n\n every cell runs the same update rule. The network parametrizing this update\n\n rule consists of approximately 8,000 parameters. Inspired by residual neural\n\n networks, the update rule outputs an incremental update to the cell’s state,\n\n which applied to the cell before the next time step. The update rule is\n\n designed to exhibit “do-nothing” initial behaviour - implemented by\n\n initializing the weights of the final convolutional layer in the update rule\n\n with zero. We also forego applying a ReLU to the output of the last layer of\n\n the update rule as the incremental updates to the cell state must\n\n necessarily be able to both add or subtract from the state.\n\n \n\ndef update(perception\\_vector):\n\n# The following pseudocode operates on\n\n# a single cell’s perception vector.\n\n# Our reference implementation uses 1D\n\n# convolutions for performance reasons.\n\nx = dense(perception\\_vector, output\\_len=128)\n\nx = relu(x)\n\nds = dense(x, output\\_len=16, weights\\_init=0.0)\n\nreturn ds\n\n**Stochastic cell update.** Typical cellular automata update\n\n all cells simultaneously. This implies the existence of a global clock,\n\n synchronizing all cells. Relying on global synchronisation is not something\n\n one expects from a self-organising system. We relax this requirement by\n\n assuming that each cell performs an update independently, waiting for a\n\n random time interval between updates. To model this behaviour we apply a\n\n random per-cell mask to update vectors, setting all update values to zero\n\n with some predefined probability (we use 0.5 during training). This\n\n operation can be also seen as an application of per-cell dropout to update\n\n vectors.\n\n \n\ndef stochastic\\_update(state\\_grid, ds\\_grid):\n\n# Zero out a random fraction of the updates.\n\nrand\\_mask = cast(random(64, 64) < 0.5, float32)\n\nds\\_grid = ds\\_grid \\* rand\\_mask\n\nreturn state\\_grid + ds\\_grid\n\n**Living cell masking.** We want to model the growth process\n\n that starts with a single cell, and don’t want empty cells to participate in\n\n computations or carry any hidden state. We enforce this by explicitly\n\n setting all channels of empty cells to zeros. 
A cell is considered empty if\n\n there is no “mature” (alpha>0.1) cell in its 3x3 neightborhood.\n\n \n\ndef alive\\_masking(state\\_grid):\n\n# Take the alpha channel as the measure of “life”.\n\nalive = max\\_pool(state\\_grid[:, :, 3], (3,3)) > 0.1\n\nstate\\_grid = state\\_grid \\* cast(alive, float32)\n\nreturn state\\_grid\n\nExperiment 1: Learning to Grow\n\n------------------------------\n\nTraining regime for learning a target pattern.\n\n In our first experiment, we simply train the CA to achieve a target image\n\n after a random number of updates. This approach is quite naive and will run\n\n into issues. But the challenges it surfaces will help us refine future\n\n attempts.\n\n \n\n We initialize the grid with zeros, except a single seed cell in the center,\n\n which will have all channels except RGB\n\n We set RGB channels of the seed to zero because we want it to be visible\n\n on the white background.\n\n set to one. Once the grid is initialized, we iteratively apply the update\n\n rule. We sample a random number of CA steps from the [64, 96]\n\n This should be a sufficient number of steps to grow the pattern of the\n\n size we work with (40x40), even considering the stochastic nature of our\n\n update rule.\n\n range for each training step, as we want the pattern to be stable across a\n\n number of iterations. At the last step we apply pixel-wise L2 loss between\n\n RGBA channels in the grid and the target pattern. This loss can be\n\n differentiably optimized\n\n We observed training instabilities, that were manifesting themselves as\n\n sudden jumps of the loss value in the later stages of the training. We\n\n managed to mitigate them by applying per-variable L2 normalization to\n\n parameter gradients. This may have the effect similar to the weight\n\n normalization . Other training\n\n parameters are available in the accompanying source code.\n\n with respect to the update rule parameters by backpropagation-through-time,\n\n the standard method of training recurrent neural networks.\n\n \n\n Once the optimisation converges, we can run simulations to see how our\n\n learned CAs grow patterns starting from the seed cell. Let’s see what\n\n happens when we run it for longer than the number of steps used during\n\n training. The animation below shows the behaviour of a few different models,\n\n trained to generate different emoji patterns.\n\n \n\n Your browser does not support the video tag.\n\n \n\n Many of the patterns exhibit instability for longer time periods.\n\n \n\n \n\n We can see that different training runs can lead to models with drastically\n\n different long term behaviours. Some tend to die out, some don’t seem to\n\n know how to stop growing, but some happen to be almost stable! How can we\n\n steer the training towards producing persistent patterns all the time?\n\n \n\nExperiment 2: What persists, exists\n\n-----------------------------------\n\n One way of understanding why the previous experiment was unstable is to draw\n\n a parallel to dynamical systems. We can consider every cell to be a\n\n dynamical system, with each cell sharing the same dynamics, and all cells\n\n being locally coupled amongst themselves. When we train our cell update\n\n model we are adjusting these dynamics. Our goal is to find dynamics that\n\n satisfy a number of properties. Initially, we wanted the system to evolve\n\n from the seed pattern to the target pattern - a trajectory which we achieved\n\n in Experiment 1. 
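For reference, the sketch below shows how the four phases defined earlier compose into a single CA step, and how the naive Experiment 1 training loop uses it. It follows the same pseudocode conventions as the snippets above; random_int, mean, square, gradients, l2_norm, apply_gradients and update_rule_parameters are placeholders rather than functions from the reference implementation.

def ca_step(state_grid):
    # Chain the four phases defined above into one CA update.
    perception_grid = perceive(state_grid)
    ds_grid = update(perception_grid)    # applied to every cell's perception vector
    state_grid = stochastic_update(state_grid, ds_grid)
    return alive_masking(state_grid)

def experiment1_train_step(target_rgba):
    # Naive Experiment 1 training: grow from the single seed cell for a
    # random number of steps, then apply a pixel-wise L2 loss on RGBA.
    seed = zeros(64, 64, 16)
    seed[64//2, 64//2, 3:] = 1.0              # all channels except RGB set to one
    state = seed
    for _ in range(random_int(64, 96)):       # steps resampled each training step
        state = ca_step(state)
    loss = mean(square(state[:, :, :4] - target_rgba))
    grads = gradients(loss, update_rule_parameters)   # backprop through time
    grads = [g / l2_norm(g) for g in grads]           # per-variable normalization
    apply_gradients(grads, update_rule_parameters)
    return loss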
Now, we want to avoid the instability we observed - which\n\n in our dynamical system metaphor consists of making the target pattern an\n\n attractor.\n\n \n\n One strategy to achieve this is letting the CA iterate for much longer time\n\n and periodically applying the loss against the target, training the system\n\n by backpropagation through these longer time intervals. Intuitively we claim\n\n that with longer time intervals and several applications of loss, the model\n\n is more likely to create an attractor for the target shape, as we\n\n iteratively mold the dynamics to return to the target pattern from wherever\n\n the system has decided to venture. However, longer time periods\n\n substantially increase the training time and more importantly, the memory\n\n requirements, given that the entire episode’s intermediate activations must\n\n be stored in memory for a backwards-pass to occur.\n\n \n\n Instead, we propose a “sample pool” based strategy to a similar effect. We\n\n define a pool of seed states to start the iterations from, initially filled\n\n with the single black pixel seed state. We then sample a batch from this\n\n pool which we use in our training step. To prevent the equivalent of\n\n “catastrophic forgetting” we replace one sample in this batch with the\n\n original, single-pixel seed state. After concluding the training step , we\n\n replace samples in the pool that were sampled for the batch with the output\n\n states from the training step over this batch. The animation below shows a\n\n random sample of the entries in the pool every 20 training steps.\n\n \n\ndef pool\\_training():\n\n# Set alpha and hidden channels to (1.0).\n\nseed = zeros(64, 64, 16)\n\nseed[64//2, 64//2, 3:] = 1.0\n\ntarget = targets[‘lizard’]\n\npool = [seed] \\* 1024\n\nfor i in range(training\\_iterations):\n\nidxs, batch = pool.sample(32)\n\n# Sort by loss, descending.\n\nbatch = sort\\_desc(batch, loss(batch))\n\n# Replace the highest-loss sample with the seed.\n\nbatch[0] = seed\n\n# Perform training.\n\noutputs, loss = train(batch, target)\n\n# Place outputs back in the pool.\n\npool[idxs] = outputs\n\n Your browser does not support the video tag.\n\n \n\n A random sample of the patterns in the pool during training, sampled\n\n every 20 training steps. \n\n \n\n Early on in the training process, the random dynamics in the system allow\n\n the model to end up in various incomplete and incorrect states. As these\n\n states are sampled from the pool, we refine the dynamics to be able to\n\n recover from such states. Finally, as the model becomes more robust at going\n\n from a seed state to the target state, the samples in the pool reflect this\n\n and are more likely to be very close to the target pattern, allowing the\n\n training to refine these almost completed patterns further.\n\n \n\n Essentially, we use the previous final states as new starting points to\n\n force our CA to learn how to persist or even improve an already formed\n\n pattern, in addition to being able to grow it from a seed. This makes it\n\n possible to add a periodical loss for significantly longer time intervals\n\n than otherwise possible, encouraging the generation of an attractor as the\n\n target shape in our coupled system. 
We also noticed that reseeding the\n\n highest loss sample in the batch, instead of a random one, makes training\n\n more stable at the initial stages, as it helps to clean up the low quality\n\n states from the pool.\n\n \n\n Here is what a typical training progress of a CA rule looks like. The cell\n\n rule learns to stabilize the pattern in parallel to refining its features.\n\n \n\n Your browser does not support the video tag.\n\n \n\n CA behaviour at training steps 100, 500, 1000, 4000. \n\n \n\nExperiment 3: Learning to regenerate\n\n------------------------------------\n\n In addition to being able to grow their own bodies, living creatures are\n\n great at maintaining them. Not only does worn out skin get replaced with new\n\n skin, but very heavy damage to complex vital organs can be regenerated in\n\n some species. Is there a chance that some of the models we trained above\n\n have regenerative capabilities?\n\n \n\n Your browser does not support the video tag.\n\n \n\n Patterns exhibit some regenerative properties upon being damaged, but\n\n not full re-growth. \n\n \n\n The animation above shows three different models trained using the same\n\n settings. We let each of the models develop a pattern over 100 steps, then\n\n damage the final state in five different ways: by removing different halves\n\n of the formed pattern, and by cutting out a square from the center. Once\n\n again, we see that these models show quite different out-of-training mode\n\n behaviour. For example “the lizard” develops quite strong regenerative\n\n capabilities, without being explicitly trained for it!\n\n \n\n Since we trained our coupled system of cells to generate an attractor\n\n towards a target shape from a single cell, it was likely that these systems,\n\n once damaged, would generalize towards non-self-destructive reactions.\n\n That’s because the systems were trained to grow, stabilize, and never\n\n entirely self-destruct. Some of these systems might naturally gravitate\n\n towards regenerative capabilities, but nothing stops them from developing\n\n different behaviors such as explosive mitoses (uncontrolled growth),\n\n unresponsiveness to damage (overstabilization), or even self destruction,\n\n especially for the more severe types of damage.\n\n \n\n If we want our model to show more consistent and accurate regenerative\n\n capabilities, we can try to increase the basin of attraction for our target\n\n pattern - increase the space of cell configurations that naturally gravitate\n\n towards our target shape. We will do this by damaging a few pool-sampled\n\n states before each training step. The system now has to be capable of\n\n regenerating from states damaged by randomly placed erasing circles. Our\n\n hope is that this will generalize to regenerational capabilities from\n\n various types of damage.\n\n \n\n Your browser does not support the video tag.\n\n \n\n Damaging samples in the pool encourages the learning of robust\n\n regenerative qualities. Row 1 are samples from the pool, Row 2 are their\n\n respective states after iterating the model. \n\n \n\n The animation above shows training progress, which includes sample damage.\n\n We sample 8 states from the pool. Then we replace the highest-loss sample\n\n (top-left-most in the above) with the seed state, and damage the three\n\n lowest-loss (top-right-most) states by setting a random circular region\n\n within the pattern to zeros. The bottom row shows states after iteration\n\n from the respective top-most starting state. 
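For concreteness, here is a minimal pseudocode sketch of the circular damage applied to the lowest-loss pool samples. The radius range and placement policy are assumptions, and shape helpers such as random_int and mesh_grid are placeholders in the same pseudocode style as the earlier snippets.

def damage_circle(state_grid):
    # Zero out every channel inside a randomly placed circle.
    h, w = 64, 64
    cy, cx = random_int(0, h), random_int(0, w)
    r = random_int(3, h // 4)                       # radius range is an assumption
    yy, xx = mesh_grid(h, w)
    outside = ((yy - cy)**2 + (xx - cx)**2) > r*r   # 1 outside the damaged region
    return state_grid * cast(outside, float32)[:, :, None]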
As in Experiment 2, the resulting states get injected back into the pool.\n\n Patterns exposed to damage during training exhibit astounding regenerative capabilities.\n\n As we can see from the animation above, models that were exposed to damage during training are much more robust, including to types of damage not experienced in the training process (for instance the rectangular damage above).\n\nExperiment 4: Rotating the perceptive field\n\n-------------------------------------------\n\n As previously described, we model the cell's perception of its neighbouring cells by estimating the gradients of the state channels in the $\vec{x}$ and $\vec{y}$ directions using Sobel filters. A convenient analogy is that each agent has two sensors (chemosensory receptors, for instance) pointing in orthogonal directions that can sense the gradients in the concentration of certain chemicals along the axis of the sensor. What happens if we rotate those sensors? We can do this by rotating the Sobel kernels:\n\n$$\begin{bmatrix} K_{x}' \\ K_{y}' \end{bmatrix} = \begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{bmatrix} \begin{bmatrix} K_{x} \\ K_{y} \end{bmatrix}$$\n\n where $K_{x}$ and $K_{y}$ are the original Sobel kernels and $\theta$ is the rotation angle. This simple modification of the perceptive field produces rotated versions of the pattern for an angle of one's choosing, without retraining, as seen below.\n\n![](Growing%20Neural%20Cellular%20Automata_files/rotation.png)\n\n Rotating the axis along which the perception step computes gradients brings about rotated versions of the pattern.\n\n In a perfect world, not quantized by individual cells in a pixel-lattice, this would not be too surprising: after all, one would expect the pattern to simply grow rotated by the given angle - a mere change of the frame of reference. However, it is important to note that things are not as simple in a pixel-based model. Rotating pixel-based graphics involves computing a mapping that is not necessarily bijective, and classically involves interpolating between pixels to achieve the desired result, because a single pixel, when rotated, will likely overlap several pixels. The successful growth of patterns as above suggests a certain robustness to conditions outside of those experienced during training.\n\nRelated Work\n\n------------\n\n### CA and PDEs\n\n There exists an extensive body of literature describing the various flavours of cellular automata and PDE systems, and their applications to modelling physical, biological, or even social systems. Although it would be impossible to present a just overview of this field in a few lines, we will describe some prominent examples that inspired this work. Alan Turing introduced his famous Turing patterns back in 1952, suggesting how reaction-diffusion systems can be a valid model for chemical behaviors during morphogenesis. A particularly inspiring reaction-diffusion model that has stood the test of time is the Gray-Scott model, which shows an extreme variety of behaviors controlled by just a few variables.\n\n Ever since von Neumann introduced CAs as models for self-replication, they have captivated researchers, who observed extremely complex behaviours emerging from very simple rules. Likewise, a broader audience outside of academia was seduced by CA's life-like behaviours thanks to Conway's Game of Life. 
Perhaps\n\n motivated in part by the proof that something as simple as the Rule 110 is\n\n Turing complete, Wolfram’s “*A New Kind of Science”*\n\n asks for a paradigm shift centered\n\n around the extensive usage of elementary computer programs such as CA as\n\n tools for understanding the world.\n\n \n\n More recently, several researchers generalized Conway’s Game of life to work\n\n on more continuous domains. We were particularly inspired by Rafler’s\n\n SmoothLife and Chan’s Lenia\n\n , the latter of\n\n which also discovers and classifies entire species of “lifeforms”.\n\n \n\n A number of researchers have used evolutionary algorithms to find CA rules\n\n that reproduce predefined simple patterns\n\n .\n\n For example, J. Miller proposed an\n\n experiment similar to ours, using evolutionary algorithms to design a CA\n\n rule that could build and regenerate the French flag, starting from a seed\n\n cell.\n\n \n\n### Neural Networks and Self-Organisation\n\n The close relation between Convolutional Neural Networks and Cellular\n\n Automata has already been observed by a number of researchers\n\n . The\n\n connection is so strong it allowed us to build Neural CA models using\n\n components readily available in popular ML frameworks. Thus, using a\n\n different jargon, our Neural CA could potentially be named “Recurrent\n\n Residual Convolutional Networks with ‘per-pixel’ Dropout”.\n\n \n\n The Neural GPU\n\n offers\n\n a computational architecture very similar to ours, but applied in the\n\n context of learning multiplication and a sorting algorithm.\n\n \n\n Looking more broadly, we think that the concept of self-organisation is\n\n finding its way into mainstream machine learning with popularisation of\n\n Graph Neural Network models.\n\n Typically, GNNs run a repeated computation across vertices of a (possibly\n\n dynamic) graph. Vertices communicate locally through graph edges, and\n\n aggregate global information required to perform the task over multiple\n\n rounds of message exchanges, just as atoms can be thought of as\n\n communicating with each other to produce the emergent properties of a\n\n molecule , or even points of a point\n\n cloud talk to their neighbors to figure out their global shape\n\n .\n\n \n\n Self-organization also appeared in fascinating contemporary work using more\n\n traditional dynamic graph networks, where the authors evolved\n\n Self-Assembling Agents to solve a variety of virtual tasks\n\n .\n\n \n\n### Swarm Robotics\n\n One of the most remarkable demonstrations of the power of self-organisation\n\n is when it is applied to swarm modeling. Back in 1987, Reynolds’ Boids\n\n simulated the flocking behaviour of birds with\n\n just a tiny set of handcrafted rules. Nowadays, we can embed tiny robots\n\n with programs and test their collective behavior on physical agents, as\n\n demonstrated by work such as Mergeable Nervous Systems\n\n and Kilobots\n\n . To the best of our knowledge, programs\n\n embedded into swarm robots are currently designed by humans. We hope our\n\n work can serve as an inspiration for the field and encourage the design of\n\n collective behaviors through differentiable modeling.\n\n \n\nDiscussion\n\n----------\n\n### Embryogenetic Modeling\n\n Your browser does not support the video tag.\n\n \n\n Regeneration-capable 2-headed planarian, the creature that inspired this\n\n work \n\n \n\n \n\n This article describes a toy embryogenesis and regeneration model. 
This is a\n\n major direction for future work, with many applications in biology and\n\n beyond. In addition to the implications for understanding the evolution and\n\n control of regeneration, and harnessing this understanding for biomedical\n\n repair, there is the field of bioengineering. As the field transitions from\n\n synthetic biology of single cell collectives to a true synthetic morphology\n\n of novel living machines , it\n\n will be essential to develop strategies for programming system-level\n\n capabilities, such as anatomical homeostasis (regenerative repair). It has\n\n long been known that regenerative organisms can restore a specific\n\n anatomical pattern; however, more recently it’s been found that the target\n\n morphology is not hard coded by the DNA, but is maintained by a\n\n physiological circuit that stores a setpoint for this anatomical homeostasis\n\n . Techniques are\n\n now available for re-writing this setpoint, resulting for example\n\n in 2-headed flatworms\n\n that, when cut into pieces in plain water (with no more manipulations)\n\n result in subsequent generations of 2-headed regenerated worms (as shown\n\n above). It is essential to begin to develop models of the computational\n\n processes that store the system-level target state for swarm behavior\n\n , so that efficient strategies can be developed for rationally editing this\n\n information structure, resulting in desired large-scale outcomes (thus\n\n defeating the inverse problem that holds back regenerative medicine and many\n\n other advances).\n\n \n\n### Engineering and machine learning\n\n The models described in this article run on the powerful GPU of a modern\n\n computer or a smartphone. Yet, let’s speculate about what a “more physical”\n\n implementation of such a system could look like. We can imagine it as a grid\n\n of tiny independent computers, simulating individual cells. Each of those\n\n computers would require approximately 10Kb of ROM to store the “cell\n\n genome”: neural network weights and the control code, and about 256 bytes of\n\n RAM for the cell state and intermediate activations. The cells must be able\n\n to communicate their 16-value state vectors to neighbors. Each cell would\n\n also require an RGB-diode to display the color of the pixel it represents. A\n\n single cell update would require about 10k multiply-add operations and does\n\n not have to be synchronised across the grid. We propose that cells might\n\n wait for random time intervals between updates. The system described above\n\n is uniform and decentralised. Yet, our method provides a way to program it\n\n to reach the predefined global state, and recover this state in case of\n\n multi-element failures and restarts. We therefore conjecture this kind of\n\n modeling may be used for designing reliable, self-organising agents. On the\n\n more theoretical machine learning front, we show an instance of a\n\n decentralized model able to accomplish remarkably complex tasks. 
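 To make the resource figures in the hardware thought experiment above concrete, here is a back-of-envelope sketch. The architecture numbers (a 16-channel state, three perception vectors per cell, a 128-unit hidden layer, and one byte per weight and per activation) are assumptions about a typical Neural CA update rule rather than measurements of a deployed system.\n\n# Back-of-envelope estimate for a single cell; all figures are assumptions.\n\nstate_channels = 16  # cell state vector\n\nperception = 3 * state_channels  # identity + two gradient estimates = 48 values\n\nhidden = 128  # hidden layer width of the update rule\n\nweights = perception * hidden + hidden * state_channels  # 8192 parameters\n\nrom_bytes = weights  # ~8 KB at one byte per weight; with control code, roughly the 10Kb quoted above\n\nmacs = weights  # about one multiply-add per weight, ~8k, in line with the ~10k figure above\n\nram_bytes = state_channels + perception + hidden  # 192 bytes, within the ~256 byte budget\n\nprint(weights, macs, ram_bytes)  # 8192 8192 192 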
We believe\n\n this direction to be opposite to the more traditional global modeling used\n\n in the majority of contemporary work in the deep learning field, and we hope\n\n this work to be an inspiration to explore more decentralized learning\n\n modeling.\n\n \n\n![](Growing%20Neural%20Cellular%20Automata_files/multiple-pages.svg)\n\n This article is part of the\n\n an experimental format collecting invited short articles delving into\n\n differentiable self-organizing systems, interspersed with critical\n\n commentary from several experts in adjacent fields.\n\n \n\n[Self-classifying MNIST Digits](https://distill.pub/2020/selforg/mnist/)\n\n", "bibliography_bib": [{"title": "Top-down models in biology: explanation and control of complex living systems above the molecular level"}, {"title": "Re-membering\n the body: applications of computational neuroscience to the top-down \ncontrol of regeneration of limbs and other complex organs"}, {"title": "Transmembrane voltage potential controls embryonic eye patterning in Xenopus laevis"}, {"title": "Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks"}, {"title": "The chemical basis of morphogenesis"}, {"title": "Complex Patterns in a Simple System"}, {"title": "Theory of Self-Reproducing Automata"}, {"title": "MATHEMATICAL GAMES"}, {"title": "A New Kind of Science"}, {"title": "Generalization of Conway's \"Game of Life\" to a continuous domain - SmoothLife"}, {"title": "Lenia: Biology of Artificial Life"}, {"title": "Intrinsically Motivated Exploration for Automated Discovery of Patterns in Morphogenetic Systems"}, {"title": "Evolving Self-organizing Cellular Automata Based on Neural Network Genotypes"}, {"title": "CA-NEAT: Evolved Compositional Pattern Producing Networks for Cellular Automata Morphogenesis and Replication"}, {"title": "Evolving a Self-Repairing, Self-Regulating, French Flag Organism"}, {"title": "Learning Cellular Automaton Dynamics with Neural Networks"}, {"title": "Cellular automata as convolutional neural networks"}, {"title": "Neural GPUs Learn Algorithms"}, {"title": "Improving the Neural GPU Architecture for Algorithm Learning"}, {"title": "A Comprehensive Survey on Graph Neural Networks"}, {"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints"}, {"title": "Dynamic Graph CNN for Learning on Point Clouds"}, {"title": "Learning to Control Self- Assembling Morphologies: A Study of Generalization via Modularity"}, {"title": "Flocks, Herds and Schools: A Distributed Behavioral Model"}, {"title": "Mergeable nervous systems for robots"}, {"title": "Kilobot: A low cost scalable robot system for collective behaviors"}, {"title": "What Bodies Think About: Bioelectric Computation Outside the Nervous System"}, {"title": "A scalable pipeline for designing reconfigurable organisms"}, {"title": "Perspective: The promise of multi-cellular engineered living systems"}, {"title": "Physiological inputs regulate species-specific anatomy during embryogenesis and regeneration"}, {"title": "Long-range neural and gap junction protein-mediated cues control polarity during planarian regeneration"}, {"title": "Long-Term, Stochastic Editing of Regenerative Anatomy via Targeting Endogenous Bioelectric Gradients"}, {"title": "Pattern Regeneration in Coupled Networks"}, {"title": "Bioelectrical\n control of positional information in development and regeneration: A \nreview of conceptual and computational advances"}, {"title": "Modeling Cell Migration in a Simulated Bioelectrical 
Signaling Network for Anatomical Regeneration"}, {"title": "Investigating the effects of noise on a cell-to-cell communication mechanism for structure regeneration"}, {"title": "Social Intelligence"}, {"title": "Inceptionism: Going deeper into neural networks"}], "filename": "Growing Neural Cellular Automata.html", "id": "c302ce2fb3acb8462ba5f5c28e11a5e5"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Thread: Circuits", "authors": ["Nick Cammarata", "Shan Carter", "Gabriel Goh", "Chris Olah", "Michael Petrov", "Ludwig Schubert", "Chelsea Voss", "Ben Egan", "Swee Kiat Lim"], "date_published": "2020-03-10", "abstract": " In the original narrative of deep learning, each neuron builds progressively more abstract, meaningful features by composing features in the preceding layer. In recent years, there’s been some skepticism of this view, but what happens if you take it really seriously? ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00024", "text": "\n\n In the original narrative of deep learning, each neuron builds\n\n progressively more abstract, meaningful features by composing features in\n\n the preceding layer. In recent years, there’s been some skepticism of this\n\n view, but what happens if you take it really seriously?\n\n \n\n InceptionV1 is a classic vision model with around 10,000 unique \n\nneurons — a large number, but still on a scale that a group effort could\n\n attack.\n\n What if you simply go through the model, neuron by neuron, trying \n\nto\n\n understand each one and the connections between them? The circuits\n\n collaboration aims to find out.\n\n \n\nArticles & Comments\n\n-------------------\n\n The natural unit of publication for investigating circuits seems to be\n\n short papers on individual circuits or small families of features.\n\n Compared to normal machine learning papers, this is a small and unusual\n\n topic for a paper.\n\n \n\n To facilitate exploration of this direction, Distill is inviting a\n\n “thread” of short articles on circuits, interspersed with critical\n\n commentary by experts in adjacent fields. The thread will be a living\n\n document, with new articles added over time, organized through an open\n\n slack channel (#circuits in the\n\n [Distill slack](http://slack.distill.pub/)). Content in this\n\n thread should be seen as early stage exploratory research.\n\n \n\nArticles and comments are presented below in chronological order:\n\n### \n\n### Authors\n\n### Affiliations\n\n[Chris Olah](https://colah.github.io/),\n\n [Nick Cammarata](http://nickcammarata.com/),\n\n [Ludwig Schubert](https://schubert.io/),\n\n [Gabriel Goh](http://gabgoh.github.io/),\n\n [Michael Petrov](https://twitter.com/mpetrov),\n\n [Shan Carter](http://shancarter.com/)\n\n[OpenAI](https://openai.com/)\n\n Does it make sense to treat individual neurons and the connections\n\n between them as a serious object of study? 
This essay proposes three\n\n claims which, if true, might justify serious inquiry into them: the\n\n existence of meaningful features, the existence of meaningful circuits\n\n between features, and the universality of those features and circuits.\n\n \n\n \n\n It also discuses historical successes of science “zooming in,” whether\n\n we should be concerned about this research being qualitative, and\n\n approaches to rigorous investigation.\n\n \n\n \n\n[Read Full Article](https://distill.pub/2020/circuits/zoom-in/)\n\n### \n\n### Authors\n\n### Affiliations\n\n[Chris Olah](https://colah.github.io/),\n\n [Nick Cammarata](http://nickcammarata.com/),\n\n [Ludwig Schubert](https://schubert.io/),\n\n [Gabriel Goh](http://gabgoh.github.io/),\n\n [Michael Petrov](https://twitter.com/mpetrov),\n\n [Shan Carter](http://shancarter.com/)\n\n[OpenAI](https://openai.com/)\n\n An overview of all the neurons in the first five layers of\n\n InceptionV1, organized into a taxonomy of “neuron groups.” This\n\n article sets the stage for future deep dives into particular aspects\n\n of early vision.\n\n \n\n \n\n [Read Full Article](https://distill.pub/2020/circuits/early-vision/) \n\n### \n\n[Curve Detectors](https://distill.pub/2020/circuits/curve-detectors/)\n\n### Authors\n\n### Affiliations\n\n[Nick Cammarata](http://nickcammarata.com/),\n\n [Gabriel Goh](http://gabgoh.github.io/),\n\n [Shan Carter](http://shancarter.com/),\n\n [Ludwig Schubert](https://schubert.io/),\n\n [Michael Petrov](https://twitter.com/mpetrov),\n\n [Chris Olah](https://colah.github.io/)\n\n[OpenAI](https://openai.com/)\n\n Every vision model we’ve explored in detail contains neurons which\n\n detect curves. Curve detectors is the first in a series of three\n\n articles exploring this neuron family in detail.\n\n \n\n \n\n[Read Full Article](https://distill.pub/2020/circuits/curve-detectors/)\n\n### \n\n### Authors\n\n### Affiliations\n\n[Chris Olah](https://colah.github.io/),\n\n [Ludwig Schubert](https://schubert.io/),\n\n [Gabriel Goh](http://gabgoh.github.io/)\n\n[OpenAI](https://openai.com/)\n\n Neural networks naturally learn many transformed copies of the same\n\n feature, connected by symmetric weights.\n\n \n\n \n\n[Read Full Article](https://distill.pub/2020/circuits/equivariance/)\n\n### \n\n### Authors\n\n### Affiliations\n\n[Ludwig Schubert](https://schubert.io/),\n\n [Chelsea Voss](https://csvoss.com/),\n\n [Nick Cammarata](http://nickcammarata.com/),\n\n [Gabriel Goh](http://gabgoh.github.io/),\n\n [Chris Olah](https://colah.github.io/)\n\n[OpenAI](https://openai.com/)\n\n A family of early-vision neurons reacting to directional transitions\n\n from high to low spatial frequency.\n\n \n\n \n\n[Read Full Article](https://distill.pub/2020/circuits/frequency-edges/)\n\n### \n\n[Curve Circuits](https://distill.pub/2020/circuits/curve-circuits/)\n\n### Authors\n\n### Affiliations\n\n[Nick Cammarata](http://nickcammarata.com/),\n\n [Gabriel Goh](http://gabgoh.github.io/),\n\n [Shan Carter](http://shancarter.com/),\n\n [Chelsea Voss](https://csvoss.com/),\n\n [Ludwig Schubert](https://schubert.io/),\n\n [Chris Olah](https://colah.github.io/)\n\n[OpenAI](https://openai.com/)\n\n We reverse engineer a non-trivial learned algorithm from the weights\n\n of a neural network and use its core ideas to craft an artificial\n\n artificial neural network from scratch that reimplements it.\n\n \n\n \n\n[Read Full Article](https://distill.pub/2020/circuits/curve-circuits/)\n\n### \n\n[Visualizing 
Weights](https://distill.pub/2020/circuits/visualizing-weights/)\n\n### Authors\n\n### Affiliations\n\n[Chelsea Voss](https://csvoss.com/),\n\n [Nick Cammarata](http://nickcammarata.com/),\n\n [Gabriel Goh](https://gabgoh.github.io/),\n\n [Michael Petrov](https://twitter.com/mpetrov),\n\n [Ludwig Schubert](https://schubert.io/),\n\n Ben Egan,\n\n [Swee Kiat Lim](https://greentfrapp.github.io/),\n\n [Chris Olah](https://colah.github.io/)\n\n[OpenAI](https://openai.com/),\n\n [Mount Royal University](https://mtroyal.ca/),\n\n [Stanford University](https://stanford.edu/)\n\n We present techniques for visualizing, contextualizing, and\n\n understanding neural network weights.\n\n \n\n \n\n[Read Full Article](https://distill.pub/2020/circuits/visualizing-weights/)\n\n### \n\n### Authors\n\n### Affiliations\n\n[Chelsea Voss](https://csvoss.com/),\n\n [Gabriel Goh](https://gabgoh.github.io/),\n\n [Nick Cammarata](http://nickcammarata.com/),\n\n [Michael Petrov](https://twitter.com/mpetrov),\n\n [Ludwig Schubert](https://schubert.io/),\n\n [Chris Olah](https://colah.github.io/)\n\n[OpenAI](https://openai.com/)\n\n When a neural network layer is divided into multiple branches, neurons\n\n self-organize into coherent groupings.\n\n \n\n \n\n[Read Full Article](https://distill.pub/2020/circuits/branch-specialization/)\n\n### \n\n[Weight Banding](https://distill.pub/2020/circuits/weight-banding/)\n\n### Authors\n\n### Affiliations\n\n[Michael Petrov](https://twitter.com/mpetrov),\n\n [Chelsea Voss](https://csvoss.com/),\n\n [Ludwig Schubert](https://schubert.io/),\n\n [Nick Cammarata](http://nickcammarata.com/),\n\n [Gabriel Goh](https://gabgoh.github.io/),\n\n [Chris Olah](https://colah.github.io/)\n\n[OpenAI](https://openai.com/)\n\n \n\n \n\n[Read Full Article](https://distill.pub/2020/circuits/weight-banding/)\n\n#### This is a living document\n\n Expect more articles on this topic, along with critical comments from\n\n experts.\n\n \n\nGet Involved\n\n------------\n\n The Circuits thread is open to articles exploring individual features,\n\n circuits, and their organization within neural networks. Critical\n\n commentary and discussion of existing articles is also welcome. The thread\n\n is organized through the open `#circuits` channel on the\n\n [Distill slack](http://slack.distill.pub/). Articles can be\n\n suggested there, and will be included at the discretion of previous\n\n authors in the thread, or in the case of disagreement by an uninvolved\n\n editor.\n\n \n\n If you would like get involved but don’t know where to start, small\n\n projects may be available if you ask in the channel.\n\n \n\nAbout the Thread Format\n\n-----------------------\n\n Part of Distill’s mandate is to experiment with new forms of scientific\n\n publishing. We believe that that reconciling faster and more continuous\n\n approaches to publication with review and discussion is an important open\n\n problem in scientific publishing.\n\n \n\n Threads are collections of short articles, experiments, and critical\n\n commentary around a narrow or unusual research topic, along with a slack\n\n channel for real time discussion and collaboration. They are intended to\n\n be earlier stage than a full Distill paper, and allow for more fluid\n\n publishing, feedback and discussion. We also hope they’ll allow for wider\n\n participation. Think of a cross between a Twitter thread, an academic\n\n workshop, and a book of collected essays.\n\n \n\n Threads are very much an experiment. 
We think it’s possible they’re a\n\n great format, and also possible they’re terrible. We plan to trial two\n\n such threads and then re-evaluate our thought on the format.\n\n \n\n", "bibliography_bib": null, "filename": "Thread_Circuits.html", "id": "9f04c2da882257bcd6ce0a5ac809dfae"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Communicating with Interactive Articles", "authors": ["Fred Hohman", "Matthew Conlen", "Jeffrey Heer", "Duen Horng (Polo) Chau"], "date_published": "2020-09-11", "abstract": " Computing has changed how people communicate. The transmission of news, messages, and ideas is instant. Anyone’s voice can be heard. In fact, access to digital communication technologies such as the Internet is so fundamental to daily life that their disruption by government is condemned by the United Nations Human Rights Council . But while the technology to distribute our ideas has grown in leaps and bounds, the interfaces have remained largely the same. ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00028", "text": "\n\n### Contents\n\n[Introduction](#introduction)\n\n[Interactive Articles: Theory & Practice](#interactive-articles)\n\n* [Connecting People and Data](#connecting-people-and-data)\n\n* [Making Systems Playful](#making-systems-playful)\n\n* [Prompting Self-Reflection](#prompting-self-reflection)\n\n* [Personalizing Reading](#personalizing-reading)\n\n* [Reducing Cognitive Load](#reducing-cognitive-load)\n\n[Challenges for Authoring Interactives](#challenges)\n\n[Critical Reflections](#critical-reflections)\n\n[Looking Forward](#looking-forward)\n\n Computing has changed how people communicate. The transmission \n\nof news, messages, and ideas is instant. Anyone’s voice can be heard. In\n\n fact, access to digital communication technologies such as the Internet\n\n is so fundamental to daily life that their disruption by government is \n\n \n\n Parallel to the development of the internet, researchers like \n\nAlan Kay and Douglas Engelbart worked to build technology that would \n\nempower individuals and enhance cognition. Kay imagined the Dynabook \n\n in the hands of children across the world. Engelbart, while best \n\nremembered for his “mother of all demos,” was more interested in the \n\nability of computation to augment human intellect .\n\n Neal Stephenson wrote speculative fiction that imagined interactive \n\npaper that could display videos and interfaces, and books that could \n\nteach and respond to their readers .\n\n \n\n More recent designs (though still historical by personal computing\n\n standards) point to a future where computers are connected and assist \n\npeople in decision-making and communicating using rich graphics and \n\n unfortunately, many others have not. The most popular publishing \n\nplatforms, for example WordPress and Medium, choose to prioritize social\n\n features and ease-of-use while limiting the ability for authors to \n\ncommunicate using the dynamic features of the web.\n\n \n\n In the spirit of previous computer-assisted cognition \n\ntechnologies, a new type of computational communication medium has \n\nemerged that leverages active reading techniques to make ideas more \n\naccessible to a broad range of people. 
These interactive articles build \n\n \n\n In this work, for the the first time, we connect the dots between \n\ninteractive articles such as those featured in this journal and \n\npublications like *The New York Times* and the techniques, \n\ntheories, and empirical evaluations put forth by academic researchers \n\nacross the fields of education, human-computer interaction, information \n\nvisualization, and digital journalism. We show how digital designers are\n\n operationalizing these ideas to create interactive articles that help \n\nboost learning and engagement for their readers compared to static \n\nalternatives.\n\n \n\n Conducting\n\n novel research requires deep understanding and expertise in a specific \n\narea. Once achieved, researchers continue contributing new knowledge for\n\n future researchers to use and build upon. Over time, this consistent \n\naddition of new knowledge can build up, contributing to what some have \n\ncalled research debt. Not everyone is an expert in every field, and it \n\ncan be easy to lose perspective and forget the bigger picture. Yet \n\nresearch should be understood by many. Interactive articles can be used \n\nto distill the latest progress in various research fields and make their\n\n methods and results accessible and understandable to a broader \n\naudience. #### Opportunities\n\n * Engage and excite broader audience with latest research progress\n\n* Remove research debt, onboard new researchers\n\n* Make faster and clearer research progress\n\n #### Challenges\n\n * No clear incentive structure for researchers\n\n* Little funding for bespoke research dissemination and communication\n\n companion piece to a traditional research paper that uses interactive \n\nvisualizations to let readers adjust a machine learning model’s behavior\n\n PhD thesis that contributes a programming language abstraction for \n\nunderstanding how programs access the context or environment in which \n\nthey execute, and walks readers through the work using two simple \n\n crash course in complex systems science, created by leading experts, \n\npractitioners, and students in the field, with accompanying interactive \n\n Interactive articles are applicable to variety of domains, such as \n\nresearch dissemination, journalism, education, and policy and decision \n\n Today there is a growing excitement around the use of interactive \n\narticles for communication since they offer unique capabilities to help \n\npeople learn and engage with complex ideas that traditional media lacks.\n\n After describing the affordances of interactive articles, we provide \n\ncritical reflections from our own experience with open-source, \n\ninteractive publishing at scale. We conclude with discussing practical \n\nchallenges and open research directions for authoring, designing, and \n\npublishing interactive articles.\n\n \n\n This style of communication — and the platforms which support \n\nit — are still in their infancy. When choosing where to publish this \n\nwork, we wanted the medium to reflect the message. Journals like *Distill*\n\n are not only pushing the boundaries of machine learning research but \n\nalso offer a space to put forth new interfaces for dissemination. This \n\nwork ties together the theory and practice of authoring and publishing \n\ninteractive articles. 
It demonstrates the power that the medium has for \n\nproviding new representations and interactions to make systems and ideas\n\n more accessible to broad audiences.\n\n \n\nInteractive Articles: Theory & Practice\n\n---------------------------------------\n\n Interactive articles draw from and connect many types of media, \n\nfrom static text and images to movies and animations. But in contrast to\n\n these existing forms, they also leverage interaction techniques such as\n\n details-on demand, belief elicitation, play, and models and simulations\n\n to enhance communication.\n\n \n\n While the space of possible designs is far too broad to be solved \n\nwith one-size-fits-all guidelines, by connecting the techniques used in \n\nthese articles back to underlying theories presented across disparate \n\nfields of research we provide a missing foundation for designers to use \n\nwhen considering the broad space of interactions that could be added to a\n\n born-digital article.\n\n \n\n We draw from a corpus of over sixty interactive articles to \n\nhighlight the breadth of techniques available and analyze how their \n\nauthors took advantage of a digital medium to improve the reading \n\nexperience along one or more dimensions, for example, by reducing the \n\noverall cognitive load, instilling positive affect, or improving \n\ninformation recall.\n\n \n\n| Title *arrow\\_downward* | Publication or Author | Tags | Year |\n\n| --- | --- | --- | --- |\n\n Because diverse communities create interactive content, this \n\nmedium goes by many different names and has not yet settled on a \n\n In newsrooms, data journalists, developers, and designers work together\n\n to make complex news and investigative reporting clear and engaging \n\nusing interactive stories . Educators\n\n use interactive textbooks as an alternative learning format to give \n\nstudents hands-on experience with learning material .\n\n \n\n Besides these groups, others such as academics, game developers, \n\nweb developers, and designers blend editorial, design, and programming \n\n While these all slightly differ in their technical approach and target \n\naudience, they all largely leverage the interactivity of the modern web.\n\n \n\n We focus on five unique affordances of interactive articles, \n\nlisted below. In-line videos and example interactive graphics are \n\npresented alongside this discussion to demonstrate specific techniques.\n\n \n\n### Connecting People and Data\n\n an audience which finds content to be aesthetically pleasing is more \n\nlikely to have a positive attitude towards it. This in turn means people\n\n will spend more time engaging with content and ultimately lead to \n\nimproved learning outcomes. While engagement itself may not be an end \n\ngoal of most research communications, the ability to influence both \n\naudience attitude and the amount of time that is spent is a useful lever\n\n to improve learning: we know from education research that both time \n\nspent and emotion are predictive of learning outcomes.\n\n \n\n Animations can also be used to improve engagement .\n\n While there is debate amongst researchers if animations in general are \n\nable to more effectively convey the same information compared to a well \n\n while the series of still images may be more effective for answering \n\nspecific questions like, “Does a horse lift all four of its feet off the\n\n ground when it runs?” watching the animation in slow motion gives the \n\nviewer a much more visceral sense of how it runs. 
A more modern example \n\n \n\n 1878, Eadweard Muybridge settled Leland Stanford's hotly debated \n\nquestion of whether all four feet of a horse lifted off the ground \n\nduring a trot using multiple cameras to capture motion in stop-motion \n\n Passively, animation can be used to add drama to a graphic \n\ndisplaying important information, but which readers may otherwise find \n\ndry. Scientific data which is inherently time varying may be shown using\n\n an animation to connect viewers more closely with the original data, as\n\n compared to seeing an abstracted static view. For example, Ed Hawkins \n\ndesigned “Climate Spirals,” which shows the average global temperature \n\nchange over time . This \n\npresentation of the data resonated with a large public audience, so much\n\n so that it was displayed at the opening ceremony at the 2016 Rio \n\nOlympics. In fact, many other climate change visualizations of this same\n\n dataset use animation to build suspense and highlight the recent spike \n\nin global temperatures .\n\n \n\n Preview,\n\n click to play\n\n Full Video.\n\n By adding variation over time, authors have access to a new \n\ndimension to encode information and an even wider design space to work \n\nin. Consider the animated graphic in *The New York Times* story \n\n“Extensive Data Shows Punishing Reach of Racism for Black Boys,” which \n\n by utilizing animation, it became possible for the authors to design a \n\nunit visualization in which each data point shown represented an \n\nindividual, reminding readers that the data in this story was about real\n\n peoples’ lives.\n\n \n\n Unit visualizations have also been used to evoke empathy in \n\n Using person-shaped glyphs (as opposed to abstract symbols like circles\n\n or squares) has been shown not to produce additional empathic responses\n\n using visualizations. Correll argues that much of the power of \n\nvisualization comes from abstraction, but quantization stymies empathy .\n\n He instead suggests anthropomorphizing data, borrowing journalistic and\n\n rhetoric techniques to create novel designs or interventions to foster \n\nempathy in readers when viewing visualizations .\n\n \n\n Regarding the format of interactive articles, an ongoing debate \n\nwithin the data journalism community has been whether articles which \n\nutilize scroll-based graphics (scrollytelling) are more effective than \n\nthose which use step-based graphics (slideshows). McKenna et al. \n\n found that their study participants largely preferred content to be \n\ndisplayed with a step- or scroll-based navigation as opposed to \n\ntraditional static articles, but did not find a significant difference \n\nin engagement between the two layouts. In related work, Zhi et al. found\n\n that performance on comprehension tasks was better in slideshow layouts\n\n than in vertical scroll-based layouts .\n\n Both studies focused on people using desktop (rather than mobile) \n\ndevices. More work is needed to evaluate the effectiveness of various \n\nlayouts on mobile devices, however the interviews conducted by MckEnna \n\net al. suggest that additional features, such as supporting navigation \n\nthrough swipe gestures, may be necessary to facilitate the mobile \n\nreading experience.\n\n \n\nreaders play the role of a pirate commander, giving them a unique look \n\nat the economics that led to rise in piracy off the coast of Somalia. 
Playing\n\n Preview,\n\n click to play\n\n Full Video.\n\n \n\n the reading experience becomes closer to that of playing a game. For \n\nexample, the critically acclaimed explorable explanation “Parable of the\n\n Polygons” puts play at the center of the story, letting a reader \n\nmanually run an algorithm that is later simulated in the article to \n\ndemonstrate how a population of people with slight personal biases \n\nagainst diversity leads to social segregation .\n\n \n\n### Making Systems Playful\n\n Interactive articles utilize an underlying computational \n\ninfrastructure, allowing authors editorial control over the \n\ncomputational processes happening on a page. This access to computation \n\nallows interactive articles to engage readers in an experience they \n\ncould not have with traditional media. For example, in “Drawing Dynamic \n\nVisualizations”, Victor demonstrates how an interactive visualization \n\ncan allow readers to build an intuition about the behavior of a system, \n\nleading to a fundamentally different understanding of an underlying \n\n \n\n Complex systems often require extensive setup to allow for proper \n\nstudy: conducting scientific experiments, training machine learning \n\nmodels, modeling social phenomenon, digesting advanced mathematics, and \n\nresearching recent political events, all require the configuration of \n\nsophisticated software packages before a user can interact with a system\n\n at all, even just to tweak a single parameter. This barrier to entry \n\ncan deter people from engaging with complex topics, or explicitly \n\nprevent people who do not have the necessary resources, for example, \n\ncomputer hardware for intense machine learning tasks. Interactive \n\narticles drastically lower these barriers.\n\n \n\n Science that utilizes physical and computational experiments \n\nrequires systematically controlling and changing parameters to observe \n\ntheir effect on the modeled system. In research, dissemination is \n\ntypically done through static documents, where various figures show and \n\ncompare the effect of varying particular parameters. However, efforts \n\nhave been made to leverage interactivity in academic publishing, \n\n gives readers control over the reporting of the research findings and \n\nshows great promise in helping readers both digest new ideas and learn \n\nabout existing fields that are built upon piles of research debt .\n\n \n\n Beyond reporting statistics, interactive articles are extremely \n\npowerful when the studied systems can be modeled or simulated in \n\nreal-time with interactive parameters without setup, e.g., in-browser \n\nsandboxes. Consider the example in [4](#simulation-vis)\n\n of a Boids simulation that models how birds flock together. Complex \n\nsystems such as these have many different parameters that change the \n\nresulting simulation. These sandbox simulations allow readers to play \n\nwith parameters to see their effect without worrying about technical \n\noverhead or other experimental consequences.\n\n \n\nInteract with live simulations—no setup required. This\n\n Boids visualization models the movement of a flock of birds, and \n\nexposes parameters that a reader can manipulate to change the behavior \n\nof the simulation. Boid Count \n\n At the top, drag the slider to change the number of boids in the \n\nsimulation. 
Underneath, adjust the different parameters to find \n\ninteresting configurations.\n\n This is a standout design pattern within interactive articles, and\n\n many examples exist ranging in complexity. “How You Will Die” visually \n\nsimulates the average life expectancy of different groups of people, \n\nwhere a reader can choose the gender, race, and age of a person .\n\n “On Particle Physics” allows readers to experiment with accelerating \n\ndifferent particles through electric and magnetic fields to build \n\nintuition behind electromagnetism foundations such as the Lorentz force \n\nand Maxwell’s equations — the experiments backing these simulations \n\ncannot be done without multi-million dollar machinery .\n\n “Should Prison Sentences Be Based On Crimes That Haven’t Been Committed\n\n Yet?” shows the outcome of calculating risk assessments for recidivism \n\nwhere readers adjust the thresholds for determining who gets parole .\n\n \n\n reader uses their own live video camera to train a machine learning \n\nimage classifier in-browser without any extra computational resources. Playing\n\n Preview,\n\n click to play\n\n Full Video.\n\n The dissemination of modern machine learning techniques has been \n\nbolstered by interactive models and simulations. Three articles, “How to\n\n show the effect that hyperparameters and different dimensionality \n\nreduction techniques have on creating low dimensional embeddings of \n\nhigh-dimensional data. A popular approach is to demonstrate how machine \n\n Other examples are aimed at technical readers who wish to learn about \n\nspecific concepts within deep learning. Here, interfaces allow readers \n\nto choose model hyperparameters, datasets, and training procedures that,\n\n once selected, visualize the training process and model internals to \n\ninspect the effect of varying the model configuration .\n\n \n\n Interactive articles commonly communicate a single idea or concept\n\n using multiple representations. The same information represented in \n\ndifferent forms can have different impact. For example, in mathematics \n\noften a single object has both an algebraic and a geometric \n\nrepresentation. A clear example of this is the definition of a circle .\n\n Both are useful, inform one another, and lead to different ways of \n\nthinking. Examples of interactive articles that demonstrate this include\n\n various media publications’ political election coverage that break down\n\n the same outcome in multiple ways, for example, by voter demographics, \n\ngeographical location, and historical perspective .\n\n \n\n as people can process information through both a visual channel and \n\nauditory channel simultaneously. Popular video creators such as \n\n3Blue1Brown and Primer \n\n exemplify these principles by using rich animation and simultaneous \n\nnarration to break down complex topics. These videos additionally take \n\nadvantage of the Redundancy Principle by including complementary \n\ninformation in the narration and in the graphics rather than repeating \n\nthe same information in both channels .\n\n \n\n Preview,\n\n click to play\n\n Full Video.\n\n While these videos are praised for their approachability and \n\nrich exposition, they are not interactive. One radical extension from \n\ntraditional video content is also incorporating user input into the \n\nvideo while narration plays. 
A series of these interactive videos on \n\n“Visualizing Quaternions” lets a reader listen to narration of a live \n\nanimation on screen, but at any time the viewer can take control of the \n\nvideo and manipulate the animation and graphics while simultaneously \n\nlistening to the narration .\n\n \n\n Utilizing multiple representations allows a reader to see \n\ndifferent abstractions of a single idea. Once these are familiar and \n\nknown, an author can build interfaces from multiple representations and \n\nlet readers interact with them simultaneously, ultimately leading to \n\ninteractive experiences that demonstrate the power of computational \n\ncommunication mediums. Next, we discuss such experiences where \n\ninteractive articles have transformed communication and learning by \n\nmaking live models and simulations of complex systems and phenomena \n\naccessible.\n\n \n\n### Prompting Self-Reflection\n\n Asking a student to reflect on material that they are studying and\n\n explain it back to themselves — a learning technique called \n\nself-explanation — is known to have a positive impact on learning \n\noutcomes . By generating explanations\n\n and refining them as new information is obtained, it is hypothesized \n\nthat a student will be more engaged with the processes which they are \n\nstudying . When writing for an \n\ninteractive environment, components can be included which prompt readers\n\n to make a prediction or reflection about the material and cause them to\n\n engage in self-explanation .\n\n \n\n While these prompts may take the form of text entry or other \n\nstandard input widgets, one of the most prominent examples of this \n\n In these visualizations, readers are prompted to complete a trendline \n\non a chart, causing them to generate an explanation based on their \n\ncurrent beliefs for why they think the trend may move in a certain \n\ndirection. Only after readers make their prediction are they shown the \n\nactual data. Kim et al. showed that using visualizations as a prompt is \n\nan effective way to encourage readers to engage in self explanation and \n\nimprove their recall of the information . [5](#you-draw-it)\n\n shows one these visualizations for CO₂ emissions from burning fossil \n\nfuels. After clicking and dragging to guess the trend, your guess will \n\nbe compared against the actual data.\n\n \n\nComplete the trend of CO₂ emissions from burning fossil fuels. Letting\n\n a reader first guess about data and only showing the ground truth \n\nafterwards challenges a reader's prior beliefs and has been shown to \n\nreaders are tasked to type the names of celebrities with challenging \n\nspellings. After submitting a guess, a visualization shows the reader’s \n\nentry against everyone else’s, scaled by the frequency of different \n\nspellings. Playing\n\n Preview,\n\n click to play\n\n Full Video.\n\n In the case of “You Draw It,” readers were also shown the \n\npredictions that others made, adding a social comparison element to the \n\nexperience. 
This additional social information was not shown to \n\nnecessarily be effective for improving recall .\n\n However, one might hypothesize that this social aspect may have other \n\nbenefits such as improving reader engagement, due to the popularity of \n\n \n\n Prompting readers to remember previously presented material, for\n\n example through the use of quizzes, can be an effective way to improve \n\n While testing may call to mind stressful educational experiences for \n\nmany, quizzes included in web articles can be low stakes: there is no \n\nneed to record the results or grade readers. The effect is enhanced if \n\nfeedback is given to the quiz-takers, for example by providing the \n\ncorrect answer after the user has recorded their response .\n\n \n\n Preview,\n\n click to play\n\n Full Video.\n\n assuming readers are willing to participate in the process. The idea of\n\n spaced repetition has been a popular foundation for memory building \n\napplications, for example in the Anki flash card system. More recently, \n\nauthors have experimented with building spaced repetition directly into \n\n \n\n### Personalizing Reading\n\n Content personalization — automatically modifying text and \n\nmultimedia based on a reader’s individual features or input (e.g., \n\ndemographics or location) — is a technique that has been shown to \n\nincrease engagement and learning within readers and support behavioral change .\n\n The PersaLog system gives developers tools to build personalized \n\ncontent and presents guidelines for personalization based on user \n\nresearch from practicing journalists .\n\n Other work has shown that “personalized spatial analogies,” presenting \n\ndistance measurements in regions where readers are geographically \n\nfamiliar with, help people more concretely understand new distance \n\nmeasurements within news stories .\n\n \n\n reader enters their birthplace and birth year and is shown multiple \n\nvisualizations describing the impact of climate on their hometown. Playing\n\n Preview,\n\n click to play\n\n Full Video.\n\n Personalization alone has also been used as the standout feature \n\nof multiple interactive articles. Both “How Much Hotter Is Your Hometown\n\n Than When You Were Born?” and “Human Terrain” \n\n use a reader’s location to drive stories relating to climate change and\n\n population densities respectively. Other examples ask for explicit \n\nreader input, such as a story that visualizes a reader’s net worth to \n\nchallenge a reader’s assumptions if they are wealthy or not (relative to\n\n In this visualization, professions are plotted to determine their \n\nlikelihood of being automated against their average annual wage. The \n\narticle encourages readers to use the search bar to type in their own \n\nprofession to highlight it against the others.\n\n \n\n An interactive medium has the potential to offer readers an \n\nexperience other than static, linear text. Non-linear stories, where a \n\nreader can choose their own path through the content, have the potential\n\n to provide a more personalized experience and focus on areas of user \n\n a technology focused news television program. 
Non-linear stories \n\npresent challenges for authors, as they must consider the myriad \n\npossible paths through the content, and consider the different possible \n\nexperiences that the audience would have when pursuing different \n\nbranches.\n\n \n\n [8](#matuschak2019quantum): In \"[Quantum Country](https://quantum.country/)\" , \n\nthe interactive textbook uses spaced repetition and allows a reader to \n\nopt-in and save their progress while reading through dense material and \n\nmathematical notion over time. Playing\n\n Preview,\n\n click to play\n\n Full Video.\n\n Another technique interactive articles often use is segmenting \n\ncontent into small pieces to be read in-between or alongside other \n\ngraphics. While we have already discussed cognitive load theory, the \n\nSegmenting Theory, the idea that complex lessons are broken into \n\nsmaller, bit-sized parts , also\n\n supports personalization within interactive articles. Providing a \n\nreader the ability to play, pause, and scrub content allows the reader \n\nto move at their own speed, comprehending the information at a speed \n\nthat works best for them. Segmenting also engages a reader’s essential \n\nprocessing without overloading their cognitive system .\n\n \n\n Multiple studies have been conducted showing that learners perform\n\n better when information is segmented, whether it be only within an \n\nanimation or within an interface with textual descriptions .\n\n One excellent example of using segmentation and animation to \n\npersonalize content delivery is “A Visual Introduction to Machine \n\nLearning,” which introduces fundamental concepts within machine learning\n\n in bite-sized pieces, while transforming a single dataset into a \n\ntrained machine learning model . \n\nExtending this idea, in “Quantum Country,” an interactive textbook \n\ncovering quantum computing, the authors implemented a user account \n\nsystem, allowing readers to save their position in the text and consume \n\n \n\n### Reducing Cognitive Load\n\n Authors must calibrate the detail at which to discuss ideas and \n\ncontent to their readers expertise and interest to not overload them. \n\nWhen topics become multifaceted and complex, a balance must be struck \n\nbetween a high-level overview of a topic and its lower-level details. \n\nOne interaction technique used to prevent a cognitive overload within a \n\nreader is “details-on-demand.”\n\n \n\n Details-on-demand has become an ubiquitous design pattern. For \n\nexample, modern operating systems offer to fetch dictionary definitions \n\nwhen a word is highlighted. When applied to visualization, this \n\ntechnique allows users to select parts of a dataset to be shown in more \n\ndetail while maintaining a broad overview. This is particularly useful \n\nwhen a change of view is not required, so that users can inspect \n\nelements of interest on a point-by-point basis in the context of the \n\nwhole . Below we highlight areas \n\nwhere details-on-demand has been successfully applied to reduce the \n\namount of information present within an interface at once.\n\n \n\n#### Data Visualization\n\n Successful visualizations not only provide the base representations and\n\n techniques for these three steps, but also bridge the gaps between them\n\n . In practice, the \n\nsolidified standard for details-on-demand in data visualization \n\nmanifests as a tooltip, typically summoned on a cursor mouseover, that \n\npresents extra information in an overlay. 
Given that datasets often \n\ncontain multiple attributes, tooltips can show the other attributes that\n\n \n\n#### Illustration\n\n Details-on-demand is also used in illustrations, interactive \n\ntextbooks, and museum exhibits, where highlighted segments of a figure \n\ncan be selected to display additional information about the particular \n\nsegment. For example, in “How does the eye work?” readers can select \n\nsegments of an anatomical diagram of the human eye to learn more about \n\nspecific regions, e.g., rods and cones .\n\n Another example is “Earth Primer,” an interactive textbook on tablets \n\nthat allows readers to inspect the Earth’s interior, surface, and biomes\n\n \n\n#### Mathematical Notation\n\n Formal mathematics, a historically static medium, can benefit from\n\n details-on-demand, for example, to elucidate a reader with intuition \n\nabout a particular algebraic term, present a geometric interpretation of\n\n an equation, or to help a reader retain high-level context while \n\n For example, in “Why Momentum Really Works,” equation layout is done \n\nusing Gestalt principles plus annotation to help a reader easily \n\nidentify specific terms. In “Colorized\n\n Math Equations,” the Fourier transform equation is presented in both \n\nmathematical notation and plain text, but the two are linked through a \n\nmouseover that highlights which term in the equation corresponds to \n\nwhich word in the text . \n\nAnother example that visualizes mathematics and computation is the \n\n“Image Kernels” tutorial where a reader can mouseover a real image and \n\nobserve the effect and exact computation for applying a filter over the \n\nimage .\n\n \n\n Instead of writing down long arithmetic sums, the interface allows\n\n one of Maxwell’s equations is shown. Click the two buttons to reveal, \n\nor remind yourself, what each notation mark and variable represent.\n\n \n\nEnhancing mathematics design with annotation and interactivity. Working\n\n with equations requires a sharp working memory. Optional interactivity \n\ncan help people remember specific notation and variable defintions, only\n\n#### Text\n\n While not as pervasive, text documents and other long-form textual\n\n mediums have also experimented with letting readers choose a variable \n\nlevel of detail to read. This idea was explored as early as the 1960s in\n\n StretchText, a hypertext feature that allows a reader to reveal a more \n\ndescriptive or exhaustive explanation of something by expanding or \n\n One challenge that has limited this technique’s adoption is the burden \n\nit places on authors to write multiple versions of their content. For \n\nexample, drag the slider in [9](#details-text)\n\n to read descriptions of the Universal Approximation Theorem in \n\nincreasing levels of detail. For other examples of details-on-demand for\n\n text, such as application in code documentation, see this small \n\ncollection of examples .\n\n \n\n networks can approximate any function that exists. However, we do not \n\nhave a guaranteed way to obtain such a neural network for every \n\nfunction.\n\n#### Previewing Content\n\n Details-on-demand can also be used as a method for previewing \n\ncontent without committing to another interaction or change of view. 
For\n\n example, when hovering over a hyperlink on Wikipedia, a preview card is\n\n shown that can contain an image and brief description; this gives \n\nreaders a quick preview of the topic without clicking through and \n\n within hypertext that present information about a particular topic in a\n\n location that does not obscure the source material. Both older and \n\nmodern preview techniques use perceptually-based animation and simple \n\ntooltips to ensure their interactions are natural and lightweight \n\nfeeling to readers .\n\n \n\nChallenges for Authoring Interactives\n\n-------------------------------------\n\n*If interactive articles provide clear benefits over other \n\nmediums for communicating complex ideas, then why aren’t they more \n\nprevalent?* \n\n Unfortunately, creating interactive articles today is difficult. \n\nDomain-specific diagrams, the main attraction of many interactive \n\narticles, must be individually designed and implemented, often from \n\nscratch. Interactions need to be intuitive and performant to achieve a \n\nnice reading experience. Needless to say, the text must also be \n\nwell-written, and, ideally, seamlessly integrated with the graphics.\n\n \n\n The act of creating a successful interactive article is closer to \n\nbuilding a website than writing a blog post, often taking significantly \n\nmore time and effort than a static article, or even an academic \n\n Most interactive articles are created using general purpose \n\nweb-development frameworks which, while expressive, can be difficult to \n\nwork with for authors who are not also web developers. Even for expert \n\nweb developers, current tools offer lower levels of abstraction than may\n\n be desired to prototype and iterate on designs.\n\n \n\n can help authors start writing quickly and even enable rapid iteration \n\nthrough various designs (for example, letting an author quickly compare \n\nbetween sequencing content using a “scroller” or “stepper” based \n\nlayout). However, Idyll does not offer any design guidance, help authors\n\n think through where interactivity would be most effectively applied, \n\nnor highlight how their content could be improved to increase its \n\nreadability and memorability. For example, Idyll encodes no knowledge of\n\n the positive impact of self-explanation, instead it requires authors to\n\n be familiar with this research and how to operationalize it.\n\n \n\n To design an interactive article successfully requires a diverse \n\nset of editorial, design, and programming skills. While some individuals\n\n are able to author these articles on their own, many interactive \n\narticles are created by a collective team consisting of multiple members\n\n with specialized skills, for example, data analysts, scripters, \n\neditors, journalists, graphic designers, and typesetters, as outlined in\n\n requires one to clone its source code using git, install \n\nproject-specific dependencies using a terminal, and be comfortable \n\nediting HTML files. All of this complexity is incidental to task of \n\nediting text.\n\n \n\n Publishing to the web brings its own challenges: while interactive\n\n articles are available to anyone with a browser, they are burdened by \n\nrapidly changing web technologies that could break interactive content \n\nafter just a few years. 
For this reason, easy and accessible interactive\n\n article archival is important for authors to know their work can be \n\n Authoring interactive articles also requires designing for a diverse \n\nset of devices, for example, ensuring bespoke content can be adapted for\n\n desktop and mobile screen sizes with varying connection speeds, since \n\naccessing interactive content demands more bandwidth.\n\n \n\n There are other non-technical limitations for publishing \n\ninteractive articles. For example, in non-journalism domains, there is a\n\n mis-aligned incentive structure for authoring and publishing \n\ninteractive content: why should a researcher spend time on an “extra” \n\ninteractive exposition of their work when they could instead publish \n\nmore papers, a metric by which their career depends on? While different \n\ngroups of people seek to maximize their work’s impact, legitimizing \n\ninteractive artifacts requires buy-in from a collective of communities.\n\n \n\n Making interactive articles accessible to people with disabilities\n\n is an open challenge. The dynamic medium exacerbates this problem \n\ncompared to traditional static writing, especially when articles combine\n\n multiple formats like audio, video, and text. Therefore, ensuring \n\ninteractive articles are accessible to everyone will require alternative\n\n modes of presenting content (e.g. text-to-speech, video captioning, \n\ndata physicalization, data sonification) and careful interaction design.\n\n \n\n It is also important to remember that not everything needs to be \n\ninteractive. Authors should consider the audience and context of their \n\nwork when deciding if use of interactivity would be valuable. In the \n\nworst case, interactivity may be distracting to readers or the \n\nfunctionality may go unused, the author having wasted their time \n\nimplementing it. However, even in a domain where the potential \n\n \n\nCritical Reflections\n\n--------------------\n\n We write this article not as media theorists, but as \n\npractitioners, researchers, and tool builders. While it has never been \n\neasier for writers to share their ideas online, current publishing tools\n\n largely support only static authoring and do not take full advantage of\n\n the fact that the web is a dynamic medium. We want that to change, and \n\nwe are not alone. Others from the explorable explanations community have\n\n identified design patterns that help share complex ideas through play .\n\n \n\n an annually published digital magazine that showcases the expository \n\npower that interactive dynamic media can have when effectively combined .\n\n In late 2018, we invited writers to respond to a call for proposals for\n\n our first issue focusing on exploring scientific and technological \n\nphenomena that stand to shape society at large. We sought to cover \n\ntopics that would benefit from using the interactive or otherwise \n\ndynamic capabilities of the web. Given the challenges of authoring \n\ninteractive articles, we did not ask authors to submit fully developed \n\npieces. Instead, we accepted idea submissions, and collaborated with the\n\n authors over the course of four months to develop the issue, offering \n\ntechnical, design, and editorial assistance collectively to the authors \n\nthat lacked experience in one of these areas. For example, we helped a \n\nwriter implement visualizations, a student frame a cohesive narrative, \n\nand a scientist recap history and disseminate to the public. 
Multiple \n\nviews from one article are shown in [10](#parametric).\n\n \n\n The article used techniques like animation, data visualizations, \n\nexplanatory diagrams, margin notes, and interactive simulations to \n\nexplain how biases occur in machine learning systems.\n\n We see *The Parametric Press* as a crucial connection between\n\n the often distinct worlds of research and practice. The project serves \n\nas a platform through which to operationalize the theories put forth by \n\neducation, journalism, and HCI researchers. Tools like Idyll which are \n\ndesigned in a research setting need to be validated and tested to ensure\n\n that they are of practical use; *The Parametric Press* facilitates\n\n this by allowing us to study its use in a real-world setting, by \n\nauthors who are personally motivated to complete their task of \n\nconstructing a high-quality interactive article, and only have secondary\n\n concerns and care about the tooling being used, if at all.\n\n \n\n \n\n| | Research | Practice |\n\n| --- | --- | --- |\n\n As researchers we can treat the project as a series of case \n\nstudies, where we were observers of the motivation and workflows which \n\nwere used to craft the stories, from their initial conception to their \n\npublication. Motivation to contribute to the project varied by author. \n\nWhere some authors had personal investment in an issue or dataset they \n\nwanted to highlight and raise awareness to broadly, others were drawn \n\ntowards the medium, recognizing its potential but not having the \n\nexpertise or support to communicate interactively. We also observed how \n\nresearch software packages like Apparatus , Idyll , and D3 \n\n fit into the production of interactive articles, and how authors must \n\ncombine these disparate tools to create an engaging experience for \n\nreaders. In one article, “On Particle Physics,” an author combined two \n\ntools in a way that allowed him to create and embed dynamic graphics \n\ndirectly into his article without writing any code beyond basic markup. \n\nOne of the creators of Apparatus had not considered this type of \n\n fantastic! Reading that article, I had no idea that Apparatus was used.\n\n This is a very exciting proof-of-concept for unconventional \n\nexplorable-explanation workflows.”*\n\n We were able to provide editorial guidance to the authors drawing \n\non our knowledge of empirical studies done in the multimedia learning \n\nand information visualization communities to recommend graphical \n\nstructures and page layouts, helping each article’s message be \n\ncommunicated most effectively. One of the most exciting outcomes of the \n\nproject is that we saw authors develop interactive communication skills \n\nlike any other skill: through continued practice, feedback, and \n\niteration. We also observed the challenges that are inherent in \n\npublishing dynamic content on the web and identified the need for \n\nimproved tooling in this area, specifically around the archiving of \n\ninteractive articles. Will an article’s code still run a year from now? \n\nTen years from now? To address interactive content archival, we set up a\n\n system to publish a digital archive of all of our articles at the time \n\nthat they are first published to the site. At the top of each article on\n\n *The Parametric Press* is an archive link that allows readers to \n\ndownload a WARC (Web ARChive) file that can “played back” without \n\nrequiring any web infrastructure. 
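For readers unfamiliar with the format, the following is a small sketch (our illustration, not the Parametric Press tooling) of inspecting such an archive offline using the third-party warcio package; the filename is a placeholder.

```python
from warcio.archiveiterator import ArchiveIterator

# Walk a downloaded (possibly gzipped) WARC file and list the captured resources.
with open("parametric-article.warc.gz", "rb") as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type == "response":  # captured pages, scripts, images, data
            url = record.rec_headers.get_header("WARC-Target-URI")
            body = record.content_stream().read()
            print(f"{url}  ({len(body)} bytes)")
```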
While our first iteration of the \n\nproject relied on ad-hoc solutions to these problems, we hope to show \n\nhow digital works such as ours can be published confidently knowing that\n\n they will be preserved indefinitely.\n\n \n\n As practitioners we pushed the boundaries of the current \n\ngeneration of tools designed to support the creation of interactive \n\narticles on the web. We found bugs and limitations in Idyll, a tool \n\nwhich was originally designed to support the creation of one-off \n\narticles that we used as a content management system to power an entire \n\nmagazine issue. We were forced to write patches and plugins to work \n\n We were also forced to craft designs under a more realistic set of \n\nconstraints than academics usually deal with: when creating a \n\nvisualization it is not enough to choose the most effective visual \n\nencodings, the graphics also had to be aesthetically appealing, adhere \n\nto a house style, have minimal impact on page load time and runtime \n\nperformance, be legible on both mobile and desktop devices, and not be \n\noverly burdensome to implement. Any extra hour spent implementing one \n\ngraphic was an hour that was not spent improving some other part of the \n\nissue, such as the clarity of the text, or the overall site design.\n\n \n\n There are relatively few outlets that have the skills, technology,\n\n and desire to publish interactive articles. From its inception, one of \n\nthe objectives of *The Parametric Press* is to showcase the new \n\nforms of media and publishing that are possible with tools available \n\ntoday, and inspire others to create their own dynamic writings. For \n\n told us he had wanted to write this interactive article for years yet \n\nnever had the opportunity, support, or incentive to create it. His \n\narticle drew wide interest and critical acclaim.\n\n \n\n We also wanted to take the opportunity as an independent \n\npublication to serve as a concrete example for others to follow, to \n\nrepresent a set of best practices for publishing interactive content. To\n\n that end, we made available all of the software that runs the site, \n\nincluding reusable components, custom data visualizations, and the \n\npublishing engine itself.\n\n \n\nLooking Forward\n\n---------------\n\n A diverse community has emerged to meet these challenges, \n\nexploring and experimenting with what interactive articles could be. The\n\n [Explorable Explanations community](https://explorabl.es/) \n\nis a “disorganized ‘movement’ of artists, coders & educators who \n\nwant to reunite play and learning.” Their online hub contains 170+ \n\ninteractive articles on topics ranging from art, natural sciences, \n\nsocial sciences, journalism, and civics. The curious can also find \n\ntools, tutorials, and meta-discussion around learning, play, and \n\n \n\n Many interactive articles are self-published due to a lack of \n\nplatforms that support interactive publishing. Creating more outlets \n\nthat allow authors to publish interactive content will help promote \n\ntheir development and legitimization. The few existing examples, \n\n help, but currently target a narrow group of authors, namely those who \n\nhave programming skills. Such platforms should also provide clear paths \n\nto submission, quality and editorial standards, and authoring \n\nguidelines. For example, news outlets have clear instructions for \n\npitching written pieces, yet this is under-developed for interactive \n\narticles. 
Lastly, there is little funding available to support the \n\ndevelopment of interactive articles and the tools that support them. \n\nResearchers do not receive grants to communicate their work, and \n\npractitioners outside of the largest news outlets are not able to afford\n\n the time and implementation investment. Providing more funding for \n\nenabling interactive articles incentivizes their creation and can \n\ncontribute to a culture where readers expect digital communications to \n\nbetter utilize the dynamic medium.\n\n \n\n We have already discussed the breadth of skills required to author\n\n an interactive article. Can we help lower the barrier to entry? While \n\nthere have been great, practical strides in this direction ,\n\n there is still opportunity for creating tools to design, develop, and \n\nevaluate interactive articles in the wild. Specific features should \n\ninclude supporting mobile-friendly adaptations of interactive graphics \n\n(for example ),\n\n creating content for different platforms besides just the web, and \n\ntools that allow people to create interactive content without code.\n\n \n\n The usefulness of interactive articles is predicated on the \n\nassumption that these interactive articles actually facilitate \n\ncommunication and learning. There is limited empirical evaluation of the\n\n effectiveness of interactive articles. The problem is exacerbated by \n\nthe fact that large publishers are unwilling to share internal metrics, \n\n provided one of the few available data points, stating that only a \n\nfraction of readers interact with non-static content, and suggested that\n\n designers should move away from interactivity .\n\n However, other research found that many readers, even those on mobile \n\ndevices, are interested in utilizing interactivity when it is a core \n\npart of the article’s message . This statement from *The New York Times*\n\n has solidified as a rule-of-thumb for designers and many choose not to \n\nutilize interactivity because of it, despite follow-up discussion that \n\ncontextualizes the original point and highlights scenarios where \n\ninteractivity can be beneficial .\n\n This means designers are potentially choosing a suboptimal presentation\n\n of their story due to this anecdote. 
More research is needed in order \n\nto identify the cases in which interactivity is worth the cost of \n\ncreation.\n\n \n\n We believe in the power and untapped potential of interactive \n\narticles for sparking reader’s desire to learn and making complex ideas \n\naccessible and understandable to all.\n\n \n\n", "bibliography_bib": [{"title": "Report on the role of digital access providers"}, {"title": "A personal computer for children of all ages"}, {"title": "Augmenting human intellect: A conceptual framework"}, {"title": "The diamond age"}, {"title": "The knowledge navigator"}, {"title": "Getting it out of our system"}, {"title": "PLATO"}, {"title": "PhET interactive simulations"}, {"title": "Explorable explanations"}, {"title": "How y’all, youse and you guys talk"}, {"title": "Snow fall: The avalanche at tunnel creek"}, {"title": "Why outbreaks like coronavirus spread exponentially, and how to 'flatten the curve'"}, {"title": "Attacking discrimination with smarter machine learning"}, {"title": "Coeffects: Context-aware programming languages"}, {"title": "Complexity explained"}, {"title": "What's really warming the world"}, {"title": "You draw it: How family income predicts children’s college chances"}, {"title": "The Uber Game"}, {"title": "Let's learn about waveforms"}, {"title": "The book of shaders"}, {"title": "EconGraphs"}, {"title": "To build a better ballot"}, {"title": "The atlas of redistricting"}, {"title": "Is it better to rent or buy"}, {"title": "Explorable explanation"}, {"title": "Increasing the transparency of research papers with explorable multiverse analyses"}, {"title": "Workshop on Visualization for AI Explainability"}, {"title": "Exploranation: A new science communication paradigm"}, {"title": "Research debt"}, {"title": "More than telling a story: Transforming data into visually shared stories"}, {"title": "\"Concrete\" computer manipulatives in mathematics education"}, {"title": "Cybertext: Perspectives on ergodic literature"}, {"title": "Interactive non-fiction: Towards a new approach for storytelling in digital journalism"}, {"title": "Active essays on the web"}, {"title": "Newsgames: Journalism at play"}, {"title": "Simply bells and whistles? Cognitive effects of visual aesthetics in digital longforms"}, {"title": "Learning as a function of time"}, {"title": "Emotional design in multimedia learning"}, {"title": "Hooked on data videos: assessing the effect of animation and pictographs on viewer engagement"}, {"title": "Animation: Can it facilitate?"}, {"title": "Animated transitions in statistical data graphics"}, {"title": "Hypothetical outcome plots outperform error bars and violin plots for inferences about reliability of variable ordering"}, {"title": "La perception de la causalité.(Etudes Psychol.), Vol. VI"}, {"title": "The illusion of life: Disney animation"}, {"title": "The horse in motion"}, {"title": "Emergent tool use from multi-agent autocurricula"}, {"title": "Climate spirals"}, {"title": "Earth's relentless warming sets a brutal new record in 2017"}, {"title": "Global temperature"}, {"title": "It's not your imagination. 
Summers are getting hotter."}, {"title": "Extensive data shows punishing reach of racism for black boys"}, {"title": "Disagreements"}, {"title": "Gun deaths in america"}, {"title": "The fallen of World War II"}, {"title": "Showing people behind data: Does anthropomorphizing visualizations elicit more empathy for human rights data?"}, {"title": "Beyond memorability: Visualization recognition and recall"}, {"title": "What makes a visualization memorable?"}, {"title": "What if the data visualization is actually people"}, {"title": "Ethical dimensions of visualization research"}, {"title": "A walk among the data"}, {"title": "Visual narrative flow: Exploring factors shaping data visualization story reading experiences"}, {"title": "Linking and layout: Exploring the integration of text and visualization in storytelling"}, {"title": "Video games and learning"}, {"title": "Cutthroat Capitalism: The Game"}, {"title": "Combining software games with education: Evaluation of its educational effectiveness"}, {"title": "Narrative visualization: Telling stories with data"}, {"title": "Parable of the polygons"}, {"title": "Drawing dynamic visualizations"}, {"title": "How to read a book: The classic guide to intelligent reading"}, {"title": "Scientific communication as sequential art"}, {"title": "How you will die"}, {"title": "On Particle Physics"}, {"title": "Should prison sentences be based on crimes that haven’t been committed yet"}, {"title": "How to use t-SNE effectively"}, {"title": "The beginner's guide to dimensionality reduction"}, {"title": "Understanding UMAP"}, {"title": "Tensorflow.js: Machine learning for the web and beyond"}, {"title": "Designing (and learning from) a teachable machine"}, {"title": "Experiments in handwriting with a neural network"}, {"title": "Direct-manipulation visualization of deep networks"}, {"title": "Gan lab: Understanding complex deep generative models using interactive visual experimentation"}, {"title": "Using artificial intelligence to augment human intelligence"}, {"title": "Who will win the presidency"}, {"title": "Who will be president"}, {"title": "Live results: Presidential election"}, {"title": "Multimedia learning"}, {"title": "3Blue1Brown"}, {"title": "Primer"}, {"title": "Revising the redundancy principle in multimedia learning"}, {"title": "Visualizing quaternions: An explorable video series"}, {"title": "Self-explanations: How students study and use examples in learning to solve problems"}, {"title": "Self-explaining expository texts: The dual processes of generating inferences and repairing mental models"}, {"title": "Explaining the gap: Visualizing one's predictions improves recall and comprehension of data"}, {"title": "You draw it: Just how bad is the drug overdose epidemic"}, {"title": "You draw it: What got better or worse during Obama's presidency"}, {"title": "They draw it!"}, {"title": "Data through others' eyes: The impact of visualizing others' expectations on visualization interpretation"}, {"title": "The Gyllenhaal experiment"}, {"title": "How do you draw a circle? 
We analyzed 100,000 drawings to show how culture shapes our instincts"}, {"title": "Recitation as a factor in memorizing"}, {"title": "The power of testing memory: Basic research and implications for educational practice"}, {"title": "Khan Academy"}, {"title": "The instructional effect of feedback in test-like events"}, {"title": "The critical importance of retrieval for learning"}, {"title": "How to remember anything for forever-ish"}, {"title": "Quantum country"}, {"title": "Intrinsic motivation and the process of learning: Beneficial effects of contextualization, personalization, and choice."}, {"title": "Authoring and generation of individualized patient education materials"}, {"title": "PersaLog: Personalization of news article content"}, {"title": "Generating personalized spatial analogies for distances and areas"}, {"title": "How much hotter is your hometown than when you were born"}, {"title": "Human terrain"}, {"title": "Are you rich? This income-rank quiz might change how you see yourself"}, {"title": "Quiz: Let us predict whether you’re a democrat or a republican"}, {"title": "Find Out If Your Job Will Be Automated"}, {"title": "Booze calculator: What's your drinking nationality"}, {"title": "Click 1,000: How the pick-your-own-path episode was made"}, {"title": "E-learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning"}, {"title": "Multimedia learning in an interactive self-explaining environment: What works in the design of agent-based microworlds?"}, {"title": "Pictorial aids for learning by doing in a multimedia geology simulation game."}, {"title": "A visual introduction to machine learning"}, {"title": "Beyond guidelines: What can we learn from the visual information seeking mantra?"}, {"title": "The eyes have it: A task by data type taxonomy for information visualizations"}, {"title": "Information visualization and visual data mining"}, {"title": "How the recession shaped the economy, in 255 charts"}, {"title": "How does the eye work"}, {"title": "Earth primer"}, {"title": "Progressive growing of gans for improved quality, stability, and variation"}, {"title": "A style-based generator architecture for generative adversarial networks"}, {"title": "Why momentum really works"}, {"title": "Colorized math equations"}, {"title": "Image kernels"}, {"title": "Stretchtext – hypertext note #8"}, {"title": "On variable level-of-detail documents"}, {"title": "Call for proposals winter/spring 2019"}, {"title": "A UI that lets readers control how much information they see"}, {"title": "Wikipedia Preview Card"}, {"title": "Fluid links for informed and incremental link transitions"}, {"title": "Reading and writing fluid hypertext narratives"}, {"title": "Idyll: A markup language for authoring and publishing interactive articles on the web"}, {"title": "Apparatus: A hybrid graphics editor and programming environment for creating interactive diagrams"}, {"title": "Observable"}, {"title": "LOOPY: a tool for thinking in systems"}, {"title": "Webstrates: shareable dynamic media"}, {"title": "Neural networks and deep learning"}, {"title": "How I make explorable explanations"}, {"title": "Explorable explanations: 4 more design patterns"}, {"title": "Emerging and recurring data-driven storytelling techniques: Analysis of a curated collection of recent stories"}, {"title": "Issue 01: Science & Society"}, {"title": "The myth of the impartial machine"}, {"title": "D3 data-driven documents"}, {"title": "Launching the Parametric Press"}, {"title": 
"Techniques for flexible responsive visualization design"}, {"title": "A Comparative Evaluation of Animation and Small Multiples for Trend Visualization on Mobile Phones"}, {"title": "Visualizing ranges over time on mobile phones: a task-based crowdsourced evaluation"}, {"title": "Why we are doing fewer interactives"}, {"title": "Capture & analysis of active reading behaviors for interactive articles on the web"}, {"title": "In defense of interactive graphics"}], "filename": "Communicating with Interactive Articles.html", "id": "46b203559d0ff677625cb3d460e502b7"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Two Examples of Useful, Non-Robust Features", "authors": ["Gabriel Goh"], "date_published": "2019-08-06", "abstract": " This article is part of a discussion of the Ilyas et al. paper “Adversarial examples are not bugs, they are features”. You can learn more in the main discussion article . ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00019.3", "text": "\n\n #rebuttal,\n\n .comment-info {\n\n background-color: hsl(54, 78%, 96%);\n\n border-left: solid hsl(54, 33%, 67%) 1px;\n\n padding: 1em;\n\n color: hsla(0, 0%, 0%, 0.67);\n\n }\n\n #header-info {\n\n margin-top: 0;\n\n margin-bottom: 1.5rem;\n\n display: grid;\n\n grid-template-columns: 65px max-content 1fr;\n\n grid-template-areas:\n\n \"icon explanation explanation\"\n\n \"icon back comment\";\n\n grid-column-gap: 1.5em;\n\n }\n\n #header-info .icon-multiple-pages {\n\n grid-area: icon;\n\n padding: 0.5em;\n\n content: url(images/multiple-pages.svg);\n\n }\n\n #header-info .explanation {\n\n grid-area: explanation;\n\n font-size: 85%;\n\n }\n\n #header-info .back {\n\n grid-area: back;\n\n }\n\n #header-info .back::before {\n\n content: \"←\";\n\n margin-right: 0.5em;\n\n }\n\n #header-info .comment {\n\n grid-area: comment;\n\n scroll-behavior: smooth;\n\n }\n\n #header-info .comment::before {\n\n content: \"↓\";\n\n margin-right: 0.5em;\n\n }\n\n #header-info a.back,\n\n #header-info a.comment {\n\n font-size: 80%;\n\n font-weight: 600;\n\n border-bottom: none;\n\n text-transform: uppercase;\n\n color: #2e6db7;\n\n display: block;\n\n margin-top: 0.25em;\n\n letter-spacing: 0.25px;\n\n }\n\n This article is part of a discussion of the Ilyas et al. paper\n\n *“Adversarial examples are not bugs, they are features”.*\n\n You can learn more in the\n\n [main discussion article](https://distill.pub/2019/advex-bugs-discussion/) .\n\n \n\n[Other Comments](https://distill.pub/2019/advex-bugs-discussion/#commentaries)\n\n Ilyas et al. define a *feature* as a function fff that\n\n label. But in the presence of an adversary Ilyas et al. argues\n\n the metric that truly matters is a feature’s *robust usefulness*,\n\n \n\nE[inf∥δ∥≤ϵyf(x+δ)],\n\n \\mathbf{E}\\left[\\inf\\_{\\|\\delta\\|\\leq\\epsilon}yf(x+\\delta)\\right],\n\n E[∥δ∥≤ϵinf​yf(x+δ)],\n\n its correlation with the label while under attack. Ilyas et al. \n\n like?\n\n \n\n### Non-Robust Features in Linear Models\n\n nonlinear models encountered in deep learning. 
As Ilyas et al \n\n to linear features of the form:\n\n \n\n \\text{and} \\quad \\mathbf{E}[x] = 0.\n\n f(x)=∥a∥Σ​aTx​whereΣ=E[xxT]andE[x]=0.\n\n The robust usefulness of a linear feature admits an elegant decomposition\n\n This\n\n \\begin{aligned}\n\n \\mathbf{E}\\left[\\inf\\_{\\|\\delta\\|\\leq\\epsilon}yf(x+\\delta)\\right] &\n\n =\\mathbf{E}\\left[yf(x)+\\inf\\_{\\|\\delta\\|\\leq\\epsilon}yf(\\delta)\\right]\\\\\n\n &\n\n \\end{aligned}\n\n into two terms:\n\n \n\n .undomargin {\n\n position: relative;\n\n left: -1em;\n\n top: 0.2em;\n\n }\n\n \n\nE[inf∥δ∥≤ϵyf(x+δ)]\n\n \\mathbf{E}\\left[\\inf\\_{\\|\\delta\\|\\leq\\epsilon}yf(x+\\delta)\\right]\n\n E[∥δ∥≤ϵinf​yf(x+δ)]\n\n===\n\nE[yf(x)]\\mathop{\\mathbf{E}[yf(x)]}E[yf(x)]\n\n−-−\n\nϵ∥a∥∗∥a∥Σ\\epsilon\\frac{\\|a\\|\\_{\\*}}{\\|a\\|\\_{\\Sigma}}ϵ∥a∥Σ​∥a∥∗​​\n\n The robust usefulness of a feature\n\n \n\n the correlation of the feature with the label\n\n \n\n the feature’s non-robustness\n\n \n\n dimensional plot.\n\n \n\n \n\n Subject to an\n\n L\\_2\n\n adversery, observe that high frequency features are both less useful and\n\n less robust.\n\n Useful Non-Useful A B C D E F \n\n Pareto frontier of points in the non-robustness and usefulness space.\n\n \n\n positive label.\n\n \\log \\left( \\frac{\\|a\\_i\\|\\_\\Sigma}{\\|a\\_i\\|} \\right) =\n\n \\log(\\lambda\\_i) Feature’s log robustness. When\n\n a\\_i's\n\n are the\n\n i^{th}\n\n eigenvalues of\n\n \\Sigma\n\n , the robustness reduces to the\n\n i^{th}\n\n singular value of\n\n \\lambda\\_i ABCDEFf-12-11-10-9-8-7-6-5-4-3-4-3-2-10 \n\n \n\n We demonstrate two constructions:\n\n \n\n (1-\\alpha) \\cdot a\\_{\\text{non-robust}} + \\alpha \\cdot a\\_{\\text{robust}}, \n\n \n\n It is surprising, thus, that the experiments of Madry et al. \n\n \n\n To cite Ilyas et al.’s response, please cite their\n\n**Response Summary**: The construction of explicit non-robust features is\n\n the useful non-robust features detected by our experiments. We also agree that\n\n non-robust features arising as “distractors” is indeed not precluded by our\n\n theoretical framework, even if it is precluded by our experiments.\n\n This simple theoretical framework sufficed for reasoning about and\n\n predicting the outcomes of our experiments\n\n We also presented a theoretical setting where we can\n\n analyze things fully rigorously in Section 4 of our paper..\n\n However, this comment rightly identifies finding a more comprehensive\n\n definition of feature as an important future research direction.\n\n \n\n**Response**: These experiments (visualizing the robustness and\n\n corroborate the existence of useful, non-robust features and make progress\n\n towards visualizing what these non-robust features actually look like. \n\nWe also appreciate the point made by the provided construction of non-robust\n\n features (as defined in our theoretical framework) that are combinations of\n\n useful+robust and useless+non-robust features. Our theoretical framework indeed\n\n enables such a scenario, even if — as the commenter already notes — our\n\n framework technically captures.) Specifically, in such a scenario, during the\n\n construction of the D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​ dataset, only the non-robust\n\n and useless term of the feature would be flipped. Thus, a classifier trained on\n\n such a dataset would associate the predictive robust feature with the\n\n *wrong* label and would thus not generalize on the test set. 
In contrast,\n\ndet​\n\n do generalize.\n\nOverall, our focus while developing our theoretical framework was on\n\n the comment points out, putting forth a theoretical framework that captures\n\n non-robust features in a very precise way is an important future research\n\n direction in itself. \n\n", "bibliography_bib": [{"title": "Adversarial examples are not bugs, they are features"}], "filename": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features'_ Two Examples of Useful, Non-Robust Features.html", "id": "7091141891ed515931419fe02b970248"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Robust Feature Leakage", "authors": ["Gabriel Goh"], "date_published": "2019-08-06", "abstract": " This article is part of a discussion of the Ilyas et al. paper “Adversarial examples are not bugs, they are features”. You can learn more in the main discussion article . ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00019.2", "text": "\n\n #rebuttal,\n\n .comment-info {\n\n background-color: hsl(54, 78%, 96%);\n\n border-left: solid hsl(54, 33%, 67%) 1px;\n\n padding: 1em;\n\n color: hsla(0, 0%, 0%, 0.67);\n\n }\n\n #header-info {\n\n margin-top: 0;\n\n margin-bottom: 1.5rem;\n\n display: grid;\n\n grid-template-columns: 65px max-content 1fr;\n\n grid-template-areas:\n\n \"icon explanation explanation\"\n\n \"icon back comment\";\n\n grid-column-gap: 1.5em;\n\n }\n\n #header-info .icon-multiple-pages {\n\n grid-area: icon;\n\n padding: 0.5em;\n\n content: url(images/multiple-pages.svg);\n\n }\n\n #header-info .explanation {\n\n grid-area: explanation;\n\n font-size: 85%;\n\n }\n\n #header-info .back {\n\n grid-area: back;\n\n }\n\n #header-info .back::before {\n\n content: \"←\";\n\n margin-right: 0.5em;\n\n }\n\n #header-info .comment {\n\n grid-area: comment;\n\n scroll-behavior: smooth;\n\n }\n\n #header-info .comment::before {\n\n content: \"↓\";\n\n margin-right: 0.5em;\n\n }\n\n #header-info a.back,\n\n #header-info a.comment {\n\n font-size: 80%;\n\n font-weight: 600;\n\n border-bottom: none;\n\n text-transform: uppercase;\n\n color: #2e6db7;\n\n display: block;\n\n margin-top: 0.25em;\n\n letter-spacing: 0.25px;\n\n }\n\n This article is part of a discussion of the Ilyas et al. paper\n\n *“Adversarial examples are not bugs, they are features”.*\n\n You can learn more in the\n\n [main discussion article](https://distill.pub/2019/advex-bugs-discussion/) .\n\n \n\n[Other Comments](https://distill.pub/2019/advex-bugs-discussion/#commentaries)\n\n[Comment by Ilyas et al.](#rebuttal)\n\n Ilyas et al. report a surprising result: a model trained on\n\n \n\n \n\n### Lower Bounding Leakage\n\n Our technique for quantifying leakage consisting of two steps:\n\n \n\n specify.\n\n2. Next, we train a linear classifier as per ,\n\n Equation 3 on the datasets D^det\\hat{\\mathcal{D}}\\_{\\text{det}}D^det​ and\n\n D^rand\\hat{\\mathcal{D}}\\_{\\text{rand}}D^rand​ (Defined , Table 1) on\n\n these robust features *only*.\n\n Since Ilyas et al. only specify robustness in the two class\n\n setting:\n\n \n\n **Specification 1** \n\nFor at least one of the\n\n classes, the feature is γ\\gammaγ-robustly useful with\n\n \n\n **Specification 2** \n\n that remain static in a neighborhood of radius 0.25 on the L2L\\_2L2​ norm ball.\n\n \n\n \n\n \n\n CIFAR test set incurs an accuracy of **23.5%** (out of 88%). Doing the same on\n\n \n\n features, e.g. 
from a robust deep neural network.\n\n \n\n The results of D^det\\hat{\\mathcal{D}}\\_{\\text{det}}D^det​ in Table 1 of \n\n however, are on stronger footing. We find no evidence of \n\nfeature leakage (in fact, we find negative leakage — an influx!). We \n\nthus conclude that it is plausible the majority of the accuracy is \n\ndriven by\n\n non-robust features, exactly the thesis of .\n\n \n\n To cite Ilyas et al.’s response, please cite their\n\n**Response Summary**: This\n\n is a valid concern that was actually one of our motivations for creating the\n\n D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​ dataset (which, as the comment notes, actually\n\n has *misleading* robust features). The provided experiment further\n\n improves our understanding of the underlying phenomenon. \n\n**Response**: This comment raises a valid concern which was in fact one of\n\n the primary reasons for designing the D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​ dataset.\n\nrand​\n\n dataset: assign each input a random target label and do PGD towards that label.\n\n Note that unlike the D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​ dataset (in which the\n\nrand​\n\n dataset allows for robust features to actually have a (small) positive\n\n correlation with the label. \n\nTo see how this can happen, consider the following simple setting: we have a\n\n (as in the dataset D^rand\\widehat{\\mathcal{D}}\\_{rand}D\n\nrand​) would make this feature\n\n targeted attack might in this case induce some correlation with the\n\n to correctly classify new inputs. \n\nIn other words, starting from a dataset with no features, one can encode\n\n robust features within small perturbations. In contrast, in the\n\n D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​ dataset, the robust features are *correlated\n\n with the original label* (since the labels are permuted) and since they are\n\n robust, they cannot be flipped to correlate with the newly assigned (wrong)\n\n label. Still, the D^rand\\widehat{\\mathcal{D}}\\_{rand}D\n\nrand​ dataset enables us to show\n\n that (a) PGD-based adversarial examples actually alter features in the data and\n\n (b) models can learn from human-meaningless/mislabeled training data. The\n\n D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​ dataset, on the other hand, illustrates that the\n\n non-robust features are actually sufficient for generalization and can be\n\n preferred over robust ones in natural settings.\n\nThe experiment put forth in the comment is a clever way of showing that such\n\n leakage is indeed possible. 
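In outline, the leakage bound can be sketched as follows. This is a schematic only: the comment itself uses the linear classifier of Ilyas et al.'s Equation 3 and two specific constructions of robust features, and every name below is a placeholder.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def leakage_lower_bound(robust_features, X_hat, y_hat, X_test, y_test):
    """Train a linear classifier on *robust features only* of a relabeled dataset
    (e.g. D_rand or D_det) and report accuracy on the clean test set.
    `robust_features` maps an image to a vector of robustly useful features,
    for instance activations of an adversarially trained network."""
    Z_hat = np.stack([robust_features(x) for x in X_hat])
    Z_test = np.stack([robust_features(x) for x in X_test])
    clf = LogisticRegression(max_iter=1000).fit(Z_hat, y_hat)
    # Any accuracy above chance is attributable to robust features alone,
    # and therefore lower-bounds how much robust signal leaked into the dataset.
    return clf.score(Z_test, y_test)
```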
However, we want to stress (as the comment itself\n\n does) that robust feature leakage does *not* have an impact on our main\n\n thesis — the D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​ dataset explicitly controls\n\n for robust\n\n feature leakage (and in fact, allows us to quantify the models’ preference for\n\n robust features vs non-robust features — see Appendix D.6 in the\n\n [paper](https://arxiv.org/abs/1905.02175)).\n\n", "bibliography_bib": [{"title": "Adversarial examples are not bugs, they are features"}], "filename": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features' Robust Feature Leakage.html", "id": "a5eaf407f42b322cb8fa4264be9bea5a"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Discussion and Author Responses", "authors": ["Logan Engstrom", "Andrew Ilyas", "Aleksander Madry", "Shibani Santurkar", "Brandon Tran", "Dimitris Tsipras"], "date_published": "2019-08-06", "abstract": " This article is part of a discussion of the Ilyas et al. paper “Adversarial examples are not bugs, they are features”. You can learn more in the main discussion article . ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00019.7", "text": "\n\n #rebuttal,\n\n .comment-info {\n\n background-color: hsl(54, 78%, 96%);\n\n border-left: solid hsl(54, 33%, 67%) 1px;\n\n padding: 1em;\n\n color: hsla(0, 0%, 0%, 0.67);\n\n }\n\n #header-info {\n\n margin-top: 0;\n\n margin-bottom: 1.5rem;\n\n display: grid;\n\n grid-template-columns: 65px max-content 1fr;\n\n grid-template-areas:\n\n \"icon explanation explanation\"\n\n \"icon back comment\";\n\n grid-column-gap: 1.5em;\n\n }\n\n #header-info .icon-multiple-pages {\n\n grid-area: icon;\n\n padding: 0.5em;\n\n content: url(images/multiple-pages.svg);\n\n }\n\n #header-info .explanation {\n\n grid-area: explanation;\n\n font-size: 85%;\n\n }\n\n #header-info .back {\n\n grid-area: back;\n\n }\n\n #header-info .back::before {\n\n content: \"←\";\n\n margin-right: 0.5em;\n\n }\n\n #header-info .comment {\n\n grid-area: comment;\n\n scroll-behavior: smooth;\n\n }\n\n #header-info .comment::before {\n\n content: \"↓\";\n\n margin-right: 0.5em;\n\n }\n\n #header-info a.back,\n\n #header-info a.comment {\n\n font-size: 80%;\n\n font-weight: 600;\n\n border-bottom: none;\n\n text-transform: uppercase;\n\n color: #2e6db7;\n\n display: block;\n\n margin-top: 0.25em;\n\n letter-spacing: 0.25px;\n\n }\n\n This article is part of a discussion of the Ilyas et al. paper\n\n *“Adversarial examples are not bugs, they are features”.*\n\n You can learn more in the\n\n [main discussion article](https://distill.pub/2019/advex-bugs-discussion/) .\n\n \n\n[Other Comments](https://distill.pub/2019/advex-bugs-discussion/#commentaries)\n\n[Comment by Ilyas et al.](#rebuttal)\n\n We want to thank all the commenters for the discussion and for spending time\n\n designing experiments analyzing, replicating, and expanding upon our results.\n\n These comments helped us further refine our understanding of adversarial\n\n examples (e.g., by visualizing useful non-robust features or illustrating how\n\n robust models are successful at downstream tasks), but also highlighted aspects\n\n of our exposition that could be made more clear and explicit. 
\n\n Our response is organized as follows: we first recap the key takeaways from\n\n our paper, followed by some clarifications that this discussion brought to\n\n with a quick summary. \n\n We also recall some terminology from\n\n [our paper](https://arxiv.org/abs/1905.02175) that features in our responses:\n\n \n\n *Datasets*: Our experiments involve the following variants of the given\n\n dataset DDD (consists of sample-label pairs (xxx, yyy)) The\n\n exact details for construction of the datasets can be found in our\n\n [paper](https://arxiv.org/abs/1905.02175), and\n\n the datasets themselves can be downloaded at :\n\n \n\n* D^R\\widehat{\\mathcal{D}}\\_{R}D\n\nR​: Restrict each sample xxx to features that are used by a *robust*\n\n model.\n\n* D^NR\\widehat{\\mathcal{D}}\\_{NR}D\n\nNR​: Restrict each sample xxx to features that are used by a *standard*\n\n model.\n\n* D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​: Adversarially perturb each sample xxx using a standard model in a\n\n *consistent manner* towards class y+1modCy + 1\\mod Cy+1modC.\n\n* D^rand\\widehat{\\mathcal{D}}\\_{rand}D\n\nrand​: Adversarially perturb each sample xxx using a standard model\n\n towards a\n\n *uniformly random* class.\n\nMain points\n\n-----------\n\n### *Takeaway #1:* Adversarial examples as innate\n\n brittleness vs. useful features (sensitivity vs reliance)\n\nThe goal of our experiments with non-robust features is to understand\n\n how adversarial examples fit into the following two worlds:\n\n \n\n* **World 1: Adversarial examples exploit directions irrelevant for\n\n classification.** In this world, adversarial examples arise from\n\n sensitivity to a signal that is unimportant for classification. For\n\n instance, suppose there is a feature f(x)f(x)f(x) that is not generalizing\n\n on the dataNote that f(x)f(x)f(x) could be correlated with the label\n\n in the training set but not in expectation/on the test\n\n set., but the model for some reason puts a lot of weight on\n\n it, *i.e., this sensitivity is an aberration “hallucinated” by the\n\n model*. Adversarial examples correspond to perturbing the input\n\n to change this feature by a small amount. This perturbation, however,\n\n would be orthogonal to how the model actually typically makes\n\n predictions (on natural data). (Note that this is just a single\n\n illustrative example — the key characteristic of this world is that\n\n features “flipped” when making adversarial examples are separate from the\n\n ones actually used to classify inputs.)\n\n* **World 2: Adversarial examples exploit features that are useful for\n\n classification.** In this world, adversarial perturbations\n\n can correspond to changes in the input that manipulate features relevant to\n\n classification. Thus, models base their (mostly correct) predictions on\n\n features that can be altered via small perturbations.\n\n Recent works provide some theoretical evidence that adversarial examples\n\n can arise from finite-sample overfitting\n\n or\n\n other concentration of\n\n measure-based phenomena, thus\n\n supporting\n\n the “World 1” viewpoint on\n\n adversarial examples. The question is: is “World 1” the right way to\n\n think about adversarial examples? 
If so, this would be good news — under\n\n this mindset, adversarial robustness might just be a matter of getting\n\n better, “bug-free” models (for example, by reducing overfitting).\n\n \n\n Our findings show, however, that the “World 1” mindset alone does not\n\n fully capture adversarial vulnerability; “World 2“ must be taken into\n\n account. Adversarial examples can — and do, if generated via standard\n\n methods — rely on “flipping” features that are actually useful for\n\n classification. Specifically, we show that by relying *only* on\n\n perturbations corresponding to standard first-order adversarial attacks\n\n one can learn models that generalize to the test set. This means that\n\n these perturbations truly correspond to directions that are relevant for\n\n classifying new, unmodified inputs from the dataset. In summary, our\n\n message is: \n\n**Adversarial vulnerability can arise from\n\n flipping features in the data that are useful for\n\n classification of *correct* inputs.**\n\nIn particular, note that our experiments (training on the\n\n D^rand\\widehat{\\mathcal{D}}\\_{rand}D\n\nrand​ and D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​\n\n datasets) would not have the same result in World 1. Concretely, in\n\n coordinate f(x)f(x)f(x) that is not generalizing for “natural images.” Then,\n\n adversarial examples towards either class can be made by simply making\n\n f(x)f(x)f(x) slightly positive or slightly negative. However, a classifier\n\n learned from these adversarial examples would *not* generalize to\n\n the true dataset (since it would learn to depend on a feature that is not\n\n useful on natural images).\n\n### *Takeaway #2*: Learning from “meaningless” data\n\nAnother implication of our experiments is that models may not even\n\n *need* any information which we as humans view as “meaningful” in order\n\n to do well (in the generalization sense) on standard image datasets. (Our\n\n D^NR\\widehat{\\mathcal{D}}\\_{NR}D\n\nNR​ dataset is a perfect example of this.)\n\n### *Takeaway #3*: Cannot fully attribute adversarial\n\n examples to X\n\nWe also show that we cannot\n\n conclusively fully attribute adversarial examples to any specific aspect of the\n\n standard training framework (BatchNorm, ResNets, SGD, etc.). In particular, our\n\n “robust dataset” D^R\\widehat{\\mathcal{D}}\\_{R}D\n\nR​ is a counterexample to any claim of the form “given any\n\n dataset, training with BatchNorm/SGD/ResNets/overparameterization/etc. leads to\n\n adversarial vulnerability” (as classifiers with all of these components,\n\n when trained on D^R\\widehat{\\mathcal{D}}\\_{R}D\n\nR​, generalize robustly to\n\n CIFAR-10). In that sense, the dataset clearly plays a role in\n\n the emergence of adversarial examples. (Also, further corroborating this is\n\n Preetum’s “adversarial squares” dataset [here](#PreetumResponse),\n\n label noise or overfitting.) \n\nA Few Clarifications\n\n--------------------\n\nIn addition to further refining our understanding of adversarial examples,\n\n the comments were also very useful in pointing out which aspects of our\n\n claims could benefit from further clarification. To this end, we make these\n\n clarifications below in the form of a couple “non-claims” — claims that we did\n\n *not* intend to make. 
We’ll also update our paper in order to make\n\n these clarifications explicit.\n\n### Non-Claim #1: “Adversarial examples *cannot* be bugs”\n\n Our goal is to say that since adversarial examples can arise from\n\n well-generalizing features, simply patching up the “bugs” in ML models will\n\n not get rid of adversarial vulnerability — we also need to make sure our\n\n models learn the right features. This, however, does not mean that\n\n adversarial vulnerability *cannot* arise from “bugs”. In fact, note\n\n that several papers \n\n have proven that adversarial vulnerability can\n\n arise from what we refer to as “bugs,” e.g. finite-sample overfitting,\n\n concentration of measure, high dimensionality, etc. Furthermore,\n\n We would like to thank Preetum for pointing out that this issue may be a\n\n natural misunderstanding, and for exploring this point in even more depth\n\n in his response below.\n\n \n\n### Non-Claim #2: “Adversarial examples are purely a result of the dataset”\n\n Even though we [demonstrated](#cannotpin) that datasets do\n\n play a role in the emergence of adversarial examples, we do not intend to\n\n claim that this role is exclusive. In particular, just because the data\n\n *admits* non-robust functions that are well-generalizing (useful\n\n non-robust features), doesn’t mean that *any* model will learn to\n\n pick up these features. For example, it could be that the well-generalizing\n\n features that cause adversarial examples are only learnable by certain\n\n architectures. However, we do show that there is a way, via only\n\n altering the dataset, to induce robust models — thus, our results indicate\n\n that adversarial vulnerability indeed cannot be completely disentangled\n\n from the dataset (more on this in [Takeaway #3](#cannotpin)).\n\n summaries.\n\n \n\nResponses to comments\n\n---------------------\n\n### Adversarial Example Researchers Need to Expand What is Meant by\n\n “Robustness” (Dan Hendrycks, Justin Gilmer)\n\n**Response Summary**:\n\n appears “meaningless” to humans.\n\n to rely on.\n\n \n\n**Response**: The fact that models can learn to classify correctly based\n\n purely on the high-frequency component of the training set is neat! This nicely\n\n complements one of our [takeaways](#takeaway1): models will rely on\n\n useful features even if these features appear incomprehensible to humans. \n\n Also, while non-robustness to noise can be an indicator of models using\n\n More often than not, the brittleness of ML models to noise was instead regarded\n\n expected that progress towards “better”/”bug-free” models will lead to them\n\n being more robust to noise and adversarial examples. \n\n small subset of the perturbations we want our models to be robust to. Note,\n\n however, that the focus of our work is human-alignment — to that end, we\n\n demonstrate that models rely on features sensitive to patterns that are\n\n imperceptible to humans. 
Thus, the existence of other families of\n\n incomprehensible but useful features would provide even more support for our\n\n future research.\n\n \n\n### Robust Feature Leakage (Gabriel Goh)\n\n**Response Summary**:\n\n the motivations for designing the D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​ dataset.\n\n \n\n**Response**: This comment raises a valid concern which was in fact one of\n\n the primary reasons for designing the D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​ dataset.\n\nrand​\n\n dataset: assign each input a random target label and do PGD towards that label.\n\n Note that unlike the D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​ dataset (in which the\n\nrand​\n\n dataset allows for robust features to actually have a (small) positive\n\n correlation with the label. \n\nTo see how this can happen, consider the following simple setting: we have a\n\n (as in the dataset D^rand\\widehat{\\mathcal{D}}\\_{rand}D\n\nrand​) would make this feature\n\n targeted attack might in this case induce some correlation with the\n\n to correctly classify new inputs. \n\nIn other words, starting from a dataset with no features, one can encode\n\n robust features within small perturbations. In contrast, in the\n\n D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​ dataset, the robust features are *correlated\n\n with the original label* (since the labels are permuted) and since they are\n\n robust, they cannot be flipped to correlate with the newly assigned (wrong)\n\n label. Still, the D^rand\\widehat{\\mathcal{D}}\\_{rand}D\n\nrand​ dataset enables us to show\n\n that (a) PGD-based adversarial examples actually alter features in the data and\n\n (b) models can learn from human-meaningless/mislabeled training data. The\n\n D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​ dataset, on the other hand, illustrates that the\n\n non-robust features are actually sufficient for generalization and can be\n\n preferred over robust ones in natural settings.\n\nThe experiment put forth in the comment is a clever way of showing that such\n\n leakage is indeed possible. However, we want to stress (as the comment itself\n\n does) that robust feature leakage does *not* have an impact on our main\n\n thesis — the D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​ dataset explicitly controls\n\n for robust\n\n feature leakage (and in fact, allows us to quantify the models’ preference for\n\n robust features vs non-robust features — see Appendix D.6 in the\n\n [paper](https://arxiv.org/abs/1905.02175)).\n\n### Two Examples of Useful, Non-Robust Features (Gabriel Goh)\n\n interesting direction of developing a more fine-grained definition of features.\n\n \n\n**Response**: These experiments (visualizing the robustness and\n\n corroborate the existence of useful, non-robust features and make progress\n\n towards visualizing what these non-robust features actually look like. \n\nWe also appreciate the point made by the provided construction of non-robust\n\n features (as defined in our theoretical framework) that are combinations of\n\n useful+robust and useless+non-robust features. Our theoretical framework indeed\n\n enables such a scenario, even if — as the commenter already notes — our\n\n takeaway](#takeaway1) are actually stronger than our theoretical\n\n framework technically captures.) Specifically, in such a scenario, during the\n\n construction of the D^det\\widehat{\\mathcal{D}}\\_{det}D\n\ndet​ dataset, only the non-robust\n\n and useless term of the feature would be flipped. 
Thus, a classifier trained on\n\n such a dataset would associate the predictive robust feature with the\n\n *wrong* label and would thus not generalize on the test set. In contrast,\n\ndet​\n\n do generalize.\n\nOverall, our focus while developing our theoretical framework was on\n\n the comment points out, putting forth a theoretical framework that captures\n\n non-robust features in a very precise way is an important future research\n\n direction in itself. \n\n### Adversarially Robust Neural Style Transfer\n\n (Reiichiro Nakano)\n\n**Response Summary**:\n\n trained models will have in neural network art!\n\n interesting links between robustness and style transfer. \n\n**Response**: These experiments are really cool! It is interesting that\n\n preventing the reliance of a model on non-robust features improves performance\n\n on style transfer, even without an explicit task-related objective (i.e. we\n\n didn’t train the networks to be better for style transfer). \n\n We also found the discussion of VGG as a “mysterious network” really\n\n performance more generally. Though not a complete answer, we made a couple of\n\n observations while investigating further: \n\n*Style transfer does work with AlexNet:* One wrinkle in the idea that\n\n the most naturally robust network — AlexNet is. However, based on our own\n\n testing, style transfer does seem to work with AlexNet out-of-the-box, as\n\n long as we use a few early layers in the network (in a similar manner to\n\n VGG): \n\nStyle transfer using AlexNet, using\n\n conv\\_1 through conv\\_4.\n\n Observe that even though style transfer still works, there are checkerboard\n\n patterns emerging — this seems to be a similar phenomenon to the one noticed\n\n in the comment in the context of robust models.\n\n This might be another indication that these two phenomena (checkerboard\n\n patterns and style transfer working) are not as intertwined as previously\n\n thought.\n\n \n\n*From prediction robustness to layer robustness:* Another\n\n potential wrinkle here is that both AlexNet and VGG are not that\n\n much more robust than ResNets (for which style transfer completely fails),\n\n and yet seem to have dramatically better performance. To try to\n\n explain this, recall that style transfer is implemented as a minimization of a\n\n combined objective consisting of a style loss and a content loss. We found,\n\n however, that the network we use to compute the\n\n style loss is far more important\n\n than the one for the content loss. The following demo illustrates this — we can\n\n actually use a non-robust ResNet for the content loss and everything works just\n\n fine:\n\nStyle transfer seems to be rather\n\n invariant to the choice of content network used, and very sensitive\n\n to the style network used.\n\nTherefore, from now on, we use a fixed ResNet-50 for the content loss as a\n\n control, and only worry about the style loss. \n\nNow, note that the way that style loss works is by using the first few\n\n layers of the relevant network. Thus, perhaps it is not about the robustness of\n\n for style transfer? \n\n To test this hypothesis, we measure the robustness of a layer fff as:\n\n \n\nR(f)=Ex1∼D[maxx′∥f(x′)−f(x1)∥2]Ex1,x2∼D[∥f(x1)−f(x2)∥2]\n\n {\\mathbb{E}\\_{x\\_1, x\\_2 \\sim D}\\left[\\|f(x\\_1) - f(x\\_2)\\|\\_2\\right]}\n\n R(f)=Ex1​,x2​∼D​[∥f(x1​)−f(x2​)∥2​]Ex1​∼D​[maxx′​∥f(x′)−f(x1​)∥2​]​\n\n Essentially, this quantity tells us how much we can change the\n\n representations are between images in general. 
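As a rough illustration of how R(f) can be estimated in practice, here is a short sketch of our own (not the code behind the figures; the perturbation budget, step size, and number of image pairs are illustrative). It maximizes the representation shift within an L2 ball by projected gradient ascent, then normalizes by the typical distance between representations of unrelated images.

```python
import torch

def layer_robustness(f, images, eps=0.25, steps=10, lr=0.05, pairs=256):
    """Estimate R(f) for a layer f mapping an image batch to activations."""
    # Numerator: E_x [ max_{||x'-x||_2 <= eps} ||f(x') - f(x)||_2 ], via PGD.
    numerator = 0.0
    for x in images:
        x = x.unsqueeze(0)
        base = f(x).detach().flatten(1)
        # Small random start keeps the norm differentiable at the first step.
        delta = (1e-3 * torch.randn_like(x)).requires_grad_(True)
        for _ in range(steps):
            shift = (f(x + delta).flatten(1) - base).norm()
            grad, = torch.autograd.grad(shift, delta)
            with torch.no_grad():
                delta += lr * grad                 # ascent: maximize the shift
                n = delta.norm()
                if n > eps:
                    delta.mul_(eps / n)            # project onto the L2 ball
        numerator += (f(x + delta).flatten(1) - base).norm().item()
    numerator /= len(images)
    # Denominator: E_{x1, x2} [ ||f(x1) - f(x2)||_2 ] over random image pairs.
    idx = torch.randint(len(images), (pairs, 2))
    distances = [
        (f(images[i].unsqueeze(0)).flatten(1)
         - f(images[j].unsqueeze(0)).flatten(1)).norm().item()
        for i, j in idx.tolist()
    ]
    return numerator / (sum(distances) / len(distances))
```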
We've plotted this value for the first few layers in a couple of different networks below:

*Figure: the robustness R(f) of the first four layers of VGG16, AlexNet, and robust/standard ResNet-50 trained on ImageNet.*

Here, it becomes clear that the first few layers of VGG and AlexNet are actually almost as robust as the first few layers of the robust ResNet! This is perhaps a more convincing indication that robustness might have something to do with VGG's success in style transfer after all.

Finally, suppose we restrict style transfer to only use a single layer of the network when computing the style loss. (Usually style transfer uses several layers in the loss function to get the most visually appealing results — here we're only interested in whether or not style transfer works, i.e. actually confers some style onto the image.) Again, the more robust layers seem to indeed work better for style transfer! Since all of the layers in the robust ResNet are robust, style transfer yields non-trivial results even using the last layer alone. Conversely, VGG and AlexNet seem to excel in the earlier layers (where they are non-trivially robust) but fail when using exclusively later (non-robust) layers:

*Figure: style transfer using a single layer. The names of the layers and their robustness R(f) are printed below each style transfer result. We find that for both networks, the robust layers seem to work (for the robust ResNet, every layer is robust).*

Of course, there is much more work to be done here, but we are excited to see further work into understanding the role of both robustness and the VGG architecture in network-based image manipulation.

### Adversarial Examples are Just Bugs, Too (Preetum Nakkiran)

**Response Summary**: We did not intend to claim that adversarial examples arise exclusively from useful features, and we appreciate the comment providing an explicit example of adversarial examples that arise from "bugs".

**Response**: As mentioned [above](#nonclaim1), we did not intend to claim that adversarial examples arise *exclusively* from (useful) features but rather that useful non-robust features exist and are thus (at least partially) responsible for adversarial vulnerability. In fact, prior work already shows how in theory adversarial examples can arise from insufficient samples or finite-sample overfitting, and the experiments presented here (particularly, the adversarial squares) constitute a neat real-world demonstration of these facts.

Our main thesis — that "adversarial examples will not just go away as we fix bugs in our models" — is not contradicted by adversarial examples stemming from "bugs." As long as adversarial examples can stem from non-robust features (which the commenter seems to agree with), fixing these bugs will not solve the problem of adversarial examples.

Moreover, with regards to feature "leakage" from PGD, recall that in our $\widehat{\mathcal{D}}_{det}$ dataset, the non-robust features are associated with the correct label whereas the robust features are associated with the wrong one. We wanted to emphasize that, as shown in [Appendix D.6](LINK), models trained on our $\widehat{\mathcal{D}}_{det}$ dataset actually generalize *better* to the non-robust feature-label association than to the robust feature-label association. In contrast, if PGD introduced a small "leakage" of non-robust features, then we would expect the trained model to still predominantly use the robust feature-label association.
That said, the experiments cleverly zoom in on some more fine-grained nuances in our understanding of adversarial examples. One particular thing that stood out to us is that by creating a set of adversarial examples that are *explicitly* non-transferable, one also prevents new classifiers from learning features from that dataset. This finding thus makes the connection between transferability of adversarial examples and their containing generalizing features even stronger! Indeed, we can add the constructed dataset into our "$\widehat{\mathcal{D}}_{det}$ learnability vs transferability" plot (Figure 3 in the paper) — the point corresponding to this dataset fits neatly onto the trendline!

### Learning from Incorrectly Labeled Data (Eric Wallace)

**Response Summary**: These are interesting experiments; we read them as another instance of "feature distillation" — models learning features from labels — which is precisely our hypothesis in these settings.

**Response**: Since our experiments work across different architectures, "distillation" in weight space cannot arise. Thus, from what we understand, the "distillation" hypothesis suggested here is referring to "feature distillation" (i.e. getting models which use the same features as the original), which is actually precisely our hypothesis too. Notably, this feature distillation would not be possible unless the underlying features are good for classification (see [World 1](#world1) and [World 2](#world2)) — in that case, the distilled model would only use features that generalize poorly, and would thus generalize poorly itself.

Moreover, we would argue that in the experiments presented (learning from mislabeled data), the same kind of distillation is happening. For instance, a moderately accurate model might associate "green background" with "frog" thus labeling "green" images as "frogs" (e.g., the horse in the comment's figure). Training a new model on this dataset will thus also associate "green" with "frog" (similar to the mislabeled Fashion-MNIST experiment in the comment). This corresponds exactly to learning features from labels, akin to how deep networks "distill" a good decision boundary from human annotators. In fact, we find these experiments a very interesting illustration of feature distillation that complements our findings.

We also note that an analogy to logistic regression here is only possible due to the low VC-dimension of linear classifiers (namely, these classifiers have dimension $d$). In particular, given any classifier with VC-dimension $k$, we need at least $k$ points to fully specify it. In contrast, neural networks have been shown to have extremely large VC-dimension (in particular, bigger than the size of the training set). So even though labelling $d+1$ random points model-consistently is sufficient to recover a linear model, it is not necessarily sufficient to recover a deep neural network. For instance, Milli et al. are not able to reconstruct a ResNet-18 using only its predictions on random Gaussian inputs. (Note that we are using a ResNet-50 in our experiments.)

Finally, it seems that the only potentially problematic explanation for our experiments (namely, that enough model-consistent points can recover a classifier) is disproved by the experiments in the comment. In particular, Preetum is able to design a dataset where training on mislabeled inputs *that are model-consistent* does not at all recover the decision boundary of the original model.
More generally, the "model distillation" perspective raised here is unable to distinguish between the dataset created by Preetum below, and those created with standard PGD (as in our $\widehat{\mathcal{D}}_{det}$ and $\widehat{\mathcal{D}}_{rand}$ datasets).

", "bibliography_bib": [{"title": "A boundary tilting persepective on the phenomenon of adversarial examples"}, {"title": "Adversarially Robust Generalization Requires More Data"}, {"title": "Adversarial spheres"}, {"title": "Adversarial vulnerability for any classifier"}, {"title": "The curse of concentration in robust learning: Evasion and poisoning attacks from concentration of measure"}, {"title": "Are adversarial examples inevitable?"}, {"title": "Understanding deep learning requires rethinking generalization"}, {"title": "Model reconstruction from model explanations"}], "filename": "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features' Discussion and Author Responses.html", "id": "5b85b47b4f1ddbf76271ccc247361175"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Why Momentum Really Works", "authors": ["Gabriel Goh"], "date_published": "2017-04-04", "abstract": " ", "journal_ref": "distill-pub", "doi": "https://doi.org/10.1016/0041-5553(64)90137-5", "text":

Why Momentum Really Works
=========================

*Interactive figure: iterates on an example problem, marking the starting point, the optimum, and the solution path, with sliders for the step-size α and the momentum β.*

We often think of Momentum as a means of dampening oscillations and speeding up the iterations, leading to faster convergence. But it has other interesting behavior. It allows a larger range of step-sizes to be used, and creates its own oscillations.
What is going on?

[Gabriel Goh](http://gabgoh.github.io/), [UC Davis](http://math.ucdavis.edu/). April 4, 2017. [Citation: Goh, 2017](#citation)

Here's a popular story about momentum [1, 2, 3]: gradient descent is a man walking down a hill. He follows the steepest path downwards; his progress is slow, but steady. Momentum is a heavy ball rolling down the same hill.
The added inertia acts both as a smoother and an accelerator, dampening oscillations and causing us to barrel through narrow valleys, small humps and local minima.

This standard story isn't wrong, but it fails to explain many important behaviors of momentum. In fact, momentum can be understood far more precisely if we study it on the right model.

One nice model is the convex quadratic. This model is rich enough to reproduce momentum's local dynamics in real problems, and yet simple enough to be understood in closed form. This balance gives us powerful traction for understanding this algorithm.

---

We begin with gradient descent. The algorithm has many virtues, but speed is not one of them. It is simple — when optimizing a smooth function f, we make a small step in the gradient,

$$ w^{k+1} = w^k - \alpha \nabla f(w^k). $$

For a step-size small enough, gradient descent makes a monotonic improvement at every iteration. It always converges, albeit to a local minimum. And under a few weak curvature conditions it can even get there at an exponential rate.

But the exponential decrease, though appealing in theory, can often be infuriatingly small. Things often begin quite well — with an impressive, almost immediate decrease in the loss. But as the iterations progress, things start to slow down. You start to get a nagging feeling you're not making as much progress as you should be. What has gone wrong?

The problem is often one of pathological curvature: regions of f which aren't scaled properly. The landscapes are often described as valleys, trenches, canals and ravines. The iterates either jump between valleys, or approach the optimum in small, timid steps. Progress along certain directions grinds to a halt. In these unfortunate regions, gradient descent fumbles.

Momentum proposes the following small tweak to gradient descent:

$$ \begin{aligned} z^{k+1} &= \beta z^{k} + \nabla f(w^{k}) \\ w^{k+1} &= w^{k} - \alpha z^{k+1} \end{aligned} $$

The change is innocent, and costs almost nothing. When β = 0, we recover gradient descent. But for β = 0.99 (sometimes 0.999, if things are really bad), this appears to be the boost we need. Our iterations regain the speed and boldness they had lost, speeding to the optimum with a renewed energy.

Optimizers call this minor miracle "acceleration".
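Before digging deeper, here is a minimal NumPy sketch (not from the article) of the two updates side by side on a toy ill-conditioned quadratic; the matrix, step-size, momentum value and iteration count are arbitrary illustrative choices:

```python
import numpy as np

# Toy quadratic f(w) = 1/2 w^T A w - b^T w with an ill-conditioned A.
A = np.diag([1.0, 0.01])              # eigenvalues 1 and 0.01 (condition number 100)
b = np.zeros(2)

def gradient(w):
    return A @ w - b

def gradient_descent(w0, alpha, iters=200):
    w = w0.copy()
    for _ in range(iters):
        w = w - alpha * gradient(w)
    return w

def momentum(w0, alpha, beta, iters=200):
    w, z = w0.copy(), np.zeros_like(w0)
    for _ in range(iters):
        z = beta * z + gradient(w)    # short-term memory of past gradients
        w = w - alpha * z
    return w

w0 = np.array([1.0, 1.0])
print(np.linalg.norm(gradient_descent(w0, alpha=1.0)))    # ~0.13: slow along the 0.01 direction
print(np.linalg.norm(momentum(w0, alpha=1.0, beta=0.9)))  # orders of magnitude closer to the optimum
```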
The new algorithm may seem at first glance like a cheap hack. A simple trick to get around gradient descent's more aberrant behavior — a smoother for oscillations between steep canyons. But the truth, if anything, is the other way round. It is gradient descent which is the hack. First, momentum gives up to a quadratic speedup on many functions. This is no small matter — this is similar to the speedup you get from the Fast Fourier Transform, Quicksort, and Grover's Algorithm. When the universe gives you quadratic speedups, you should start to pay attention.

But there's more. A lower bound, courtesy of Nesterov [5], states that momentum is, in a certain very narrow and technical sense, optimal. Now, this doesn't mean it is the best algorithm for all functions in all circumstances. But it does satisfy some curiously beautiful mathematical properties which scratch a very human itch for perfection and closure. But more on that later. Let's say this for now — momentum is an algorithm for the book.

---

First Steps: Gradient Descent
-----------------------------

We study gradient descent on the convex quadratic,

$$ f(w) = \tfrac{1}{2} w^T A w - b^T w, \qquad w \in \mathbf{R}^n. $$

Assume A is symmetric and invertible, then the optimal solution w⋆ occurs at

$$ w^{\star} = A^{-1} b. $$

Simple as it is, this model is rich (linear regression, for example, takes this form) and captures all the key features of pathological curvature. And more importantly, we can write an exact closed formula for gradient descent on this function.

This is how it goes. Since ∇f(w) = Aw − b, the iterates are

$$ w^{k+1} = w^{k} - \alpha (A w^{k} - b). $$

Here's the trick. There is a very natural space to view gradient descent where all the dimensions act independently — the eigenvectors of A.

*Interactive figure: the same gradient descent iterates, viewed in the original coordinates and in the eigenbasis.*

Every symmetric matrix A has an eigenvalue decomposition

$$ A = Q \,\mathrm{diag}(\lambda_{1}, \ldots, \lambda_{n})\, Q^{T}, \qquad Q = [q_{1}, \ldots, q_{n}], $$

and, as per convention, we will assume that the λ_i's are sorted, from smallest λ_1 to biggest λ_n.
If we perform a change of basis, x^k = Q^T(w^k − w⋆), the iterations break apart, becoming:

$$ \begin{aligned} x_{i}^{k+1} &= x_{i}^{k} - \alpha \lambda_{i} x_{i}^{k} \\ &= (1-\alpha\lambda_{i})x^{k}_{i} = (1-\alpha \lambda_{i})^{k+1}x^{0}_{i} \end{aligned} $$

Moving back to our original space w, we can see that

$$ w^{k} - w^{\star} = Q x^{k} = \sum_i^n x^{0}_{i}(1-\alpha\lambda_{i})^{k} q_{i} $$

and there we have it — gradient descent in closed form.

### Decomposing the Error

The above equation admits a simple interpretation. Each element of x^0 is the component of the error in the initial guess in the Q-basis. There are n such errors, and each of these errors follows its own, solitary path to the minimum, decreasing exponentially with a compounding rate of 1 − αλ_i. The closer that number is to 1, the slower it converges.

For most step-sizes, the eigenvectors with largest eigenvalues converge the fastest. This triggers an explosion of progress in the first few iterations, before things slow down as the smaller eigenvectors' struggles are revealed. By writing the contributions of each eigenspace's error to the loss

$$ f(w^{k}) - f(w^{\star}) = \sum (1-\alpha\lambda_{i})^{2k} \lambda_{i} [x_{i}^{0}]^2 $$

we can visualize the contributions of each error component to the loss.

*Interactive figure: the loss decomposed into its error components, for a quadratic with eigenvalues λ_1 = 0.01, λ_2 = 0.1, and λ_3 = 1 respectively. At the initial point, the error in each component is equal.*
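The closed form above is easy to check numerically. A small sketch, using the same three eigenvalues as the figure and an initial error of one in every component (the step-size is an arbitrary admissible choice):

```python
import numpy as np

lambdas = np.array([0.01, 0.1, 1.0])   # eigenvalues of A, as in the figure above
x0 = np.ones(3)                        # error of the initial guess, in the Q-basis
alpha = 1.0                            # any step-size with 0 < alpha * lambda_i < 2

for k in [0, 10, 100, 1000]:
    per_component = (1 - alpha * lambdas) ** (2 * k) * lambdas * x0 ** 2
    print(k, per_component, per_component.sum())   # pieces of f(w^k) - f(w*), and their total
```

The large-eigenvalue component vanishes almost immediately, while the λ = 0.01 component barely moves — exactly the explosion-then-stagnation pattern described above.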
### Choosing A Step-size

The above analysis gives us immediate guidance as to how to set a step-size α. In order to converge, each component's rate |1 − αλ_i| must be strictly less than 1, which gives the condition

$$ 0 < \alpha\lambda_{i} < 2. $$

The overall convergence rate is determined by the slowest error component, which must be either λ_1 or λ_n:

$$ \begin{aligned} \text{rate}(\alpha) &= \max_{i}\left|1-\alpha\lambda_{i}\right| \\ &= \max\left\{|1-\alpha\lambda_{1}|,~ |1-\alpha\lambda_{n}|\right\} \end{aligned} $$

This overall rate is minimized when the rates for λ_1 and λ_n are the same — this mirrors our informal observation in the previous section that the optimal step-size causes the first and last eigenvectors to converge at the same rate.
If we work this through we get:

$$ \begin{aligned} \text{optimal }\alpha = \mathop{\text{argmin}}_\alpha ~\text{rate}(\alpha) &= \frac{2}{\lambda_{1}+\lambda_{n}} \\ \text{optimal rate} = \min_\alpha ~\text{rate}(\alpha) &= \frac{\lambda_{n}/\lambda_{1}-1}{\lambda_{n}/\lambda_{1}+1} \end{aligned} $$

Notice the ratio λ_n/λ_1 determines the convergence rate of the problem. In fact, this ratio appears often enough that we give it a name, and a symbol — the condition number.

$$ \text{condition number} := \kappa := \frac{\lambda_{n}}{\lambda_{1}} $$

The condition number has a concrete meaning: among other things, it measures how sensitive the solution A⁻¹b is to perturbations in b. For our purposes, κ = 1 is ideal, giving convergence in one step (of course, the function is trivial). Unfortunately the larger the ratio, the slower gradient descent will be. The condition number is therefore a direct measure of pathological curvature.
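A short sketch makes the dependence on the condition number concrete (the eigenvalues below are arbitrary; λ_1 is fixed at 1 while λ_n is varied):

```python
import numpy as np

def optimal_gd(lam_1, lam_n):
    alpha = 2.0 / (lam_1 + lam_n)
    rate = (lam_n / lam_1 - 1) / (lam_n / lam_1 + 1)
    return alpha, rate

for kappa in [10, 100, 1000, 10000]:
    alpha, rate = optimal_gd(1.0, float(kappa))
    iters_per_e = 1.0 / -np.log(rate)   # iterations to shrink the slowest error by a factor of e
    print(kappa, round(alpha, 5), round(rate, 5), round(iters_per_e, 1))
```

The number of iterations needed grows roughly linearly in κ — the pathology that momentum will attack.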
---

Example: Polynomial Regression
------------------------------

The above analysis reveals that errors are not all made equal: there are different kinds of error, n to be exact, one for each of the eigenvectors of A, and gradient descent is better at correcting some of them than others.

Let's see how this plays out in polynomial regression. Given 1D data, ξ_i, our problem is to fit the model

$$ \text{model}(\xi) = w_{1}p_{1}(\xi) + \cdots + w_{n}p_{n}(\xi), \qquad p_{i} = \xi \mapsto \xi^{i-1} $$

to our observations, d_i. This model, though nonlinear in the input ξ, is linear in the weights w.

*Interactive figure: the model as a weighted sum of the monomial features p_1, …, p_6, with weights you can scrub.*

Because of the linearity, we can fit this model to our data ξ_i using linear regression on the model mismatch

$$ \text{minimize}_{w} ~\tfrac{1}{2}\sum_{i}(\text{model}(\xi_{i})-d_{i})^{2} ~~=~~ \tfrac{1}{2}\|Zw-d\|^{2} $$

where

$$ Z = \begin{pmatrix} 1 & \xi_{1} & \xi_{1}^{2} & \ldots & \xi_{1}^{n-1}\\ 1 & \xi_{2} & \xi_{2}^{2} & \ldots & \xi_{2}^{n-1}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 1 & \xi_{m} & \xi_{m}^{2} & \ldots & \xi_{m}^{n-1} \end{pmatrix}. $$
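As a concrete sketch (with made-up data rather than the points used in the figures), here is Z and the spectrum of Z^T Z in NumPy:

```python
import numpy as np

xi = np.linspace(-1, 1, 10)                       # made-up 1-D inputs
d = np.sin(2 * xi) + 0.1 * np.random.randn(10)    # made-up observations
n = 6
Z = np.vander(xi, N=n, increasing=True)           # column i is p_i(xi) = xi**(i-1)

lams = np.linalg.eigvalsh(Z.T @ Z)                # the quadratic 1/2 ||Zw - d||^2 has A = Z^T Z
print("eigenvalues:", lams)
print("condition number:", lams[-1] / lams[0])    # grows rapidly with the polynomial degree
```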
The convergence of gradient descent on this problem, as we have seen, is governed by the basis Q (the eigenvectors of Z^T Z). So let's recast our regression problem in the basis of Q. First, we do a change of basis, by rotating w into Qw, and counter-rotating our feature maps p into eigenspace, p̄. We can now write the same model in the new basis:

$$ \text{model}(\xi) ~=~ x_{1}\bar{p}_{1}(\xi) ~+~ \cdots ~+~ x_{n}\bar{p}_{n}(\xi), \qquad \bar{p}_{i} = \sum q_{ij}p_{j}. $$

This model is identical to the old one. But these new features p̄ (which I call "eigenfeatures") and weights have the pleasing property that each coordinate acts independently of the others. Now our optimization problem breaks down, really, into n small 1D optimization problems. And each coordinate can be optimized greedily and independently, one at a time in any order, to produce the final, global, optimum. The eigenfeatures are also much more informative:

*Interactive figure: the same fit expressed in the eigenfeature basis p̄_1, …, p̄_6 — drag the data points to refit.*
The observations in the above diagram can be justified mathematically. From a statistical point of view, we would like a model which is, in some sense, robust to noise. Our model cannot possibly be meaningful if the slightest perturbation to the observations changes the entire model dramatically. And the eigenfeatures, the principal components of the data, give us exactly the decomposition we need to sort the features by their sensitivity to perturbations in the d_i's. The most robust components appear in the front (with the largest eigenvalues), and the most sensitive components in the back (with the smallest eigenvalues).

This measure of robustness, by a rather convenient coincidence, is also a measure of how easily an eigenspace converges. And thus, the "pathological directions" — the eigenspaces which converge the slowest — are also those which are most sensitive to noise! So starting at a simple initial point like 0 (by a gross abuse of language, let's think of this as a prior), we track the iterates till a desired level of complexity is reached. Let's see how this plays out in gradient descent.

*Interactive figure: the fit produced by gradient descent after a chosen number of iterations, starting from x = w = 0, expressed in the eigenfeature basis.*
This effect is harnessed with the heuristic of early stopping: by stopping the optimization early, you can often get better generalizing results. Indeed, the effect of early stopping is very similar to that of more conventional methods of regularization, such as Tikhonov Regression. Both methods try to suppress the components of the smallest eigenvalues directly, though they employ different methods of spectral decay. But early stopping has a distinct advantage. Once the step-size is chosen, there are no regularization parameters to fiddle with. Indeed, in the course of a single optimization, we have the entire family of models, from underfitted to overfitted, at our disposal. This gift, it seems, doesn't come at a price. A beautiful free lunch [7] indeed.
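A minimal sketch of the heuristic, on a made-up polynomial-regression problem (the data, degree, split and iteration count are all arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
xi = rng.uniform(-1, 1, 30)
d = np.sin(2 * xi) + 0.2 * rng.standard_normal(30)      # noisy observations
Z = np.vander(xi, N=8, increasing=True)                 # degree-7 polynomial features
train, val = np.arange(0, 20), np.arange(20, 30)        # crude train / held-out split

w = np.zeros(8)                                          # start at the "prior" w = 0
alpha = 1.0 / np.linalg.eigvalsh(Z[train].T @ Z[train])[-1]
best_w, best_err = w.copy(), np.inf
for k in range(5000):
    w -= alpha * Z[train].T @ (Z[train] @ w - d[train])  # gradient step on 1/2||Zw - d||^2
    err = np.mean((Z[val] @ w - d[val]) ** 2)
    if err < best_err:
        best_w, best_err = w.copy(), err                 # keep the early-stopped model
print(best_err, np.mean((Z[val] @ w - d[val]) ** 2))     # early-stopped vs last-iterate error
```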
---

The Dynamics of Momentum
------------------------

Let's turn our attention back to momentum. Recall that the momentum update is

$$ \begin{aligned} z^{k+1} &= \beta z^{k} + \nabla f(w^{k}) \\ w^{k+1} &= w^{k} - \alpha z^{k+1}. \end{aligned} $$

Since ∇f(w^k) = Aw^k − b, the update on our quadratic becomes

$$ \begin{aligned} z^{k+1} &= \beta z^{k} + (Aw^{k}-b) \\ w^{k+1} &= w^{k} - \alpha z^{k+1}. \end{aligned} $$

Following [8], we go through the same motions, with the change of basis x^k = Q^T(w^k − w⋆) and y^k = Q^T z^k, to yield the update rule

$$ \begin{aligned} y_{i}^{k+1} &= \beta y_{i}^{k} + \lambda_{i}x_{i}^{k} \\ x_{i}^{k+1} &= x_{i}^{k} - \alpha y_{i}^{k+1}, \end{aligned} $$

in which each component acts independently of the other components (though x_i^k and y_i^k are coupled). This lets us rewrite our iterates as

$$ \begin{pmatrix} y_{i}^{k}\\ x_{i}^{k} \end{pmatrix} = R^{k} \begin{pmatrix} y_{i}^{0}\\ x_{i}^{0} \end{pmatrix}, \qquad R = \begin{pmatrix} \beta & \lambda_{i}\\ -\alpha\beta & 1-\alpha\lambda_{i} \end{pmatrix}. $$

There are many ways of taking a matrix to the kth power. But for a 2×2 matrix there is an elegant and instructive formula in terms of its two eigenvalues, σ_1 and σ_2:

$$ R^{k} = \begin{cases} \sigma_{1}^{k}R_{1} - \sigma_{2}^{k}R_{2} & \sigma_{1} \neq \sigma_{2}\\ \sigma_{1}^{k}(kR/\sigma_{1} - (k-1)I) & \sigma_{1} = \sigma_{2} \end{cases}, \qquad R_{j} = \frac{R-\sigma_{j}I}{\sigma_{1}-\sigma_{2}}. $$

This formula is rather complicated, but the takeaway here is that it plays the exact same role the individual convergence rates, 1 − αλ_i, do in gradient descent. But instead of one geometric series, we have two coupled series, which may have real or complex values. The convergence rate is therefore the slowest of the two rates, max{|σ_1|, |σ_2|}.
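This rate is straightforward to compute numerically from R. A small sketch (the parameter values are arbitrary):

```python
import numpy as np

def momentum_rate(alpha, beta, lam):
    R = np.array([[beta, lam],
                  [-alpha * beta, 1 - alpha * lam]])
    sigma = np.linalg.eigvals(R)            # sigma_1 and sigma_2, possibly complex
    return np.abs(sigma).max()              # the slower of the two geometric series

print(momentum_rate(alpha=0.02, beta=0.0, lam=1.0))   # beta = 0 recovers 1 - alpha*lambda = 0.98
print(momentum_rate(alpha=0.02, beta=0.9, lam=1.0))   # ~0.949: faster, with the same step-size
```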
By plotting this out, we see there are distinct regions of the parameter space which reveal a rich taxonomy of convergence behavior [10]:

*Interactive figure: the convergence rate max{|σ_1|, |σ_2|} over the (α, β) plane, with example trajectories of x_i^k − x_i^\*. Recoverable panels include **1-Step Convergence** (when α = 1/λ_i and β = 0), **Monotonic Oscillations** (when α > 1/λ_i, the iterates flip between positive and negative values), and **Divergence** (when the rate exceeds 1).*

For what values of α and β does momentum converge?
Since we need both σ_1 and σ_2 to converge, our convergence criterion is now max{|σ_1|, |σ_2|} < 1. The range of step-sizes which achieve this works out to be

$$ 0 < \alpha\lambda_{i} < 2 + 2\beta \qquad \text{for} \qquad 0 \le \beta < 1. $$

We recover the previous result for gradient descent when β = 0.

---

### The Critical Damping Coefficient

The true magic happens, however, when we find the sweet spot of α and β. Let us try to first optimize over β.

Momentum admits an interesting physical interpretation when α is small [11]: it is a discretization of a damped harmonic oscillator. Consider a physical simulation operating in discrete time (like a video game). The update

$$ y_{i}^{k+1} = \beta y_{i}^{k} + \lambda_{i}x_{i}^{k} $$

can be read physically. We can think of −y_i^k as the particle's **velocity**, which is dampened at each step by the factor β (the term βy_i^k), and perturbed by an external force field λ_i x_i^k. And x_i is our particle's **position**, which is moved at each step by a small amount in the direction of the velocity:

$$ x_{i}^{k+1} = x_{i}^{k} - \alpha y_{i}^{k+1}. $$

*Interactive figure: the particle's trajectory for various choices of damping β (horizontal axis) and force strength λ (vertical axis). When λ_i = 0 and β = 1, the object moves at constant speed; the external force causes the particle to return to the origin.*
Combining damping and the force field, the particle behaves like a damped harmonic oscillator, returning lazily to equilibrium.

This system is best imagined as a weight suspended on a spring. We pull the weight down by one unit, and we study the path it follows as it returns to equilibrium. In the analogy, the spring is the source of our external force λ_i x_i^k, and equilibrium is the state where both the position x_i^k and the speed y_i^k are 0. The choice of β crucially affects the rate of return to equilibrium.

*Interactive figure: the return to equilibrium for a range of damping coefficients β, from overdamped to underdamped, marking where each trajectory reaches the optimum.*

**Overdamping.** When β is too small (e.g. in gradient descent, β = 0), we are over-damping: friction dominates, and the particle creeps back to the optimum without ever overshooting.

**Critical damping.** The sweet spot occurs when the two eigenvalues of R are repeated — the point of critical damping, where the return to equilibrium is fastest.

**Underdamping.** When β is too large we're under-damping. Here the resistance is too small, and the spring oscillates up and down forever, missing the optimal value over and over.

The critical value of β = (1 − √(αλ_i))² gives us a convergence rate (in the i-th eigenspace) of 1 − √(αλ_i) — a square root improvement over gradient descent's 1 − αλ_i! Alas, this only applies to the error in the i-th eigenspace, with α fixed.

### Optimal parameters

To get a global convergence rate, we must optimize over both α and β.
This is a more complicated affair, but the optimal values work out to be

$$ \alpha = \left(\frac{2}{\sqrt{\lambda_{1}}+\sqrt{\lambda_{n}}}\right)^{2} \qquad \beta = \left(\frac{\sqrt{\lambda_{n}}-\sqrt{\lambda_{1}}}{\sqrt{\lambda_{n}}+\sqrt{\lambda_{1}}}\right)^{2} $$

Plug this into the convergence rate, and you get

$$ \frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1} \;\; \text{(convergence rate, momentum)} \qquad\qquad \frac{\kappa-1}{\kappa+1} \;\; \text{(convergence rate, gradient descent)} $$

With barely a modicum of extra effort, we have essentially square rooted the condition number! These gains, in principle, require explicit knowledge of λ_1 and λ_n. But the formulas suggest a simple guideline, since the optimal β approaches 1 as the conditioning worsens. So set β as close to 1 as you can, and then find the highest α which still converges.
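Here is a small sketch of these formulas for a problem with condition number 100 (the eigenvalues themselves are arbitrary):

```python
import numpy as np

def optimal_momentum(lam_1, lam_n):
    alpha = (2.0 / (np.sqrt(lam_1) + np.sqrt(lam_n))) ** 2
    beta = ((np.sqrt(lam_n) - np.sqrt(lam_1)) / (np.sqrt(lam_n) + np.sqrt(lam_1))) ** 2
    return alpha, beta

lam_1, lam_n = 1.0, 100.0
kappa = lam_n / lam_1
alpha, beta = optimal_momentum(lam_1, lam_n)
print(alpha, beta)                                    # ~0.033 and ~0.669
print((np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1))    # momentum rate, ~0.818
print((kappa - 1) / (kappa + 1))                      # gradient descent rate, ~0.980
```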
We can do the same error decomposition here with momentum, with eigenvalues λ_1 = 0.01, λ_2 = 0.1, and λ_3 = 1.

*Interactive figure: f(w^k) − f(w⋆) and its per-eigenvalue components under momentum, with sliders for α and β and a marker at the optimal parameters.*

Note that the optimal parameters do not necessarily imply the fastest convergence, though, only the fastest asymptotic convergence rate.

While the loss function of gradient descent had a graceful, monotonic curve, optimization with momentum displays clear oscillations. These ripples are not restricted to quadratics, and occur in all kinds of functions in practice. They are not cause for alarm, but are an indication that extra tuning of the hyperparameters is required.

---

Example: The Colorization Problem
---------------------------------

Consider the following problem. Let G be the graph with vertices as pixels, E the set of edges connecting neighboring pixels, and D a small set of distinguished pixels whose color we want to spread over the whole image. We pose this as the problem

$$ \text{minimize} \quad \underbrace{\tfrac{1}{2}\sum_{i \in D}(w_{i}-1)^{2}}_{\text{colorizer}} \;+\; \underbrace{\tfrac{1}{2}\sum_{(i,j) \in E}(w_{i}-w_{j})^{2}}_{\text{smoother}} $$

where the **colorizer** pulls the distinguished pixels towards 1, and the **smoother** spreads out the color.

The optimal solution to this problem is a vector of all 1's. An inspection of the gradient iteration reveals why we take a long time to get there. The gradient step, for each component, is some form of weighted average of the current value and its neighbors:

$$ w_{i}^{k+1} = w_{i}^{k} - \alpha\sum_{j \in N(i)}(w_{i}^{k}-w_{j}^{k}) - \begin{cases} \alpha(w_{i}^{k}-1) & i \in D\\ 0 & i \notin D \end{cases} $$

This kind of local averaging is effective at smoothing out local variations in the pixels, but poor at taking advantage of global structure. The updates are akin to a drop of ink, diffusing through water. Movement towards equilibrium is made only through local corrections and so, left undisturbed, its march towards the solution is slow and laborious. Fortunately, momentum speeds things up significantly.

The eigenvectors of this problem behave like frequencies in R^n: the smallest eigenvalues correspond to the low frequencies, hence gradient descent corrects high frequency errors well but not low frequency ones.

*Interactive figure: the colorization iterates and their error decomposition, with the optimal parameters marked.*

In vectorized form, the colorization problem is

$$ \text{minimize} \quad \tfrac{1}{2}x^{T}L_{G}\,x \;+\; \tfrac{1}{2}\sum_{i \in D}\left(e_{i}^{T}x - 1\right)^{2}. $$

The **smoother**'s quadratic form is the **Graph Laplacian**, ½ x^T L_G x, and the **colorizer** is a small low-rank correction with a linear term (here e_i is the i-th unit vector).
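To make this concrete, here is a sketch of the objective on a small path graph with a single distinguished pixel, comparing gradient descent with momentum (graph size, step-size and momentum are arbitrary choices):

```python
import numpy as np

n = 50
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0                    # graph Laplacian of a path
D = np.zeros(n); D[0] = 1.0                  # one distinguished pixel at the end

def grad(w):                                 # gradient of 1/2 w^T L w + 1/2 sum_D (w_i - 1)^2
    return L @ w + D * (w - 1.0)

def run(alpha, beta, iters=2000):
    w, z = np.zeros(n), np.zeros(n)
    for _ in range(iters):
        z = beta * z + grad(w)
        w = w - alpha * z
    return np.linalg.norm(w - 1.0)           # distance from the all-ones solution

print(run(alpha=0.4, beta=0.0))              # gradient descent: the color diffuses slowly
print(run(alpha=0.4, beta=0.95))             # momentum gets much closer in the same budget
```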
The Laplacian matrix, $L_G$ 8, which dominates the behavior of the optimization problem, is a valuable bridge between linear algebra and graph theory. This is a rich field of study, but one fact is pertinent to our discussion here. The conditioning of $L_G$, here defined as the ratio of the second eigenvalue to the last (the first eigenvalue is always $0$, with eigenvector equal to the vector of all $1$'s), is directly connected to the connectivity of the graph.

* Small world graphs, like expanders and dense graphs, have excellent conditioning.
* The conditioning of grids improves with their dimensionality.
* And long, wiry graphs, like paths, condition poorly.

[Figure: an expander, a grid, and a path graph, illustrating the three cases above.]

These observations carry through to the colorization problem, and the intuition behind them should be clear. Well connected graphs allow rapid diffusion of information through the edges, while graphs with poor connectivity do not. And this principle, taken to the extreme, furnishes a class of functions so hard to optimize they reveal the limits of first order optimization.
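The connection between connectivity and conditioning is easy to check numerically. Below is a short sketch (assumed toy graphs of our own choosing; the complete graph stands in for an expander) that builds each Laplacian and compares the conditioning as defined above:

```python
import numpy as np

# Conditioning of the graph Laplacian L_G: ratio of its second eigenvalue to its last.
def laplacian(adj):
    return np.diag(adj.sum(1)) - adj

def conditioning(adj):
    lam = np.sort(np.linalg.eigvalsh(laplacian(adj)))
    return lam[1] / lam[-1]          # lam[0] is always 0

n = 16
path = np.zeros((n, n)); idx = np.arange(n - 1)
path[idx, idx + 1] = path[idx + 1, idx] = 1

complete = np.ones((n, n)) - np.eye(n)   # a dense stand-in for a well-connected graph

grid = np.zeros((n, n))                  # a 4x4 grid
for i in range(n):
    r, c = divmod(i, 4)
    for dr, dc in [(0, 1), (1, 0)]:
        rr, cc = r + dr, c + dc
        if rr < 4 and cc < 4:
            j = rr * 4 + cc
            grid[i, j] = grid[j, i] = 1

for name, adj in [("path", path), ("grid", grid), ("complete", complete)]:
    print(f"{name:8s} conditioning = {conditioning(adj):.4f}")
# Long, wiry paths condition poorly (ratio near 0); dense graphs condition well (near 1).
```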
---

The Limits of Descent
---------------------

Let's take a step back. We have, with a clever trick, improved the convergence of gradient descent by a quadratic factor with the introduction of a single auxiliary sequence. But is this the best we can do? Could we improve convergence even more with two sequences? Could one perhaps choose the $\alpha$'s and $\beta$'s more intelligently?

Unfortunately, while improvements to the momentum algorithm do exist, they all run into a certain, critical, almost inescapable lower bound.

### Adventures in Algorithmic Space

To understand the limits of what we can do, we must first formally define the algorithmic space in which we are searching. Here's one possible definition. The observation we will make is that both gradient descent and momentum can be "unrolled". Indeed, since

$$\begin{array}{lll}
 w^{1} & = & w^{0} \,-\, \alpha\nabla f(w^{0})\\[0.35em]
 w^{2} & = & w^{1} \,-\, \alpha\nabla f(w^{1})\\[0.35em]
       & = & w^{0} \,-\, \alpha\nabla f(w^{0}) \,-\, \alpha\nabla f(w^{1})\\[0.35em]
       & \vdots &
\end{array}$$

we can write gradient descent as

$$w^{k+1} \;=\; w^{0} \,-\, \alpha\sum_{i}^{k}\nabla f(w^{i}).$$

A similar trick can be done with momentum:

$$w^{k+1} \;=\; w^{0} \,-\, \alpha\sum_{i}^{k}\frac{1-\beta^{k+1-i}}{1-\beta}\nabla f(w^{i}).$$

In fact, all manner of first order algorithms, including the Conjugate Gradient algorithm, AdaMax, Averaged Gradient and more, can be written (though not quite so neatly) in this unrolled form. Therefore the class of algorithms for which

$$w^{k+1} \;=\; w^{0} \,+\, \sum_{i}^{k}\gamma_{i}^{k}\nabla f(w^{i}) \qquad \text{for some } \gamma_{i}^{k}$$

contains momentum, gradient descent and a whole bunch of other algorithms you might dream up. This is what is assumed in Assumption 2.1.4 of Nesterov. Allowing the weights to differ per coordinate gives a slightly broader class,

$$w^{k+1} \;=\; w^{0} \,+\, \sum_{i}^{k}\Gamma_{i}^{k}\nabla f(w^{i}) \qquad \text{for some diagonal matrix } \Gamma_{i}^{k}.$$

This class of methods covers most of the popular algorithms for training neural networks, including ADAM and AdaGrad. We shall refer to this class of methods as "Linear First Order Methods", and we will show a single function all these methods ultimately fail on.
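The unrolled coefficients for momentum are easy to verify numerically. Here is a brief sketch (our own check, on an assumed toy quadratic) comparing the recursive and unrolled forms:

```python
import numpy as np

# Check that momentum iterates match the "unrolled" weighted-sum-of-gradients form above.
A = np.diag([0.1, 1.0, 5.0])
grad = lambda w: A @ w

alpha, beta, steps = 0.1, 0.9, 20
w0 = np.array([1.0, -2.0, 0.5])

# Standard momentum recursion: z <- beta*z + grad(w); w <- w - alpha*z.
w, z = w0.copy(), np.zeros(3)
grads = []
for k in range(steps):
    g = grad(w); grads.append(g)
    z = beta * z + g
    w = w - alpha * z

# Unrolled form: w^{k+1} = w^0 - alpha * sum_i (1 - beta^{k+1-i}) / (1 - beta) * grad(w^i)
k = steps - 1
w_unrolled = w0 - alpha * sum(
    (1 - beta ** (k + 1 - i)) / (1 - beta) * grads[i] for i in range(steps)
)
print(np.allclose(w, w_unrolled))   # True
```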
### The Resisting Oracle

Earlier, when we talked about the colorizer problem, we observed that wiry graphs cause bad conditioning in our optimization problem. Taking this to its extreme, we can look at a graph consisting of a single path — a function so badly conditioned that Nesterov called a variant of it "the worst function in the world". The function follows the same structure as the colorizer problem, and we shall call this the Convex Rosenbrock,

$$f^{n}(w) \;=\; \frac{1}{2}\left(w_{1}-1\right)^{2} \;+\; \frac{1}{2}\sum_{i=1}^{n-1}\left(w_{i}-w_{i+1}\right)^{2} \;+\; \frac{2}{\kappa-1}\|w\|^{2},$$

with a colorizer of one node, strong couplings of adjacent nodes in the path, and a small regularization term.

The optimal solution of this problem is

$$w_{i}^{\star}=\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^{i}$$

and the condition number of the problem $f^n$ approaches $\kappa$ as $n$ goes to infinity. The iterations start at $w^{0}=0$.

[Interactive figure: the first 50 iterates of momentum on the Convex Rosenbrock for a fixed $\kappa$, with adjustable step-size $\alpha$ and momentum $\beta$; each row shows the error and the weights at one iteration. The remaining expanding space is the "light cone" of our iterate's influence. Momentum does very well here with the optimal parameters.]
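To see this function as an ordinary quadratic, here is a small sketch (our own construction, assuming the objective above) that builds its Hessian and checks the conditioning and the optimum numerically:

```python
import numpy as np

# Convex Rosenbrock f^n as a quadratic 1/2 w^T H w - b^T w + const.
def convex_rosenbrock(n, kappa):
    H = np.zeros((n, n))
    for i in range(n - 1):                 # couplings (w_i - w_{i+1})^2
        H[i, i] += 1; H[i + 1, i + 1] += 1
        H[i, i + 1] -= 1; H[i + 1, i] -= 1
    H[0, 0] += 1                           # colorizer (w_1 - 1)^2
    H += (4.0 / (kappa - 1)) * np.eye(n)   # regularizer 2/(kappa-1) * ||w||^2
    b = np.zeros(n); b[0] = 1.0
    return H, b

kappa = 100.0
for n in [10, 100, 1000]:
    H, b = convex_rosenbrock(n, kappa)
    lam = np.linalg.eigvalsh(H)
    w_star = np.linalg.solve(H, b)
    closed_form = ((np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)) ** np.arange(1, n + 1)
    print(f"n={n:5d}  cond(H)={lam[-1]/lam[0]:8.2f}  "
          f"max|w* - closed form|={np.abs(w_star - closed_form).max():.2e}")
# As n grows, the condition number approaches kappa and the solution approaches the closed form above.
```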
The observations made in the above diagram are true for any Linear First Order algorithm. Let us prove this. First observe that each component of the gradient depends only on the values directly before and after it:

$$\nabla f(x)_{i}=2w_{i}-w_{i-1}-w_{i+1}+\frac{4}{\kappa-1}w_{i},\qquad i\neq 1.$$

Therefore the fact we start at $0$ guarantees that that component must remain stoically at zero till an element either before or after it turns nonzero. And therefore, by induction, for any linear first order algorithm,

$$\begin{array}{llllllllll}
 w^{0} & = & [~~0, & 0, & 0, & \ldots & 0, & 0, & \ldots & 0~]\\[0.35em]
 w^{1} & = & [~w_{1}^{1}, & 0, & 0, & \ldots & 0, & 0, & \ldots & 0~]\\[0.35em]
 w^{2} & = & [~w_{1}^{2}, & w_{2}^{2}, & 0, & \ldots & 0, & 0, & \ldots & 0~]\\[0.35em]
 & \vdots \\
 w^{k} & = & [~w_{1}^{k}, & w_{2}^{k}, & w_{3}^{k}, & \ldots & w_{k}^{k}, & 0, & \ldots & 0~].\\
\end{array}$$

Information needs at least $k$ steps to move from $w_{0}$ to $w_{k}$. We can therefore sum up the errors which cannot have changed yet 9:

$$\|w^{k}-w^{\star}\|_{\infty} \;\geq\; \max_{i\geq k+1}\left\{|w_{i}^{\star}|\right\} \;=\; \left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^{k+1} \;=\; \left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^{k}\|w^{0}-w^{\star}\|_{\infty}.$$

As $n$ gets large, the condition number of $f^n$ approaches $\kappa$. And the gap therefore closes; the convergence rate that momentum promises matches the best any linear first order algorithm can do. And we arrive at the disappointing conclusion that on this problem, we cannot do better.

Like many such lower bounds, this result must not be taken literally, but spiritually. It, perhaps, gives a sense of closure and finality to our investigation. But this is not the final word on first order optimization. This lower bound does not preclude the possibility, for example, of reformulating the problem to change the condition number itself! There is still much room for speedups, if you understand the right places to look.
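The "light cone" is easy to watch in action. Below is a short sketch (our own check, with an assumed $n$, $\kappa$, and arbitrary but stable hyperparameters) that runs momentum from $w^0 = 0$ on the Convex Rosenbrock and confirms that coordinate $i$ stays at zero until step $i$, alongside the resulting lower bound:

```python
import numpy as np

# Starting from w^0 = 0, any linear first-order method can have touched at most the first k
# coordinates after k steps, so the remaining error is bounded below by the tail of w*.
n, kappa = 30, 100.0
H = np.zeros((n, n))
for i in range(n - 1):
    H[i, i] += 1; H[i + 1, i + 1] += 1; H[i, i + 1] -= 1; H[i + 1, i] -= 1
H[0, 0] += 1
H += (4.0 / (kappa - 1)) * np.eye(n)
b = np.zeros(n); b[0] = 1.0
grad = lambda w: H @ w - b

alpha, beta = 0.5, 0.9            # arbitrary but stable hyperparameters
rate = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
w, z = np.zeros(n), np.zeros(n)
for k in range(1, 11):
    z = beta * z + grad(w)
    w = w - alpha * z
    assert np.allclose(w[k:], 0.0)            # coordinates beyond the light cone are still zero
    print(f"k={k:2d}   ||w - w*||_inf >= {rate ** (k + 1):.3f}")
```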
Momentum with Stochastic Gradients
----------------------------------

There is a final point worth addressing. All the discussion above assumes access to the true gradient — a luxury seldom afforded in modern machine learning. Computing the exact gradient requires a full pass over all the data, the cost of which can be prohibitively expensive. Instead, randomized approximations of the gradient, like minibatch sampling, are often used as a plug-in replacement of $\nabla f(w)$. We can write the approximation in two parts,

$$\nabla f(w) \;+\; \text{error}(w),$$

the sum of the true gradient and an approximation error, where the estimator is assumed unbiased, i.e. $\mathbf{E}[\text{error}(w)] = 0$.

It is helpful to think of our approximate gradient as the injection of a special kind of noise into our iteration. And using the machinery developed in the previous sections, we can deal with this extra term directly. On a quadratic, the error term cleaves cleanly into a separate term, where 10

$$\left(\begin{array}{c} y_{i}^{k}\\ x_{i}^{k} \end{array}\right) \;=\; R^{k}\left(\begin{array}{c} y_{i}^{0}\\ x_{i}^{0} \end{array}\right) \;+\; \sum_{j=1}^{k}R^{k-j}\left(\begin{array}{c} 1\\ -\alpha \end{array}\right)\epsilon_{i}^{j};$$

the noisy iterates are a sum of the noiseless, deterministic iterates and a decaying sum of the errors, where $\epsilon^{j} = Q \cdot \text{error}(w^{j})$.

The error term, $\epsilon^k$, with its dependence on the $w^k$, is a fairly hairy object. Following [10], we model this as independent 0-mean Gaussian noise. In this simplified model, the objective also breaks into two separable components, a sum of a deterministic error and a stochastic error 11, visualized here.

[Interactive figure: the expected objective value $\mathbf{E} f(w) - f(w^\star)$ decomposed into its deterministic and stochastic components over 140 iterations, for adjustable step-size $\alpha$ and momentum $\beta$; annotations mark the transient phase and the fine-tuning phase. The small black dots are a single run of stochastic gradient.]

As [1] observes, the optimization has two phases. In the initial transient phase the magnitude of the noise is smaller than the magnitude of the gradient, and Momentum still makes good progress. In the second, stochastic phase, the noise overwhelms the gradient, and momentum is less effective.
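The two phases are visible even in a very small simulation. Here is a sketch (assumed toy quadratic and Gaussian gradient noise, not the article's setup) of momentum with noisy gradients:

```python
import numpy as np

# Momentum makes rapid progress while the gradient dominates the noise,
# then stalls at a noise floor set by the step-size, the momentum and the noise level.
rng = np.random.default_rng(0)
lam = np.array([0.01, 0.1, 1.0])
f = lambda w: 0.5 * np.sum(lam * w ** 2)

alpha, beta, sigma = 0.3, 0.9, 0.05
w, z = np.ones_like(lam), np.zeros_like(lam)
for k in range(1, 501):
    g = lam * w + sigma * rng.standard_normal(3)   # true gradient + error(w)
    z = beta * z + g
    w = w - alpha * z
    if k in (1, 10, 50, 100, 500):
        print(f"step {k:3d}   f(w) = {f(w):.5f}")
```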
Note that there are a set of unfortunate tradeoffs which seem to pit the two components of error against each other. Lowering the step-size, for example, decreases the stochastic error, but also slows down the rate of convergence. And increasing momentum, contrary to popular belief, causes the errors to compound. Despite these undesirable properties, stochastic gradient descent with momentum has still been shown to have competitive performance on neural networks. As [1] has observed, the transient phase seems to matter more than the fine-tuning phase in machine learning. And in fact, it has been recently suggested [12] that this noise is a good thing — it acts as an implicit regularizer, which, like early stopping, prevents overfitting in the fine-tuning phase of optimization.

---

Onwards and Downwards
---------------------

The study of acceleration is seeing a small revival within the optimization community. If the ideas in this article excite you, you may wish to read [13], which fully explores the idea of momentum as the discretization of a certain differential equation. But other, less physical, interpretations exist: there is an algebraic interpretation of momentum in terms of approximating polynomials, and there are geometric interpretations connecting momentum to older methods, like the Ellipsoid method. But like the proverbial blind men feeling an elephant, momentum seems like something bigger than the sum of its parts. One day, hopefully soon, the many perspectives will converge into a satisfying whole.

", "bibliography_bib": null, "filename": "Why Momentum Really Works.html", "id": "92c3ad2ed891f7ca110169401a5eda4a"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Research Debt", "authors": ["Chris Olah", "Shan Carter"], "date_published": "2017-03-22", "abstract": " Achieving a research-level understanding of most topics is like climbing a mountain. Aspiring researchers must struggle to understand vast bodies of work that came before them, to learn techniques, and to gain intuition. Upon reaching the top, the new researcher begins doing novel work, throwing new stones onto the top of the mountain and making it a little taller for whoever comes next. 
", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00005", "text": "\n\n .mountain-illustration {\n\n margin-top: 20px;\n\n margin-bottom: 10px;\n\n width: 180%;\n\n margin-left: -40%;\n\n }\n\n @media (min-width: 768px) {\n\n .mountain-illustration {\n\n max-width: 1200px;\n\n margin-left: auto;\n\n width: 100%;\n\n }\n\n }\n\n dt-article > h1:first-of-type {\n\n margin-top: 0;\n\n }\n\n \n\nAchieving a research-level understanding of most topics is like \n\nclimbing a mountain. Aspiring researchers must struggle to understand \n\nvast bodies of work that came before them, to learn techniques, and to \n\ngain intuition. Upon reaching the top, the new researcher begins doing \n\nnovel work, throwing new stones onto the top of the mountain and making \n\nit a little taller for whoever comes next.\n\n \n\nMathematics is a striking example of this. For centuries, \n\ncountless minds have climbed the mountain range of mathematics and laid \n\nnew boulders at the top. Over time, different peaks formed, built on top\n\n of particularly beautiful results. Now the peaks of mathematics are so \n\nnumerous and steep that no person can climb them all. Even with a \n\nlifetime of dedicated effort, a mathematician may only enjoy some of \n\ntheir vistas.\n\n \n\nPeople expect the climb to be hard. It reflects the tremendous \n\nprogress and cumulative effort that’s gone into mathematics. The climb \n\nis seen as an intellectual pilgrimage, the labor a rite of passage. But \n\nthe climb could be massively easier. It’s entirely possible to build \n\n \n\nThe climb isn’t progress: the climb is a mountain of debt.\n\n \n\nThe Debt\n\n--------\n\nProgrammers talk about technical debt: there are ways to write \n\nsoftware that are faster in the short run but problematic in the long \n\nrun. Managers talk about institutional debt: institutions can grow \n\nquickly at the cost of bad practices creeping in. Both are easy to \n\naccumulate but hard to get rid of.\n\n \n\nResearch can also have debt. It comes in several forms:\n\n \n\n* **Poor Exposition** – Often, there is no good explanation of \n\nimportant ideas and one has to struggle to understand them. This problem\n\n is so pervasive that we take it for granted and don’t appreciate how \n\nmuch better things could be.\n\n* **Undigested Ideas** – Most ideas start off rough and hard to \n\nunderstand. They become radically easier as we polish them, developing \n\nthe right analogies, language, and ways of thinking.\n\n* **Bad abstractions and notation** – Abstractions and notation \n\nare the user interface of research, shaping how we think and \n\ncommunicate. Unfortunately, we often get stuck with the first formalisms\n\n to develop even when they’re bad. For example, an object with extra \n\nelectrons is negative, and pi is wrong.\n\n* **Noise** – Being a researcher is like standing in the middle \n\nof a construction site. Countless papers scream for your attention and \n\nthere’s no easy way to filter or summarize them.Because\n\n most work is explained poorly, it takes a lot of energy to understand \n\neach piece of work. For many papers, one wants a simple one sentence \n\nexplanation of it, but needs to fight with it to get that sentence. \n\nBecause the simplest way to get the attention of interested parties is \n\nto get everyone’s attention, we get flooded with work. Because we \n\nThe insidious thing about research debt is that it’s normal. 
\n\nEveryone takes it for granted, and doesn’t realize that things could be \n\ndifferent. For example, it’s normal to give very mediocre explanations \n\nof research, and people perceive that to be the ceiling of explanation \n\nquality. On the rare occasions that truly excellent explanations come \n\nalong, people see them as one-off miracles rather than a sign that we \n\ncould systematically be doing better.\n\n \n\nInterpretive Labor\n\n------------------\n\nThere’s a tradeoff between the energy put into explaining an idea, \n\nand the energy needed to understand it. On one extreme, the explainer \n\ncan painstakingly craft a beautiful explanation, leading their audience \n\nto understanding without even realizing it could have been difficult. On\n\n the other extreme, the explainer can do the absolute minimum and \n\n \n\nMany explanations are not one-to-one. People give lectures, \n\nwrite books, or communicate online. In these one-to-many cases, each \n\nmember of the audience pays the cost of understanding, even though the \n\n are coefficients for the trade off between energy on the explanation \n\nside and energy on the understanding side. That is a is the energy spent\n\n on explaining and b is the corresponding effort needed to understand. \n\n example, Christopher’s average blog post is read by over 100,000 \n\npeople; if he can save each reader just one second, he’s saved humanity \n\n30 hours.\n\n![](Research%20Debt_files/publish-one-many-crop.jpg)\n\nIn research, we often have a group of researchers all trying to \n\nunderstand each other. Just like before, the cost of explaining stays \n\nconstant as the group grows, but the cost of understanding increases \n\nwith each new member. At some size, the effort to understand everyone \n\nelse becomes too much. As a defense mechanism, people specialize, \n\nfocusing on a narrower area of interest. The maintainable size of the \n\nfield is controlled by how its members trade off the energy between \n\ncommunicating and understanding.\n\n \n\nResearch debt is the accumulation of missing interpretive \n\nlabor. It’s extremely natural for young ideas to go through a stage of \n\ndebt, like early prototypes in engineering. The problem is that we often\n\n stop at that point. Young ideas aren’t ending points for us to put in a\n\n paper and abandon. When we let things stop there the debt piles up. It \n\nbecomes harder to understand and build on each other’s work and the \n\nfield fragments.\n\n \n\nClear Thinking\n\n--------------\n\nIt’s worth being clear that research debt isn’t just about ideas \n\nnot being explained well. It’s a lack of digesting ideas – or, at least,\n\n a lack of the public version of ideas being digested.Often,\n\n some individuals have a much more developed version of an idea than is \n\npublicly shared. There are a lot of reasons for not sharing it (in \n\n \n\nDeveloping good abstractions, notations, visualizations, and so\n\n forth, is improving the user interfaces for ideas. This helps both with\n\n understanding ideas for the first time and with thinking clearly about \n\nthem. Conversely, if we can’t explain an idea well, that’s often a sign \n\nthat we don’t understand it as well as we could.\n\n \n\nIt shouldn’t be that surprising that these two largely go hand \n\nin hand. Part of thinking is having a conversation with ourselves.\n\n \n\nResearch Distillation\n\n---------------------\n\nResearch distillation is the opposite of research debt. 
It can be \n\nincredibly satisfying, combining deep scientific understanding, empathy,\n\n and design to do justice to our research and lay bare beautiful \n\ninsights.\n\n \n\nDistillation is also hard. It’s tempting to think of explaining\n\n an idea as just putting a layer of polish on it, but good explanations \n\noften involve transforming the idea. This kind of refinement of an idea \n\ncan take just as much effort and deep understanding as the initial \n\ndiscovery.\n\n \n\nThis leaves us with no easy way out. We can’t solve research \n\ndebt by having one person write a textbook: their energy is spread too \n\nthin to polish every idea from scratch. We can’t outsource distillation \n\nto less skilled non-experts: refining and explaining ideas requires \n\ncreativity and deep understanding, just as much as novel research.\n\n \n\nResearch distillation doesn’t have to be you, but it does have to be us.\n\n \n\nWhere are the Distillers?\n\n-------------------------\n\nLike the theoretician, the experimentalist or the research \n\nengineer, the research distiller is an integral role for a healthy \n\nresearch community. Right now, almost no one is filling it.\n\n \n\nWhy do researchers not work on distillation? One possibility is\n\n perverse incentives, like wanting your work to look difficult. Those \n\ncertainly exist, but we don’t think they’re the main factor.\n\n There are a lot of perverse incentives that push against explaining \n\nthings well, sharing data, and so forth. This is especially true when \n\nthe work you are doing isn’t that interesting or isn’t reproducible and \n\nyou want to obscure that. Or if you have a lot of competitors and don’t \n\nwant them to catch up. \n\nHowever, our experience is that most good \n\nresearchers don’t seem that motivated by these kind of factors. Instead,\n\n the main issue is that it isn’t worthwhile for them to divert energy \n\nfrom pursuing results to distill things. Perhaps things are different in\n\n other fields, or I’m not cynical enough.\n\nLots of people want to work on research distillation. \n\nUnfortunately, it’s very difficult to do so, because we don’t support \n\nthem.There is a strange kind of informal \n\nsupport for people working on research distillation. Christopher has \n\npersonally benefitted a great deal from this. But it’s unreliable and \n\nnot widely advertised, which makes it hard to build a career on.\n\nAn aspiring research distiller lacks many things that are easy\n\n to take for granted: a career path, places to learn, examples and role \n\nmodels. Underlying this is a deeper issue: their work isn’t seen as a \n\nreal research contribution. We need to fix this.\n\nAn Ecosystem for Distillation\n\n-----------------------------\n\nIf you are excited to distill ideas, seek clarity, and build \n\nbeautiful explanations, we are letting you down. You have something \n\nprecious to contribute and we aren’t supporting you the way we should.\n\n \n\n \n\n* Distill Infrastructure – Tools for making beautiful interactive essays.\n\nThis is just a start: there’s a lot more that needs to be done. A \n\ncomplete ecosystem for this kind of work needs several other components,\n\n including places where one can learn these skills and reliable sources \n\nof employment doing this kind of work. 
We’re optimistic that will come \n\nwith time.\n\n \n\n---\n\nFurther Reading\n\n---------------\n\n![](Research%20Debt_files/headstand.jpg)\n\n* **Visual Mathematics:**\n\n is particularly striking, but there are several lovely examples of new \n\n* **Explorable Explanations:**\n\n There’s a loose community exploring how the interactive medium \n\nenabled by computers can be used to communicate and think in ways \n\npreviously impossible. These ideas start, as many ideas in computing do,\n\n with work done by Douglas Engelbart and Alan Kay.\n\n \n\nThere are also explorations of how we can augment our ability\n\n to think in this new medium, bringing previously inaccessible ideas \n\n* **Research Distribution:**\n\n* **Open-Notebook Science:**\n\n This seems really important. Traditionally, if one doesn’t turn \n\nresearch into a paper, it’s basically as though you didn’t do it. This \n\ncreates a strong incentive for all research to be dressed up as an \n\nimportant paper, increasing noise.\n\n* **Discussion of Debt and Distillation:**\n\n A number of mathematicians have discussed what we call research \n\n I’d draw particular attention to Thurston’s account of accidentally \n\nkilling a field by drowning it in research debt in section 6 of .\n\n \n\n", "bibliography_bib": [{"title": "The Tau Manifesto"}, {"title": "Interpretive Labor"}, {"title": "The utopia of rules: On technology, stupidity, and the secret joys of bureaucracy"}, {"title": "The Mythical Man-Month"}, {"title": "Visual Complex Analysis"}, {"title": "Visual Group Theory"}, {"title": "A visual explanation of Jensen's inequality"}, {"title": "Visual Differential Geometry"}, {"title": "A Research Center for Augmenting Human Intellect"}, {"title": "Personal Dynamic Media"}, {"title": "Explorable explanations"}, {"title": "Visualizing Algorithms"}, {"title": "How to Fold a Julia Fractal"}, {"title": "Back to the Future of Handwriting Recognition"}, {"title": "Media for thinking the unthinkable"}, {"title": "Learnable programming"}, {"title": "Thought as a Technology"}, {"title": "Toward an Exploratory Medium for Mathematics"}, {"title": "On Proof and Progress in Mathematics"}, {"title": "A beginner’s guide to forcing"}, {"title": "Recoltes et Semailles"}, {"title": "Statement on conceptual contributions in theory"}], "filename": "Research Debt.html", "id": "eced25627b980131fb49a7a711848d81"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Self-Organising Textures", "authors": ["Eyvind Niklasson", "Alexander Mordvintsev", "Ettore Randazzo", "Michael Levin"], "date_published": "2021-02-11", "abstract": " This article is part of the Differentiable Self-organizing Systems Thread, an experimental format collecting invited short articles delving into differentiable self-organizing systems, interspersed with critical commentary from several experts in adjacent fields. 
", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00027.003", "text": "\n\n### Contents\n\n* [NCA as pattern generators](#nca-as-pattern-generators)\n\n* [Related work](#related-work)\n\n[Feature Visualization](#feature-visualization)\n\n* [NCA with Inception](#nca-with-inception)\n\n[Other interesting findings](#other-interesting-findings)\n\n* [Robustness](#robustness)\n\n* [Hidden States](#hidden-states)\n\n[Conclusion](#conclusion)\n\n![](Self-Organising%20Textures_files/multiple-pages.svg)\n\n This article is part of the\n\n an experimental format collecting invited short articles delving into\n\n differentiable self-organizing systems, interspersed with critical\n\n commentary from several experts in adjacent fields.\n\n \n\n[Self-classifying MNIST Digits](https://distill.pub/2020/selforg/mnist/)\n\n The inductive bias imposed by using cellular automata is powerful. A \n\nsystem of individual agents running the same learned local rule can \n\nsolve surprisingly complex tasks. Moreover, individual agents, or cells,\n\n can learn to coordinate their behavior even when separated by large \n\ndistances. By construction, they solve these tasks in a massively \n\n way. Each cell must be able to take on the role of any other cell - as a\n\n result they tend to generalize well to unseen situations.\n\nIn this work, we apply NCA to the task of texture synthesis. This \n\ntask involves reproducing the general appearance of a texture template, \n\nas opposed to making pixel-perfect copies. We are going to focus on \n\ntexture losses that allow for a degree of ambiguity. After training NCA \n\nmodels to reproduce textures, we subsequently investigate their learned \n\nbehaviors and observe a few surprising effects. Starting from these \n\ninvestigations, we make the case that the cells learn distributed, \n\nlocal, algorithms. \n\nPatterns, textures and physical processes\n\n-----------------------------------------\n\n![](Self-Organising%20Textures_files/zebra.jpg)\n\nA pair of Zebra. Zebra are said to have unique stripes.\n\nZebra stripes are an iconic texture. Ask almost anyone to identify \n\nzebra stripes in a set of images, and they will have no trouble doing \n\nso. Ask them to describe what zebra stripes look like, and they will \n\ngladly tell you that they are parallel stripes of slightly varying \n\nwidth, alternating in black and white. And yet, they may also tell you \n\nthat no two zebra have the same set of stripes\n\n Perhaps an apocryphal claim, but at the very lowest level every zebra \n\nwill be unique. Ourp point is - “zebra stripes” as a concept in human \n\nunderstanding refers to the general structure of a black and white \n\nstriped pattern and not to a specific mapping from location to colour..\n\n This is because evolution has programmed the cells responsible for \n\ncreating the zebra pattern to generate a pattern of a certain quality, \n\nwith certain characteristics, as opposed to programming them with the \n\nblueprints for an exact bitmap of the edges and locations of stripes to \n\nbe moulded to the surface of the zebra’s body.\n\nPut another way, patterns and textures are ill-defined concepts. The \n\nCambridge English Dictionary defines a pattern as “any regularly \n\nrepeated arrangement, especially a design made from repeated lines, \n\nshapes, or colours on a surface”. This definition falls apart rather \n\nquickly when looking at patterns and textures that impart a feeling or \n\nquality, rather than a specific repeating property. 
A coloured fuzzy \n\nrug, for instance, can be considered a pattern or a texture, but is \n\ncomposed of strands pointing in random directions with small random \n\nvariations in size and color, and there is no discernable regularity to \n\nthe pattern. Penrose tilings do not repeat (they are not translationally\n\n invariant), but show them to anyone and they’ll describe them as a \n\npattern or a texture. Most patterns in nature are outputs of locally \n\ninteracting processes that may or may not be stochastic in nature, but \n\nare often based on fairly simple rules. There is a large body of work on\n\n models which give rise to such patterns in nature; most of it is \n\ninspired by Turing’s seminal paper on morphogenesis. \n\nSuch patterns are very common in developmental biology .\n\n In addition to coat colors and skin pigmentation, invariant large-scale\n\n patterns, arising in spite of stochastic low-level dynamics, are a key \n\nfeature of peripheral nerve networks, vascular networks, somites (blocks\n\n of tissue demarcated in embryogenesis that give rise to many organs), \n\nand segments of anatomical and genetic-level features, including whole \n\nbody plans (e.g., snakes and centipedes) and appendages (such as \n\ndemarcation of digit fields within the vertebrate limb).\n\n These kinds of patterns are generated by reaction-diffusion processes, \n\nbioelectric signaling, planar polarity, and other cell-to-cell \n\ncommunication mechanisms.\n\n Patterns in biology are not only structural, but also physiological, as\n\n in the waves of electrical activity in the brain and the dynamics of \n\ngene regulatory networks. These gene regulatory networks, for example, \n\ncan support computation sufficiently sophisticated as to be subject to \n\nLiar paradoxes See [liar paradox](https://en.wikipedia.org/wiki/Liar_paradox).\n\n In principle, gene regulatory networks can express paradoxical \n\nbehaviour, such as that expression of factor A represses the expression \n\n Studying the emergence and control of such patterns can help us to \n\nunderstand not only their evolutionary origins, but also how they are \n\nrecognized (either in the visual system of a second observer or in \n\nadjacent cells during regeneration) and how they can be modulated for \n\nthe purposes of regenerative medicine.\n\nAs a result, when having any model learn to produce textures or \n\npatterns, we want it to learn a generative process for the pattern. We \n\ncan think of such a process as a means of sampling from the distribution\n\n governing this pattern. The first hurdle is to choose an appropriate \n\nloss function, or qualitative measure of the pattern. To do so, we \n\nemploy ideas from Gatys et. al .\n\n NCA become the parametrization for an image which we “stylize” in the \n\nstyle of the target pattern. In this case, instead of restyling an \n\nexisting image, we begin with a fully unconstrained setting: the output \n\nof an untrained, randomly initialized, NCA. The NCA serve as the \n\n“renderer” or “generator”, and a pre-trained differentiable model serves\n\n as a distinguisher of the patterns, providing the gradient necessary \n\nfor the renderer to learn to produce a pattern of a certain style.\n\n### From Turing, to Cellular Automata, to Neural Networks\n\nNCA are well suited for generating textures. To understand why, we’ll\n\n demonstrate parallels between texture generation in nature and NCA. 
\n\nGiven these parallels, we argue that NCA are a good model class for \n\ntexture generation.\n\n#### PDEs\n\nIn “The Chemical Basis of Morphogenesis” ,\n\n Alan Turing suggested that simple physical processes of reaction and \n\ndiffusion, modelled by partial differential equations, lie behind \n\npattern formation in nature, such as the aforementioned zebra stripes. \n\nExtensive work has since been done to identify PDEs modeling \n\nreaction-diffusion and evaluating their behaviour. One of the more \n\ncelebrated examples is the Gray-Scott model of reaction diffusion (,).\n\n This process has a veritable zoo of interesting behaviour, explorable \n\nby simply tuning the two parameters. We strongly encourage readers to \n\nvisit this [interactive atlas](http://mrob.com/pub/comp/xmorphia/)\n\n of the different regions of the Gray-Scott reaction diffusion model to \n\nget a sense for the extreme variety of behaviour hidden behind two \n\nTo tackle the problem of reproducing our textures, we propose a more \n\ngeneral version of the above systems, described by a simple Partial \n\nDifferential Equation (PDE) over the state space of an image. \n\nIntuitively, we have defined a system where every point of the image \n\nchanges with time, in a way that depends on how the image currently \n\nchanges across space, with respect to its immediate neighbourhood. \n\nReaders may start to recognize the resemblance between this and another \n\nsystem based on immediately local interactions.\n\n#### To CAs\n\nDifferential equations governing natural phenomena are usually \n\nevaluated using numerical differential equation solvers. Indeed, this is\n\n sometimes the **only** way to solve them, as many PDEs and\n\n ODEs of interest do not have closed form solutions. This is even the \n\n Numerically solving PDEs and ODEs is a vast and well-established field.\n\n One of the biggest hammers in the metaphorical toolkit for numerically \n\nevaluating differential equations is discretization: the process of \n\nconverting the variables of the system from continuous space to a \n\ndiscrete space, where numerical integration is tractable. When using \n\nsome ODEs to model a change in a phenomena over time, for example, it \n\nmakes sense to advance through time in discrete steps, possibly of \n\nvariable size. \n\nWe now show that numerically integrating the aforementioned PDE is \n\nThe logical approach to discretizing the space the PDE operates on is\n\n to discretize the continuous 2D image space into a 2D raster grid. \n\nBoundary conditions are of concern but we can address them by moving to a\n\n toroidal world where each dimension wraps around on itself. \n\nSimilarly to space, we choose to treat time in a discretized fashion \n\nand evaluate our NCA at fixed-sized time steps. This is equivalent to \n\nexplicit Euler integration. However, here we make an important deviation\n\n from traditional PDE numerical integration methods for two reasons. \n\nFirst, if all cells are updated synchronously, initial conditions s0s\\_0s0​\n\n must vary from cell-to-cell in order to break the symmetry. Second, the\n\n physical implementation of the synchronous model would require the \n\nexistence of a global clock, shared by all cells. One way to work around\n\n the former is by initializing the grid with random noise, but in the \n\nspirit of self organisation we instead choose to decouple the cell \n\nupdates by asynchronously evaluating the CA. We sample a subset of all \n\ncells at each time-step to update. 
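To make the discretised, asynchronous update concrete, below is a minimal NumPy sketch of a single NCA step (our own illustration, not the article's Tensorflow or PyTorch reference implementation; the hidden size and initialisation are assumed), using fixed Sobel and Laplacian perception filters on a toroidal grid and a random mask so that only a subset of cells updates each step:

```python
import numpy as np

# One asynchronous NCA step: fixed "perception" filters, a tiny per-cell update rule,
# and a random mask so only a subset of cells fires at each time-step.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
LAPLACIAN = np.array([[1, 2, 1], [2, -12, 2], [1, 2, 1]], dtype=np.float32)

def conv2d_wrap(state, kernel):
    """Depthwise 3x3 convolution with toroidal (wrap-around) boundaries."""
    out = np.zeros_like(state)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += kernel[dy + 1, dx + 1] * np.roll(state, shift=(-dy, -dx), axis=(0, 1))
    return out

def nca_step(state, params, update_prob=0.5, rng=np.random.default_rng()):
    # Perception: each cell sees its own state plus local gradients and the Laplacian.
    perception = np.concatenate(
        [state,
         conv2d_wrap(state, SOBEL_X),
         conv2d_wrap(state, SOBEL_X.T),
         conv2d_wrap(state, LAPLACIAN)], axis=-1)
    w1, w2 = params                              # a tiny two-layer per-cell network
    hidden = np.maximum(perception @ w1, 0.0)
    delta = hidden @ w2
    mask = rng.random(state.shape[:2] + (1,)) < update_prob   # stochastic update mask
    return state + delta * mask

ch, hid = 12, 48                                 # 12 channels as in the article; hidden size assumed
rng = np.random.default_rng(0)
params = (rng.normal(0, 0.1, (4 * ch, hid)).astype(np.float32),
          np.zeros((hid, ch), dtype=np.float32))  # zero-initialised output layer
state = np.zeros((64, 64, ch), dtype=np.float32)
state = nca_step(state, params)
print(state.shape)                               # (64, 64, 12)
```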
This introduces both asynchronicity \n\nin time (cells will sometimes operate on information from their \n\nneighbours that is several timesteps old), and asymmetry in space, \n\nsolving both aforementioned issues.\n\nOur next step towards representing a PDE with cellular automata is to\n\n\\begin{bmatrix}\n\n-1 & 0 & 1\\\\-2 & 0 & 2 \\\\-1 & 0 & 1\n\n\\end{bmatrix}\n\n&\n\n\\begin{bmatrix}\n\n-1 & -2 & -1\\\\ 0 & 0 & 0 \\\\1 & 2 & 1\n\n\\end{bmatrix}\n\n&\n\n\\begin{bmatrix}\n\n1 & 2 & 1\\\\2 & -12 & 2 \\\\1 & 2 & 1\n\n\\end{bmatrix}\n\n\\\\\n\nSobel\\_x & Sobel\\_y & Laplacian\n\nWith all the pieces in place, we now have a space-discretized version\n\n of our PDE that looks very much like a Cellular Automata: the time \n\nevolution of each discrete point in the raster grid depends only on its \n\nimmediate neighbours. These discrete operators allow us to formalize our\n\n PDE as a CA. To double check that this is true, simply observe that as \n\nour grid becomes very fine, and the asynchronous updates approach \n\nuniformity, the dynamics of these discrete operators will reproduce the \n\ncontinuous dynamics of the original PDE as we defined it.\n\n#### To Neural Networks\n\nThe final step in implementing the above general PDE for texture \n\ngeneration is to translate it to the language of deep learning. \n\nFortunately, all the operations involved in iteratively evaluating the \n\ngeneralized PDE exist as common operations in most deep learning \n\nframeworks. We provide both a Tensorflow and a minimal PyTorch \n\nimplementation for reference, and refer readers to these for details on \n\nour implementation. \n\n### NCA as pattern generators\n\n#### Model:\n\n![](Self-Organising%20Textures_files/texture_model.svg)\n\nTexture NCA model.\n\nWe build on the Growing CA NCA model ,\n\n complete with built-in quantization of weights, stochastic updates, and\n\n the batch pool mechanism to approximate long-term training. For further\n\n details on the model and motivation, we refer readers to this work.\n\n#### Loss function:\n\n \n\n![](Self-Organising%20Textures_files/texture_training.svg)\n\nTexture NCA model.\n\n in the form of the raw activation values of the neurons in these \n\nlayers. Finally, we run our NCA forward for between 32 and 64 \n\n of activations of these neurons with the NCA as input and their \n\nactivations with the template image as input. We keep the weights of VGG\n\n frozen and use ADAM to update the weights of the NCA.\n\n#### Dataset:\n\n The aim of this dataset is to provide a benchmark for measuring the \n\nability of vision models to recognize and categorize textures and \n\ndescribe textures using words. The textures were collected to match 47 \n\n“attributes” such as “bumpy” or “polka-dotted”. These 47 attributes were\n\n in turn distilled from a set of common words used to describe textures \n\nidentified by Bhusan, Rao and Lohse . \n\n#### Results:\n\nAfter a few iterations of training, we see the NCA converge to a \n\nsolution that at first glance looks similar to the input template, but \n\nnot pixel-wise identical. The very first thing to notice is that the \n\nThis is not completely unexpected. In *Differentiable Parametrizations*,\n\n the authors noted that the images produced when backpropagating into \n\nimage space would end up different each time the algorithm was run due \n\nto the stochastic nature of the parametrizations. To work around this, \n\nthey introduced some tricks to maintain **alignment** \n\nbetween different visualizations. 
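As a side note, the kind of texture statistic compared in the loss described above can be sketched roughly as follows. This is a hedged illustration in the spirit of Gatys et al., using Gram matrices over pre-trained network activations; the layer shapes are invented and the VGG feature extraction itself is elided, so it should be read as a stand-in rather than the article's exact loss.

```python
import numpy as np

# A generic Gram-matrix texture loss over a few layers of activations.
def gram(features):
    """features: (H, W, C) activation map -> (C, C) Gram matrix, normalised by H*W."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)
    return flat.T @ flat / (h * w)

def texture_loss(nca_features, template_features):
    """Sum of squared Gram-matrix differences over the chosen layers."""
    return sum(np.mean((gram(a) - gram(b)) ** 2)
               for a, b in zip(nca_features, template_features))

rng = np.random.default_rng(0)
layers = [(32, 32, 64), (16, 16, 128)]                       # assumed layer shapes
nca_feats = [rng.normal(size=s).astype(np.float32) for s in layers]
template_feats = [rng.normal(size=s).astype(np.float32) for s in layers]
print(texture_loss(nca_feats, template_feats))               # scalar loss to backpropagate into the NCA
```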
In our model, we find that we attain \n\nsuch alignment along the temporal dimension without optimizing for it; a\n\n welcome surprise. We believe the reason is threefold. First, reaching \n\nand maintaining a static state in an NCA appears to be non-trivial in \n\ncomparison to a dynamic one, so much so that in Growing CA a pool of NCA\n\n states at various iteration times had to be maintained and sampled as \n\nstarting states to simulate loss being applied after a time period \n\nlonger than the NCAs iteration period, to achieve a static stability. We\n\n employ the same sampling mechanism here to prevent the pattern from \n\ndecaying, but in this case the loss doesn’t enforce a static fixed \n\ntarget; rather it guides the NCA towards any one of a number of states \n\nthat minimizes the style loss. Second, we apply our loss after a random \n\nnumber of iterations of the NCA. This means that, at any given time \n\nstep, the pattern must be in a state that minimizes the loss. Third, the\n\n stochastic updates, local communication, and quantization all limit and\n\n regularize the magnitude of updates at each iteration. This encourages \n\nchanges to be small between one iteration and the next. We hypothesize \n\nthat these properties combined encourage the NCA to find a solution \n\nwhere each iteration is **aligned** with the previous \n\niteration. We perceive this alignment through time as motion, and as we \n\niterate the NCA we observe it traversing a manifold of locally aligned \n\nsolutions. \n\n based on the aforementioned findings and qualitative observation of the\n\n NCA. We proceed to demonstrate some exciting behaviours of NCA trained \n\non different template images. \n\nAn NCA trained to create a pattern in the style of **chequered\\_0121.jpg**.\n\nWe notice that: \n\n* Initially, a non-aligned grid of black and white quadrilaterals is formed.\n\n to more closely approximate squares. Quadrilaterals of both colours \n\neither emerge or disappear. Both of these behaviours seem to be an \n\nattempt to find local consistency.\n\n* After a longer time, the grid tends to achieve perfect consistency.\n\nSuch behaviour is not entirely unlike what one would expect in a \n\nhand-engineered algorithm to produce a consistent grid with local \n\ncommunication. For instance, one potential hand-engineered approach \n\nwould be to have cells first try and achieve local consistency, by \n\nchoosing the most common colour from the cells surrounding them, then \n\nattempting to form a diamond of correct size by measuring distance to \n\nthe four edges of this patch of consistent colour, and moving this \n\nboundary if it were incorrect. Distance could be measured by using a \n\nhidden channel to encode a gradient in each direction of interest, with \n\neach cell decreasing the magnitude of this channel as compared to its \n\nneighbour in that direction. A cell could then localize itself within a \n\ndiamond by measuring the value of two such gradient channels. The \n\nappearance of such an algorithm would bear resemblance to the above - \n\nwith patches of cells becoming either black, or white, diamonds then \n\nresizing themselves to achieve consistency.\n\nAn NCA trained to create a pattern in the style of **bubbly\\_0101.jpg**.\n\nIn this video, the NCA has learned to reproduce a texture based on a \n\ntemplate of clear bubbles on a blue background. One of the most \n\ninteresting behaviours we observe is that the density of the bubbles \n\nremains fairly constant. 
If we re-initialize the grid states, or \n\ninteractively destroy states, we see a multitude of bubbles re-forming. \n\nHowever, as soon as two bubbles get too close to each other, one of them\n\n spontaneously collapses and disappears, ensuring a constant density of \n\nIf we speed the animation up, we see that different bubbles move at \n\ndifferent speeds, yet they never collide or touch each other. Bubbles \n\nalso maintain their structure by self-correcting; a damaged bubble can \n\nre-grow.\n\nThis behaviour is remarkable because it arises spontaneously, without\n\n any external or auxiliary losses. All of these properties are learned \n\nfrom a combination of the template image, the information stored in the \n\nlayers of VGG, and the inductive bias of the NCA. The NCA learned a rule\n\n that effectively approximates many of the properties of the bubbles in \n\nthe original image. Moreover, it has learned a process that generates \n\nthis pattern in a way that is robust to damage and looks realistic to \n\nhumans. \n\nAn NCA trained to create a pattern in the style of **interlaced\\_0172.jpg**.\n\nHere we see one of our favourite patterns: a simple geometric \n\n“weave”. Again, we notice the NCA seems to have learned an algorithm for\n\n producing this pattern. Each “thread” alternately joins or detaches \n\nfrom other threads in order to produce the final pattern. This is \n\nstrikingly similar to what one would attempt to implement, were one \n\nasked to programmatically generate the above pattern. One would try to \n\ndesign some sort of stochastic algorithm for weaving individual threads \n\ntogether with other nearby threads.\n\nAn NCA trained to create a pattern in the style of **banded\\_0037.jpg**.\n\nHere, misaligned stripe fragments travel up or down the stripe until \n\neither they merge to form a single straight stripe or a stripe shrinks \n\nand disappears. Were this to be implemented algorithmically with local \n\ncommunication, it is not infeasible that a similar algorithm for finding\n\n consistency among the stripes would be used.\n\n### Related work\n\nThis foray into pattern generation is by no means the first. There \n\nhas been extensive work predating deep-learning, in particular \n\nsuggesting deep connections between spatial patterning of anatomical \n\nstructure and temporal patterning of cognitive and computational \n\nprocesses (e.g., reviewed in ).\n\n Hans Spemann, one of the heroes of classical developmental biology, \n\nsaid “Again and again terms have been used which point not to physical \n\nbut to psychical analogies. It was meant to be more than a poetical \n\nmetaphor. It was meant to my conviction that the suitable reaction of a \n\ngerm fragment, endowed with diverse potencies, in an embryonic ‘field’… \n\nis not a common chemical reaction, like all vital processes, are \n\ncomparable, to nothing we know in such degree as to vital processes of \n\nwhich we have the most intimate knowledge.” .\n\n More recently, Grossberg quantitatively laid out important \n\nsimilarities between developmental patterning and computational \n\nneuroscience . As briefly touched \n\nupon, the inspiration for much of the work came from Turing’s work on \n\npattern generation through local interaction, and later papers based on \n\nthis principle. However, we also wish to acknowledge some works that we \n\nfeel have a particular kinship with ours. \n\n#### Patch sampling\n\nEarly work in pattern generation focused on texture sampling. 
Patches\n\n were often sampled from the original image and reconstructed or \n\nrejoined in different ways to obtain an approximation of the texture. \n\nThis method has also seen recent success with the work of Gumin .\n\n#### Deep learning\n\nGatys et. al’s work , \n\nreferenced throughout, has been seminal with regards to the idea that \n\nstatistics of certain layers in a pre-trained network can capture \n\ntextures or styles in an image. There has been extensive work building \n\non this idea, including playing with other parametrisations for image \n\ngeneration and optimizing the generation process . \n\nOther work has focused on using a convolutional generator combined \n\nwith path sampling and trained using an adversarial loss to produce \n\ntextures of similar quality . \n\n#### Interactive Evolution of Camouflage\n\n Craig Reynolds uses a texture description language, consisting of \n\ngenerators and operators, to parametrize a texture patch, which is \n\npresented to human viewers who have to decide which patches are the \n\nworst at “camouflaging” themselves against a chosen background texture. \n\nThe population is updated in an evolutionary fashion to maximize \n\n“camouflage”, resulting in a texture exhibiting the most camouflage (to \n\nhuman eyes) after a number of iterations. We see strong parallels with \n\nour work - instead of a texture generation language, we have an NCA \n\nparametrize the texture, and instead of human reviewers we use VGG as an\n\n evaluator of the quality of a generated pattern. We believe a \n\nfundamental difference lies in the solution space of an NCA. A texture \n\ngeneration language comes with a number of inductive biases and learns a\n\n deterministic mapping from coordinates to colours. Our method appears \n\nto learn more general algorithms and behaviours giving rise to the \n\ntarget pattern.\n\nFeature visualization\n\n---------------------\n\n![](Self-Organising%20Textures_files/butterfly_eye.jpg)\n\nA butterfly with an “eye-spot” on the wings.\n\nWe have now explored some of the fascinating behaviours learned by \n\nthe NCA when presented with a template image. What if we want to see \n\nthem learn even more “unconstrained” behaviour? \n\nSome butterflies have remarkably lifelike eyes on their wings. It’s \n\nunlikely the butterflies are even aware of this incredible artwork on \n\ntheir own bodies. Evolution placed these there to trigger a response of \n\nfear in potential predators or to deflect attacks from them .\n\n It is likely that neither the predator nor the butterfly has a concept \n\n regarding the consciousness of the other, but evolution has identified a\n\n region of morphospace for this organism that exploits \n\npattern-identifying features of predators to trick them into fearing a \n\nharmless bug instead of consuming it. \n\nEven more remarkable is the fact that the individual cells composing \n\nthe butterfly’s wings can self assemble into coherent, beautiful, shapes\n\n The coordination required to produce these features implies \n\nself-organization over hundreds or thousands of cells to generate a \n\ncoherent image of an eye that evolved simply to act as a visual stimuli \n\nfor an entirely different species, because of the local nature of \n\ncell-to-cell communication. Of course, this pales in comparison to the \n\nmorphogenesis that occurs in animal and plant bodies, where structures \n\nconsisting of millions of cells will specialize and coordinate to \n\ngenerate the target morphology. 
\n\n Just as neuroscientists and biologists have often treated cells and \n\ncell structures and neurons as black-box models to be investigated, \n\nmeasured and reverse-engineered, there is a large contemporary body of \n\nwork on doing the same with neural networks. For instance the work by \n\nBoettiger .\n\nWe can explore this idea with minimal effort by taking our \n\npattern-generating NCA and exploring what happens if we task it to enter\n\n a state that excites a given neuron in Inception. One of the common \n\nresulting NCAs we notice is eye and eye-related shapes - such as the \n\nvideo below - likely as a result of having to detect various animals in \n\nImageNet. In the same way that cells form eye patterns on the wings of \n\nbutterflies to excite neurons in the brains of predators, our NCA’s \n\npopulation of cells has learned to collaborate to produce a pattern that\n\n excites certain neurons in an external neural network.\n\nAn NCA trained to excite **mixed4a\\_472** in Inception.\n\n### NCA with Inception\n\n#### Model:\n\nWe use a model identical to the one used for exploring pattern \n\ngeneration, but with a different discriminator network: Imagenet-trained\n\n Inception v1 network .\n\n#### Loss function:\n\nOur loss maximizes the activations of chosen neurons, when evaluated \n\non the output of the NCA. We add an auxiliary loss to encourage the \n\n#### Dataset:\n\nThere is no explicit dataset for this task. Inception is trained on \n\nImageNet. The layers and neurons we chose to excite are chosen \n\nqualitatively using OpenAI Microscope.\n\n#### Results:\n\nSimilar to the pattern generation experiment, we see quick \n\nconvergence and a tendency to find temporally dynamic solutions. In \n\nother words, resulting NCAs do not stay still. We also observe that the \n\nmajority of the NCAs learn to produce solitons of various kinds. We \n\ndiscuss a few below, but encourage readers to explore them in the demo. \n\nAn NCA trained to excite **mixed4c\\_439** in Inception.\n\nSolitons in the form of regular circle-like shapes with internal \n\nstructure are quite commonly observed in the inception renderings. Two \n\nsolitons approaching each other too closely may cause one or both of \n\nthem to decay. We also observe that solitons can divide into two new \n\nsolitons.\n\nAn NCA trained to excite **mixed3b\\_454** in Inception.\n\nIn textures that are composed of threads or lines, or in certain \n\nexcitations of Inception neurons where the resulting NCA has a \n\n“thread-like” quality, the threads grow in their respective directions \n\nand will join other threads, or grow around them, as required. This \n\nbehaviour is similar to the regular lines observed in the striped \n\npatterns during pattern generation.\n\nOther interesting findings\n\n--------------------------\n\n### Robustness\n\n#### Switching manifolds\n\nWe encode local information flow within the NCA using the same fixed \n\nLaplacian and gradient filters. As luck would have it, these can be \n\ndefined for most underlying manifolds, giving us a way of placing our \n\ncells on various surfaces and in various configurations without having \n\nto modify the learned model. Suppose we want our cells to live in a \n\nhexagonal world. We can redefine our kernels as follows:\n\n![](Self-Organising%20Textures_files/hex_kernels.svg)\n\nHexagonal grid convolutional filters.\n\nOur model, trained in a purely square environment, works out of the \n\nbox on a hexagonal grid! 
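For reference, the excitation objective described above can be sketched as follows. This is a minimal, hedged illustration of activation maximisation: the Inception forward pass and the auxiliary loss are elided, and the activation shape is a stand-in rather than the true layer size.

```python
import numpy as np

# Push the NCA output to maximise the mean activation of one chosen channel in one
# chosen layer, so the training loss is the negative of that activation.
def excitation_loss(layer_activations, channel):
    """layer_activations: (H, W, C) activations of the chosen layer."""
    return -np.mean(layer_activations[:, :, channel])

rng = np.random.default_rng(0)
fake_layer = rng.normal(size=(14, 14, 512)).astype(np.float32)   # stand-in activations, assumed channel count
print(excitation_loss(fake_layer, channel=472))                  # e.g. a unit like "mixed4a_472"
```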
Play with the corresponding setting in the demo\n\n to experiment with this. Zooming in allows observation of the \n\nindividual hexagonal or square cells. As can be seen in the demo, the \n\ncells have no problem adjusting to a hexagonal world and producing \n\nidentical patterns after a brief period of re-alignment.\n\n \n\n![](Self-Organising%20Textures_files/coral_square.png)\n\n![](Self-Organising%20Textures_files/coral_hex.png)\n\nThe same texture evaluated on a square and hexagonal grid, respectively.\n\n#### Rotation\n\n![](Self-Organising%20Textures_files/mond_rot0.png)\n\n![](Self-Organising%20Textures_files/mond_rot1.png)\n\n![](Self-Organising%20Textures_files/mond_rot2.png)\n\n![](Self-Organising%20Textures_files/mond_rot3.png)\n\neralises to this new rotated paradigm without issue.\n\nIn theory, the cells can be evaluated on any manifold where one can \n\ndefine approximations to the Sobel kernel and the Laplacian kernel. We \n\ndemonstrate this in our demo by providing an aforementioned “hexagonal” \n\nworld for the cells to live in. Instead of having eight equally-spaced \n\nneighbours, each cell now has six equally-spaced neighbours. We further \n\ndemonstrate this versatility by rotating the Sobel and Laplacian \n\nkernels. Each cell receives an innate global orientation based on these \n\nkernels, because they are defined with respect to the coordinate system \n\nof the state. Redefining the Sobel and Laplacian kernel with a rotated \n\ncoordinate system is straightforward and can even be done on a per-cell \n\nlevel. Such versatility is exciting because it mirrors the extreme \n\nrobustness found in biological cells in nature. Cells in most tissues \n\nwill generally continue to operate whatever their location, direction, \n\nor exact placement relative to their neighbours. We believe this \n\nversatility in our model could even extend to a setting where the cells \n\nare placed on a manifold at random, rather than on an ordered grid.\n\n#### Time-synchronization\n\nTwo NCAs running next to each other, at different speeds, \n\nwith some stochasticity in speed. They can communicate through their \n\nshared edge; the vertical boundary between them in the center of the \n\nstate space.\n\nStochastic updates teach the cells to be robust to asynchronous \n\nupdates. We investigate this property by taking it to an extreme and \n\n The result is surprisingly stable; the CA is still able to construct \n\nand maintain a consistent texture across the combined manifold. The time\n\n discrepancy between the two CAs sharing the state is far larger than \n\nanything the NCA experiences during training, showing remarkable \n\nrobustness of the learned behaviour. Parallels can be drawn to organic \n\nmatter self repairing, for instance a fingernail can regrow in adulthood\n\n despite the underlying finger already having fully developed; the two \n\ndo not need to be sync. This result also hints at the possibility of \n\ndesigning distributed systems without having to engineer for a global \n\nclock, synchronization of compute units or even homogenous compute \n\ncapacity. \n\nAn NCA is evaluated for a number of steps. The surrounding \n\nborder of cells are then also turned into NCA cells. The cells have no \n\ndifficulty communicating with the “finished” pattern and achieving \n\nconsistency. \n\nAn even more drastic example of this robustness to time \n\nasynchronicity can be seen above. Here, an NCA is iterated until it \n\nachieves perfect consistency in a pattern. 
Then, the state space is \n\nexpanded, introducing a border of new cells around the existing state. \n\nThis border quickly interfaces with the existing cells and settles in a \n\nconsistent pattern, with almost no perturbation to the already-converged\n\n inner state.\n\n#### Failure cases\n\nThe failure modes of a complex system can teach us a great deal about\n\n its internal structure and process. Our model has many quirks and \n\nsometimes these prevent it from learning certain patterns. Below are \n\nsome examples.\n\n![](Self-Organising%20Textures_files/fail_mondrian.jpeg)\n\n![](Self-Organising%20Textures_files/fail_sprinkle.jpeg)\n\n![](Self-Organising%20Textures_files/fail_chequerboard.jpeg)\n\nThree failure cases of the NCA. Bottom row shows target \n\ntexture samples, top row are corresponding NCA outputs. Failure modes \n\ninclude incorrect colours, chequerboard artefacts, and incoherent image \n\nstructure.\n\nSome patterns are reproduced somewhat accurately in terms of \n\nstructure, but not in colour, while some are the opposite. Others fail \n\ncompletely. It is difficult to determine whether these failure cases \n\nhave their roots in the parametrization (the NCA), or in the \n\nhard-to-interpret gradient signals from VGG, or Inception. Existing work\n\n with style transfer suggests that using a loss on Gram matrices in VGG \n\ncan introduce instabilities ,\n\n that are similar to the ones we see here. We hypothesize that this \n\neffect explains the failures in reproducing colors. The structural \n\nfailures, meanwhile, may be caused by the NCA parameterization, which \n\nmakes it difficult for cells to establish long-distance communication \n\nwith one another.\n\n### Hidden states\n\nWhen biological cells communicate with each other, they do so through\n\n a multitude of available communication channels. Cells can emit or \n\nabsorb different ions and proteins, sense physical motion or “stiffness”\n\n of other cells, and even emit different chemical signals to diffuse \n\nover the local substrate . \n\nThere are various ways to visualize communication channels in real \n\ncells. One of them is to add to cells a potential-activated dye. Doing \n\nso gives a clear picture of the voltage potential the cell is under with\n\n respect to the surrounding substrate. This technique provides useful \n\ninsight into the communication patterns within groups of cells and helps\n\n scientists visualize both local and global communication over a variety\n\n of time-scales.\n\nAs luck would have it, we can do something similar with our Neural \n\nCellular Automata. Our NCA model contains 12 channels. The first three \n\nare visible RGB channels and the rest we treat as latent channels which \n\nare visible to adjacent cells during update steps, but excluded from \n\nloss functions. Below we map the first three principle components of the\n\n hidden channels to the R,G, and B channels respectively. Hidden \n\nchannels can be considered “floating,” to abuse a term from circuit \n\ntheory. In other words, they are not pulled to any specific final state \n\nor intermediate state by the loss. Instead, they converge to some form \n\nof a dynamical system which assists the cell in fulfilling its objective\n\n with respect to its visible channels. There is no pre-defined \n\nassignment of different roles or meaning to different hidden channels, \n\nand there is almost certainly redundancy and correlation between \n\ndifferent hidden channels. 
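A rough sketch of the mapping described above — a PCA over the nine hidden channels, with the top three components sent to R, G, and B — might look like the following. The array shapes and function name are our own conventions; the state layout of three visible plus nine hidden channels is as described.

```python
import numpy as np

def hidden_to_rgb(state):
    """Map the first three principal components of the hidden channels to RGB.

    state: (H, W, 12) array; channels 0..2 are visible RGB, 3..11 are hidden.
    Returns an (H, W, 3) array scaled to [0, 1] for display.
    """
    h, w, _ = state.shape
    hidden = state[..., 3:].reshape(-1, 9)        # (H*W, 9) hidden-channel matrix
    hidden = hidden - hidden.mean(axis=0)         # centre each channel
    # Principal directions from the SVD of the centred data matrix.
    _, _, vt = np.linalg.svd(hidden, full_matrices=False)
    pcs = hidden @ vt[:3].T                       # project onto the top 3 components
    lo, hi = pcs.min(axis=0), pcs.max(axis=0)
    pcs = (pcs - lo) / (hi - lo + 1e-8)           # normalise each component to [0, 1]
    return pcs.reshape(h, w, 3)
```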
Such correlation may not be visible when we \n\nvisualize the first three principal components in isolation. But this \n\nconcern aside, the visualization yields some interesting insights \n\nanyways.\n\n \n\n \n\n An NCA trained to excite **mixed4b\\_70**\n\n in Inception. Notice the hidden states appear to encode information \n\nabout structure. “Threads” along the major diagonal (NW - SE) appear \n\nprimarily green, while those running along the anti-diagonal appear \n\nblue, indicating that these have differing internal states, despite \n\nbeing effectively indistinguishable in RGB space.\n\nIn the principal components of this coral-like texture, we see a \n\npattern which is similar to the visible channels. However, the “threads”\n\n pointing in each diagonal direction have different colours - one \n\ndiagonal is green and the other is a pale blue. This suggests that one \n\nof the things encoded into the hidden states is the direction of a \n\n“thread”, likely to allow cells that are inside one of these threads to \n\nkeep track of which direction the thread is growing, or moving, in. \n\n \n\n An NCA trained to produce a texture based on DTD image **cheqeuered\\_0121**.\n\n Notice the structure of squares - with a gradient occurring inside the \n\nstructure of each square, evidencing that structure is being encoded in \n\nhidden state.\n\nThe chequerboard pattern likewise lends itself to some qualitative \n\nanalysis and hints at a fairly simple mechanism for maintaining the \n\nshape of squares. Each square has a clear gradient in PCA space across \n\nthe diagonal, and the values this gradient traverses differ for the \n\nwhite and black squares. We find it likely the gradient is used to \n\nprovide a local coordinate system for creating and sizing the squares. \n\n \n\n An NCA trained to excite **mixed4c\\_208**\n\n in Inception. The visible body of the eye is clearly demarcated in the \n\nhidden states. There is also a “halo” which appears to modulate growth \n\nof any solitons immediately next to each other. This halo is barely \n\nvisible in the RGB channels.\n\nWe find surprising insight in NCA trained on Inception as well. In \n\nthis case, the structure of the eye is clearly encoded in the hidden \n\nstate with the body composed primarily of one combination of principal \n\ncomponents, and an halo, seemingly to prevent collisions of the eye \n\nsolitons, composed of another set of principal components.\n\nAnalysis of these hidden states is something of a dark art; it is not\n\n always possible to draw rigorous conclusions about what is happening. \n\nWe welcome future work in this direction, as we believe qualitative \n\nanalysis of these behaviours will be useful for understanding more \n\ncomplex behaviours of CAs. We also hypothesize that it may be possible \n\nto modify or alter hidden states in order to affect the morphology and \n\nbehaviour of NCA. \n\nConclusion\n\n----------\n\nIn this work, we selected texture templates and individual neurons as\n\n targets and then optimized NCA populations so as to produce similar \n\nexcitations in a pre-trained neural network. This procedure yielded NCAs\n\n that could render nuanced and hypnotic textures. During our analysis, \n\nwe found that these NCAs have interesting and unexpected properties. \n\nMany of the solutions for generating certain patterns in an image appear\n\n similar to the underlying model or physical behaviour producing the \n\npattern. 
For example, our learned NCAs seem to have a bias for treating \n\nobjects in the pattern as individual objects and letting them move \n\nfreely across space. While this effect was present in many of our \n\nmodels, it was particularly strong in the bubble and eye models. The NCA\n\n is forced to find algorithms that can produce such a pattern with \n\npurely local interaction. This constraint seems to produce models that \n\nfavor high-level consistency and robustness.\n\n![](Self-Organising%20Textures_files/multiple-pages.svg)\n\n This article is part of the\n\n an experimental format collecting invited short articles delving into\n\n differentiable self-organizing systems, interspersed with critical\n\n commentary from several experts in adjacent fields.\n\n \n\n[Self-classifying MNIST Digits](https://distill.pub/2020/selforg/mnist/)\n\n", "bibliography_bib": [{"title": "Growing Neural Cellular Automata"}, {"title": "Image segmentation via Cellular Automata"}, {"title": "Self-classifying MNIST Digits"}, {"title": "Differentiable Image Parameterizations"}, {"title": "The chemical basis of morphogenesis"}, {"title": "Turing patterns in development: what about the horse part?"}, {"title": "A unified design space of synthetic stripe-forming networks"}, {"title": "On the Formation of Digits and Joints during Limb Development"}, {"title": "Modeling digits. Digit patterning is controlled by a Bmp-Sox9-Wnt Turing network modulated by morphogen gradients"}, {"title": "Pattern formation mechanisms of self-organizing reaction-diffusion systems"}, {"title": "Bioelectric\n gene and reaction networks: computational modelling of genetic, \nbiochemical and bioelectrical dynamics in pattern regulation"}, {"title": "Turing-like patterns can arise from purely bioelectric mechanisms"}, {"title": "Dissipative structures in biological systems: bistability, oscillations, spatial patterns and waves"}, {"title": "Gene networks and liar paradoxes"}, {"title": "Texture Synthesis Using Convolutional Neural Networks"}, {"title": "The chemical basis of morphogenesis. 
1953"}, {"title": "Pattern formation by interacting chemical fronts"}, {"title": "Complex patterns in a simple system"}, {"title": "Very Deep Convolutional Networks for Large-Scale Image Recognition"}, {"title": "Adam: A Method for Stochastic Optimization"}, {"title": "Describing Textures in the Wild"}, {"title": "The texture lexicon: Understanding the categorization of visual texture terms and their relationship to texture images"}, {"title": "Re-membering\n the body: applications of computational neuroscience to the top-down \ncontrol of regeneration of limbs and other complex organs"}, {"title": "Embryonic Development and Induction"}, {"title": "Communication, Memory, and Development"}, {"title": "WaveFunctionCollapse"}, {"title": "Texture Networks: Feed-forward Synthesis of Textures and Stylized Images"}, {"title": "TextureGAN: Controlling deep image synthesis with texture patches"}, {"title": "Interactive evolution of camouflage"}, {"title": "A parametric texture model based on joint statistics of complex wavelet coefficients"}, {"title": "Trainable Nonlinear Reaction Diffusion: A Flexible Framework for Fast and Effective Image Restoration"}, {"title": "The evolutionary significance of butterfly eyespots"}, {"title": "Live Cell Imaging of Butterfly Pupal and Larval Wings In Vivo"}, {"title": "Focusing on butterfly eyespot focus: uncoupling of white spots from eyespot bodies in nymphalid butterflies"}, {"title": "OpenAI Microscope"}, {"title": "The neural origins of shell structure and pattern in aquatic mollusks"}, {"title": "Emergent complexity in simple neural systems"}, {"title": "Going deeper with convolutions"}, {"title": "Stable and Controllable Neural Texture Synthesis and Style Transfer Using Histogram Losses"}, {"title": "Stem cell migration and mechanotransduction on linear stiffness gradient hydrogels"}], "filename": "Self-Organising Textures.html", "id": "47316f55761c6fc9976163f1b69b21e8"} {"url": "n/a", "source": "distill", "source_type": "html", "converted_with": "python", "title": "Sequence Modeling with CTC", "authors": ["Awni Hannun"], "date_published": "2017-11-27", "abstract": "Consider speech recognition. We have a dataset of audio clips and corresponding transcripts. Unfortunately, we don’t know how the characters in the transcript align to the audio. This makes training a speech recognizer harder than it might at first seem.", "journal_ref": "distill-pub", "doi": "https://doi.org/10.23915/distill.00008", "text": "\n\nIntroduction\n\n------------\n\nConsider speech recognition. We have a dataset of audio clips and\n\ncorresponding transcripts. Unfortunately, we don’t know how the characters in\n\nthe transcript align to the audio. This makes training a speech recognizer\n\nharder than it might at first seem.\n\nWithout this alignment, the simple approaches aren’t available to us. We\n\ncould devise a rule like “one character corresponds to ten inputs”. But\n\npeople’s rates of speech vary, so this type of rule can always be broken.\n\nAnother alternative is to hand-align each character to its location in the\n\naudio. From a modeling standpoint this works well — we’d know the ground truth\n\nfor each input time-step. However, for any reasonably sized dataset this is\n\nprohibitively time consuming.\n\nThis problem doesn’t just turn up in speech recognition. We see it in many\n\nother places. Handwriting recognition from images or sequences of pen strokes\n\nis one example. 
Action labelling in videos is another.\n\n![](Sequence%20Modeling%20with%20CTC_files/handwriting_recognition.svg)\n\n**Handwriting recognition:** The input can be\n\n (x,y)(x,y)(x,y) coordinates of a pen stroke or\n\n pixels in an image.\n\n \n\n![](Sequence%20Modeling%20with%20CTC_files/speech_recognition.svg)\n\n**Speech recognition:** The input can be a spectrogram or some\n\n other frequency based feature extractor.\n\n \n\nConnectionist Temporal Classification (CTC) is a way to get around not\n\nknowing the alignment between the input and the output. As we’ll see, it’s\n\nespecially well suited to applications like speech and handwriting\n\nrecognition.\n\n---\n\nTo be a bit more formal, let’s consider mapping input sequences\n\nWe want to find an accurate mapping from XXX’s to YYY’s.\n\nThere are challenges which get in the way of us\n\nusing simpler supervised learning algorithms. In particular:\n\n* Both XXX and YYY\n\n can vary in length.\n\n* The ratio of the lengths of XXX and YYY\n\n can vary.\n\n* We don’t have an accurate alignment (correspondence of the elements) of\n\n XXX and Y.Y.Y.\n\nThe CTC algorithm overcomes these challenges. For a given XXX\n\nit gives us an output distribution over all possible YYY’s. We\n\ncan use this distribution either to *infer* a likely output or to assess\n\nthe *probability* of a given output.\n\nNot all ways of computing the loss function and performing inference are\n\ntractable. We’ll require that CTC do both of these efficiently.\n\n**Loss Function:** For a given input, we’d like to train our\n\nmodel to maximize the probability it assigns to the right answer. To do this,\n\nwe’ll need to efficiently compute the conditional probability\n\np(Y∣X).p(Y \\mid X).p(Y∣X). The function p(Y∣X)p(Y \\mid X)p(Y∣X) should\n\nalso be differentiable, so we can use gradient descent.\n\n**Inference:** Naturally, after we’ve trained the model, we\n\nwant to use it to infer a likely YYY given an X.X.X.\n\nThis means solving\n\nY∗=argmaxYp(Y∣X).\n\nY∗=Yargmax​p(Y∣X).\n\nIdeally Y∗Y^\\*Y∗ can be found efficiently. With CTC we’ll settle\n\nfor an approximate solution that’s not too expensive to find.\n\nThe Algorithm\n\n-------------\n\nThe CTC algorithm can assign a probability for any YYY\n\ngiven an X.X.X. The key to computing this probability is how CTC\n\nthinks about alignments between inputs and outputs. We’ll start by looking at\n\nthese alignments and then show how to use them to compute the loss function and\n\nperform inference.\n\n### Alignment\n\nThe CTC algorithm is *alignment-free* — it doesn’t require an\n\nalignment between the input and the output. However, to get the probability of\n\nan output given an input, CTC works by summing over the probability of all\n\npossible alignments between the two. We need to understand what these\n\nalignments are in order to understand how the loss function is ultimately\n\ncalculated.\n\nTo motivate the specific form of the CTC alignments, first consider a naive\n\napproach. Let’s use an example. Assume the input has length six and Y=Y\n\n=Y= [c, a, t]. One way to align XXX and YYY\n\nis to assign an output character to each input step and collapse repeats.\n\n![](Sequence%20Modeling%20with%20CTC_files/naive_alignment.svg)\n\nThis approach has two problems.\n\n* Often, it doesn’t make sense to force every input step to align to\n\n some output. 
In speech recognition, for example, the input can have stretches\n\n of silence with no corresponding output.\n\n* We have no way to produce outputs with multiple characters in a row.\n\n Consider the alignment [h, h, e, l, l, l, o]. Collapsing repeats will\n\n produce “helo” instead of “hello”.\n\nTo get around these problems, CTC introduces a new token to the set of\n\nallowed outputs. This new token is sometimes called the *blank* token. We’ll\n\nrefer to it here as ϵ.\\epsilon.ϵ. The\n\nϵ\\epsilonϵ token doesn’t correspond to anything and is simply\n\nremoved from the output.\n\nThe alignments allowed by CTC are the same length as the input. We allow any\n\nalignment which maps to YYY after merging repeats and removing\n\nϵ\\epsilonϵ tokens:\n\n![](Sequence%20Modeling%20with%20CTC_files/ctc_alignment_steps.svg)\n\nIf YYY has two of the same character in a row, then a valid\n\nalignment must have an ϵ\\epsilonϵ between them. With this rule\n\nin place, we can differentiate between alignments which collapse to “hello” and\n\nthose which collapse to “helo”.\n\nLet’s go back to the output [c, a, t] with an input of length six. Here are\n\na few more examples of valid and invalid alignments.\n\n![](Sequence%20Modeling%20with%20CTC_files/valid_invalid_alignments.svg)\n\nThe CTC alignments have a few notable properties. First, the allowed\n\nalignments between XXX and YYY are monotonic.\n\nIf we advance to the next input, we can keep the corresponding output the\n\nsame or advance to the next one. A second property is that the alignment of\n\nXXX to YYY is many-to-one. One or more input\n\nelements can align to a single output element but not vice-versa. This implies\n\na third property: the length of YYY cannot be greater than the\n\nlength of X.X.X.\n\n### Loss Function\n\nThe CTC alignments give us a natural way to go from probabilities at each\n\ntime-step to the probability of an output sequence.\n\n![](Sequence%20Modeling%20with%20CTC_files/full_collapse_from_audio.svg)\n\nTo be precise, the CTC objective\n\nfor a single (X,Y)(X, Y)(X,Y) pair is:\n\np(Y∣X)=p(Y \\mid X) \\;\\; =p(Y∣X)=\n\n∑A∈AX,Y\\sum\\_{A \\in \\mathcal{A}\\_{X,Y}}A∈AX,Y​∑​\n\n∏t=1Tpt(at∣X)\\prod\\_{t=1}^T \\; p\\_t(a\\_t \\mid X)t=1∏T​pt​(at​∣X)\n\n The CTC conditional **probability**\n\n**marginalizes** over the set of valid alignments\n\n \n\n computing the **probability** for a single alignment step-by-step.\n\n \n\nModels trained with CTC typically use a recurrent neural network (RNN) to\n\nestimate the per time-step probabilities, pt(at∣X).p\\_t(a\\_t \\mid X).pt​(at​∣X).\n\nAn RNN usually works well since it accounts for context in the input, but we’re\n\nfree to use any learning algorithm which produces a distribution over output\n\nclasses given a fixed-size slice of the input.\n\nIf we aren’t careful, the CTC loss can be very expensive to compute. We\n\ncould try the straightforward approach and compute the score for each alignment\n\nsumming them all up as we go. The problem is there can be a massive number of\n\nalignments.\n\n For a YYY of length UUU without any repeat\n\n characters and an XXX of length TTT the size\n\n of the set is (T+UT−U).{T + U \\choose T - U}.(T−UT+U​). For T=100T=100T=100 and\n\n U=50U=50U=50 this number is almost 1040.10^{40}.1040.\n\nFor most problems this would be too slow.\n\nThankfully, we can compute the loss much faster with a dynamic programming\n\nalgorithm. 
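Before getting to that algorithm, the collapsing rule itself is easy to pin down in code, and the order of operations matters: repeats must be merged before the ϵ tokens are removed, otherwise [h, h, e, l, ϵ, l, o] would collapse to “helo”. A small sketch (the names here are ours):

```python
from itertools import groupby

EPSILON = "ε"  # the blank token

def collapse(alignment):
    """Map a CTC alignment to its output: merge repeats, then remove blanks."""
    merged = [token for token, _ in groupby(alignment)]      # merge repeats
    return [token for token in merged if token != EPSILON]   # remove blanks

assert collapse(["c", "c", EPSILON, "a", "a", "t"]) == list("cat")
assert collapse(["h", "h", "e", "l", EPSILON, "l", "o"]) == list("hello")
assert collapse(["h", "h", "e", "l", "l", "l", "o"]) == list("helo")  # needs an ε between the l's
```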
The key insight is that if two alignments have reached the same\n\noutput at the same step, then we can merge them.\n\n![](Sequence%20Modeling%20with%20CTC_files/all_alignments.svg)\n\n Summing over all alignments can be very expensive.\n\n \n\n![](Sequence%20Modeling%20with%20CTC_files/merged_alignments.svg)\n\n Dynamic programming merges alignments, so it’s much faster.\n\n \n\nSince we can have an ϵ\\epsilonϵ before or after any token in\n\nYYY, it’s easier to describe the algorithm\n\nusing a sequence which includes them. We’ll work with the sequence\n\nZ=[ϵ, y1, ϵ, y2, …, ϵ, yU, ϵ]\n\nZ=[ϵ, y1​, ϵ, y2​, …, ϵ, yU​, ϵ]\n\nwhich is YYY with an ϵ\\epsilonϵ at\n\nthe beginning, end, and between every character.\n\nLet’s let α\\alphaα be the score of the merged alignments at a\n\ngiven node. More precisely, αs,t\\alpha\\_{s, t}αs,t​ is the CTC score of\n\nthe subsequence Z1:sZ\\_{1:s}Z1:s​ after ttt input steps.\n\nAs we’ll see, we’ll compute the final CTC score, P(Y∣X)P(Y \\mid X)P(Y∣X),\n\nfrom the α\\alphaα’s at the last time-step. As long as we know\n\nthe values of α\\alphaα at the previous time-step, we can compute\n\nαs,t.\\alpha\\_{s, t}.αs,t​. There are two cases.\n\n**Case 1:**\n\n![](Sequence%20Modeling%20with%20CTC_files/cost_no_skip.svg)\n\nIn this case, we can’t jump over zs−1z\\_{s-1}zs−1​, the previous\n\ntoken in Z.Z.Z. The first reason is that the previous token can\n\nbe an element of YYY, and we can’t skip elements of\n\nY.Y.Y. Since every element of YYY in\n\nZZZ is followed by an ϵ\\epsilonϵ, we can\n\nidentify this when zs=ϵ. z\\_{s} = \\epsilon.zs​=ϵ. The second reason is\n\nthat we must have an ϵ\\epsilonϵ between repeat characters in\n\nY.Y.Y. We can identify this when\n\nzs=zs−2.z\\_s = z\\_{s-2}.zs​=zs−2​.\n\nTo ensure we don’t skip zs−1z\\_{s-1}zs−1​, we can either be there\n\nat the previous time-step or have already passed through at some earlier\n\ntime-step. As a result there are two positions we can transition from.\n\nαs,t=\\alpha\\_{s, t} \\; =αs,t​=\n\n The CTC probability of the two valid subsequences after\n\n t−1t-1t−1 input steps.\n\n \n\npt(zs∣X)p\\_t(z\\_{s} \\mid X)pt​(zs​∣X)\n\n The probability of the current character at input step t.t.t.\n\n![](Sequence%20Modeling%20with%20CTC_files/cost_regular.svg)\n\n**Case 2:**\n\nIn the second case, we’re allowed to skip the previous token in\n\nZ.Z.Z. We have this case whenever zs−1z\\_{s-1}zs−1​ is\n\nan ϵ\\epsilonϵ between unique characters. As a result there are\n\nthree positions we could have come from at the previous step.\n\nαs,t=\\alpha\\_{s, t} \\; =αs,t​=\n\n The CTC probability of the three valid subsequences after\n\n t−1t-1t−1 input steps.\n\n \n\npt(zs∣X)p\\_t(z\\_{s} \\mid X)pt​(zs​∣X)\n\n The probability of the current character at input step t.t.t.\n\nBelow is an example of the computation performed by the dynamic programming\n\nalgorithm. Every valid alignment has a path in this graph.\n\n output \n\nY=Y =Y= [a, b]\n\n \n\n input, XXX\n\n![](Sequence%20Modeling%20with%20CTC_files/ctc_cost.svg)\n\n Node (s,t)(s, t)(s,t) in the diagram represents\n\n αs,t\\alpha\\_{s, t}αs,t​ – the CTC score of\n\n the subsequence Z1:sZ\\_{1:s}Z1:s​ after\n\n ttt input steps.\n\n \n\nThere are two valid starting nodes and two valid final nodes since the\n\nϵ\\epsilonϵ at the beginning and end of the sequence is\n\noptional. The complete probability is the sum of the two final nodes.\n\nNow that we can efficiently compute the loss function, the next step is to\n\ncompute a gradient and train the model. 
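Before turning to the gradient, here is a minimal sketch of the α recursion just described, written in probability space for readability (a practical implementation would work in log space, as noted in the Practitioner’s Guide later on). The function name, the choice of blank index 0, and the `probs` array layout are our own conventions.

```python
import numpy as np

def ctc_forward(probs, labels, blank=0):
    """CTC score p(Y|X) via the alpha recursion (probability-space sketch).

    probs:  (T, K) array of per time-step output distributions p_t(a | X).
    labels: non-empty list of label indices for Y, e.g. [3, 1, 20] for "cat"
            under some hypothetical alphabet.
    """
    T = probs.shape[0]
    # Z = [eps, y1, eps, y2, ..., eps, yU, eps]
    z = [blank]
    for y in labels:
        z += [y, blank]
    S = len(z)

    alpha = np.zeros((S, T))
    # Two valid starting nodes: the leading blank or the first label.
    alpha[0, 0] = probs[0, blank]
    alpha[1, 0] = probs[0, z[1]]

    for t in range(1, T):
        for s in range(S):
            a = alpha[s, t - 1]                     # stay on the same node
            if s > 0:
                a += alpha[s - 1, t - 1]            # advance by one node
            # Case 2: we may also skip z[s-1] when it is a blank sitting
            # between two different characters.
            if s > 1 and z[s] != blank and z[s] != z[s - 2]:
                a += alpha[s - 2, t - 1]
            alpha[s, t] = a * probs[t, z[s]]

    # Two valid final nodes: the trailing blank or the last label.
    return alpha[S - 1, T - 1] + alpha[S - 2, T - 1]
```

For small T and U the result can be checked against a brute-force sum over all valid alignments.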
The CTC loss function is differentiable\n\nwith respect to the per time-step output probabilities since it’s just sums and\n\nproducts of them. Given this, we can analytically compute the gradient of the\n\nloss function with respect to the (unnormalized) output probabilities and from\n\nthere run backpropagation as usual.\n\nFor a training set D\\mathcal{D}D, the model’s parameters\n\nare tuned to minimize the negative log-likelihood\n\n∑(X,Y)∈D−logp(Y∣X)\n\n\\sum\\_{(X, Y) \\in \\mathcal{D}} -\\log\\; p(Y \\mid X)\n\n(X,Y)∈D∑​−logp(Y∣X)\n\ninstead of maximizing the likelihood directly.\n\n### Inference\n\nAfter we’ve trained the model, we’d like to use it to find a likely output\n\nfor a given input. More precisely, we need to solve:\n\nY∗=argmaxYp(Y∣X)\n\nY∗=Yargmax​p(Y∣X)\n\nOne heuristic is to take the most likely output at each time-step. This\n\ngives us the alignment with the highest probability:\n\nA∗=argmaxA∏t=1Tpt(at∣X)\n\nA∗=Aargmax​t=1∏T​pt​(at​∣X)\n\nWe can then collapse repeats and remove ϵ\\epsilonϵ tokens to\n\nget Y.Y.Y.\n\nFor many applications this heuristic works well, especially when most of the\n\nprobability mass is alloted to a single alignment. However, this approach can\n\nsometimes miss easy to find outputs with much higher probability. The problem\n\nis, it doesn’t take into account the fact that a single output can have many\n\nalignments.\n\nHere’s an example. Assume the alignments [a, a, ϵ\\epsilonϵ]\n\nand [a, a, a] individually have lower probability than [b, b, b]. But\n\nthe sum of their probabilities is actually greater than that of [b, b, b]. The\n\nnaive heuristic will incorrectly propose Y=Y =Y= [b] as\n\nthe most likely hypothesis. It should have chosen Y=Y =Y= [a].\n\nTo fix this, the algorithm needs to account for the fact that [a, a, a] and [a,\n\na, ϵ\\epsilonϵ] collapse to the same output.\n\nWe can use a modified beam search to solve this. Given limited\n\ncomputation, the modified beam search won’t necessarily find the\n\nmost likely Y.Y.Y. It does, at least, have\n\nthe nice property that we can trade-off more computation\n\n(a larger beam-size) for an asymptotically better solution.\n\nA regular beam search computes a new set of hypotheses at each input step.\n\nThe new set of hypotheses is generated from the previous set by extending each\n\nhypothesis with all possible output characters and keeping only the top\n\ncandidates.\n\n![](Sequence%20Modeling%20with%20CTC_files/beam_search.svg)\n\n A standard beam search algorithm with an alphabet of\n\n {ϵ,a,b}\\{\\epsilon, a, b\\}{ϵ,a,b} and a beam size\n\n of three.\n\n \n\nWe can modify the vanilla beam search to handle multiple alignments mapping to\n\nthe same output. In this case instead of keeping a list of alignments in the\n\nbeam, we store the output prefixes after collapsing repeats and removing\n\nϵ\\epsilonϵ characters. At each step of the search we accumulate\n\nscores for a given prefix based on all the alignments which map to it.\n\n The CTC beam search algorithm with an output alphabet\n\n {ϵ,a,b}\\{\\epsilon, a, b\\}{ϵ,a,b}\n\n and a beam size of three.\n\n \n\nA proposed extension can map to two output prefixes if the character is a\n\nrepeat. This is shown at T=3T=3T=3 in the figure above\n\nwhere ‘a’ is proposed as an extension to the prefix [a]. Both [a] and [a, a] are\n\nvalid outputs for this proposed extension.\n\nWhen we extend [a] to produce [a,a], we only want include the part of the\n\nprevious score for alignments which end in ϵ.\\epsilon.ϵ. 
Remember, the\n\nϵ\\epsilonϵ is required between repeat characters. Similarly,\n\nwhen we don’t extend the prefix and produce [a], we should only include the part\n\nof the previous score for alignments which don’t end in ϵ.\\epsilon.ϵ.\n\nGiven this, we have to keep track of two probabilities for each prefix\n\nin the beam. The probability of all alignments which end in\n\nϵ\\epsilonϵ and the probability of all alignments which don’t\n\nend in ϵ.\\epsilon.ϵ. When we rank the hypotheses at\n\neach step before pruning the beam, we’ll use their combined scores.\n\nThe implementation of this algorithm doesn’t require much code, but it is\n\ndense and tricky to get right. Checkout this\n\n[gist](https://gist.github.com/awni/56369a90d03953e370f3964c826ed4b0)\n\nfor an example implementation in Python.\n\nIn some problems, such as speech recognition, incorporating a language model\n\nover the outputs significantly improves accuracy. We can include the language\n\nmodel as a factor in the inference problem.\n\nY∗=argmaxYY^\\* \\enspace = \\enspace {\\mathop{\\text{argmax}}\\limits\\_{Y}} \n\n Y∗=Yargmax​\n\np(Y∣X)⋅p(Y \\mid X) \\quad \\cdotp(Y∣X)⋅\n\n The CTC conditional probability.\n\n \n\np(Y)α⋅p(Y)^\\alpha \\quad \\cdotp(Y)α⋅\n\n The language model probability.\n\n \n\nL(Y)βL(Y)^\\betaL(Y)β\n\n The “word” insertion bonus.\n\n \n\nThe function L(Y)L(Y)L(Y) computes the length of\n\nYYY in terms of the language model tokens and acts as a word\n\ninsertion bonus. With a word-based language model L(Y)L(Y)L(Y)\n\ncounts the number of words in Y.Y.Y. If we use a character-based\n\nlanguage model then L(Y)L(Y)L(Y) counts the number of characters\n\nin Y.Y.Y. The language model scores are only included when a\n\nprefix is extended by a character (or word) and not at every step of the\n\nalgorithm. This causes the search to favor shorter prefixes, as measured by\n\nL(Y)L(Y)L(Y), since they don’t include as many language model\n\nupdates. The word insertion bonus helps with this. The parameters\n\nα\\alphaα and β\\betaβ are usually set by\n\ncross-validation.\n\nThe language model scores and word insertion term can be included in the\n\nbeam search. Whenever we propose to extend a prefix by a character, we can\n\ninclude the language model score for the new character given the prefix so\n\nfar.\n\nProperties of CTC\n\n-----------------\n\nWe mentioned a few important properties of CTC so far. Here we’ll go\n\ninto more depth on what these properties are and what trade-offs they offer.\n\n### Conditional Independence\n\nOne of the most commonly cited shortcomings of CTC is the conditional\n\nindependence assumption it makes.\n\n![](Sequence%20Modeling%20with%20CTC_files/conditional_independence.svg)\n\n Graphical model for CTC.\n\n \n\nThe model assumes that every output is conditionally independent of\n\nthe other outputs given the input. This is a bad assumption for many\n\nsequence to sequence problems.\n\nSay we had an audio clip of someone saying “triple A”.\n\n Another valid transcription could\n\nbe “AAA”. If the first letter of the predicted transcription is ‘A’, then\n\nthe next letter should be ‘A’ with high probability and ‘r’ with low\n\nprobability. The conditional independence assumption does not allow for this.\n\n![](Sequence%20Modeling%20with%20CTC_files/triple_a.svg)\n\n If we predict an ‘A’ as the first letter then the suffix ‘AA’ should get much\n\n more probability than ‘riple A’. 
If we predict ‘t’ first, the opposite\n\n should be true.\n\n \n\nIn fact speech recognizers using CTC don’t learn a language model over the\n\noutput nearly as well as models which are conditionally dependent.\n\n However, a separate language model can\n\nbe included and usually gives a good boost to accuracy.\n\nThe conditional independence assumption made by CTC isn’t always a bad\n\nthing. Baking in strong beliefs over output interactions makes the model less\n\nadaptable to new or altered domains. For example, we might want to use a speech\n\nrecognizer trained on phone conversations between friends to transcribe\n\ncustomer support calls. The language in the two domains can be quite different\n\neven if the acoustic model is similar. With a CTC acoustic model, we can easily\n\nswap in a new language model as we change domains.\n\n### Alignment Properties\n\nThe CTC algorithm is *alignment-free*. The objective function\n\nmarginalizes over all alignments. While CTC does make strong assumptions about\n\nthe form of alignments between XXX and YYY, the\n\nmodel is agnostic as to how probability is distributed amongst them. In some\n\nproblems CTC ends up allocating most of the probability to a single alignment.\n\nHowever, this isn’t guaranteed.\n\nWe could force the model to choose a single\n\nalignment by replacing the sum with a max in the objective function,\n\np(Y∣X)=maxA∈AX,Y∏t=1Tp(at∣X).\n\np(Y∣X)=A∈AX,Y​max​t=1∏T​p(at​∣X).\n\nAs mentioned before, CTC only allows *monotonic* alignments. In\n\nproblems such as speech recognition this may be a valid assumption. For other\n\nproblems like machine translation where a future word in a target sentence\n\ncan align to an earlier part of the source sentence, this assumption is a\n\ndeal-breaker.\n\nAnother important property of CTC alignments is that they are\n\n*many-to-one*. Multiple inputs can align to at most one output. In some\n\ncases this may not be desirable. We might want to enforce a strict one-to-one\n\ncorrespondence between elements of XXX and\n\nY.Y.Y. Alternatively, we may want to allow multiple output\n\nelements to align to a single input element. For example, the characters\n\n“th” might align to a single input step of audio. A character based CTC model\n\nwould not allow that.\n\nThe many-to-one property implies that the output can’t have more time-steps\n\nthan the input.\n\n If YYY has rrr consecutive\n\n repeats, then the length of YYY must be less than\n\n the length of XXX by 2r−1.2r - 1.2r−1.\n\nThis is usually not a problem for speech and handwriting recognition since the\n\ninput is much longer than the output. However, for other problems where\n\nYYY is often longer than XXX, CTC just won’t\n\nwork.\n\nCTC in Context\n\n--------------\n\nIn this section we’ll discuss how CTC relates to other commonly used\n\nalgorithms for sequence modeling.\n\n### HMMs\n\nAt a first glance, a Hidden Markov Model (HMM) seems quite different from\n\nCTC. But, the two algorithms are actually quite similar. Understanding the\n\nrelationship between them will help us understand what advantages CTC has over\n\nHMM sequence models and give us insight into how CTC could be changed for\n\nvarious use cases.\n\nLet’s use the same notation as before,\n\nXXX is the input sequence and YYY\n\nis the output sequence with lengths TTT and\n\nUUU respectively. We’re interested in learning\n\np(Y∣X).p(Y \\mid X).p(Y∣X). 
One way to simplify the problem is to apply\n\nBayes’ Rule:\n\np(Y∣X)∝p(X∣Y)p(Y).\n\np(Y \\mid X) \\; \\propto \\; p(X \\mid Y) \\; p(Y).\n\np(Y∣X)∝p(X∣Y)p(Y).\n\nThe p(Y)p(Y)p(Y) term can be any language model, so let’s focus on\n\np(X∣Y).p(X \\mid Y).p(X∣Y). Like before we’ll let\n\nA\\mathcal{A}A be a set of allowed alignments between\n\nXXX and Y.Y.Y. Members of\n\nA\\mathcal{A}A have length T.T.T.\n\nLet’s otherwise leave A\\mathcal{A}A unspecified for now. We’ll\n\ncome back to it later. We can marginalize over alignments to get\n\np(X∣Y)=∑A∈Ap(X,A∣Y).\n\np(X \\mid Y)\\; = \\; \\sum\\_{A \\in \\mathcal{A}} \\; p(X, A \\mid Y).\n\np(X∣Y)=A∈A∑​p(X,A∣Y).\n\nTo simplify notation, let’s remove the conditioning on YYY, it\n\nwill be present in every p(⋅).p(\\cdot).p(⋅). With two assumptions we can\n\nwrite down the standard HMM.\n\np(X)=p(X) \\quad =p(X)=\n\n The probability of the input\n\n \n\n∑A∈A∏t=1T\\sum\\_{A \\in \\mathcal{A}} \\; \\prod\\_{t=1}^T∑A∈A​∏t=1T​\n\n Marginalizes over alignments\n\n \n\np(xt∣at)⋅p(x\\_t \\mid a\\_t) \\quad \\cdotp(xt​∣at​)⋅\n\n The emission probability\n\n \n\np(at∣at−1)p(a\\_t \\mid a\\_{t-1})p(at​∣at−1​)\n\n The transition probability\n\n \n\nThe first assumption is the usual Markov property. The state\n\nata\\_tat​ is conditionally independent of all historic states given\n\nthe previous state at−1.a\\_{t-1}.at−1​. The second is that the observation\n\nxtx\\_txt​ is conditionally independent of everything given the\n\ncurrent state at.a\\_t.at​.\n\n![](Sequence%20Modeling%20with%20CTC_files/hmm.svg)\n\n The graphical model for an HMM.\n\n \n\nNow we can take just a few steps to transform the HMM into CTC and see how\n\nthe two models relate. First, let’s assume that the transition probabilities\n\np(at∣at−1)p(a\\_t \\mid a\\_{t-1})p(at​∣at−1​) are uniform. This gives\n\np(X)∝∑A∈A∏t=1Tp(xt∣at).\n\np(X)∝A∈A∑​t=1∏T​p(xt​∣at​).\n\nThere are only two differences from this equation and the CTC loss function.\n\nThe first is that we are learning a model of XXX given\n\nYYY as opposed to YYY given X.X.X.\n\nThe second is how the set A\\mathcal{A}A is produced. Let’s deal\n\nwith each in turn.\n\nTo do this, we apply Bayes’ rule and rewrite the model as\n\np(X)∝∑A∈A∏t=1Tp(at∣xt)p(xt)p(at)\n\np(X)∝A∈A∑​t=1∏T​p(at​)p(at​∣xt​)p(xt​)​\n\n∝∑A∈A∏t=1Tp(at∣xt)p(at). \n\n∝A∈A∑​t=1∏T​p(at​)p(at​∣xt​)​.\n\nIf we assume a uniform prior over the states aaa and condition on all of\n\nXXX instead of a single element at a time, we arrive at\n\np(X)∝∑A∈A∏t=1Tp(at∣X).\n\np(X)∝A∈A∑​t=1∏T​p(at​∣X).\n\nThe above equation is essentially the CTC loss function, assuming the set\n\nA\\mathcal{A}A is the same. In fact, the HMM framework does not specify what\n\nA\\mathcal{A}A should consist of. This part of the model can be designed on a\n\nper-problem basis. In many cases the model doesn’t condition on YYY and the\n\nset A\\mathcal{A}A consists of all possible length TTT sequences from the\n\noutput alphabet. In this case, the HMM can be drawn as an *ergodic* state\n\ntransition diagram in which every state connects to every other state. The\n\nfigure below shows this model with the alphabet or set of unique hidden states\n\nas {a,b,c}.\\{a, b, c\\}.{a,b,c}.\n\nIn our case the transitions allowed by the model are strongly related to\n\nY.Y.Y. We want the HMM to reflect this. One possible model could\n\nbe a simple linear state transition diagram. The figure below shows this with\n\nthe same alphabet as before and Y=Y =Y= [a, b]. 
Another commonly\n\nused model is the *Bakis* or left-right HMM. In this model any\n\ntransition which proceeds from the left to the right is allowed.\n\n![](Sequence%20Modeling%20with%20CTC_files/ergodic_hmm.svg)\n\n**Ergodic HMM:** Any node can be either a starting or\n\n final state.\n\n \n\n![](Sequence%20Modeling%20with%20CTC_files/linear_hmm.svg)\n\n**Linear HMM:** Start on the left, end on the right.\n\n \n\n![](Sequence%20Modeling%20with%20CTC_files/ctc_hmm.svg)\n\n**CTC HMM:** The first two nodes are the starting\n\n states and the last two nodes are the final states.\n\n \n\nIn CTC we augment the alphabet with ϵ\\epsilonϵ and the HMM model allows a\n\nsubset of the left-right transitions. The CTC HMM has two start\n\nstates and two accepting states.\n\nOne possible source of confusion is that the HMM model differs for any unique\n\nY.Y.Y. This is in fact standard in applications such as speech recognition. The\n\nstate diagram changes based on the output Y.Y.Y. However, the functions which\n\nestimate the observation and transition probabilities are shared.\n\nLet’s discuss how CTC improves on the original HMM model. First, we can think\n\nof the CTC state diagram as a special case HMM which works well for many\n\nproblems of interest. Incorporating the blank as a hidden state in the HMM\n\nallows us to use the alphabet of YYY as the other hidden states. This model\n\nalso gives a set of allowed alignments which may be a good prior for some\n\nproblems.\n\nPerhaps most importantly, CTC is discriminative. It models p(Y∣X)p(Y \\mid\n\n X)p(Y∣X) directly, an idea that’s been important in the past with other\n\ndiscriminative improvements to HMMs.\n\nDiscriminative training let’s us apply powerful learning algorithms like the\n\nRNN directly towards solving the problem we care about.\n\n### Encoder-Decoder Models\n\nThe encoder-decoder is perhaps the most commonly used framework for sequence\n\nmodeling with neural networks. These models have an encoder\n\nand a decoder. The encoder maps the input sequence XXX into a\n\nhidden representation. The decoder consumes the hidden representation and\n\nproduces a distribution over the outputs. We can write this as\n\nH=encode(X)p(Y∣X)=decode(H).\n\n\\begin{aligned}\n\nH\\enspace &= \\enspace\\textsf{encode}(X) \\\\[.5em]\n\np(Y \\mid X)\\enspace &= \\enspace \\textsf{decode}(H).\n\n\\end{aligned}\n\nHp(Y∣X)​=encode(X)=decode(H).​\n\nThe encode(⋅)\\textsf{encode}(\\cdot)encode(⋅) and\n\ndecode(⋅)\\textsf{decode}(\\cdot)decode(⋅) functions are typically RNNs. The\n\ndecoder can optionally be equipped with an attention mechanism. The hidden\n\nstate sequence HHH has the same number of time-steps as the\n\ninput, T.T.T. Sometimes the encoder subsamples the input. If the\n\nencoder subsamples the input by a factor sss then\n\nHHH will have T/sT/sT/s time-steps.\n\nWe can interpret CTC in the encoder-decoder framework. This is helpful to\n\nunderstand the developments in encoder-decoder models that are applicable to\n\nCTC and to develop a common language for the properties of these\n\nmodels.\n\n**Encoder:** The encoder of a CTC model can be just about any\n\nencoder we find in commonly used encoder-decoder models. For example the\n\nencoder could be a multi-layer bidirectional RNN or a convolutional network.\n\nThere is a constraint on the CTC encoder that doesn’t apply to the others. 
The\n\ninput length cannot be sub-sampled so much that T/sT/sT/s\n\nis less than the length of the output.\n\n**Decoder:** We can view the decoder of a CTC model as a simple\n\nlinear transformation followed by a softmax normalization. This layer should\n\nproject all TTT steps of the encoder output\n\nHHH into the dimensionality of the output alphabet.\n\nWe mentioned earlier that CTC makes a conditional independence assumption over\n\nthe characters in the output sequence. This is one of the big advantages that\n\nother encoder-decoder models have over CTC — they can model the\n\ndependence over the outputs. However in practice, CTC is still more commonly\n\nused in tasks like speech recognition as we can partially overcome the\n\nconditional independence assumption by including an external language model.\n\nPractitioner’s Guide\n\n--------------------\n\nSo far we’ve mostly developed a conceptual understanding of CTC. Here we’ll go\n\nthrough a few implementation tips for practitioners.\n\n**Software:** Even with a solid understanding of CTC, the\n\nimplementation is difficult. The algorithm has several edge cases and a fast\n\nimplementation should be written in a lower-level programming language.\n\nOpen-source software tools make it much easier to get started:\n\n* Baidu Research has open-sourced\n\n [warp-ctc](https://github.com/baidu-research/warp-ctc). The\n\n package is written in C++ and CUDA. The CTC loss function runs on either\n\n the CPU or the GPU. Bindings are available for Torch, TensorFlow and\n\n [PyTorch](https://github.com/awni/warp-ctc).\n\n* TensorFlow has built in\n\n [CTC loss](https://www.tensorflow.org/api_docs/python/tf/nn/ctc_loss)\n\n functions for the CPU.\n\n* Nvidia also provides a GPU implementation of CTC in\n\n [cuDNN](https://developer.nvidia.com/cudnn) versions 7 and up.\n\n**Numerical Stability:** Computing the CTC loss naively is\n\nnumerically unstable. One method to avoid this is to normalize the\n\nα\\alphaα’s at each time-step. The original publication\n\nhas more detail on this including the adjustments to the gradient.\n\n In practice this works well enough\n\nfor medium length sequences but can still underflow for long sequences.\n\nA better solution is to compute the loss function in log-space with the\n\nlog-sum-exp trick.\n\nWhen computing the sum of two probabilities in log space use the identity\n\nlog(ea+eb)=max{a,b}+log(1+e−∣a−b∣)\n\n\\log(e^a + e^b) = \\max\\{a, b\\} + \\log(1 + e^{-|a-b|})\n\nlog(ea+eb)=max{a,b}+log(1+e−∣a−b∣)\n\nMost programming languages have a stable function to compute\n\nlog(1+x)\\log(1 + x)log(1+x) when\n\nxxx is close to zero.\n\nInference should also be done in log-space using the log-sum-exp trick.\n\n**Beam Search:** There are a couple of good tips to know about\n\nwhen implementing and using the CTC beam search.\n\nThe correctness of the beam search can be tested as follows.\n\n1. Run the beam search algorithm on an arbitrary input.\n\n2. Save the inferred output Y¯\\bar{Y}Y¯ and the corresponding\n\n score c¯.\\bar{c}.c¯.\n\n3. Compute the actual CTC score ccc for\n\n Y¯.\\bar{Y}.Y¯.\n\n4. Check that c¯≈c\\bar{c} \\approx cc¯≈c with the former being no\n\n greater than the latter. As the beam size increases the inferred output\n\n Y¯\\bar{Y}Y¯ may change, but the two numbers should grow\n\n closer.\n\nA common question when using a beam search decoder is the size of the beam\n\nto use. There is a trade-off between accuracy and runtime. We can check if the\n\nbeam size is in a good range. 
To do this, first compute the CTC score for the inferred output, c_i. Then compute the CTC score for the ground truth output, c_g. If the two outputs are not the same, we should have c_g < c_i. If instead c_g is greater than c_i, the beam search missed a higher-probability output, and increasing the beam size is likely to help.
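As a sketch of that diagnostic — reusing the `ctc_forward` function from the earlier snippet and assuming some `beam_search(probs)` decoder that returns an output together with its accumulated beam score:

```python
def check_beam_size(probs, ground_truth, beam_search):
    """Rough diagnostic comparing the beam search result against the ground truth.

    probs:        (T, K) per time-step output distributions.
    ground_truth: list of label indices for the reference transcript.
    beam_search:  callable returning (inferred_labels, inferred_beam_score);
                  assumed to exist, not part of the snippet above.
    """
    inferred, c_bar = beam_search(probs)
    c_i = ctc_forward(probs, inferred)       # exact CTC score of the inferred output
    c_g = ctc_forward(probs, ground_truth)   # exact CTC score of the ground truth

    # Correctness check: the beam's own score should approximate, and never
    # exceed, the exact CTC score of the output it returns.
    assert c_bar <= c_i + 1e-6

    if inferred != ground_truth and c_g > c_i:
        print("Search error: ground truth scores higher; try a larger beam.")
    return c_i, c_g
```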